A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making

“… Based on the observations that (a) explanations can be misused to deceive certain groups of stakeholders (deliberately or unintentionally), and thus, (b) facilitating positive perceptions should not be an unconditional goal of providing explanations, I have introduced the desideratum of appropriate fairness perceptions…”

Abstract
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for an understanding of how a given decision was reached. This is not only problematic from a legal perspective; non-transparent systems are also prone to yielding unfair outcomes because their soundness is difficult to assess and calibrate in the first place, which is particularly worrisome for human decision-subjects. Based on this observation, and building upon existing work, I aim to make three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to their effectiveness in enabling people to appropriately assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. During the first half of my PhD program, I have already addressed substantial pieces of (a) and (c); (b) will be the major focus of the second half.

Ryan Watkins