A growing number of oversight boards and regulatory bodies seek to monitor and govern algorithms that make decisions about people's lives. Prior work has explored how people believe algorithmic decisions should be made, but little is understood about how individual factors, such as sociodemographics or direct experience with a decision-making scenario, may affect their ethical views. We take a first step toward filling this gap by exploring how people's perceptions of one aspect of procedural algorithmic fairness (the fairness of using particular features in an algorithmic decision) relate to their (i) demographics (age, education, gender, race, political views) and (ii) personal experiences with the algorithmic decision-making scenario. We find that political views and personal experience with the algorithmic decision context significantly influence people's perceptions of the fairness of using different features for bail decision-making. Drawing on our results, we discuss the implications for algorithmic oversight, including the need to consider multiple dimensions of diversity when composing oversight and regulatory bodies.