A growing number of oversight boards and regulatory bodies seek to monitor and govern algorithms that make decisions about people’s lives. Prior work has explored how people believe algorithmic decisions should be made, but there is little understanding of how individual factors like sociodemographics or direct experience with a decision-making scenario may affect their ethical views. We take a first step toward filling this gap by exploring how people’s perceptions of one aspect of procedural algorithmic fairness (the fairness of using particular features in an algorithmic decision) relate to their (i) demographics (age, education, gender, race, political views) and (ii) personal experiences with the algorithmic decision-making scenario. We find that political views and personal experience with the algorithmic decision context significantly influence people’s perceptions about the fairness of using different features for bail decision-making. Drawing on our results, we discuss the implications for algorithmic oversight including the need to consider multiple dimensions of diversity in composing oversight and regulatory bodies.