Explanations have been framed as an essential feature for better and fairer human-AI decision-making. In the context of fairness, this claim has not been adequately studied, as prior work has mostly evaluated explanations by their effects on people's perceptions. We argue, however, that for explanations to promote fairer decisions, they must enable humans to distinguish correct from incorrect AI recommendations. To validate our conceptual arguments, we conduct an empirical study examining the relationship between explanations, fairness perceptions, and reliance behavior. Our findings show that explanations influence people's fairness perceptions, which, in turn, affect reliance. However, we observe that low fairness perceptions lead to more overrides of AI recommendations, regardless of whether those recommendations are correct or incorrect. This (i) raises doubts about the usefulness of existing explanations for enhancing distributive fairness and (ii) makes an important case for why fairness perceptions must not be treated as a proxy for appropriate reliance.