Explanations have been framed as an essential feature for better and fairer human-AI decision-making. In the context of fairness, however, this claim has not been adequately studied: prior works have mostly evaluated explanations based on their effects on people’s perceptions. We argue that for explanations to promote fairer decisions, they must enable humans to discern correct from wrong AI recommendations. To validate our conceptual arguments, we conduct an empirical study examining the relationship between explanations, fairness perceptions, and reliance behavior. Our findings show that explanations influence people’s fairness perceptions, which, in turn, affect reliance. However, we observe that low fairness perceptions lead to more overrides of AI recommendations, regardless of whether they are correct or wrong. This (i) raises doubts about the usefulness of existing explanations for enhancing distributive fairness and (ii) makes an important case for why perceptions must not be mistaken for a proxy for appropriate reliance.