Explanations have been framed as an essential feature for better and fairer human-AI decision-making. In the context of fairness, however, this claim has not been adequately studied: prior work has mostly evaluated explanations by their effects on people’s perceptions. We argue that for explanations to promote fairer decisions, they must enable humans to distinguish correct from incorrect AI recommendations. To validate our conceptual arguments, we conduct an empirical study examining the relationship between explanations, fairness perceptions, and reliance behavior. Our findings show that explanations influence people’s fairness perceptions, which, in turn, affect reliance. However, we observe that low fairness perceptions lead to more overrides of AI recommendations, regardless of whether those recommendations are correct or incorrect. This (i) raises doubts about the usefulness of existing explanations for enhancing distributive fairness and (ii) makes an important case for why perceptions must not be mistaken for a proxy for appropriate reliance.