In this work, we empirically examine human-AI decision-making in the presence of explanations based on estimated outcomes. This type of explanation provides a human decision-maker with the expected consequences of each decision alternative at inference time, where the estimated outcomes are typically expressed in a problem-specific unit (e.g., profit in U.S. dollars). We conducted a pilot study in the context of peer-to-peer lending to assess the effects of providing estimated outcomes as explanations to lay study participants. Our preliminary findings suggest that people's reliance on AI recommendations increases compared to cases where no explanation or feature-based explanations are provided, especially when the AI recommendations are incorrect. This hampers their ability to distinguish correct from incorrect AI recommendations, which can ultimately degrade decision quality.
arxiv.org/abs/2208.04181
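To make the explanation type concrete, the following is a minimal sketch of what an estimated-outcome explanation might look like in a peer-to-peer lending setting. The function name, loan figures, and repayment probability are hypothetical illustrations, not taken from the paper; the point is only that each decision alternative is annotated with an expected consequence in a problem-specific unit (profit in U.S. dollars).

```python
# Illustrative sketch (hypothetical names and numbers): an estimated-outcome
# explanation attached to an AI recommendation for a lending decision.

def expected_profit(p_repay: float, loan_amount: float, interest_rate: float) -> dict:
    """Return the estimated outcome (expected profit in USD) for each alternative."""
    profit_if_repaid = loan_amount * interest_rate   # interest earned if the loan is repaid
    loss_if_default = -loan_amount                   # principal lost if the borrower defaults
    return {
        "grant loan": p_repay * profit_if_repaid + (1 - p_repay) * loss_if_default,
        "deny loan": 0.0,                            # no profit, no loss
    }

if __name__ == "__main__":
    # Hypothetical applicant: the model estimates a 92% repayment probability.
    outcomes = expected_profit(p_repay=0.92, loan_amount=5_000, interest_rate=0.15)
    recommendation = max(outcomes, key=outcomes.get)
    for alternative, usd in outcomes.items():
        print(f"{alternative:>10}: estimated outcome = ${usd:,.2f}")
    print(f"AI recommendation: {recommendation}")
```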