AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on AI, regardless of whether the second opinion is generated by a peer or by another AI model. However, if decision-makers can choose when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaboration in decision-making.