The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice.

posted in: reading

Complementing human decision-making with AI advice offers substantial advantages. However, humans do not always trust AI advice appropriately and are overly sensitive to incidental AI errors, even when a system performs well overall. Research has yet to uncover how trust declines and recovers over time in repeated human-AI interactions. Our work investigates the consequences of incidental AI errors on (self-reported) trust and on participants’ reliance on AI advice. Results from our experiment, in which 208 participants evaluated 14 legal cases before and after receiving algorithmic advice, showed that trust significantly decreased after both early and late errors but was rapidly restored in both scenarios. Reliance significantly dropped only after early errors, not after late errors, and was restored in both scenarios. These results suggest that late errors (compared to early ones) cause less severe trust loss and allow quicker recovery. The findings align with an interpretation in which humans build up trust over time when a system performs well, making them more tolerant of incidental AI errors.


osf.io/preprints/osf/5awrs

Ryan Watkins