Choosing Between Human and Algorithmic Advisors: The Role of Responsibility Sharing


Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people still prefer human advisors. Studies to date have focused mainly on the advisor's perceived competence and the outcome of the advice as determinants of advice takers' willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor, one not directly related to the outcome: the advice taker's ability to psychologically offload responsibility for the decision's potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor's competence, the advisor's perceived responsibility is an important factor in advice takers' choice between human and algorithmic advisors. In an experiment spanning two domains, Medical and Financial (N = 806), participants rated advisors' perceived responsibility and chose between a human and an algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and that the perception of the advisor's responsibility affected the advice takers' choice of advisor. Furthermore, we found that an experimental manipulation that impeded advice takers' ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in shaping algorithm aversion.