As AI technology grows more sophisticated, people can offload a variety of tasks to algorithms (e.g., to Siri or ChatGPT). Whether and to what extent people engage in such cognitive offloading depends on various factors. In the present study, we investigated whether people are willing to offload an attentionally demanding task to an algorithm. Participants performed a multiple object tracking (MOT) task, which required them to track a subset of moving targets among distractors on a computer screen. Participants first performed the MOT task alone and then had the opportunity to offload an unlimited number of targets to a computer partner. If participants offloaded the entire task to the computer, they could instead perform a bonus task that yielded an additional financial gain; however, this gain was conditional on high tracking accuracy in the MOT task. Thus, participants should only offload if they trusted the computer to perform accurately. We found that participants completely offloaded the MOT task in 50% of all trials (Experiment 1). The willingness to offload increased significantly (up to 80%) when participants were informed beforehand that the computer's accuracy was flawless (Experiment 2). These results, combined with those of our previous study (Wahn et al., 2023), which was identical except that it did not include a bonus task, show that people's willingness to offload an attentionally demanding task to an algorithm is boosted both by knowledge of the algorithm's capabilities and by the availability of a secondary task.
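To make the study's incentive logic concrete, here is a minimal sketch that frames the trial-level choice as an expected-value comparison. Everything in it (the `expected_payoff` helper, the payoff amounts, the accuracy values) is a hypothetical illustration, not a parameter reported in the paper:

```python
# Minimal sketch of the offloading decision described in the abstract.
# All payoffs and accuracy values are hypothetical, not from the study.

def expected_payoff(offload_all: bool,
                    p_computer_accurate: float,
                    p_self_accurate: float = 0.7,
                    base_reward: float = 1.0,
                    bonus: float = 0.5) -> float:
    """Expected reward for one MOT trial.

    Offloading the entire task frees the participant for a bonus task,
    but the extra gain only pays out if the (now computer-performed)
    MOT trial is still accurate.
    """
    if offload_all:
        # Bonus is earned only when the computer tracks accurately.
        return p_computer_accurate * (base_reward + bonus)
    # Doing the task oneself forgoes the bonus entirely.
    return p_self_accurate * base_reward

# The rational choice flips once the computer is trusted enough:
for p in (0.40, 0.70, 0.99):
    keep = expected_payoff(False, p_computer_accurate=p)
    offload = expected_payoff(True, p_computer_accurate=p)
    print(f"P(computer accurate) = {p:.2f}: "
          f"keep = {keep:.2f}, offload = {offload:.2f}")
```

In this toy model, full offloading dominates as soon as `p_computer_accurate * (base_reward + bonus)` exceeds the participant's own expected reward, which is consistent with the finding that telling participants the computer is flawless raises offloading rates.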