When a human receives a prediction or recommended course of action from an intelligent agent, what additional information, beyond the prediction or recommendation itself, does the human require from the agent to decide whether to trust or reject it? In this paper we survey the literature on trust between a single human supervisor and a single agent subordinate to determine the nature and extent of this additional information, and to organize it into a taxonomy that future researchers and intelligent agent practitioners can leverage. By examining this question from a human-centered, information-focused point of view, we can begin to compare and contrast different implementations and provide insight and directions for future work.
arxiv.org/abs/2205.02987