https://psyarxiv.com/qhwvx/
Designing trustworthy algorithmic decision-making systems is a central goal in system design. It is equally crucial that external parties can adequately assess a system's trustworthiness. Ultimately, this should lead to calibrated trust: trustors trust and distrust the system to the degree it warrants. Yet the process by which trustors move from a system's actual trustworthiness to their perceived trustworthiness of that system remains underexplored. Drawing on psychological theory about the interpersonal assessment of human characteristics, we outline a "trustworthiness assessment" model with two levels. On the micro level, individual trustors assess a system's trustworthiness using cues. On the macro level, trustworthiness assessments proliferate between trustors: one stakeholder's assessment of a system affects other stakeholders' assessments of the same system. This paper contributes a theoretical model that advances understanding of how people assess the trustworthiness of algorithmic systems, and it can be used to inspire system design, stakeholder training, and regulation.