Designing trustworthy algorithmic decision-making systems is a central goal in system design. It is equally crucial that external parties can adequately assess the trustworthiness of such systems. Ultimately, this should lead to calibrated trust: trustors adequately trust and distrust the system. Yet the process through which trustors move from a system’s actual trustworthiness to their perceived trustworthiness of that system remains underexplored. Transferring psychological theory about the interpersonal assessment of human characteristics, we outline a “trustworthiness assessment” model with two levels. On the micro level, trustors assess system trustworthiness using cues. On the macro level, trustworthiness assessments proliferate between trustors: one stakeholder’s trustworthiness assessment of a system affects others’ assessments of the same system. This paper contributes a theoretical model that advances understanding of how trustworthiness is assessed when people are confronted with algorithmic systems. The model can inform system design, stakeholder training, and regulation.