Departing from the claim that AI needs to be trustworthy, we find that people trust ethical advice from an AI-powered algorithm even when they know nothing about its training data, and even when they learn information about the algorithm that warrants distrust. In online experiments, subjects took the role of decision-makers who received advice from an algorithm on how to resolve an ethical dilemma. We manipulated the information provided about the algorithm and studied its influence on trust. Our findings suggest that AI is overtrusted rather than distrusted, and we propose digital literacy as a potential remedy to ensure the responsible use of AI.