Starting from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data, and even when they learn information about it that warrants distrust. We conducted online experiments in which subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information provided about the algorithm and studied how it influenced subjects' trust. Our findings suggest that AI is overtrusted rather than distrusted. We propose digital literacy as a potential remedy to ensure the responsible use of AI.