This paper examines how individuals perceive the credibility of content written by human authors versus content generated by large language models, such as the GPT family that powers ChatGPT, across different user interface presentations. Surprisingly, our results show that participants attribute similar levels of credibility to both, regardless of how the user interface presents the content. Participants also report no difference in perceived competence and trustworthiness between human- and AI-generated content, yet they rate AI-generated content as clearer and more engaging. These findings call for a more discerning approach to evaluating information sources, encouraging users to exercise caution and critical thinking when engaging with content generated by AI systems.