The rapid proliferation of advanced AI chatbots and large language models (LLMs), such as ChatGPT, has coincided with increased calls to use psychological tools to understand how people perceive interactions with these AI systems. Decades of research into how people perceive minds and anthropomorphize non-humans have produced several prominent frameworks for investigating how people perceive minds in AI systems. Yet, because programs like ChatGPT are so new, mind perception frameworks have not been applied to understand how exposure to ChatGPT influences the perception of mind in AI. Furthermore, little is known about how individual differences may moderate changes in mind perception as a function of exposure. Here, we report the results of a brief exposure manipulation with ChatGPT and its effect on mind perception ratings. We find that even brief exposure significantly increased people’s perceptions of agency and experience in ChatGPT. Moreover, individuals with a higher propensity to anthropomorphize were also more likely to show changes in experiential attributions to ChatGPT (i.e., its ability to feel). These findings suggest that as LLMs like ChatGPT grow in popularity, and people are exposed to them more extensively, the degree to which people attribute qualities of mind to AI systems will also increase. This study expands the field’s current understanding of how exposure to LLMs and individual differences may influence the attribution of mind to AI.