Abstract
Many online content providers use “personalization” algorithms to generate recommendations fine-tuned to individual users’ interests. However, these algorithms have also been criticized because tailoring content to specific users necessarily restricts the diversity of that content, reinforcing existing beliefs. Here, we investigated the degree to which personalization can hinder learning about novel categories. Our results show that, relative to non-personalized learners, personalized learners developed inaccurate representations of the categories and reported inflated confidence in incorrect decisions about items they had never or rarely studied. These results suggest that personalization algorithms may contribute to distorted representations of information, which in turn cause inaccurate generalizations, such as stereotypes.