Many online content providers use “personalization” algorithms to generate recommendations that are individually fine-tuned to users’ interests. However, these algorithms have also been criticized because tailoring content to specific users necessarily restricts the diversity of content, reinforcing existing beliefs. Here, we investigated the degree to which personalization can hinder learning about novel categories. Our results show that, relative to non-personalized learners, personalized learners developed inaccurate representations of the categories and reported inflated confidence in incorrect decisions about items they had never or rarely studied. The results suggest that personalization algorithms may contribute to distorted representations of information, which in turn cause inaccurate generalizations, such as stereotypes.