Many online content providers use “personalization” algorithms to generate recommendations fine-tuned to individual users’ interests. However, these algorithms have also been criticized: tailoring content to specific users necessarily restricts content diversity, reinforcing existing beliefs. Here, we investigated the degree to which personalization can hinder learning, using a task in which participants studied novel categories. Relative to non-personalized learners, personalized learners developed inaccurate representations of the categories and reported inflated confidence in incorrect decisions about items they had never or rarely studied. These results suggest that personalization algorithms may contribute to distorted representations of information, which in turn may cause inaccurate generalizations, such as stereotypes.