As researchers rush to investigate the potential of AI tools like ChatGPT to enhance learning, a number of well-known and well-documented pitfalls threaten the validity of this emerging research. In media comparison research, unrecognized confounding of instructional methods with technological affordances renders observed effects uninterpretable. Using a recent meta-analysis by Deng et al. (2025) as an example, we revisit key insights from the media/methods debate to highlight recurring conceptual challenges in studying AI for education. We identify three considerations needed to interpret this research: the precise nature of the experimental treatment, the activities of the control group, and the validity of the outcome measures as indicators of learning. We contrast the nascent research on ChatGPT with the well-established literature on Intelligent Tutoring Systems (ITS). Our analysis underscores the importance of defining clear research questions, ensuring methodological rigor, and resisting the allure of “fast science.”
osf.io/preprints/psyarxiv/t6uzy_v2