Artificial intelligence (AI) may be the new norm in a post-pandemic learning environment. A growing number of university students use AI tools such as ChatGPT and Bard to support their academic experience. Much of the research on AI in higher education to date has focused on academic integrity and matters of authorship; yet there may be unintended consequences for students beyond these concerns. In particular, some students may reduce their formal social interactions while using these tools. This study surveys 387 university students and examines their relationship to, and with, large language model-based AI tools. Using structural equation modelling, the study finds evidence that while AI chatbots designed for information provision may be associated with student performance, their net effect on achievement is negative once social support, psychological wellbeing, loneliness, and sense of belonging are taken into account. The study tests an AI-specific form of social support and the cost it may pose to student success, wellbeing, and retention. While AI chatbot usage may be associated with poorer social outcomes, the human-substitution activity that may occur when a student seeks support from an AI rather than a human (e.g. a librarian, professor, or student advisor) carries notable implications for learning and teaching policy. We explore these implications through the lens of student success and belonging.
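To make the "direct association with performance, but net negative effect once social variables mediate" claim concrete, here is a minimal structural-path sketch. This is not the authors' actual model: it uses the `semopy` Python library, invented variable names (`ai_use`, `social_support`, `loneliness`, `belonging`, `wellbeing`, `achievement`), and synthetic data, purely to show how such a mediated path structure is specified and estimated.

```python
# A minimal sketch (assumed variables, synthetic data) of the kind of
# structural equation model the abstract describes, using semopy.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 387  # sample size reported in the abstract

# Synthetic data standing in for survey responses; the path weights
# below are arbitrary illustrations, not the study's estimates.
ai_use = rng.normal(size=n)
social_support = -0.3 * ai_use + rng.normal(size=n)
loneliness = 0.3 * ai_use - 0.4 * social_support + rng.normal(size=n)
belonging = 0.5 * social_support - 0.3 * loneliness + rng.normal(size=n)
wellbeing = 0.4 * belonging - 0.4 * loneliness + rng.normal(size=n)
achievement = (0.2 * ai_use + 0.3 * wellbeing + 0.3 * belonging
               + rng.normal(size=n))

data = pd.DataFrame({
    "ai_use": ai_use,
    "social_support": social_support,
    "loneliness": loneliness,
    "belonging": belonging,
    "wellbeing": wellbeing,
    "achievement": achievement,
})

# Direct path from AI use to achievement, plus indirect paths through
# the social and wellbeing constructs (lavaan-style syntax).
desc = """
social_support ~ ai_use
loneliness ~ ai_use + social_support
belonging ~ social_support + loneliness
wellbeing ~ loneliness + belonging
achievement ~ ai_use + wellbeing + belonging
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```

In a model like this, a positive direct path from `ai_use` to `achievement` can coexist with a negative total effect, because the indirect paths through reduced social support, higher loneliness, and lower belonging pull achievement down.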