Another theme that emerged related to experience level and trust. Students who were less familiar with these models, or who had early negative experiences with them, were much less likely to want to use them. This is partially explained by the concept of calibrated trust [2, 71], whereby early negative experiences calibrated students toward distrusting the models. The effect is further exacerbated by the fact that models can perform well at times while also hallucinating incorrect information and struggling on easy multiple-choice questions [60, 61]. Less experienced students described being especially apprehensive about receiving wrong answers and being unable to discern between correct and incorrect responses. This skepticism is a promising finding given widespread fears about students blindly relying on these tools [3, 70]. Conversely, experienced students were more lenient with the models. Students mentioned the necessity of applying their own domain knowledge to evaluate the correctness of the model’s responses; hence, more knowledgeable students were better equipped to filter out incorrect responses and find the bits that were valuable or could “guide” their next steps. Across both experienced and inexperienced students, distrust did not necessarily prevent students from deriving value from the models, as we saw most students using them to varying extents…