Another theme that emerged was related to experience level and trust. Students who were less familiar with these models, or who had early negative experiences with them, were much less likely to want to use them. This is partially explained by the concept of calibrated trust [2, 71], whereby early negative experiences calibrated students to distrust the models. The effect is exacerbated by the models' inconsistency: they can perform well at times while also hallucinating incorrect information and struggling on easy multiple-choice questions [60, 61]. Less experienced students described being especially apprehensive about receiving wrong answers and being unable to discern between correct and incorrect responses. This skepticism is a promising finding given widespread fears about students blindly relying on these tools [3, 70]. Conversely, experienced students were more lenient with the models. Students mentioned the necessity of applying their own domain knowledge to evaluate the correctness of the models' responses; hence, more knowledgeable students were better equipped to filter out incorrect responses and find the bits that were valuable or could “guide” their next steps. Across both experienced and inexperienced students, distrust did not necessarily mean students failed to receive value from the models, as we saw most students using them to varying extents.