Another theme that emerged was related to experience level and trust. Students who were less familiar with these models, or who had early negative experiences with them, were much less likely to want to use them. This is partially explained by the concept of calibrated trust [2, 71], whereby early negative experiences calibrated students to distrust the models. The effect is exacerbated by the models' inconsistency: they can perform well at times while also hallucinating incorrect information and struggling on easy multiple-choice questions [60, 61]. Less experienced students described being especially apprehensive about receiving wrong answers and being unable to discern correct from incorrect responses. This skepticism is a promising finding given widespread fears about students blindly relying on these tools [3, 70]. Conversely, experienced students were more lenient with the models. Students mentioned the necessity of applying their own domain knowledge to evaluate the correctness of the model's responses; hence, more knowledgeable students were better equipped to filter out incorrect responses and find the bits that were valuable or could “guide” their next steps. Across experienced and inexperienced students alike, distrust did not necessarily mean students failed to receive value from the models, as we saw most students using them to varying extents…