The Effects of Generative AI on Computing Students’ Help-Seeking Preferences

posted in: reading

From Discussion:

Another theme that emerged was related to experience level and trust. Students who were less familiar with these models, or who had early negative experiences, were much less likely to want to use them. This is partially explained by the concept of calibrated trust [2, 71], whereby early negative experiences calibrated students to distrust the models. This is further exacerbated by the fact that models can perform well at times while also hallucinating incorrect information and struggling on easy multiple-choice questions [60, 61]. Less experienced students described being especially apprehensive about receiving wrong answers and being unable to discern between correct and incorrect responses. This skepticism is a promising finding given the widespread fears about students blindly relying on these tools [3, 70]. Conversely, experienced students were more lenient with the models. Students mentioned the necessity of applying their own domain knowledge to evaluate the correctness of the model’s responses; hence, more knowledgeable students were better equipped to filter out incorrect responses and find the bits that were valuable or could “guide” their next steps. Across experienced and inexperienced students alike, distrust did not necessarily mean students failed to receive value from the models, as we saw most students using them to varying extents…

Help-seeking is a critical way for students to learn new concepts, acquire new skills, and get unstuck when problem-solving in their computing courses. The recent proliferation of generative AI tools, such as ChatGPT, offers students a new source of help that is always available on demand. However, it is unclear how this new resource compares to existing help-seeking resources along dimensions of perceived quality, latency, and trustworthiness. In this paper, we investigate the help-seeking preferences and experiences of computing students now that generative AI tools are available to them. We collected survey data (n=47) and conducted interviews (n=8) with computing students. Our results suggest that although these models are being rapidly adopted, they have not yet fully eclipsed traditional help resources. The help-seeking resources that students rely on continue to vary depending on the task and other factors. Finally, we observed preliminary evidence that help-seeking with generative AI is a skill that needs to be developed, with disproportionate benefits for those who are better able to harness the capabilities of LLMs. We discuss potential implications for integrating generative AI into computing classrooms and the future of help-seeking in the era of generative AI.

Ryan Watkins