Rapid developments in artificial intelligence and the mainstreaming of generative AI have raised vital questions about the future of science. AI techniques hold significant potential for enhancing scientific research, but they also carry risks of bias, inequity, and eroded originality. Approaching AI as a methodology rather than a technology can help manage the tension between promise and peril, anchoring AI efficacy and ethics in the processes of AI design and use. An "AI thinking" perspective can help scientists achieve richer analysis, more open science, and constructive use of generative AI while managing the risk of harm. Adopting AI thinking requires diverse efforts in education, AI tools, and new public narratives, but these efforts will be rewarded with new approaches to AI as a fundamental toolbox for contemporary science.