Large language models of artificial intelligence (AI), such as ChatGPT, are finding remarkable but controversial applications in science and research. This paper reviews the epistemological challenges and the ethical and integrity risks they pose to the conduct of science, with the aim of laying timely foundations for high-quality research ethics review in the era of AI. The role of AI language models as both a research instrument and a research subject is scrutinized, along with the ethical implications for scientists, participants, and reviewers. Ten recommendations shape a response toward more responsible research conduct with AI language models.