The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that shift creates the risk that their ethical competence may degrade over time, and what to do about it.
Ryan Watkins, Ph.D.
Interim Associate Dean of Research, Graduate School of Education and Human Development
Professor, George Washington University
Program Faculty: Educational Technology Leadership MA Program
www.RyanRWatkins.com (resume, books, articles, etc.)
www.LLMinScience.com (hub for using LLMs in research)
SciencePods (create a podcast about your research)
go.gwu.edu/CodingProjects (coding projects by discipline)
www.WeShareScience.com (an online video science fair)
go.gwu.edu/GWcoders (weekly meet up for connecting with others who code)
www.NeedsAssessment.org (all things needs assessment)