When working with generative artificial intelligence (AI), users may see productivity gains, but the AI-generated content may not match their preferences exactly. To study this effect, we introduce a Bayesian framework in which heterogeneous users choose how much information to share with the AI, facing a trade-off between output fidelity and communication cost. We show that the interplay between these individual-level decisions and AI training may lead to societal challenges: outputs may become more homogenized, especially when the AI is trained on AI-generated content, and any bias in the AI may become societal bias. One remedy for both homogenization and bias is to improve human-AI interaction, enabling personalized outputs without sacrificing productivity.
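To make the fidelity-versus-cost trade-off concrete, here is a minimal Python sketch of one stylized version of such a framework. This is my own illustrative Gaussian model, not the paper's actual setup: each user's preference is drawn from a population prior, the user chooses how many noisy signals about that preference to share (paying a per-signal communication cost), and the AI returns the Bayesian posterior mean. Sharing less pulls every output toward the common prior mean, which is one simple way to see homogenization.

```python
# Hypothetical, stylized sketch of the trade-off described in the abstract;
# the parameter names and Gaussian assumptions are mine, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0   # population (AI-training) prior over preferences
SIGNAL_VAR = 0.5                   # noise in each signal the user shares
COST = 0.05                        # communication cost per shared signal

def expected_loss(m: int) -> float:
    """Expected squared fidelity loss plus communication cost for m signals."""
    # Posterior variance after m conditionally independent Gaussian signals.
    post_var = 1.0 / (1.0 / PRIOR_VAR + m / SIGNAL_VAR)
    return post_var + COST * m

def optimal_signals(max_m: int = 50) -> int:
    """User's choice: the number of signals minimizing loss plus cost."""
    return min(range(max_m + 1), key=expected_loss)

def ai_output(theta: float, m: int) -> float:
    """AI's posterior-mean output given m noisy signals about preference theta."""
    if m == 0:
        return PRIOR_MEAN  # nothing shared: the output is the pure prior
    signals = theta + rng.normal(0.0, np.sqrt(SIGNAL_VAR), size=m)
    precision = 1.0 / PRIOR_VAR + m / SIGNAL_VAR
    return (PRIOR_MEAN / PRIOR_VAR + signals.sum() / SIGNAL_VAR) / precision

if __name__ == "__main__":
    m_star = optimal_signals()
    thetas = rng.normal(PRIOR_MEAN, np.sqrt(PRIOR_VAR), size=10_000)
    outputs = np.array([ai_output(t, m_star) for t in thetas])
    print(f"optimal signals per user: {m_star}")
    print(f"preference variance: {thetas.var():.3f}")
    print(f"output variance:     {outputs.var():.3f}")
```

In this toy version, output variance falling below preference variance is the homogenization effect, and raising COST (worse human-AI interaction) shrinks the optimal number of shared signals, pushing outputs further toward the prior; a biased PRIOR_MEAN would then show up in every user's output, echoing the abstract's point that AI bias can become societal bias.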