Human-AI Interactions and Societal Pitfalls

posted in: reading

When working with generative artificial intelligence (AI), users may see productivity gains, but the AI-generated content may not match their preferences exactly. To study this effect, we introduce a Bayesian framework in which heterogeneous users choose how much information to share with the AI, trading off output fidelity against communication cost. We show that the interplay between these individual-level decisions and AI training may create societal challenges: outputs may become more homogenized, especially when the AI is trained on AI-generated content, and any AI bias may become societal bias. One solution to both the homogenization and bias issues is to improve human-AI interactions, enabling personalized outputs without sacrificing productivity.
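The fidelity-versus-communication-cost trade-off is easy to play with in code. Below is a minimal toy simulation, not the authors' actual model: the posterior-variance form sigma2/(1+m), the linear cost c*m, and all parameter values are illustrative assumptions I am making to show the mechanism. Each simulated user picks how much information m to share, and the AI is then retrained on its own outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each user has a latent preference theta_i.
# Sharing m units of information costs c*m and lets the AI estimate
# theta_i with posterior variance sigma2 / (1 + m).
n_users = 1000
theta = rng.normal(0.0, 1.0, n_users)  # heterogeneous preferences
c = 0.12                               # per-unit communication cost

def expected_loss(m, sigma2, c):
    # fidelity loss (AI's residual uncertainty about the user) + cost
    return sigma2 / (1.0 + m) + c * m

def best_m(sigma2, c, m_grid=np.arange(0, 21)):
    # each user picks the sharing level minimizing expected loss
    return int(m_grid[np.argmin([expected_loss(m, sigma2, c) for m in m_grid])])

def ai_output(theta, m, prior_mean, sigma2):
    # Bayesian blend: more shared info -> output tracks the user;
    # less -> output shrinks toward the AI's prior mean (and inherits
    # any bias baked into that prior).
    w = m / (m + 1.0)
    noise = rng.normal(0.0, np.sqrt(sigma2 / (1.0 + m)), theta.shape)
    return w * (theta + noise) + (1.0 - w) * prior_mean

# Round 1: the AI's prior reflects human-generated data.
sigma2_1 = 1.0
m1 = best_m(sigma2_1, c)
out1 = ai_output(theta, m1, prior_mean=0.0, sigma2=sigma2_1)

# Round 2: the AI is retrained on its own outputs. The prior tightens,
# users rationally share less, and outputs homogenize further.
sigma2_2 = out1.var()
m2 = best_m(sigma2_2, c)
out2 = ai_output(theta, m2, prior_mean=out1.mean(), sigma2=sigma2_2)

print(f"info shared per user: {m1} -> {m2}")
print(f"preference variance:  {theta.var():.2f}")
print(f"output variance:      {out1.var():.2f} -> {out2.var():.2f}")
```

Even in this toy, two effects from the abstract appear: output variance falls below the variance of users' preferences (homogenization, and it falls further after the AI is retrained on its own outputs), and because round two's prior mean is fed back from round one's outputs, any bias in that prior would be passed on to every user's output.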

Ryan Watkins