Engineering of Inquiry: The “Transformation” of Social Science through Generative AI

posted in: reading
“We increasingly read that generative AI will “transform” the social sciences, but little to no work has conceptualized the conditions necessary to fulfill such a promise. We review recent research on generative AI and evaluate its potential to reshape research practices. As the technology advances, generative AI could support various research tasks, including idea generation, data collection, and analysis. However, we discuss three challenges to an optimistic outlook that focuses solely on accelerating research through practical tools and reducing costs through inexpensive “synthetic” data. First, generative AI raises severe concerns about the validity of conclusions drawn from synthetic data about human populations. Second, possible efficiency gains in the research process may be partially offset by new problems introduced by the technology. Third, applications of generative AI have so far focused on enhancing existing methods, with limited efforts to harness the technology’s unique potential to simulate human behavior in social environments. Sociologists could use sociological theories and methods to develop “generative agents.” A new “trading zone” could emerge where social scientists, statisticians, and computer scientists develop new methodologies to facilitate innovative lines of inquiry and produce scientifically valid conclusions.”

Near the end: “Increased reliance on AI tools could lead researchers to focus on questions that these tools can address, limiting the scope of inquiry (Messeri and Crockett 2024). A softer version of this argument is that technology influences the theories we study and the type of research we conduct. The increased reliance on online surveys exemplifies this concern. Anderson et al. (2019) claim that the frequency of studies tailored to online experiments has increased, a phenomenon which the authors call “MTurkification.” One might also worry that generative agents may create an “illusion of objectivity,” concealing the fact that their ability to mimic a typical person with a given set of characteristics highly depends on training data and modifications by their developers (Messeri and Crockett 2024). Thus, entering the “trading zone” of generative agents introduces risks and problems (Gao et al. 2024), prompting many questions about why sociologists might choose to engage with it.”
Ryan Watkins