Qualitative analysis is critical to understanding human datasets in many social science disciplines. Open coding is an inductive qualitative process that identifies and interprets “open codes” from datasets. Yet meeting methodological expectations (such as being “as exhaustive as possible”) can be challenging. While many machine learning (ML) and generative AI (GAI) studies have attempted to support open coding, few have systematically measured or evaluated GAI outcomes, increasing the risk of potential bias. Building on Grounded Theory and Thematic Analysis, we present a computational method to systematically measure and identify potential biases in “open codes.” Instead of operationalizing human expert results as the “ground truth,” our method builds on a team-based approach between human and machine coders. We experiment with two HCI datasets to establish the method’s reliability by 1) comparing it with human analysis, and 2) analyzing the stability of its output. We present evidence-based suggestions and example workflows for using ML/GAI to support open coding.
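The abstract does not spell out how the measurement itself is computed, so the following is only a minimal illustrative sketch, not the paper’s method: one simple proxy for comparing machine-generated open codes against human codes is a coverage score based on token-level Jaccard similarity between code labels. The code lists, function names, and the 0.5 matching threshold below are all hypothetical choices for illustration.

```python
# Illustrative sketch only -- NOT the paper's method. A toy proxy for
# "measuring" open codes: how well a set of machine-generated codes
# covers a set of human-generated codes, using token-level overlap.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two code labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def coverage(human_codes: list[str], machine_codes: list[str],
             threshold: float = 0.5) -> float:
    """Fraction of human codes matched by at least one machine code."""
    if not human_codes:
        return 0.0
    matched = sum(
        1 for h in human_codes
        if any(jaccard(h, m) >= threshold for m in machine_codes)
    )
    return matched / len(human_codes)

if __name__ == "__main__":
    # Hypothetical example codes from an interview dataset.
    human = ["trust in automation", "privacy concerns", "workload"]
    machine = ["automation trust", "data privacy concerns", "task difficulty"]
    print(f"coverage: {coverage(human, machine):.2f}")  # prints 0.67
```

In practice, a semantic similarity measure (e.g., sentence embeddings) would handle paraphrased codes better than lexical overlap; the sketch only illustrates the general idea of quantifying how exhaustively one code set accounts for another.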