When presenting evidence of validity, social scientists tend to rely on popular quantitative methods such as reliability analysis, regression, and confirmatory factor analysis. The quality of the model output is often judged against fairly arbitrary cutoff values, some of which arise from simulation studies that do not generalize to the user’s model subspace. Despite decades of warnings about improper use and the general inadequacy of relying solely on quantitative evidence of validity, quantitative methods continue to dominate the validation literature. In this paper, I argue that this over-reliance on quantitative evidence may have inadvertently led psychologists to design survey instruments to fit psychometric models rather than psychometric theories. It may also have encouraged the structure of most psychological properties to be conceived of as continuous, and made unidimensionality and conditional independence seem like desirable properties of latent variables and their corresponding survey items. Further, the quantitative imperative in publishing may have produced a dearth of qualitative evidence, such as testing items for interpretability and meaning in the population of interest. I encourage researchers to revisit the evidence that has been presented in support of instrument validation and use, and to allow theory to guide psychological measurement.
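To make the cutoff-value critique concrete, here is a minimal sketch of the kind of quantitative evidence the abstract describes: computing Cronbach’s alpha for a set of survey items and checking it against the conventional (and arguably arbitrary) “alpha ≥ .70” rule. The data below are simulated purely for illustration; the function and threshold are assumptions standing in for common practice, not part of the paper itself.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item Likert-style data driven by one latent factor (illustration only)
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.rint(3 + latent + rng.normal(scale=1.0, size=(200, 5))), 1, 5)

alpha = cronbach_alpha(scores)
# The .70 cutoff is the widely cited convention, not a model-derived criterion
print(f"alpha = {alpha:.2f}; 'acceptable' by the alpha >= .70 rule: {alpha >= .70}")
```

Note that nothing in this computation tells us whether the items are interpretable or meaningful to respondents, which is precisely the kind of qualitative evidence the paper argues is missing from validation practice.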