The problem with over-relying on quantitative evidence of validity

posted in: reading
When presenting evidence of validity, social scientists tend to rely on popular quantitative methods such as reliability estimation, regression, and confirmatory factor analysis. The quality of the model output is often judged against fairly arbitrary cutoff values, some of which arise from simulation studies that do not generalize to the user’s model subspace. Despite decades of warnings about improper use and the general inadequacy of relying solely on quantitative evidence of validity, quantitative methods continue to dominate the validation literature. In this paper, I argue that this over-reliance on quantitative evidence may have inadvertently led psychologists to design survey instruments to fit psychometric models rather than psychometric theories. As a result, the structure of most psychological properties may have come to be treated as continuous, and unidimensionality and conditional independence may have come to be seen as desirable properties of latent variables and their corresponding survey items. Further, the quantitative imperative in publishing may have resulted in a dearth of qualitative evidence, such as testing items for interpretability and meaning in the population of interest. I encourage researchers to revisit the evidence that has been presented in support of instrument validation and use, and to let theory guide psychological measurement.
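The cutoff-driven workflow the abstract critiques often reduces to something like the sketch below: a fitted confirmatory factor analysis is summarized by a few fit indices, which are then compared against conventional thresholds (here the widely cited Hu and Bentler benchmarks of CFI ≥ .95, RMSEA ≤ .06, SRMR ≤ .08). This is only an illustrative sketch, not anything from the paper itself; the function name and the example index values are hypothetical.

```python
# Minimal sketch of a cutoff-based "validity check" of the kind the paper critiques.
# The fit-index values below are made up; the thresholds are the commonly cited
# Hu & Bentler (1999) benchmarks, which arise from simulation conditions that may
# not generalize to a given user's model.

CUTOFFS = {
    "CFI":   ("min", 0.95),   # comparative fit index: higher is better
    "RMSEA": ("max", 0.06),   # root mean square error of approximation: lower is better
    "SRMR":  ("max", 0.08),   # standardized root mean square residual: lower is better
}

def passes_cutoffs(fit_indices: dict[str, float]) -> dict[str, bool]:
    """Return a pass/fail verdict for each fit index against its conventional cutoff."""
    verdicts = {}
    for name, (direction, threshold) in CUTOFFS.items():
        value = fit_indices[name]
        verdicts[name] = value >= threshold if direction == "min" else value <= threshold
    return verdicts

if __name__ == "__main__":
    # Hypothetical fit indices from a confirmatory factor analysis of a survey instrument.
    fit = {"CFI": 0.951, "RMSEA": 0.059, "SRMR": 0.071}
    verdicts = passes_cutoffs(fit)
    print(verdicts)                                  # {'CFI': True, 'RMSEA': True, 'SRMR': True}
    print("instrument 'validated':", all(verdicts.values()))
```

A model that clears these thresholds by a hair is treated as validated, while one that misses by the same margin is not; that mechanical pass/fail step is the arbitrariness the abstract points to.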

Ryan Watkins