The difficulty of conducting research with human subjects often entails limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a baseline for interpreting effect size estimates, in turn producing more stringent thresholds for hypothesis testing and statistical power calculations. An examination of recent meta-analyses in psychology, neuroscience, and medicine confirms that, even if all considered effects are real, results involving small effects are indeed indistinguishable from random conclusions.
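To make the core idea concrete, here is a minimal, hypothetical simulation sketch (not taken from the paper): when the true effect is small and the per-group sample is small, the estimated *direction* of the treatment effect is only modestly better than a coin flip. The effect size, group size, and number of simulated studies below are illustrative assumptions, not values from the authors' analysis.

```python
# Sketch: how often does a small-sample study even get the sign of a small effect right?
# Assumed values (illustrative only): true effect d = 0.1, n = 20 per group, 10,000 studies.
import numpy as np

rng = np.random.default_rng(0)
d_true, n, n_studies = 0.1, 20, 10_000

treatment = rng.normal(d_true, 1.0, size=(n_studies, n))
control = rng.normal(0.0, 1.0, size=(n_studies, n))
estimated_effect = treatment.mean(axis=1) - control.mean(axis=1)

# Proportion of simulated studies whose estimated effect points in the correct
# (positive) direction. A coin flip would give 0.5; values near 0.5 mean the
# study's conclusion about direction is close to indistinguishable from chance.
print(f"Correct sign: {np.mean(estimated_effect > 0):.3f}")
```

Under these assumed parameters the correct sign is recovered in only roughly 60% of simulated studies, which illustrates why the authors use "random conclusions" as a baseline for interpreting small-effect results.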