The difficulty of conducting research with human subjects often results in limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a baseline for interpreting effect size estimates, in turn producing more stringent thresholds for hypothesis testing and statistical power calculations. An examination of recent meta-analyses in psychology, neuroscience, and medicine confirms that, even if all of the considered effects are real, results involving small effects are indeed indistinguishable from random conclusions.
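To make the idea of random conclusions concrete, here is a minimal simulation sketch (not from the paper itself): it estimates how often a two-group study with a standardized effect size d and n participants per group recovers the correct direction of the treatment effect. The specific effect sizes and sample sizes are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): with a small true effect
# and a small sample, the estimated direction of a two-group treatment effect
# is nearly as unpredictable as a coin flip.
import numpy as np

rng = np.random.default_rng(0)

def sign_accuracy(d, n_per_group, n_sims=20_000):
    """Fraction of simulated studies whose estimated effect has the correct sign."""
    treat = rng.normal(loc=d, scale=1.0, size=(n_sims, n_per_group))
    control = rng.normal(loc=0.0, scale=1.0, size=(n_sims, n_per_group))
    estimates = treat.mean(axis=1) - control.mean(axis=1)
    return np.mean(estimates > 0)

for d in (0.05, 0.2, 0.5):
    for n in (10, 50):
        print(f"d={d:>4}, n={n:>3}: correct direction in "
              f"{sign_accuracy(d, n):.2f} of studies (coin flip = 0.50)")
```

With a very small effect (d = 0.05) and 10 participants per group, the estimated direction is correct only slightly more than half the time, which is the sense in which such results approach random conclusions.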