Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. More than a century of psychological research on measuring psychological traits and predicting human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and by discipline, from which to consider relevant issues: (a) individual attitudes; (b) legality, ethicality, and morality; and (c) embedded meanings within technical domains. Using these lenses, we next present auditing as a standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives. We present 12 crucial components of such audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs; (b) components related to how information about models and their applications is presented, discussed, and understood from the perspectives of those employing the model, those affected by decisions made using its predictions, and third-party observers; and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of the individual evidence used to support all model developer claims.
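To make the model-output component of such an audit concrete, the sketch below (ours, not the paper's) computes the adverse impact ratio, a selection-rate comparison long used in personnel selection research under the "four-fifths rule"; the function names and example data are hypothetical, and a real audit would draw on many such checks rather than any single statistic.

```python
# Minimal illustrative sketch of one quantitative check an auditor might run on
# a model's decisions: the adverse impact ratio (the "four-fifths rule").
# All names and the example data below are hypothetical.

def selection_rate(decisions):
    """Fraction of positive (e.g., 'select') decisions in a group."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.

    Values below 0.8 are conventionally flagged for further review
    under the four-fifths rule.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else float("nan")

# Hypothetical model decisions (1 = selected, 0 = not selected) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.7
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # selection rate 0.4

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.57 -> below 0.8, flag for review
```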