Whereas current norms around the framing of engineering problems tend to ignore the contexts and communities into which algorithms will be integrated as “solutions,” the “traps” approach argues that negative outcomes await developers who fail to treat context as a key consideration in ML development. Selbst et al. identify five traps. In the Framing Trap, the authors argue that ML solutions can cause harm when the humans and communities who will use ML tools are not adequately considered in framing the ML solution. In the Portability Trap, the authors point out that in the pursuit of scaled-up solutions, ML systems are often deployed in contexts for which they were not designed, and that this mismatch may produce unanticipated negative outcomes. In the Formalism Trap, the authors observe that some concepts, such as fairness, are highly contingent social ideas that resist reduction to formal mathematical definitions, yet such reductions occur all too often, to the detriment of impacted communities. In the Ripple Effect Trap, the authors recognize that ML systems, like any significant technological tool, are not situated in a vacuum, and that their implementation can provoke changes in a context that ripple outward in surprising ways. Finally, in the Solutionism Trap, perhaps the most endemic in ML development, the authors argue that a technological tool may not be the best solution to a problem at all. Qualitative analysis methods can help address all of these traps, in particular by prompting model developers to pay greater attention to the context, social conditions, and human beings who will eventually use and be impacted by algorithmic systems when selecting variables, data, and model architecture.
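The Formalism Trap can be made concrete with a toy example. "Demographic parity" is one common mathematical reduction of fairness: the rate of positive predictions should be equal across groups. The sketch below (an illustration, not code from Selbst et al.; the function name and data are hypothetical) shows how easily such a metric is computed, and thereby how tempting the reduction is, even though a zero gap says nothing about the social meaning of the predictions in context.

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between group 0 and group 1.

    A gap of 0 satisfies "demographic parity" -- one narrow formalization
    of fairness, which ignores context, base rates, and downstream harms.
    """
    def positive_rate(g):
        members = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(members) / len(members)

    return abs(positive_rate(0) - positive_rate(1))

# Hypothetical binary predictions for eight individuals in two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # 0.5: group 0 at 0.75 vs group 1 at 0.25
```

That the entire "fairness" judgment collapses into one subtraction is precisely the reduction the Formalism Trap warns about: the number is easy to optimize, while the contested social question it stands in for is not.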
- Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents - December 1, 2023
- AI-Augmented Surveys: Leveraging Large Language Models and Surveys for Opinion Prediction [imputation] - November 29, 2023
- Enhancing Human Persuasion With Large Language Models - November 29, 2023