Whereas current norms around the framing of engineering problems tend to ignore the contexts and communities into which algorithms will be integrated as "solutions," the "traps" approach argues that negative outcomes await developers who fail to treat context as a key consideration in ML development. Selbst et al. identify five traps. In the Framing Trap, the authors argue that ML solutions can cause harm if the humans and communities who will be using ML tools are not adequately considered as a core part of ML solution framing. In the Portability Trap, the authors point out that in the pursuit of scaled-up solutions, ML systems are often implemented in contexts for which they were not designed, and that this mismatch may create unanticipated negative outcomes. In the Formalism Trap, the authors suggest that some concepts, such as fairness, are highly contingent social ideas that resist reduction to formal mathematical definitions, yet such reductions take place all too often, to the detriment of impacted communities. In the Ripple Effect Trap, the authors recognize that ML solutions, like any significant technological tool, are not situated in a vacuum, and that their implementation can provoke changes in a context that ripple outward in surprising ways. Finally, in the Solutionism Trap, perhaps the most endemic in ML development, the authors argue that a technological tool may not be the best solution to a problem at all. Qualitative analysis methods can assist in addressing all of these traps, for example by prompting model developers, while selecting variables, data, and model architectures, to pay greater attention to the context, social conditions, and human beings who will eventually use and be impacted by algorithmic systems.
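To make the Formalism Trap concrete, consider how "fairness" is often operationalized in practice as a single statistic. The sketch below is illustrative only: the function name, data, and group labels are hypothetical, and demographic parity is just one common formalization. It shows how a contested social concept gets collapsed into one number, exactly the kind of reduction the trap warns can strip away social context.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A single number like this is the formal reduction the Formalism Trap
    warns about: it collapses a contested social concept ("fairness") into
    one statistic, discarding any context the metric cannot see.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_a - rate_b)

# Hypothetical binary predictions for members of two demographic groups.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.5
```

A gap of 0.5 flags a disparity, but the metric alone cannot say whether the disparity is unjust, what caused it, or what remedy the affected community would consider fair; those are precisely the questions qualitative methods are suited to surface.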