Whereas current norms around the framing of engineering problems tend to ignore the contexts and communities into which algorithms will be integrated as “solutions,” the “traps” approach argues that negative outcomes await developers who fail to treat context as a key consideration in ML development. Selbst et al. identify five traps. In the Framing Trap, the authors argue that ML solutions can cause harm if the humans and communities who will use ML tools are not adequately considered when the solution is framed. In the Portability Trap, the authors point out that, in the pursuit of scaled-up solutions, ML systems are often implemented in contexts for which they were not designed, and that this mismatch can create unanticipated negative outcomes. In the Formalism Trap, the authors suggest that some concepts, such as fairness, are highly contingent social ideas that are not easily reduced to formal mathematical definitions; yet such reductions happen all too often, to the detriment of impacted communities. In the Ripple Effect Trap, the authors recognize that ML solutions, like any significant technological tool, do not operate in a vacuum, and that their implementation can provoke changes in a context that ripple outward in surprising ways. Finally, in the Solutionism Trap, perhaps the most endemic in ML development, the authors argue that a technological tool may not be the best solution to a problem at all. Qualitative analysis methods can help address all of these traps, including by helping model developers pay greater attention to the context, social conditions, and human beings who will eventually use and be impacted by algorithmic systems when selecting variables, data, and model architectures.
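The Formalism Trap can be made concrete with a small, hypothetical example (not drawn from the article): two widely used mathematical formalizations of "fairness," demographic parity and equal true-positive rates (one component of equalized odds), can disagree about the very same set of predictions, so choosing one formula quietly embeds a contestable social judgment.

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def tpr_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (one half of equalized odds)."""
    def tpr(g):
        preds = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy data: both groups receive positive predictions at the same rate (0.5),
# but qualified members of group 1 are approved only half as often.
y_true = [1, 1, 0, 0,  1, 1, 1, 1]
y_pred = [1, 1, 0, 0,  1, 1, 0, 0]
group  = [0, 0, 0, 0,  1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))  # 0.0 -> "fair" by one definition
print(tpr_gap(y_true, y_pred, group))         # 0.5 -> "unfair" by another
```

The same predictions are fair under one metric and unfair under the other, illustrating why reducing fairness to a single formula, without attention to the social context, can harm impacted communities.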
Human-centered artificial intelligence (AI) posits that machine learning and AI should be developed and applied in a socially aware way. In this article, we argue that qualitative analysis (QA) can be a valuable tool in this process, supplementing, informing, and extending the possibilities of AI models. We show this by describing how QA can be integrated into the current prediction paradigm of AI, assisting scientists in the process of selecting data, variables, and model architectures. Furthermore, we argue that QA can be part of novel paradigms for human-centered AI. QA can support scientists and practitioners in practical problem solving and situated model development. It can also promote participatory design approaches, reveal understudied and emerging issues in AI systems, and assist policy making.