As a technical sub-field of artificial intelligence (AI), explainable AI (XAI) has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace interdisciplinary perspectives and human-centered approaches. As researchers and practitioners begin to leverage XAI algorithms to build XAI applications, explainability has moved beyond a demand by data scientists or researchers to comprehend the models they are developing, and has become an essential requirement for people to trust and adopt AI deployed in numerous domains. Human-computer interaction (HCI) research and user experience (UX) design in this area are therefore increasingly important. In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey recent HCI work that takes human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI. We ask the question "what are human-centered approaches doing for XAI" and highlight three roles that they should play in shaping XAI technologies: to drive technical choices by understanding users' explainability needs, to uncover pitfalls of existing XAI methods through empirical studies and inform new methods, and to provide conceptual frameworks for human-compatible XAI.