Recently we have seen a rising number of methods in the field of eXplainable Artificial Intelligence (XAI). To our surprise, their development is driven by model developers rather than by studies of the needs of human end users. The analysis of needs, when done at all, takes the form of an A/B test rather than a study of open-ended questions. To answer the question "What would a human operator like to ask the ML model?" we propose a conversational system that explains the decisions of a predictive model. In this experiment, we developed a chatbot called dr_ant to talk about a machine learning model trained to predict survival odds on the Titanic. Users can talk with dr_ant about different aspects of the model to understand the rationale behind its predictions. Having collected a corpus of 1,000+ dialogues, we analyse the most common types of questions that users would like to ask. To our knowledge, this is the first study that uses a conversational system to collect the needs of human operators through interactive and iterative dialogue exploration of a predictive model.