Understanding the recommendations of an artificially intelligent (AI) assistant is especially important in high-risk decision-making tasks, such as judging whether a mushroom is edible or poisonous. To foster user understanding and appropriate trust in such systems, we tested the effects of explainable artificial intelligence (XAI) methods and an educational intervention on AI-assisted decision-making behavior in a 2×2 between-subjects online experiment with N = 410 participants. We developed a novel use case in which users go on a virtual mushroom hunt and are tasked with picking only edible mushrooms while leaving poisonous ones behind. Users were provided with an AI-based app that shows classification results for mushroom images. For the explainability manipulation, one subgroup additionally received attribution-based and example-based explanations of the AI's predictions; for the educational intervention, one subgroup received additional information on how the AI worked. We found that the group with explanations outperformed the group without explanations and showed more appropriate trust levels. Contrary to our expectations, we found no effects of the educational intervention, domain-specific knowledge, or AI knowledge on performance. We discuss practical implications and introduce the mushroom-picking task as a promising use case for XAI research.