State-of-the-art Artificial Intelligence (AI) techniques have reached an impressive complexity. Consequently, researchers are discovering more and more ways to apply them in real-world settings. However, the complexity of such systems requires methods that make them transparent to the human user. The AI community is addressing this problem through the field of Explainable AI (XAI), which aims to make AI algorithms less opaque. In recent years, however, it has become clear that XAI is much more than a computer science problem: because it is fundamentally about communication, XAI is also a Human-Agent Interaction problem. Moreover, AI has moved out of the laboratory and into everyday life, which implies the need for XAI solutions tailored to non-expert users. Hence, we propose a user-centred framework for XAI that focuses on its social-interactive aspect, drawing inspiration from theories and findings in the cognitive and social sciences. The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.