State-of-the-art Artificial Intelligence (AI) techniques have reached an impressive complexity. Consequently, researchers are discovering more and more ways to apply them to real-world problems. However, the complexity of such systems requires methods that make them transparent to the human user. The AI community is addressing this problem through the field of Explainable AI (XAI), which attempts to make AI algorithms less opaque. However, in recent years it has become clear that XAI is much more than a computer science problem: since it is about communication, XAI is also a Human-Agent Interaction problem. Moreover, AI has moved out of the laboratory and into real life, which implies the need for XAI solutions tailored to non-expert users. Hence, we propose a user-centred framework for XAI that focuses on its social-interactive aspect, taking inspiration from theories and findings in the cognitive and social sciences. The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.