State-of-the-art Artificial Intelligence (AI) techniques have reached an impressive complexity. Consequently, researchers are finding more and more ways to apply them in real-world settings. However, the complexity of such systems calls for methods that make them transparent to the human user. The AI community has sought to address this problem through the field of Explainable AI (XAI), which attempts to make AI algorithms less opaque. In recent years, however, it has become clear that XAI is much more than a computer science problem: because it is fundamentally about communication, XAI is also a Human-Agent Interaction problem. Moreover, AI has moved out of the laboratory and into everyday life, which creates a need for XAI solutions tailored to non-expert users. Hence, we propose a user-centred framework for XAI that focuses on its social-interactive aspect, drawing on theories and findings from the cognitive and social sciences. The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.