This work proposes a framework that incorporates trust into an ad hoc teamwork scenario with human-agent teams, where an agent must collaborate with a human to perform a task. During the task, the agent must infer, through interactions and observations, how much the human trusts it and adapt its behaviour to maximize the team's performance. To achieve this, we propose collecting data from human participants in experiments to define different settings based on trust levels and learning an optimal policy for each of them. We then create a module that infers the current setting from the inferred level of trust. Finally, we validate this framework in a real-world scenario and analyse how this adaptive behaviour affects trust.
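The abstract describes a pipeline of pre-learned per-setting policies plus an online module that infers which trust setting the human is currently in. The sketch below illustrates that pattern under stated assumptions: it assumes discrete trust levels, a toy observation model, and a Bayesian-style belief update, and every name (`TrustInferenceModule`, the trust levels, the likelihood table, the stand-in policies) is hypothetical rather than taken from the paper.

```python
# Minimal sketch of the policy-selection idea in the abstract: pre-trained
# policies for a few discrete trust settings, plus a module that maintains a
# belief over which setting the human is in and picks the matching policy.
# Trust levels, observations, and likelihoods here are illustrative only.

from typing import Callable, Dict

TRUST_LEVELS = ["low", "medium", "high"]  # assumed discrete trust settings

# Stand-in policies mapping an observation to an action. In the described
# framework these would be learned offline from human-participant data.
POLICIES: Dict[str, Callable[[str], str]] = {
    "low": lambda obs: "explain_and_ask_confirmation",
    "medium": lambda obs: "propose_action",
    "high": lambda obs: "act_autonomously",
}

# Assumed likelihoods P(observation | trust level) for a toy observation set.
LIKELIHOOD: Dict[str, Dict[str, float]] = {
    "accepts_suggestion": {"low": 0.2, "medium": 0.5, "high": 0.8},
    "overrides_agent": {"low": 0.7, "medium": 0.4, "high": 0.1},
}


class TrustInferenceModule:
    """Keeps a belief over trust settings and returns the matching policy."""

    def __init__(self) -> None:
        # Start from a uniform belief over the assumed trust levels.
        self.belief = {level: 1.0 / len(TRUST_LEVELS) for level in TRUST_LEVELS}

    def update(self, observation: str) -> None:
        """Bayesian-style belief update given one observed behaviour cue."""
        likelihood = LIKELIHOOD[observation]
        unnormalised = {lvl: self.belief[lvl] * likelihood[lvl] for lvl in TRUST_LEVELS}
        total = sum(unnormalised.values())
        self.belief = {lvl: p / total for lvl, p in unnormalised.items()}

    def current_policy(self) -> Callable[[str], str]:
        """Select the pre-learned policy for the most likely trust setting."""
        setting = max(self.belief, key=self.belief.get)
        return POLICIES[setting]


if __name__ == "__main__":
    module = TrustInferenceModule()
    # Simulated interaction trace: repeated overrides should shift the belief
    # toward the low-trust setting and thus toward a more cautious policy.
    for cue in ["overrides_agent", "overrides_agent", "accepts_suggestion"]:
        module.update(cue)
    print(module.belief)
    print(module.current_policy()("current_task_state"))
```

A POMDP-style belief over trust types is one plausible reading of "a module to infer the current setting"; the paper's actual inference mechanism may differ.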