In this paper, we develop a network of Bayesian agents that collectively model a team's mental states from the team's observed communication. We make two contributions. First, we show that our agents could generate interventions that improve the collective intelligence of a human-AI team beyond what humans alone would achieve. Second, we use a generative computational approach to cognition to develop a real-time measure of humans' theory-of-mind ability and to test theories about human cognition. We use data collected from an online experiment in which 145 individuals in 29 human-only teams of five communicate through a chat-based system to solve a cognitive task. We find that humans (a) struggle to fully integrate information from teammates into their decisions, especially when they occupy central network positions, and (b) have cognitive biases that lead them to underweight certain useful, but ambiguous, information. Our theory-of-mind ability measure predicts both individual- and team-level performance. Observing a team's first 25% of messages explains about 8% of the variation in final team performance, a 270% improvement over the current state of the art.
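To make the core idea concrete, here is a minimal sketch (not the paper's actual model) of the kind of inference such a Bayesian observer performs: it maintains a belief over a teammate's hidden state and updates that belief from an observed message via Bayes' rule. The two-state setup, the state names, and the likelihood values are all illustrative assumptions.

```python
# Minimal illustrative sketch of Bayesian belief updating over a
# teammate's hidden state; not the paper's model.

def bayes_update(prior, likelihoods):
    """Posterior over hidden states given per-state likelihoods of an observation."""
    unnorm = {s: prior[s] * likelihoods[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical: the teammate privately believes the answer is "A" or "B".
prior = {"A": 0.5, "B": 0.5}

# Hypothetical likelihood of the observed chat message under each state.
likelihoods = {"A": 0.8, "B": 0.3}

posterior = bayes_update(prior, likelihoods)
print(posterior)  # belief shifts toward "A"
```

In the paper's setting, each agent would run updates like this over the mental states of every teammate as messages arrive, which is what allows a team-level theory-of-mind measure to be computed in real time.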