In 2021, the Johns Hopkins University Applied Physics Laboratory held an internal challenge to develop artificially intelligent (AI) agents that could excel at the collaborative card game Hanabi. Agents were evaluated on their ability to play with human players whom the agents had never previously encountered. This study details the development of the agent that won the challenge by achieving a human-play average score of 16.5, outperforming the current state of the art for human-bot Hanabi scores. The winning agent was developed by observing and accurately modeling the author's decision making in Hanabi, then training with a behavioral clone of the author. Notably, the agent discovered a human-complementary play style by first mimicking human decision making, then exploring variations to the human-like strategy that led to higher simulated human-bot scores. This work examines in detail the design and implementation of this human-compatible Hanabi teammate, as well as the existence and implications of human-complementary strategies and how they may be explored for more successful applications of AI in human-machine teams.
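The behavioral-cloning step mentioned above can be illustrated with a deliberately minimal sketch: supervised learning on logged (state, action) pairs, here reduced to a per-state majority vote. This is a toy illustration, not the paper's actual method, and the state and action names are hypothetical.

```python
from collections import Counter, defaultdict

def fit_behavioral_clone(demonstrations):
    """Fit a trivial behavioral clone: for each observed (abstracted) state,
    predict the action the human demonstrator chose most often there."""
    counts = defaultdict(Counter)
    for state, action in demonstrations:
        counts[state][action] += 1
    # Majority-vote policy: state -> most frequent demonstrated action.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Hypothetical demonstration log of (abstracted game state, human action) pairs.
demos = [
    ("teammate_holds_playable_1", "hint_rank_1"),
    ("teammate_holds_playable_1", "hint_rank_1"),
    ("teammate_holds_playable_1", "discard_oldest"),
    ("no_safe_play", "discard_oldest"),
]

policy = fit_behavioral_clone(demos)
print(policy["teammate_holds_playable_1"])  # -> hint_rank_1
```

In practice the clone would be a learned function (e.g. a neural network) over rich game observations; the agent described above then explored deviations from this human-like policy that raised scores when simulated against the clone.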