AI and humans bring complementary skills to group deliberations. Modeling this kind of group decision making is especially challenging when the deliberations involve an element of risk and an exploration-exploitation process of appraising the capabilities of the human and AI agents. To investigate these dynamics, we presented a sequence of intellective issues to a set of human groups aided by imperfect AI agents. A group’s goal was to appraise the relative expertise of the group’s members and its available AI agents, evaluate the risks associated with different actions, and maximize the overall reward by reaching consensus. We propose and empirically validate models of human-AI team decision making under such uncertain circumstances, and show the value of the socio-cognitive constructs of prospect theory, influence dynamics, and Bayesian learning in predicting the behavior of human-AI groups.
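To give a flavor of the Bayesian-learning component, here is a minimal, hypothetical sketch of how a group might track an agent's expertise with a Beta-Bernoulli model, updating its belief after observing whether the agent answered each intellective issue correctly. The class and parameter names are illustrative assumptions, not the paper's actual model.

```python
class ExpertiseBelief:
    """Illustrative Beta(alpha, beta) belief over an agent's accuracy."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Uniform Beta(1, 1) prior over the agent's probability of being correct.
        self.alpha = alpha
        self.beta = beta

    def update(self, correct: bool) -> None:
        # Standard Beta-Bernoulli conjugate update after one observed answer.
        if correct:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        # Posterior mean estimate of the agent's accuracy.
        return self.alpha / (self.alpha + self.beta)


# Example: an AI agent that answers 7 of 10 issues correctly.
belief = ExpertiseBelief()
for correct in [True, True, False, True, True, False, True, True, False, True]:
    belief.update(correct)
print(round(belief.mean(), 3))  # → 0.667  (posterior mean = 8/12)
```

In a richer model, these posterior means could feed into the influence-dynamics and prospect-theory components, e.g. weighting each agent's opinion by its estimated accuracy when the group forms consensus.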