AI and humans bring complementary skills to group deliberations. Modeling this group decision-making is especially challenging when the deliberations involve an element of risk and an exploration-exploitation process of appraising the capabilities of the human and AI agents. To investigate this challenge, we presented a sequence of intellective issues to a set of human groups aided by imperfect AI agents. Each group's goal was to appraise the relative expertise of its members and its available AI agents, evaluate the risks associated with different actions, and maximize the overall reward by reaching consensus. We propose and empirically validate models of human-AI team decision-making under such uncertain circumstances, and show the value of the socio-cognitive constructs of prospect theory, influence dynamics, and Bayesian learning in predicting the behavior of human-AI groups.
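The Bayesian-learning component the abstract mentions can be illustrated with a minimal sketch: a group maintaining a belief about an AI agent's reliability and updating it after each observed answer. This is a generic Beta-Bernoulli update, not the paper's actual model; the function names and the sequence of outcomes are illustrative assumptions.

```python
def update_belief(prior, correct):
    # Beta-Bernoulli posterior update: `prior` is a pair of
    # (alpha, beta) pseudo-counts over the agent's accuracy.
    a, b = prior
    return (a + 1, b) if correct else (a, b + 1)

def expected_accuracy(belief):
    # Posterior mean of a Beta(alpha, beta) distribution.
    a, b = belief
    return a / (a + b)

# Start from a uniform Beta(1, 1) belief about a (hypothetical) AI agent,
# then observe it answer four intellective issues: three right, one wrong.
belief = (1, 1)
for outcome in [True, True, False, True]:
    belief = update_belief(belief, outcome)

print(expected_accuracy(belief))  # posterior mean accuracy: 4/6
```

A richer model along the lines the abstract describes would combine such appraisals across all human and AI members with influence dynamics and prospect-theoretic risk weighting before the group commits to a consensus answer.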