Nudging is a behavioral strategy aimed at influencing people's thoughts and actions. Nudging techniques appear in many situations in our daily lives, and they can be targeted either at fast, unconscious human thinking (e.g., by using images to generate fear) or at the more careful and effortful slow thinking (e.g., by releasing information that makes us reflect on our choices). In this paper, we propose and discuss a value-based AI-human collaborative framework in which AI systems nudge humans by proposing decision recommendations. Three nudging modalities, distinguished by when recommendations are presented to the human, are intended to stimulate human fast thinking, slow thinking, or meta-cognition. Values relevant to a specific decision scenario determine when and how each modality is used; examples of values are decision quality, speed, human upskilling and learning, human agency, and privacy. Several values can be present at the same time, and their priorities can vary over time. The framework treats values as parameters to be instantiated in a specific decision environment.
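The abstract's core idea, values as parameters whose current priorities determine the nudging modality, can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the value names, weights, and selection rule below are all assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: value names, weights, and the selection
# rule are assumptions, not taken from the paper.

@dataclass
class DecisionEnvironment:
    # value name -> priority weight (higher = more important right now);
    # priorities can be re-instantiated as they vary over time
    value_priorities: dict = field(default_factory=dict)

def choose_modality(env: DecisionEnvironment) -> str:
    """Pick a nudging modality from the current value priorities.

    Illustrative rule: if speed dominates, present the AI recommendation
    immediately (stimulating fast thinking); if upskilling/learning
    dominates, withhold it until the human has decided (stimulating slow
    thinking); otherwise present the human and AI choices side by side
    for comparison (stimulating meta-cognition).
    """
    p = env.value_priorities
    speed = p.get("speed", 0.0)
    learning = p.get("upskilling", 0.0) + p.get("learning", 0.0)
    agency = p.get("agency", 0.0)
    if speed >= max(learning, agency):
        return "fast_thinking"
    if learning > agency:
        return "slow_thinking"
    return "meta_cognition"

env = DecisionEnvironment({"speed": 0.2, "upskilling": 0.7, "agency": 0.4})
print(choose_modality(env))  # -> slow_thinking
```

In a real instantiation the selection rule would presumably be richer (e.g., also weighing decision quality and privacy), but the parameter-driven shape, priorities in, modality out, is what the framework describes.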