Surveys around the world show that the public perceives artificial intelligence (AI) as a double-edged sword: as a risk, but also as an opportunity. However, whether and how this ambivalent perception of AI relates to people’s willingness to use it has yet to be investigated. The present research therefore examined how people’s risk and opportunity perception influences their willingness to use AI. Additionally, we examined people’s confidence in their risk and opportunity perception as a possible moderator. To this end, we conducted two online experiments with N = 246 and N = 495 (representative) participants to assess (i) risk and opportunity perception of AI, (ii) confidence in risk and opportunity perception of AI, and (iii) willingness to use AI. As hypothesized, risk-opportunity perception of AI (opportunity perception minus risk perception) correlated positively with the probability of using AI. Contrary to our hypothesis, the strength of this association was not significantly moderated by people’s confidence in their risk and opportunity perception. Exploratory analyses indicated that the probability of using AI also depended on the context of AI use (medicine, transport, media, psychology). This research expands existing behavioral research by including opportunity perception, not solely risk perception, as a predictor of behavior, and by investigating the relationship of risk and opportunity perception to ambiguous rather than solely risk-taking behavior. Additionally, the experimental results motivate the investigation of cause-effect relations as well as further moderating cognitive variables (e.g., AI knowledge), and underline the need for a deeper understanding of the stability of risk and opportunity perception across different contexts of AI use.