The growing integration of AI tools into communication promises enhanced efficiency and productivity. However, concerns about the erosion of trust and authenticity have fueled debates over the disclosure and regulation of AI-mediated communication. It remains unclear (a) how AI mediation influences trust, (b) whether the efficiency gains outweigh potential compromises in trust, and (c) how proposed transparency policies would affect these trade-offs. To address these questions, we conducted two online studies (total N = 1,637). We adapted the well-established incentivized trust game to measure objective, behavioral trust, and we additionally collected subjective, self-reported trust ratings. Before each game, some participants could write a trust-eliciting message with or without an AI assistant. We also systematically varied the actual and expected disclosure of AI assistance. Results reveal that AI assistance had little to no effect on trust, even when AI use was transparent. However, participants using AI assistance composed their messages more quickly, achieving higher returns of trust per unit of writing time invested. Importantly, this efficiency boost was not suppressed by expected or actual disclosure of AI involvement. Contrary to prevailing concerns, these results suggest that AI-mediated communication could enhance efficiency and facilitate trust-building rather than undermine it.
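For readers unfamiliar with the incentivized trust game the abstract mentions, the payoff structure can be sketched as follows. This is a minimal sketch of the standard trust game design; the specific endowment and multiplier values below are illustrative assumptions, not parameters reported by the study.

```python
def trust_game(endowment, sent, multiplier, returned):
    """Compute final payoffs for the trustor and trustee in a trust game.

    sent     -- amount the trustor transfers (the behavioral trust measure)
    returned -- amount the trustee sends back (reciprocity)
    The transferred amount is multiplied in transit, so trusting can
    enlarge the total pie, but only reciprocity makes trusting pay off.
    """
    assert 0 <= sent <= endowment
    multiplied = sent * multiplier
    assert 0 <= returned <= multiplied
    trustor_payoff = endowment - sent + returned
    trustee_payoff = multiplied - returned
    return trustor_payoff, trustee_payoff

# Illustration with assumed values: a 10-point endowment and the common
# 3x multiplier. The trustor sends 6; the trustee returns half of the 18.
print(trust_game(10, 6, 3, 9))  # -> (13, 9)
```

The amount sent serves as the objective trust measure: sending more is only rational if the trustor expects reciprocation, which is why the game is a standard behavioral complement to self-reported trust ratings.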
Experimental Evidence for Efficiency Gains on Trust via AI-Mediated Communication (Ryan Watkins, November 28, 2024)