A more-than-human approach to researching AI at work: Alternative narratives for AI and networked learning

Artificial intelligence (AI) is increasingly manifest in everyday work, learning, and living. Reports attempting to gauge public perception suggest that amidst exaggerated expectations and fears about AI, citizens are sceptical and lack understanding of what AI is and does (Archer et al., 2018). Professional workers practice at the intersection of such public perceptions, the AI industry, and regulatory frameworks. Yet, there is limited understanding of the day-to-day interactions and predicaments between workers, AI systems, and the publics they serve. This includes how these interactions and predicaments generate opportunities for learning and highlight new digital fluencies that are needed. We bring social and computing science perspectives to begin to examine the prevailing AI narratives in professional work and learning practices. Some AIs (such as deep machine learning systems) are so sophisticated that a human-understandable explanation of how they work may not be possible. This raises questions about what professional practitioners are able to know about the AI systems they use: their new digital co-workers. We argue that a co-constitutive human-AI perspective could provide useful insights on questions such as: (1) How are professional expertise and judgment re-distributed as workers negotiate and learn with AI systems? (2) What trust and confidence in new AI-infused work practices is needed or possible, and how is this mediated? (3) What are the implications for professional learning, both learning within work and the workplace and more formal curricula? Given the early stages of this field of inquiry, our aim is to evoke discussion of alternative human-AI narratives suited for the messy—and often unseen—realities of everyday practices.


More publications by the author (several of which sound interesting):

https://www.stir.ac.uk/people/256690#outputs

Ryan Watkins