Trusting artificial intelligence at work
What influences a worker’s resistance to machine-generated advice?
What is this research about?
This project examines human-machine interactions to improve our understanding of the risks emerging as ‘thinking machines’ are introduced into our workplaces.
We will explore which factors workers report as influencing their acceptance (or rejection) of machine-generated advice, and which types of tasks and workplaces are likely to elicit these responses.
We aim to understand the relationship between users’ attitudes towards advanced technology concepts (e.g., artificial intelligence, machine learning, robots) and their willingness to accept or reject machine-generated advice.
What will the researchers do?
The project will identify and propose ways to address the psychological barriers to accepting advice from 'thinking machines' in the workplace. The research will occur in three phases:
Conduct interviews with workers who are currently operating (or are expected to operate) in settings with significant use of automated decision technologies. We aim to capture a snapshot of the firsthand perceptions of people currently working with ‘thinking machines’ in real-world settings.
Build a simulation of interactions with automated decision technology and conduct behavioural experiments with workers in a controlled environment.
Establish a set of design principles to guide both the developers who build these systems and the workplaces that introduce them, supporting safe and effective human-machine interactions.
Research partners and stakeholders
Charles Sturt University
Project commenced: June 2020
Project completion: Late 2021
Want to know more?
To work with the Centre, or stay up to date with our research, head to our Engage with us page.