Article

Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation

Jean-François Bonnefon, Fatimah Ishowo-Oloko, Zakariyah Soroye, Jacob W. Crandall, Iyad Rahwan and Talal Rahwan

Abstract

Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
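To clarify the task structure mentioned above, here is a minimal sketch of a repeated prisoner's dilemma round loop. The payoff values (temptation 5, mutual cooperation 3, mutual defection 1, exploited cooperator 0), the tit-for-tat bot strategy, and the stylised participant behaviour are illustrative assumptions, not the paper's actual experimental parameters or bot algorithm.

```python
# Illustrative sketch of a repeated prisoner's dilemma between a participant
# and a bot, under two disclosure conditions. Payoffs and strategies are
# assumptions for illustration; the abstract does not specify them.

import random

# Canonical prisoner's dilemma payoffs (temptation > reward > punishment > sucker).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # participant cooperates, bot defects
    ("D", "C"): (5, 0),  # participant defects, bot cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Bot stand-in: cooperate first, then copy the participant's previous move."""
    return "C" if not history else history[-1][0]

def stylised_participant(history, trusts_partner):
    """Hypothetical participant: cooperates less when told the partner is a bot."""
    p_cooperate = 0.8 if trusts_partner else 0.5
    return "C" if random.random() < p_cooperate else "D"

def play_repeated_game(rounds=50, disclosed_as_bot=False):
    history = []        # list of (participant_move, bot_move) pairs
    scores = [0, 0]     # cumulative payoffs: [participant, bot]
    for _ in range(rounds):
        human_move = stylised_participant(history, trusts_partner=not disclosed_as_bot)
        bot_move = tit_for_tat(history)
        p_human, p_bot = PAYOFFS[(human_move, bot_move)]
        scores[0] += p_human
        scores[1] += p_bot
        history.append((human_move, bot_move))
    return scores

# Compare the two disclosure conditions of the experiment's design.
random.seed(0)
print("partner labelled human:", play_repeated_game(disclosed_as_bot=False))
print("partner labelled bot:  ", play_repeated_game(disclosed_as_bot=True))
```

In this toy model, lowering the participant's cooperation rate when the bot is disclosed reproduces, in miniature, the efficiency loss the paper reports: the same cooperative bot earns less joint payoff once its nature is revealed.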

Published in

Nature Machine Intelligence, vol. 1, November 2019, pp. 517–521