Article

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad, Sydney Levine, Max Kleiman-Weiner, Sohan Dsouza, Joshua Tenenbaum, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan

Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human-machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

Published in

Nature Human Behaviour, vol. 4, no. 2, February 2020, pp. 134–143