Article

Why do people judge humans differently from machines? The role of agency and experience

Jingling Zhang, Jane Conway, and César Hidalgo

Abstract

People are known to judge artificial intelligence using a utilitarian moral philosophy and humans using a moral philosophy emphasizing perceived intentions. But why do people judge humans and machines differently? Psychology suggests that people may hold different mind perception models for humans and machines, and thus may treat human-like robots more similarly to the way they treat humans. Here we present a randomized experiment in which we manipulated people's perception of machines to explore whether people judge more human-like machines more similarly to the way they judge humans. We find that people's judgments of machines become more similar to those of humans when they perceive machines as having more agency (e.g., the ability to plan and act), but not more experience (e.g., the ability to feel). Our findings indicate that people's use of different moral philosophies to judge humans and machines can be explained by a progression of mind perception models in which the perception of agency plays a prominent role. These findings add to the body of evidence suggesting that people's judgments of machines can become more similar to their judgments of humans, motivating further work on differences in the judgment of human and machine actions.

Published in

Proceedings of the 14th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2023), 2023