Working paper

Lie-detection algorithms attract few users but vastly increase accusation rates

Alicia von Schenk, Victor Klockmann, Jean-François Bonnefon, Iyad Rahwan, and Nils Köbis

Abstract

People are not very good at detecting lies, which may explain why they refrain from accusing others of lying, given the social costs that false accusations impose on both the accuser and the accused. Here we consider how this social balance might be disrupted by the availability of lie-detection algorithms powered by Artificial Intelligence (AI). Will people elect to use lie-detection AI that outperforms humans, and if so, will they show less restraint in their accusations? To find out, we built a machine learning classifier whose accuracy (66.86%) was significantly better than human accuracy (46.47%) on the same lie-detection task. We then conducted an incentivized lie-detection experiment (N = 2040) in which we measured participants’ propensity to use the algorithm, as well as the impact of that use on accusation rates and accuracy. Our results reveal that (a) requesting predictions from the lie-detection AI and, especially, (b) receiving AI predictions that accuse others of lying increase accusation rates. Because uptake of the algorithm was low (a 31.76% request rate), we do not observe an improvement in accuracy when the AI prediction becomes available for purchase.

Published in

IAST Working Paper, n. 23-155, June 2023