May 10, 2022, 12:45–13:45
Room Auditorium 4
Recent years have seen substantial concern about the harmful potential of social bots in online social media. While fears exist about their ability to propagate falsehoods that distort public opinion formation and disrupt democratic decision-making, the empirical evidence is mixed. There appears to be a mismatch between macro-level patterns and micro-level behavior: alarmingly large numbers of bots seem able to spread content quickly through entire platforms, yet individual bots' ability to contact human users tends to be very limited. In this talk, I turn to social influence models to show how they can help study the dissemination of information tied to beliefs. I give a brief introduction to social influence models and their applications, and present a theoretical model that resolves this puzzling mismatch in bot effectiveness through indirect influence. The model implies that bots with limited direct impact on humans may be more, not less, effective at spreading their views through the social network, because their direct contacts keep exerting influence on users whom the bot does not reach directly. Highly active and well-connected bots, in contrast, may have a strong impact on their direct contacts, but these contacts grow too dissimilar from their network neighbors to spread the bot's content further. To demonstrate this effect, we included stubborn bot agents in Axelrod's seminal model of the dissemination of culture and conducted simulation experiments demonstrating the strength of weak bots. A series of analyses shows that the finding is robust, in particular when the model is tailored to the context of online social networks. Finally, I reflect on the model's implications and present an outlook for future work in this area.
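To make the modeling idea concrete, the following is a minimal sketch of Axelrod's culture-dissemination dynamics with one stubborn bot agent added. It is not the authors' implementation: the ring network, the parameter values (N, F, Q), and the bot's fixed culture vector are all illustrative assumptions. In the standard Axelrod update, two neighbors interact with probability equal to their cultural similarity, after which one copies a differing trait from the other; the stubborn bot influences its neighbors but never updates its own vector.

```python
import random

F, Q = 5, 10   # features per agent, traits per feature (assumed values)
N = 20         # agents on a ring network, for simplicity (assumption)

random.seed(1)
culture = [[random.randrange(Q) for _ in range(F)] for _ in range(N)]
BOT = 0                  # agent 0 acts as the stubborn bot
culture[BOT] = [0] * F   # the bot's fixed "message" (illustrative)

def similarity(i, j):
    """Fraction of features on which agents i and j agree."""
    return sum(x == y for x, y in zip(culture[i], culture[j])) / F

for step in range(200_000):
    i = random.randrange(N)
    j = random.choice([(i - 1) % N, (i + 1) % N])  # a random ring neighbor
    if i == BOT:
        continue  # the stubborn bot never adopts traits from others
    sim = similarity(i, j)
    if 0 < sim < 1 and random.random() < sim:
        # standard Axelrod step: copy one feature where the two differ
        f = random.choice([f for f in range(F)
                           if culture[i][f] != culture[j][f]])
        culture[i][f] = culture[j][f]

adopters = sum(culture[i] == culture[BOT] for i in range(N))
print(f"{adopters} of {N} agents ended up with the bot's culture vector")
```

Varying how often the bot is chosen as an influence source (its activity) and how many neighbors it has would let one probe the talk's central claim: a weakly connected bot can seed traits that human neighbors then propagate onward, while an overwhelming bot renders its contacts too dissimilar from their own neighbors to pass the content along.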