Does Online Hate Speech Cause Real-Life Violence?

18-Nov-2019 | Jacqueline Lacroix

[Photo: "No Hate" sign]

Jacqueline Lacroix is an analyst and consultant. Most recently, she worked with PeaceTech Lab, managing projects that collected and analyzed data on online hate speech in Libya and Yemen.

Social media has become ubiquitous in present-day society and, like all technological tools and platforms, it reflects both the good and the bad in human nature and decision making. Put another way, social media platforms are neither the pure-intentioned bastions of community-building espoused by their evangelists nor the destroyers of civil engagement and bipartisan discussion. Rather, they amplify and echo what is already present within society. From this perspective, the presence of hate speech and misinformation is unsurprising. What is unique to online conversation is the physical and mental distance, and the anonymity, with which perpetrators of hate speech can operate, as well as the speed and reach with which hateful messages can spread.

As an analyst and consultant with PeaceTech Lab over the past two years, I managed projects collecting data on and analyzing online hate speech in Libya and Yemen. Through this work, I inevitably became occupied with a question that many others in the peacebuilding and conflict prevention field are also deeply concerned about: what is the relationship between online hate speech and offline violence? And, if the two are linked, what can and should be done about it?

Over the past several years, a growing number of anecdotal and data-backed examples have linked online hate speech to offline violence. One of the most compelling comes out of Germany: a study of anti-refugee violence between 2015 and 2017 by two PhD students found a correlation between spikes in anti-migrant Facebook posts and increases in anti-refugee incidents (mainly violent crimes). What makes this study unique is the clear evidence that “when people in places that usually see more anti-refugee attacks have limited access to the platform [Facebook], because of internet outages or service disruptions, the violence sharply decreases.” Karsten Müller and Carlo Schwarz, the study’s authors, have stated that they believe this supports a causal link; other researchers, however, are not as quick to move from correlation to causation. In any case, the study is an excellent methodological starting point for further work on how hateful rhetoric online may influence offline violence in other contexts.
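To make that design concrete, the sketch below (in Python, using invented toy data and column names rather than the study’s actual dataset or code) illustrates the basic outage comparison: if online exposure drives violence, attack counts should fall during internet outages, and the drop should be concentrated in high-Facebook-usage areas.

```python
# Hypothetical sketch of the outage-based comparison behind the
# Müller & Schwarz design. The data, column names, and grouping
# are invented for illustration; this is not the study's code.
import pandas as pd

# Toy weekly panel: one row per municipality-week.
panel = pd.DataFrame({
    "municipality":  ["A", "A", "A", "B", "B", "B"],
    "fb_usage_high": [True, True, True, False, False, False],
    "outage":        [False, True, False, False, True, False],
    "attacks":       [4, 1, 5, 1, 1, 1],
})

# Mean attack counts with vs. without outages, split by Facebook
# usage. A causal story predicts the outage-week drop shows up
# mainly in high-usage municipalities.
summary = (
    panel.groupby(["fb_usage_high", "outage"])["attacks"]
         .mean()
         .unstack("outage")
)
print(summary)
```

In the actual study this comparison is embedded in a regression with many controls; the point here is only the logic of treating outages as quasi-random variation in exposure.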

Another study in a similar vein was published this year in The British Journal of Criminology. Examining online hate speech and hate crimes in London, it found “a general temporal and spatial association between online hate speech targeting race and religion and offline racially and religiously aggravated crimes independent of ‘trigger’ events [emphasis in original].” The study does not posit causation; rather, it argues for the value of social media data in understanding hate crimes as a process. While cautioning about the potential biases of social media data and the frequently changing, contextual nature of online hate speech, the authors suggest that the correlation they identified can be useful for predicting and responding to on-the-ground hate crimes.

Indeed, the team behind the study went on to create an Online Hate Speech Platform for use by UK police, prompted by concerns about increased hate speech around the ongoing Brexit process. Spikes in online hate speech in particular locations can be flagged for local police forces, and the data is reportedly also being used to inform counter-messaging campaigns. A police spokesperson indicated that the platform will be used to identify flashpoints and respond early to reduce tensions. Beyond this, however, it is unclear how the platform will be employed operationally to combat either hate speech or the hate crimes connected to it. The study that prompted the dashboard acknowledged the dangers of, and necessary precautions around, predictive policing, so I am hopeful these precautions are being taken into account as the platform is piloted.
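Purely as an illustration, here is a minimal sketch of the kind of spike-flagging such a dashboard might perform. The rolling window, z-score threshold, and toy counts are my own assumptions, not the Online Hate Speech Platform’s actual method.

```python
# Minimal, hypothetical spike-flagging sketch: flag weeks where the
# hate-speech count jumps well above its recent trailing baseline.
from statistics import mean, stdev

def flag_spikes(weekly_counts, window=8, z_threshold=2.0):
    """Return indices of weeks whose count exceeds the trailing
    mean by more than z_threshold standard deviations."""
    flags = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (weekly_counts[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

# Toy series of weekly hate-speech post counts for one borough.
counts = [12, 10, 14, 11, 13, 12, 10, 11, 12, 31, 13]
print(flag_spikes(counts))  # -> [9]: week 9 is flagged as a spike
```

A real deployment would need to address the data-bias and context caveats the study raises; flagged spikes are prompts for human review, not verdicts.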

Cases out of Sri Lanka, South Sudan, Myanmar, and other locations add further anecdotal evidence to the body of research assessing the correlation between online hate speech, paired in many instances with inflammatory misinformation, and offline violence. Growing awareness of the influence of Facebook, Twitter, and other online platforms on the perceptions and interactions of communities worldwide has drawn attention to the potential negative influence these platforms have on real-world events, and many more people are now asking how online hate speech and offline violence are connected. Much more research is needed on this question, with particular attention to the fact that differing political and social contexts undoubtedly play a role; there is likely no “one-size-fits-all” pattern to be identified.

As a final note, I would like to pose a question regarding the application of such research to practical work in the peacebuilding and conflict prevention space: clearly, correlation doesn’t equal causation, but when it comes to the relationship between online hate speech and offline violence, does it matter? Whether hateful language online directly leads to violence or simply reflects the real-life tensions and conditions that lead to violence, its usefulness as a data source for algorithms and tools that anticipate and predict outbreaks of violence is unaffected.

Where the distinction between correlation and causation does matter is in deciding whether and how to respond to online hate speech. If hate speech itself increases the likelihood of violence, then combating it, whether by identifying and removing it or by engaging in counter-messaging, is worth the effort, since it has the potential to reduce real-world violence. If, on the other hand, online hate speech is merely an indicator of attitudes and tensions and does not itself contribute to the likelihood of violence, then efforts to combat it are, at best, lacking meaningful impact or, at worst, removing evidence that could be used to anticipate and respond to potential violence. As a recent report from Communities Overcoming Extremism: The After Charlottesville Project (COE) notes, removing hateful content or de-platforming users may even backfire: “studies have shown that [de-platformed] individuals can move to other, more lenient online spaces, such as Gab or 8Chan…. Panelists noted that the similarity in users’ views could exacerbate the challenge of echo chambers and radicalize individuals even further. It also makes it harder to track the presence or potential plans of extremists.”

Given the complicated and context-specific nature of online hate speech, it is reasonable to assume that effective efforts to reduce hate and/or prevent violence will require a combination of diverse approaches, involving many cross-sectoral stakeholders and drawing consistently on local insights and expertise. And, to draw on the example of the COE coalition for some optimism in this persistently bleak field: there are many deeply engaged actors from the public and private sectors working to find solutions to these problems, and I believe it is these alliances and collaborative efforts that will have the most significant impact.

Photo courtesy of T. Chick McClure

