Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’



Can Artificial Intelligence Predict The Spread Of Online Hate Speech?

2 July 2021

The rise in online hate speech and the way it is reflected in the offline world is a hot topic in politics right now.

The internet has given everyone a voice, which clearly has positive implications for the way citizens can publicly challenge authority and debate issues. On the other hand, when challenge and debate spill over into attacks on minorities or vulnerable people, there’s obviously a potential for harm.

It’s fairly commonly assumed that this form of hate speech, particularly when encountered alongside other factors such as social deprivation or mental illness, has the potential to radicalise individuals in dangerous ways, and inspire them to commit illegal and violent acts.

Just as terrorist organisations like ISIS can be seen using hate speech in videos and propaganda material intended to incite violence, racist and anti-Islamic material is thought to have inspired killers like Anders Breivik, who killed 69 youths in a 2011 shooting spree, and the perpetrator of the 2019 Christchurch mosque shootings, in which 51 people died.

So far these links between online and real-world actions, though common sense tells us they are likely to exist, have been difficult to prove scientifically. However, a piece of the puzzle fell into place thanks to research carried out by the UN and the Universitat Pompeu Fabra, and co-ordinated by IBM.

IBM principal researcher Kush Varshney tells me, “I think the main message was that this was the first study of its kind looking at the relationship between online and offline behaviours, and most importantly it demonstrates why we should be taking this technical approach to studying that relationship.”

Researchers began by compiling a list of keywords and phrases considered by governmental agencies and NGOs to be indicators of hate speech. These included expressions found in both Islamic-extremist and anti-Islamic posts made on Twitter and Reddit. As the researchers validated that these words and phrases were indeed common by searching across those platforms, they came across other co-occurring terms, which were also added to the list. This list, along with news reports of Islamic terrorism and anti-Islamic violence, was the primary source of data for the investigation.
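The co-occurrence expansion step described above can be sketched in a few lines of Python. This is purely illustrative, assuming a hypothetical seed list and naive tokenisation; the study's actual tooling is not described in the article, and candidate terms would still need manual review before joining the lexicon.

```python
from collections import Counter
import re


def expand_lexicon(posts, seed_terms, top_n=5):
    """Return the terms that most often co-occur with the seed terms,
    as candidates for adding to the keyword list (illustrative only)."""
    seeds = {t.lower() for t in seed_terms}
    co_occurring = Counter()
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        if seeds.intersection(tokens):
            # Count every non-seed term appearing alongside a seed term.
            co_occurring.update(t for t in tokens if t not in seeds)
    return [term for term, _ in co_occurring.most_common(top_n)]
```

Each candidate surfaced this way would be vetted by a human before a new round of searching, mirroring the iterative validation the researchers describe.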

This user-generated content – over 50 million tweets and 300,000 Reddit posts, made by around 15 million users – was then classified according to factors including its stance (Islamic-extremist or anti-Islamic) and the severity of the message. The scale of severity ranged from simple use of discriminatory language to outright incitement to violence, including genocide.

The study also considered the framing of the comments – whether the point of a post was to define a problem (“Muslims are likely to be terrorists”), diagnose a cause (“Immigration leads to increased terrorism”), make a moral judgement (“Christianity is an evil religion”) or propose a solution, such as carrying out terrorist attacks to achieve political aims.
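The three labelling dimensions – stance, severity and framing – could be represented with a simple schema like the one below. The enum names and the intermediate severity level are my own assumptions for illustration; the study's exact rubric is not reproduced in this article.

```python
from dataclasses import dataclass
from enum import Enum


class Stance(Enum):
    ISLAMIC_EXTREMIST = "islamic-extremist"
    ANTI_ISLAMIC = "anti-islamic"


class Severity(Enum):
    # Ordered scale from discriminatory language up to incitement;
    # the middle level is a hypothetical placeholder.
    DISCRIMINATORY_LANGUAGE = 1
    HOSTILE_RHETORIC = 2
    INCITEMENT_TO_VIOLENCE = 3


class Framing(Enum):
    PROBLEM_DEFINITION = "define a problem"
    CAUSAL_DIAGNOSIS = "diagnose a cause"
    MORAL_JUDGEMENT = "make a moral judgement"
    PROPOSED_SOLUTION = "propose a solution"


@dataclass
class LabelledPost:
    """One classified post from the Twitter/Reddit dataset."""
    text: str
    stance: Stance
    severity: Severity
    framing: Framing
```

Keeping severity as an ordered scale is what makes it possible to say, as the researchers later do, that attacks not only increase in number after an incident but also become more severe.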

After the dataset was compiled and classified, a timeline analysis was carried out, using machine learning to draw a picture of the correlation between the number of hate speech messages appearing online and a number of real-world incidents, including the 2016 Orlando nightclub shooting, the 2016 Istanbul airport attack, the 2017 Finsbury Park, London, vehicular attack and the 2017 Olathe, Kansas shooting. All of the incidents involved Muslims or Arabs as either victims or perpetrators, and took place within 19 months.
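A minimal version of this kind of timeline analysis might simply compare average daily message volume in windows before and after each incident. The study's own machine-learning pipeline was more sophisticated than this sketch, which assumes per-day post counts and a chosen window size:

```python
from collections import Counter
from datetime import date, timedelta


def daily_counts(post_dates):
    """Aggregate a list of post dates into per-day message counts."""
    return Counter(post_dates)


def change_around_event(counts, event_day, window=7):
    """Mean daily hate-speech volume in the `window` days after an
    event, minus the mean in the `window` days before it."""
    before = [counts.get(event_day - timedelta(days=d), 0)
              for d in range(1, window + 1)]
    after = [counts.get(event_day + timedelta(days=d), 0)
             for d in range(1, window + 1)]
    return sum(after) / window - sum(before) / window
```

A positive result for an incident date would indicate the post-incident surge in online hate speech that the study reports.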

Previously, the majority of machine learning analysis around the concept of hate speech has focussed on building algorithms to determine whether or not particular posts or pieces of content are hateful.

Varshney tells me “A lot of people in the machine learning community are tackling the problem of classifying whether speech is offensive or hateful – we decided it wasn’t important for us to tackle that problem, and often it’s a question of where you draw the line, if something is verging on being hateful.

“What we were looking at is what’s the relationship between things that happen in the online world, and things that happen in the real world.”

The study found that, yes, following high-profile incidents of both Islamophobic and Islamic-extremist violence, incidents of online hate speech do indeed increase. This didn’t come as much of a surprise, as it was commonly held to be true based on casual observation. Far more interesting was the finding that, in the case of Islamic-extremist violence, it wasn’t just Muslims who faced an increase in hate speech against them: attacks frequently broadened to other minority groups.

Varshney told me, “The severity of the attacks also increases, so people are much more likely to incite violence … and the target of the online messages also broadens so other groups that have nothing to do with anything that’s happened in the real world also experience an increase in hate speech. It could be any other group, such as homosexuals … those were some interesting findings.”

So, do online hate speech and real-world violence form a circular problem? It’s been shown that one (real-world violence) causes the other – but is the reverse also true, creating a vicious, self-feeding circle of hatred and violence?

Currently, that remains unclear. But proving the causal relationship between hate speech and violence is a natural next-step for research in the field, Varshney says.

Proving this reverse relationship is likely to be more problematic, however, for a number of reasons, including the fact that the process of online radicalisation itself is not yet well understood from a scientific perspective. Questions such as how much exposure to hateful material is needed to push a person to commit violence, over what period of time, and what part the mental health of the individual plays have yet to be answered.

Varshney told me, “That would probably be an even more important study to do – we didn’t get into it in this particular project, as some of the causal relationships require techniques that we don’t yet have.

“That inspires us to do more technical work though – and this direction is clearly a next-step for the work, that should be done, for sure.”

The research, which can be viewed in full here, was carried out as part of IBM’s Science for Social Good initiative, which applies machine learning to the 17 Sustainable Development Goals identified by the UN.
