Written by

Bernard Marr

Battling AI Fakes: Are Social Platforms Doing Enough?

16 September 2024

Since generative AI went mainstream, the amount of fake content and misinformation spread via social media has increased exponentially.

Today, anyone with access to a computer and a few hours to spend studying tutorials can make it appear that anyone has said or done just about anything.

While some countries have passed laws attempting to curtail this, their effectiveness is limited by the ability to post content anonymously.

And what can be done when even candidates in the U.S. presidential election are reposting AI fakes?

To a large extent, social media companies are responsible for policing the content posted on their own networks. In recent years, we’ve seen most of them implement policies designed to mitigate the dangers of AI-generated fake news.

But how far do they go, and will it be enough? And is there a risk that these measures themselves could harm society by impinging on rights such as free speech?

Why Is AI-Generated Fake Content A Big Problem?

We’ve seen a huge increase in the use of AI to create fake information with the aim of undermining trust in democratic processes, such as elections.

AI-generated deepfakes can appear highly realistic. Video and audio content has been widely used to damage reputations and manipulate public opinion. The vast reach of social media makes it possible for this fake content to go viral very quickly, reaching a great many people.

For example, this year, thousands of registered Democratic voters in New Hampshire received calls urging them to abstain from voting. The voice, purporting to be that of President Joe Biden, told recipients that the upcoming state primary would be an easy victory and that they should instead save their votes for future polls that would be more closely contested.

And in Bangladesh, deepfaked videos of two female opposition politicians wearing swimming costumes in public caused controversy in a society where women are expected to dress modestly.

This is just the tip of the iceberg — researchers estimate that more than half a million deepfaked videos were in circulation on social media in 2023. And with access to the technology widening, it’s a problem that’s only going to get worse.

What Are Social Media Networks Doing About It?

Most of the big social media companies say they have implemented measures designed to protect against the rising tide of fake content and disinformation.

Meta, owner of Facebook and Instagram, employs a mix of technological and human-based solutions. It uses algorithms to scan every piece of uploaded content, and anything flagged as AI-generated is automatically labeled as such. This involves adding an “AI Info” tag warning that the content might not be everything it purports to be.

The company also employs humans and third-party fact-checking services to manually check and flag content. And it prioritizes reputable and trusted sources when recommending content in users' feeds on the basis that established news organizations are less likely to allow their reputations to be damaged by publishing fake content.
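
To make this concrete, here is a minimal Python sketch of what an automated labelling step like the one described above could look like. This is an illustration only, not Meta’s actual system: the `ai_likelihood` detector, the `Upload` structure and the 0.8 threshold are hypothetical stand-ins for proprietary models and policies.

```python
from dataclasses import dataclass, field


@dataclass
class Upload:
    """A piece of user-uploaded content awaiting review (hypothetical structure)."""
    content_id: str
    media_bytes: bytes
    labels: list[str] = field(default_factory=list)


def ai_likelihood(media: bytes) -> float:
    """Placeholder for a proprietary AI-content detector returning a 0.0-1.0 score.

    A real detector might combine invisible watermark checks, file metadata and
    machine learning classifiers; this dummy version just returns a fixed score.
    """
    return 0.9 if media else 0.0


def label_if_ai_generated(upload: Upload, threshold: float = 0.8) -> Upload:
    """Attach an "AI Info"-style label when the detector score crosses the threshold."""
    if ai_likelihood(upload.media_bytes) >= threshold:
        upload.labels.append("AI Info")
    return upload


if __name__ == "__main__":
    post = label_if_ai_generated(Upload(content_id="post-123", media_bytes=b"..."))
    print(post.labels)  # ["AI Info"], because the dummy detector score exceeds 0.8
```

In practice, anything an automated pass like this is unsure about would then be routed to the human reviewers and third-party fact-checkers mentioned above.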

X (formerly Twitter), on the other hand, takes a user-generated approach. It relies on a system it calls Community Notes, which allows users with a paid subscription to flag and annotate content they feel is misleading. X has also, on occasion, banned users from the platform who were found to be misrepresenting politicians. Its Synthetic and Manipulated Media Policy states that users must not share synthetic (AI-generated) content that may deceive or confuse people.

YouTube, owned by Google, states that it actively removes misleading or deceptive content that poses a risk of harm. With “borderline” content, which may not explicitly break the rules but still poses a risk, it takes steps to reduce the likelihood that it will appear in the lists of videos it recommends to users. As with Meta, this is policed through a combination of human reviewers and machine learning algorithms.

And TikTok, owned by ByteDance, uses technology called Content Credentials to detect AI-generated content in text, video or audio format and automatically apply warnings when it appears in users’ feeds. Users must also self-certify any content they upload that contains realistic-looking AI-generated video, images or audio, declaring that it is not intended to mislead or cause harm.
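
Content Credentials works by reading provenance metadata that supporting cameras and editing tools embed in media files, so a rough sketch of the upload flow might look like the following. Again, this is an illustrative approximation rather than TikTok’s implementation: `has_provenance_marker` stands in for real metadata parsing, and the decision structure is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class UploadDecision:
    apply_ai_label: bool             # platform attaches an AI-content label automatically
    prompt_self_certification: bool  # creator is asked to declare realistic AI media


def has_provenance_marker(media: bytes) -> bool:
    """Stand-in for parsing embedded provenance metadata (e.g. a C2PA-style manifest)."""
    return b"c2pa" in media.lower()


def review_upload(media: bytes, creator_declared_ai: bool) -> UploadDecision:
    """Label automatically when provenance metadata flags AI-generated media;
    otherwise fall back to the creator's own declaration."""
    if has_provenance_marker(media):
        return UploadDecision(apply_ai_label=True, prompt_self_certification=False)
    return UploadDecision(apply_ai_label=creator_declared_ai, prompt_self_certification=True)


if __name__ == "__main__":
    print(review_upload(b"...c2pa manifest...", creator_declared_ai=False))
    print(review_upload(b"ordinary clip", creator_declared_ai=True))
```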

Is It Working And What Else Can Be Done?

Despite these efforts, AI-generated content designed to mislead is still widely distributed on all of the major platforms, so there is clearly some way to go.

While the technological and regulatory solutions implemented by the networks and governments are essential, I think it’s unlikely that they alone will solve the problem.

I believe education will ultimately be more important if we don’t want to live in a “post-truth” world where we can no longer trust what we see with our eyes.

Developing the critical thinking skills needed to judge whether content is likely to be genuine or deliberately designed to deceive us will be a key part of the puzzle.

The fight against fake content and disinformation is an ongoing battle that will require collaboration between content providers, platform operators, legislators, educators and ourselves as users and consumers of online information.

It’s certainly worrying that even more sophisticated AI tools, capable of more convincing fakery and deception, are sure to be on the horizon. This means that developing effective methods to counter these risks will be one of the most pressing challenges facing society in the coming years.
