Battling AI Fakes: Are Social Platforms Doing Enough?
16 September 2024
Since generative AI went mainstream, the amount of fake content and misinformation spread via social media has increased exponentially.
Today, anyone with access to a computer and a few hours to spend studying tutorials can make it appear that anyone has said or done just about anything.
While some countries have passed laws attempting to curtail this, their effectiveness is limited by the ability to post content anonymously.
And what can be done when even candidates in the U.S. presidential election are reposting AI fakes?
To a large extent, social media companies are responsible for policing the content posted on their own networks. In recent years, we’ve seen most of them implement policies designed to mitigate the dangers of AI-generated fake news.
But how far do they go, and will it be enough? And is there a risk that these measures themselves could harm society by impinging on rights such as free speech?
Why Is AI-Generated Fake Content A Big Problem?
We’ve seen a huge increase in the use of AI to create fake information with the aim of undermining trust in democratic processes, such as elections.
AI-generated deepfakes can appear highly realistic. Video and audio content has been widely used to damage reputations and manipulate public opinion. The vast reach of social media makes it possible for this fake content to go viral very quickly, reaching a great many people.
For example, this year, thousands of registered Democratic voters in New Hampshire received robocalls urging them to abstain from voting. The AI-generated voice, purporting to be that of President Joe Biden, told recipients that the upcoming state primary was already a sure win and that they should instead save their vote for future contests that would be more closely fought.
And in Bangladesh, deepfaked videos of two female opposition politicians wearing swimming costumes in public caused controversy in a society where women are expected to dress modestly.
This is just the tip of the iceberg — researchers estimate that more than half a million deepfaked videos were in circulation on social media in 2023. And with access to the technology widening, it’s a problem that’s only going to get worse.
What Are Social Media Networks Doing About It?
Most of the big social media companies have said that they have implemented measures designed to protect against the rising tide of fake content and disinformation.
Meta, owner of Facebook and Instagram, employs a mix of technological and human-based solutions. Its algorithms scan every piece of uploaded content, and anything flagged as AI-generated is automatically labeled as such with an “AI Info” tag, warning that the content might not be everything it purports to be.
The company also employs humans and third-party fact-checking services to manually check and flag content. And it prioritizes reputable and trusted sources when recommending content in users' feeds on the basis that established news organizations are less likely to allow their reputations to be damaged by publishing fake content.
X (formerly Twitter), on the other hand, takes a crowdsourced approach. It relies on a system it calls Community Notes, which allows eligible contributors who have enrolled in the program to flag and annotate content they believe is misleading. X has also, on occasion, banned users found to be impersonating politicians. Its synthetic and manipulated media policy states that users must not share synthetic (AI-generated) content that may deceive or confuse people.
YouTube, owned by Google, says it actively removes misleading or deceptive content that poses a risk of harm. With “borderline” content, which may not explicitly break the rules but still poses a risk, it takes steps to reduce the likelihood that such videos will appear in users' recommendations. As with Meta, this is policed through a combination of human reviewers and machine learning algorithms.
And TikTok, owned by ByteDance, uses Content Credentials, an open metadata standard from the Coalition for Content Provenance and Authenticity (C2PA), to detect AI-generated images, video and audio and automatically apply warning labels when such content appears in users’ feeds. Users must also self-certify any content they upload that contains realistic-looking AI-generated video, images or audio, confirming that it is not intended to mislead or cause harm.
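To give a sense of how metadata-based detection like this works under the hood: the C2PA specification embeds Content Credentials manifests in JPEG files as APP11 marker segments carrying JUMBF boxes. The sketch below is a deliberately simplified illustration, not TikTok's actual pipeline. It only walks the JPEG segment structure and looks for the "c2pa" label; real verification uses the official C2PA SDKs and validates cryptographic signatures, and the function name here is my own invention.

```python
def has_c2pa_manifest(data: bytes) -> bool:
    """Scan a JPEG byte stream for an APP11 segment that appears to
    carry a C2PA (Content Credentials) manifest.

    Simplified sketch: C2PA's JPEG embedding places manifests in
    APP11 (0xFFEB) segments as JUMBF boxes labeled "c2pa". We only
    check for that label; we do NOT verify signatures.
    """
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync; entropy-coded image data starts here
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        # segment length field counts itself (2 bytes) plus payload
        length = int.from_bytes(data[i + 2 : i + 4], "big")
        payload = data[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA label
            return True
        i += 2 + length  # advance past marker + length + payload
    return False
```

A real platform would treat a missing manifest as "unknown provenance" rather than "not AI", since metadata is easily stripped on re-encoding, which is why self-certification and classifier-based detection are used alongside it.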
Is It Working And What Else Can Be Done?
Despite these efforts, AI-generated content designed to mislead still circulates widely on all of the major platforms, so there is clearly some way to go.
While the technological and regulatory solutions implemented by the networks and governments are essential, I think it’s unlikely that they alone will solve the problem.
I believe education will ultimately be more important if we don’t want to live in a “post-truth” world where we can no longer trust what we see with our eyes.
Developing the critical thinking skills needed to judge whether a piece of content is likely to be genuine or designed to deceive us will be a key part of the puzzle.
The fight against fake content and disinformation is an ongoing battle that will require collaboration between content providers, platform operators, legislators, educators and ourselves as users and consumers of online information.
It’s certainly worrying that even more sophisticated AI tools, capable of more convincing fakery and deception, are sure to be on the horizon. This means that developing effective methods to counter these risks will be one of the most pressing challenges facing society in the coming years.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.