How To Tell Reality From Fiction Amid The AI-Driven Truth Crisis
16 September 2024
The artificial intelligence narrative swings between utopian dreams and dystopian nightmares, often overshadowing the nuanced reality of its current capabilities and limitations. Among the myriad concerns surrounding AI, one particularly unsettling claim is that it might lead to a world in which it’s impossible to distinguish truth from fabrication.
This fear isn’t unfounded; the rise of sophisticated technologies like deepfakes and generative AI has democratized the creation of deceptively realistic content, putting powerful tools of manipulation within reach of the average user.
But does this technological leap truly herald an era when reality becomes indistinguishable from fiction? And if so, what are the ramifications for a society whose democratic foundations rest upon the bedrock of informed decision making?
The Age Of Lies
When AI deepfake technology is used in Hollywood to de-age actors—for example, enabling Harrison Ford to once again play a young Indiana Jones—it’s harmless fun. But when the same technology is used to make it appear that political figures have spoken or acted in ways they never would, it’s far more worrying.
Recent examples include deepfakes of both Kamala Harris and Donald Trump. As we approach this year’s U.S. elections, the dangers of this are obvious.
In another example, audio deepfakes have been used to “robocall” potential voters, urging them not to take part in elections, in a clear attempt to subvert democracy.
AI can also undermine the credibility of genuine information by making us question whether it might itself be fake. This is the “liar’s dividend”: the advantage gained by sowing doubt and confusion, where the mere suggestion that something could be fabricated erodes trust, even in the absence of any evidence.
For instance, when Donald Trump recently questioned whether his opponent had artificially inflated the size of the crowd at her rally using AI, it didn’t matter whether he genuinely believed it or was simply casting doubt—the mere suggestion alone was enough to create uncertainty.
This interference is by no means limited to the U.S. elections. Deepfake footage of a candidate in last year’s Taiwanese election appearing to endorse his rivals was used in an attempt to discredit him.
Fake footage also appeared of a Moldovan election candidate threatening to make a popular drink illegal in order to protect the environment.
And in Bangladesh, deepfake videos showed an opposition politician wearing a bikini in public—an act that would likely be considered offensive in the Muslim-majority country.
This rise in AI-driven misinformation clearly has the potential to damage the public’s trust in democratic processes, and as these tools become more accessible and sophisticated, we can expect the problem to grow.
Reality Check
So, there’s obviously some truth to the claim that AI can blur the boundaries between truth and fiction. But does this necessarily mean we’re headed toward a future in which anything and everything we see online is potentially deceptive?
Well, while AI can obviously create convincing fakes, there are technological limits to what it can do. On close inspection, it’s often possible to detect where manipulation has taken place: telltale signs include unrealistic lighting, reflections or movements, or irregular speech patterns and mannerisms.
And while we may not always spot these at first glance, there are also technological solutions that can pick up on more subtle clues—for example, when video has been stitched together from different sources or generated entirely from scratch by algorithms. While the technology used to create deepfakes will undoubtedly become more sophisticated, so will the tools capable of detecting them.
It’s also possible that regulation may play a role in reducing the threat posed by reality-bending AI. AI laws recently passed in both the EU and China, for example, effectively criminalize deepfakes when they are used to impersonate people or spread disinformation, and similar provisions are likely to be adopted in other jurisdictions as time goes on.
But it’s likely that the best defense against the threat will come from education and a growing public awareness of the risks. Humans, after all, are a remarkably adaptable species, and our ability to critically assess what we see is likely to evolve as we become more used to being bombarded with fake content and disinformation.
Simple practices like checking facts and researching the credibility of sources before deciding whether we believe or disbelieve something we see online can go a long way to protecting against the threat that AI poses to the truth.
A little common sense can also go a long way—for example, asking ourselves, “Would this person really have said or done that?”
Navigating Truth And Fiction In An AI Future
While I believe that AI has the potential to make it harder to tell truth from lies, the idea that it will make this impossible is somewhat overblown.
Certainly, the risk that some people will act on, or base their beliefs on, AI-generated misinformation is very real. There’s a need for continued vigilance and the ongoing development of methods—technological, legislative and sociological—to augment our ability to recognize what is real and what is likely to be fake.
I can see that it might become necessary to start teaching these critical thinking skills at an early age, perhaps in schools. After all, it seems likely that AI will play an increasingly important role in education, and it would make sense to ensure that identifying and understanding the risks is a part of that curriculum.
With the right tools, oversight and awareness, it should be possible to navigate the challenges that AI poses to truth, although it may mean making some changes to the way we think about and assess what we see and hear.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.