Deepfakes – The Good, The Bad, And The Ugly
24 January 2022
Over the holidays, millions of us saw The Beatles miraculously restored to living color in the Disney+ documentary Get Back. But how many of us realized that the technology used to bring John, Paul, George, and Ringo to our screens is also being used for much more sinister purposes?
The algorithms used to create “deepfakes” – as artificial intelligence (AI)-generated imitations are known – are widely considered by cyber security experts to be a major challenge society will face in coming years.
Websites already exist that can create pornographic images of real people from ordinary images or videos – clearly an invasion of privacy that has the potential to cause huge embarrassment. And in the political sphere, we have seen (and heard) fakers put words into the mouth of Barack Obama. While this was done for educational purposes – and other famous examples, like the viral TikTok Tom Cruise videos, are clearly made for entertainment – there’s certainly a potential for misuse.
At the same time, the technology also has the potential to create value. And, much like the Pandora’s box of Greek mythology, now that it’s been opened, it can’t be closed again – over 80 open source deepfake applications exist on GitHub. As I discussed during a recent conversation with Experian’s Eric Haller, I’ve even used it myself via a service called Overdub, created by Descript, that lets me put words in the mouth of my own virtual avatar.
Haller – who, among other roles, heads up Experian's identity services – told me that in many ways, the creation of deepfakes can be thought of as the latest development in the long-running war between business and counterfeiters.
“You can think back to old spy movies – and spies trying to figure out if the video they are watching is real or not – all those things still happen today – it’s not a new notion,” he told me.
What is real, however, is that – unlike the Beatles documentary, where AI was just used to touch up and restore missing detail, like color – today’s fraud investigators may need to investigate material that is 100% created by computers.
With technology where it is today, there’s a fairly limited chance that deepfake technology would be successful enough to fool someone who knew the subject of the fake. For example, my own AI-voiced avatar might do a good enough job of putting words in my mouth for the purposes of creating a virtual presentation or webinar. However, someone who knows me well might be able to pick up small differences in intonation and delivery that give away the fact the content is computer-generated.
Haller points out that even the Tom Cruise deepfakes – perhaps the most widely shared viral examples of the phenomenon – involved the work of a skilled actor or impersonator, able to mimic Cruise’s mannerisms to an impressive extent. The AI – using algorithms known as generative adversarial networks (GANs) – then simply “blurs” the audio and visual data to align it even more closely with the Hollywood star. Even after all of this, I would say the result is a piece of work that is very nearly, though not quite, good enough to fool most people. Indeed, anecdotally, I would say the most common reaction on viewing the footage is "that's a very convincing fake," rather than "that's Tom Cruise!"
The danger, of course, comes from the fact that we are clearly only getting started in terms of what is achievable with AI. In five or ten years' time, it’s highly plausible that technology like this will create fakes that are indistinguishable from reality.
There have already been instances of criminals creating faked voices in order to fool banking systems and transfer money between accounts – in one case, to the tune of $35 million. Creating technological defenses against these attacks is one of the responsibilities of Haller and others in his role.
“It could be someone who’s completely fictitious,” he tells me – “It’s probably a lower bar to create someone who does not exist than to simulate someone that does exist and have an interactive dialogue with them.
“My greatest fear … is the interaction that actually fools somebody that knows the individual they are interacting with – I think we’re a long way from there … that requires a confluence of technologies that all need to develop further from where they are right now. But the lower bar. That’s very credible today.”
As with other forms of AI-driven fraud detection employed by financial services organizations, those developing the technology have found it more fruitful to focus on examining incidental and circumstantial details of the interaction rather than the interaction itself. So, rather than attempting to determine whether a voice on the phone is computer-generated, an investigation may center on how the communication is being made, where it is coming from, what time it is taking place, and whether the parties involved are at risk of being targets of fraud. In this respect, the technology can be thought of as similar to that used by mobile carriers to flag up potential spam phone calls or phishing texts when they arrive at customers’ phones.
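To make the idea concrete, here is a minimal sketch of what rule-based scoring of circumstantial signals might look like. All field names, weights, and thresholds are my own illustrative assumptions – this is not Experian's actual system or any real fraud-detection API.

```python
# Hypothetical sketch: score a voice interaction on circumstantial
# signals (channel, origin, timing, known risk) rather than trying to
# analyze the audio itself. Every field name and weight is assumed.

def risk_score(interaction: dict) -> float:
    """Return a 0-1 risk score based on metadata about the interaction."""
    score = 0.0
    if interaction.get("channel") == "voip":
        score += 0.3  # easily spoofed channel
    usual = interaction.get("usual_countries", [])
    if interaction.get("origin_country") not in usual:
        score += 0.3  # call coming from an unusual location
    hour = interaction.get("local_hour", 12)
    if hour < 6 or hour > 22:
        score += 0.2  # outside normal hours
    if interaction.get("target_flagged_at_risk"):
        score += 0.2  # the party contacted is a known fraud target
    return min(score, 1.0)

suspicious_call = {
    "channel": "voip",
    "origin_country": "XX",
    "usual_countries": ["US"],
    "local_hour": 3,
    "target_flagged_at_risk": True,
}
print(risk_score(suspicious_call))  # every signal fires: maximum risk
```

Note that nothing here inspects the voice itself – the score is built entirely from where, when, and how the contact is being made, which is exactly why this approach sidesteps the hard problem of detecting synthetic audio.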
The dangers only become more apparent when we consider the fast pace at which we are moving our lives online and the impending arrival of even deeper integration between our lives and the digital universe heralded by concepts such as the metaverse.
“I’ve heard colleagues saying we’ve probably seen a 10-year acceleration into digital in the last 18 months because of the pandemic – my 95-year-old father-in-law orders his groceries online now, a year and a half ago that would never have happened,” Haller says.
With more interactions taking place via Zoom call – from business meetings to consultations with doctors – the scope for impersonation will clearly only grow, which is why the work of identity professionals like Haller will be increasingly important to society.
At the same time, we shouldn’t overlook the positive benefits that this technology will enable. Beyond bringing beloved movie stars back from the grave, or allowing us to enjoy older stars as they were in their younger days, creative (or generative) AI has the potential to cut down on the amount of boring and repetitive work humans have to do. It’s also very useful for creating “synthetic data," allowing us to train AI and robots to become more accurate using data that may otherwise be difficult or dangerous to come by. This could include training autonomous driving algorithms without the risk involved with real road journeys or conducting medical trials without putting patients or animals in danger.
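As a toy illustration of the synthetic-data idea (entirely my own example, not any specific vendor's tooling), one can fit simple statistics to a small real sample and then draw new, artificial records from them – records that preserve the shape of the data without copying any real individual:

```python
import random
import statistics

# Toy sketch: generate synthetic records that mimic the mean and spread
# of a small "real" sample, so a model could be trained without ever
# touching the original data. The sample values here are made up.
real_ages = [34, 45, 29, 52, 41, 38, 47, 33]
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

random.seed(42)  # seeded only so the demo is reproducible
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(5)]
print(synthetic_ages)  # five artificial ages drawn from the fitted distribution
```

Real synthetic-data systems are far more sophisticated – they model correlations between many fields, often with GANs – but the principle is the same: learn the statistics, then sample fresh data from them.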
The potential for AI to simulate (or fake) elements of the real world is clearly one of the most powerful aspects of the transformative impact it can have on society. Ensuring this potential is realized in a safe way, without causing harm, is an important task for those who are developing and deploying this technology.
You can watch my fascinating conversation with Eric Haller, EVP and General Manager, Experian DataLabs, where we also cover several other ways that AI is being deployed at Experian.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.