Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


Deepfakes – The Good, The Bad, And The Ugly

24 January 2022

Over the holidays, millions of us saw The Beatles miraculously restored to living color in the Disney+ documentary Get Back. But how many of us realized that the technology used to bring John, Paul, George, and Ringo to our screens is also being used for much more sinister purposes?


The algorithms used to create “deepfakes” – as artificial intelligence (AI)-generated imitations are known – are widely considered by cyber security experts to be a major challenge society will face in coming years.

Websites already exist that can create pornographic images of real people from ordinary images or videos – clearly an invasion of privacy that has the potential to cause huge embarrassment. And in the political sphere, we have seen (and heard) fakers put words into the mouth of Barack Obama. While that was done for educational purposes – and other famous examples, like the TikTok Tom Cruise videos, are clearly made for entertainment – there’s certainly a potential for misuse.

At the same time, the technology also has the potential to create value. And, much like Pandora’s box of Greek mythology, now that it’s been opened, it can’t be closed again – over 80 open-source deepfake applications exist on GitHub. As I discussed during a recent conversation with Experian’s Eric Haller, I’ve even used it myself via a service called Overdub, created by Descript, that lets me put words in the mouth of my own virtual avatar.

Haller – who, among other roles, heads up Experian's identity services – told me that in many ways, the creation of deepfakes can be thought of as the latest development in the ever-ongoing war between business and counterfeiting.

“You can think back to old spy movies – and spies trying to figure out if the video they are watching is real or not – all those things still happen today – it’s not a new notion,” he told me.

What is real, however, is that – unlike the Beatles documentary, where AI was just used to touch up and restore missing detail, like color – today’s fraud investigators may need to investigate material that is 100% created by computers.

With technology where it is today, there’s a fairly limited chance that deepfake technology would be successful enough to fool someone who knew the subject of the fake. For example, my own AI-voiced avatar might do a good enough job of putting words in my mouth for the purposes of creating a virtual presentation or webinar. However, someone who knows me well might be able to pick up small differences in intonation and delivery that give away the fact the content is computer-generated.

Haller points out that even the Tom Cruise deepfakes – perhaps the most widely shared viral examples of the phenomenon – involved the work of a skilled actor and impersonator, able to mimic Cruise’s mannerisms to an impressive extent. The AI – using algorithms known as generative adversarial networks (GANs) – then “blurs” the audio and visual data to align it even more closely with the Hollywood star. Even after all of this, I would say the result is a piece of work that is very nearly, though not quite, good enough to fool most people. Indeed, anecdotally, the most common reaction on viewing the footage is “that’s a very convincing fake” rather than “that’s Tom Cruise!”
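The adversarial idea behind GANs can be shown with a toy sketch. This is purely illustrative – a one-dimensional example in NumPy with invented constants, nothing like a real deepfake pipeline – but it captures the core loop: a discriminator learns to tell real samples from generated ones, while the generator learns to fool the discriminator.

```python
import numpy as np

# Toy 1-D GAN: the generator learns to mimic samples drawn from N(4, 1.25).
# Illustrative sketch of the adversarial setup only; all constants invented.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: G(z) = g_w * z + g_b   (maps noise to "fake" samples)
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b)   (real-vs-fake score)
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(2000):
    real = rng.normal(4.0, 1.25, size=32)   # "real" data
    z = rng.normal(0.0, 1.0, size=32)       # noise input
    fake = g_w * z + g_b

    # Discriminator update: maximise log D(real) + log(1 - D(fake))
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    grad_w = -np.mean((1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator update: maximise log D(fake) (non-saturating loss)
    fake = g_w * z + g_b
    d_fake = sigmoid(d_w * fake + d_b)
    g_grad = -(1 - d_fake) * d_w            # chain rule through D
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)

print(f"generated samples now centred near {g_b:.2f} (real mean is 4.0)")
```

After training, the generator's output distribution has drifted toward the real data – the same dynamic, at vastly larger scale, that lets deepfake models produce convincing faces and voices.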

The danger, of course, comes from the fact that we are clearly only getting started in terms of what is achievable with AI. In five or ten years' time, it’s highly plausible that technology like this will create fakes that are indistinguishable from reality.

There have already been instances of criminals creating faked voices in order to fool banking systems and transfer money between accounts – in one case, to the tune of $35 million. Creating technological defenses against these attacks is one of the responsibilities of Haller and others in his role.

“It could be someone who’s completely fictitious,” he tells me – “It’s probably a lower bar to create someone who does not exist than to simulate someone that does exist and have an interactive dialogue with them.

“My greatest fear … is the interaction that actually fools somebody that knows the individual they are interacting with – I think we’re a long way from there … that requires a confluence of technologies that all need to develop further from where they are right now. But the lower bar. That’s very credible today.”

As with other forms of AI-driven fraud detection employed by financial services organizations, those developing the technology have found it more fruitful to focus on examining incidental and circumstantial details of the interaction rather than the interaction itself. So, rather than attempting to determine whether a voice on the phone is computer-generated, an investigation may center on how the communication is being made, where it is coming from, what time it is taking place, and whether the parties involved are at risk of being targets of fraud. In this respect, the technology can be thought of as similar to that used by mobile carriers to flag up potential spam phone calls or phishing texts when they arrive at customers’ phones.
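A minimal sketch of that metadata-first approach might look like the following. The field names, weights, and thresholds are invented for illustration – real fraud systems use far richer signals and learned models – but it shows the principle of scoring *how* an interaction happens rather than analysing the audio itself.

```python
from dataclasses import dataclass

# Hypothetical metadata-based risk scoring, in the spirit of the approach
# described above. All fields and weights are invented for illustration.

@dataclass
class Interaction:
    channel: str            # "phone", "app", "web"
    country: str            # originating country code
    hour: int               # local hour of day, 0-23
    device_seen_before: bool
    account_flagged: bool   # account previously marked at-risk

def risk_score(ix: Interaction, home_country: str = "US") -> int:
    score = 0
    if ix.country != home_country:
        score += 2          # unusual origin
    if ix.hour < 6 or ix.hour > 22:
        score += 1          # outside normal hours
    if not ix.device_seen_before:
        score += 2          # new, unrecognised device
    if ix.account_flagged:
        score += 3          # known fraud target
    return score

call = Interaction("phone", "XX", hour=3,
                   device_seen_before=False, account_flagged=True)
print(risk_score(call))  # → 8: high score, route to manual review
```

Note that nothing in the score depends on whether the voice on the line is synthetic – the circumstantial signals alone are enough to flag the interaction for closer scrutiny.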

The dangers only become more apparent when we consider the fast pace at which we are moving our lives online and the impending arrival of even deeper integration between our lives and the digital universe heralded by concepts such as the metaverse.

“I’ve heard colleagues saying we’ve probably seen a 10-year acceleration into digital in the last 18 months because of the pandemic – my 95-year-old father-in-law orders his groceries online now, a year and a half ago that would never have happened,” Haller says.

With more interactions taking place via Zoom call – from business meetings to consultations with doctors – the scope for impersonation will clearly only grow, which is why the work of identity professionals like Haller will be increasingly important to society.

At the same time, we shouldn’t overlook the positive benefits that this technology will enable. Beyond bringing beloved movie stars back from the grave, or allowing us to enjoy older stars as they were in their younger days, creative (or generative) AI has the potential to cut down on the amount of boring and repetitive work humans have to do. It’s also very useful for creating “synthetic data," allowing us to train AI and robots to become more accurate using data that may otherwise be difficult or dangerous to come by. This could include training autonomous driving algorithms without the risk involved with real road journeys or conducting medical trials without putting patients or animals in danger.
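The synthetic-data idea can be sketched in a few lines. Here is a hedged toy example – invented constants, simple physics – that simulates braking distances for a hypothetical driving model, the kind of data that would be risky or expensive to collect on real roads:

```python
import numpy as np

# Toy synthetic-data generator: simulated braking distances for a
# hypothetical autonomous-driving model. Constants and noise levels are
# invented for illustration.

rng = np.random.default_rng(42)

def synthetic_braking(n: int):
    speed = rng.uniform(5, 40, n)         # vehicle speed, m/s
    friction = rng.uniform(0.4, 0.9, n)   # road friction coefficient
    # ideal stopping distance v^2 / (2 * mu * g), plus sensor-style noise
    distance = speed**2 / (2 * friction * 9.81)
    distance += rng.normal(0, 0.5, n)
    return np.column_stack([speed, friction]), distance

X, y = synthetic_braking(1000)
print(X.shape, y.shape)  # (1000, 2) (1000,)
```

A model trained on thousands of such simulated scenarios can cover dangerous edge cases – ice, high speed – without a single real-world test drive.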

The potential for AI to simulate (or fake) elements of the real world is clearly one of the most powerful aspects of the transformative impact it can have on society. Ensuring this potential is realized in a safe way, without causing harm, is an important task for those who are developing and deploying this technology.

You can watch my fascinating conversation with Eric Haller, EVP and General Manager, Experian DataLabs, where we also cover several other ways that AI is being deployed at Experian.

