Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


The Best (And Scariest) Examples Of AI-Enabled Deepfakes

2 July 2021

There are positive uses for deepfake technology, like creating digital voices for people who have lost theirs or updating film footage instead of reshooting it when actors trip over their lines. However, the potential for malicious use is of grave concern, especially as the technology becomes more refined. The quality of deepfakes has improved tremendously since the first products of the technology circulated only a few years ago. Since then, many of the scariest examples of artificial intelligence (AI)-enabled deepfakes have had technology leaders, governments, and the media talking about the perils the technology could create for communities.

For most of the general public, the first exposure to deepfakes came in 2017, when an anonymous Reddit user posted videos that appeared to show celebrities such as Scarlett Johansson in compromising sexual situations. It wasn't real footage: the celebrity's face had been fused with a porn actor's body using deepfake technology, making a fabricated scene look like something that actually happened. Celebrities and public figures were originally the ones most susceptible to the charade, since the algorithms required ample video footage to create a deepfake, and that footage was readily available for celebrities and politicians.

When researchers at the University of Washington created a deepfake of President Barack Obama and circulated it on the Internet, it became clear how such technology could be abused. The researchers were able to make the video of President Obama say whatever they wanted it to say. Imagine what could transpire if nefarious actors presented a deepfake of a world leader as a real communication; it could be a threat to world security. With cries of "fake news" commonplace, a deepfake could be created to support any agenda, fooling others into believing it is an authentic representation of what someone wants to communicate.

Other high-profile examples of manipulated video include an altered clip of House Speaker Nancy Pelosi, retweeted by President Trump as if it were real, that made it look like she was drunkenly stumbling over her words. In this case, the timing of the video was slowed to create the effect, but many believed it was a true depiction. Two British artists created a deepfake of Facebook CEO Mark Zuckerberg talking to CBS News about the "truth of Facebook and who really owns the future." The video was widely circulated on Instagram and ultimately went viral.

Deepfake Technology Rapidly Improving

Deepfake technology is improving faster than many believed it would. In fact, researchers have created a software tool that lets users edit the transcript of a video to add, change, or delete the words coming out of someone's mouth. The technology isn't available to consumers yet, but published examples illustrate how easily the tool can be used to alter videos.

Deep Video Portraits, a system developed at Stanford University, uses generative neural networks to manipulate not only facial expressions, like those seen in the President Obama deepfake, but also a myriad of other movements, including full 3D head position, eye gaze and blinking, and head rotation. Even though these videos aren't perfect, they are remarkably photorealistic. This could be hugely beneficial for dubbing a film into another language and, as the researchers acknowledge, could be abused as well.

Samsung's AI lab made the Mona Lisa smile and created "living portraits" of Salvador Dali, Marilyn Monroe, and others, using machine learning to generate realistic videos from a single image. The system needs only a few photographs of a real face to create a living portrait, which should concern "ordinary people" who assumed they were immune to deepfakes because there isn't enough video footage of them to train the algorithms. Samsung's work shows that realistic videos can be made from general footage of a wide range of people rather than only from video specific to the "star" of the deepfake.

There are even more disturbing capabilities out there. A programmer launched a free, easy-to-use app called DeepNude that would take an image of a fully clothed woman and remove her clothes to create nonconsensual porn. Just days after the app’s release, the anonymous programmer shut it down. It’s hard to imagine anything but misuse for this app.

So, now that we know it’s out there and getting even more realistic and easy to use, what do we need to do to protect ourselves and others from misuse? That’s a huge question with no easy answers.

Should social media companies be forced to remove videos that are deepfakes from their networks? Does it matter what the intent of the video is? Is there any way to separate entertainment from maliciousness?

Some researchers suggest that it's better for ethical developers to continue pushing the envelope with this technology so they can warn the public about what's possible and encourage more critical analysis of video content. Others argue that this work simply makes it easier for unethical actors to adapt the findings for their own misuse.

AI may be behind deepfakes, but it can also be instrumental in helping humans detect them. For example, software company Adobe has developed an AI-enabled tool that can spot manipulated images.

However, we can’t merely rely on software to do the job for us. As deepfake technology is here and getting better every day, it would be prudent for us all to remember to critically assess the authenticity of videos we consume to understand their real intent. This means not just relying on the quality of the video as an indicator of authenticity but also assessing the social context in which it was discovered—who shared it (people and institutions) and what they said about it.

