Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


The Best (And Scariest) Examples Of AI-Enabled Deepfakes

2 July 2021

There are positive uses for deepfake technology, such as creating digital voices for people who have lost theirs or updating film footage instead of reshooting it when actors trip over their lines. However, the potential for malicious use is of grave concern, especially as the technology becomes more refined. The quality of deepfakes has improved tremendously in the few years since the first products of the technology began circulating. Since then, many of the scariest examples of artificial intelligence (AI)-enabled deepfakes have had technology leaders, governments, and the media talking about the perils the technology could create for communities.

Most of the general public was first exposed to deepfakes in 2017, when an anonymous Reddit user posted videos that appeared to show celebrities such as Scarlett Johansson in compromising sexual situations. But it wasn’t real-life footage: a celebrity’s face and a porn actor’s body had been fused together using deepfake technology to make something that never happened appear real. Celebrities and public figures were originally the ones most susceptible to the charade, since the algorithms required ample video footage to create a deepfake, and such footage was readily available for celebrities and politicians.

When researchers at the University of Washington created a deepfake of President Barack Obama and circulated it on the internet, it became clear how such technology could be abused. The researchers were able to make the video of President Obama appear to say whatever they wanted him to say. Imagine what could transpire if nefarious actors presented a deepfake of a world leader as a genuine communication; it could be a threat to world security. With cries of “fake news” commonplace, a deepfake could be created to support any agenda and fool people into believing it is an authentic representation of what someone wants to communicate.

Other high-profile examples of manipulated video include an altered clip of House Speaker Nancy Pelosi, retweeted by President Trump as if it were genuine, that made it look as though she was drunkenly stumbling over her words. In that case, the timing of the video was altered to create the effect, but many believed it was a true depiction. Two British artists also created a deepfake of Facebook CEO Mark Zuckerberg talking to CBS News about the “truth of Facebook and who really owns the future.” The video was widely circulated on Instagram and ultimately went viral.

Deepfake Technology Rapidly Improving

Deepfake technology is improving faster than many believed it would. Researchers have created a software tool that allows users to edit the transcript of a video to add, change, or delete the words coming out of someone’s mouth. This technology isn’t available to consumers yet, but the published examples illustrate how easily the tool can be used to alter videos.

Deep Video Portraits, a system developed at Stanford University, uses generative neural networks to manipulate not only facial expressions, such as those seen in the President Obama deepfake, but also a myriad of other movements, including full 3D head position, head rotation, eye gaze, and blinking. Even though these videos aren’t perfect, they are incredibly photorealistic. This could be hugely beneficial for dubbing a film into another language and, as the researchers acknowledge, could be abused as well.
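
To make “generative neural networks” a little more concrete, here is a deliberately simplified sketch in PyTorch of the general idea: a small generator that takes a vector of driving parameters (head pose, gaze, blink, expression) and renders a frame of a target face. The layer sizes and the parameter vector are purely illustrative assumptions; this is not the actual Deep Video Portraits code.

```python
# Illustrative sketch only (not the Stanford Deep Video Portraits system):
# a generator that renders a face image conditioned on driving parameters
# such as head pose, eye gaze, blinking, and expression.
import torch
import torch.nn as nn

class ReenactmentGenerator(nn.Module):
    def __init__(self, param_dim: int = 64):
        super().__init__()
        # Encode the driving parameters into a small spatial feature map.
        self.param_encoder = nn.Sequential(
            nn.Linear(param_dim, 256), nn.ReLU(),
            nn.Linear(256, 8 * 8 * 64), nn.ReLU(),
        )
        # Upsample the feature map into an RGB frame of the target actor.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 64x64
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),   # 64x64 -> 128x128
        )

    def forward(self, driving_params: torch.Tensor) -> torch.Tensor:
        x = self.param_encoder(driving_params).view(-1, 64, 8, 8)
        return self.decoder(x)  # synthesised frame with values in [-1, 1]

# One driving-parameter vector in, one synthesised face frame out.
frame = ReenactmentGenerator()(torch.randn(1, 64))
print(frame.shape)  # torch.Size([1, 3, 128, 128])
```

A real system of this kind is trained on hours of footage of the target actor, typically with reconstruction and adversarial losses; the sketch above only shows the shape of the generator’s forward pass.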

Samsung’s AI lab made the Mona Lisa smile and created “living portraits” of Salvador Dali, Marilyn Monroe, and others, using machine learning to produce realistic videos from a single image. The system needs only a few photographs of a real face to create a living portrait, which could be cause for concern for “ordinary people” who assumed they were immune to deepfakes because there isn’t enough video footage of them to train the algorithms. Samsung’s AI shows that realistic videos can be made from general footage of a wide range of people, rather than only from video specific to the “star” of the deepfake.

There are even more disturbing capabilities out there. A programmer launched a free, easy-to-use app called DeepNude that would take an image of a fully clothed woman and remove her clothes to create nonconsensual porn. Just days after the app’s release, the anonymous programmer shut it down. It’s hard to imagine anything but misuse for this app.

So, now that we know it’s out there and getting even more realistic and easy to use, what do we need to do to protect ourselves and others from misuse? That’s a huge question with no easy answers.

Should social media companies be forced to remove videos that are deepfakes from their networks? Does it matter what the intent of the video is? Is there any way to separate entertainment from maliciousness?

Some researchers suggest that it’s better for ethical developers to continue to push the envelope with this technology so they can show what is possible and encourage more critical analysis of video content. Others argue that this work simply makes it easier for unethical people to adapt those advances for their own misuse.

AI might be behind deepfakes, but it can also be instrumental in helping humans detect them. For example, software company Adobe has developed an AI-enabled tool that can spot manipulated images.
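
To give a sense of how an AI-based detector can work in principle, here is a deliberately simplified PyTorch sketch, offered as an assumed illustration rather than Adobe’s actual tool: a small convolutional classifier that learns to flag tell-tale manipulation artefacts, such as blending seams and warped regions, and outputs a score between 0 (likely real) and 1 (likely manipulated).

```python
# Illustrative sketch only (not Adobe's detection tool): a small CNN that
# classifies an image as real or manipulated. Layer sizes are illustrative.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # low-level artefacts
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # e.g. blending seams
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                               # pool to one feature vector
        )
        self.classifier = nn.Linear(64, 1)  # single logit: how likely the image is fake

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = self.features(image).flatten(1)
        return torch.sigmoid(self.classifier(x))  # score in [0, 1]

# Score a batch of example images between 0 (likely real) and 1 (likely manipulated).
detector = DeepfakeDetector()
scores = detector(torch.randn(4, 3, 224, 224))
print(scores.shape)  # torch.Size([4, 1])
```

A production-grade detector would be trained on large collections of real images and known fakes, and would use a much deeper network; the sketch only shows the basic structure.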

However, we can’t merely rely on software to do the job for us. As deepfake technology is here and getting better every day, it would be prudent for us all to remember to critically assess the authenticity of videos we consume to understand their real intent. This means not just relying on the quality of the video as an indicator of authenticity but also assessing the social context in which it was discovered—who shared it (people and institutions) and what they said about it.

