Artificial Intelligence Is Creating A Fake World — What Does That Mean For Humans?
2 July 2021
“Seeing is believing” – or is it? There was once a time when we could be confident that what we saw in photos and videos was real. Even when Photoshopped images became common, we still knew the images had started as originals. Now, with advances in artificial intelligence, the world is becoming more artificial, and you can’t be sure whether what you see or hear is real or a fabrication of artificial intelligence and machine learning. In many cases this technology is used for good, but now that it exists, it can also be used to deceive.
When AI Fabrication Is Acceptable
Typically, viewers will accept the fabrications of artificial intelligence if they are aware of them. Over the years, many of us have come to accept, for the sake of entertainment, representations of real life on movie and television screens. Now, however, Hollywood is getting an AI assist with scriptwriting. With the growth of machine learning, algorithms can sort through extensive amounts of data to understand which elements make a movie more likely to be an award-winner, a commercial success or a hit with viewers. This is another example of AI making the creative process more efficient for humans, even though in some cases AI is doing the creating all on its own.
Synthetic Voices
With just snippets of audio, machine learning can now mimic someone’s voice, blurring the line between real and fake. This would certainly be helpful in some instances, such as fixing flubbed lines in a movie without calling the actor back on location to re-record, but the opportunity for abuse is also real and easily imagined.
Smart Content
The advent of personalised or smart content is double-edged, and just like any other AI manipulation, it should be transparent to users so they are empowered by the technology rather than misled. Smart content is content that changes depending on who is seeing, reading, watching or listening to it, and it is being tested by Netflix and TikTok, the short-form video app, among others. We are accustomed to search and recommendation engines suggesting ideas based on who we are, but until now each piece of content was the same for every individual who viewed it. Smart content gives every user a different experience.
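As a purely hypothetical illustration of the idea, the short Python sketch below serves the same story with a different headline and thumbnail depending on a simple viewer profile. The profiles, variants and selection rule are invented for this example; real systems such as Netflix’s artwork personalisation rely on learned models rather than a hand-written lookup.

```python
# Hypothetical sketch of "smart content": the same story is presented with a
# different headline and thumbnail depending on a simple viewer profile.
# All names and rules here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Viewer:
    age_group: str        # e.g. "teen", "adult"
    favourite_genre: str  # e.g. "comedy", "thriller"

VARIANTS = {
    "comedy":   {"headline": "The feel-good hit everyone is laughing about",
                 "thumbnail": "cast_smiling.jpg"},
    "thriller": {"headline": "The twist nobody saw coming",
                 "thumbnail": "dark_alley.jpg"},
    "default":  {"headline": "Critics call it the film of the year",
                 "thumbnail": "poster.jpg"},
}

def pick_variant(viewer: Viewer) -> dict:
    """Return the presentation variant most likely to appeal to this viewer."""
    return VARIANTS.get(viewer.favourite_genre, VARIANTS["default"])

# Two viewers see the same story packaged differently.
print(pick_variant(Viewer(age_group="adult", favourite_genre="thriller")))
print(pick_variant(Viewer(age_group="teen", favourite_genre="comedy")))
```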
Deepfake Text
An AI model called GPT-2, created by OpenAI, a nonprofit research organisation backed by Elon Musk and others, is a text generator capable of producing content in the style and tone of the data it was fed, whether that’s a news feed, a work of fiction or another form of writing. The group initially withheld the full model because the results were so realistic that it feared the technology could be misused.
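For readers curious about what this looks like in practice, the minimal sketch below uses the open-source Hugging Face transformers library and the publicly available “gpt2” checkpoint that OpenAI later released; the prompt, sampling settings and output handling are illustrative choices, not OpenAI’s own research code.

```python
# A minimal sketch of GPT-2-style text generation using the open-source
# Hugging Face `transformers` library (requires transformers and PyTorch).
# The "gpt2" checkpoint is the smallest of the models OpenAI eventually released.
from transformers import pipeline

# Load GPT-2 as a ready-made text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The prompt sets the style and tone; the model simply continues it.
prompt = "Breaking news: researchers announced today that"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=2, do_sample=True)

for i, out in enumerate(outputs, start=1):
    print(f"--- sample {i} ---")
    print(out["generated_text"])
```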
Fake Images So Real They’ll Fool You
In an effort to raise awareness of how powerful AI technology has become, Phillip Wang created the website “This Person Does Not Exist.” Every face on the site looks like a real human, but all of them are AI-generated. A similar site, Whichfaceisreal.com, was created to show just how easy it is to fool people about what is real and what is artificial. Both sites showcase the power of technology developed by researchers at NVIDIA Corporation. They used a generative adversarial network (GAN), in which two neural networks compete: one generates artificial images while the other tries to work out which images are fake. Although a few tell-tale signs in some of the generated faces give them away as artificial, many of them are quite convincing.
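To give a sense of how that competition works, here is a toy sketch in PyTorch. It is emphatically not NVIDIA’s StyleGAN, the model behind those face sites, but the same adversarial idea in miniature: a generator learns to imitate a simple one-dimensional “real” distribution while a discriminator learns to tell its output from genuine samples.

```python
# Toy GAN: a generator learns to mimic numbers drawn from N(4, 1.25) while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

latent_dim = 8   # size of the random noise the generator starts from
data_dim = 1     # our "real" data is just single numbers

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # probability that a sample is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, data_dim)   # samples from the real distribution
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: score real samples high and fake samples low.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, latent_dim))
print(f"fake samples: mean={samples.mean():.2f}, std={samples.std():.2f} (target: 4.00, 1.25)")
```

The same tug-of-war, scaled up to millions of parameters and trained on photographs of faces, is what produces images convincing enough to fool a casual viewer.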
Fake Videos Could Be Dangerous
At first blush, the ability of artificial intelligence to create voices, images and videos so lifelike that it’s difficult to tell they are artificial is exciting, intriguing and mind-boggling. But before getting too caught up in how amazing this technology is, we must pause to consider its more nefarious uses.
“Deepfake” technology uses computer-generated audio and video to present something that didn’t actually occur. It has been used to swap the faces of Scarlett Johansson and other notable figures into pornographic films, making it seem as though they were the ones performing in them.
Aside from the personal misrepresentation and possible damage to individuals, some lawmakers are concerned about the misinformation this technology could spread on the internet.
A widely circulated video of President Obama shows how audio and video can be manipulated to give the appearance that a person of authority said something they in fact never did. This kind of deceit could have negative consequences for national security and elections, in addition to damaging personal reputations.
Xinhua, China’s state-run press agency, has already created AI anchors that look like ordinary humans as they report the day’s news. To the general population, and even to experienced observers, these AI anchors appear real, so viewers would assume they are human unless told otherwise.
As AI grows more sophisticated, it will become even more challenging to distinguish between what is real and what is artificial. If “fake” information, whether conveyed in text, photos, audio or video, is spread and accepted as real, it can be put to malicious use. We can no longer be certain that “seeing is believing.”