Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.



How Can You Detect If Content Was Created By ChatGPT And Other AIs?

11 June 2023

Artificial Intelligence (AI) is capable of producing increasingly human-like writing, pictures, music, and video. There have been reports of students using it to cheat, and an industry has emerged around AI-authored books that people pass off as their own work.

However, there is also at least one reported case of a teacher using AI, apparently ineptly, to incorrectly “prove” that his students had cheated – leading him to fail all of them.


There is also a recent case of a photographer winning a competition by submitting an AI-generated picture rather than one he took himself. In this case, the photographer had good intentions and returned his award after exposing what he had done.

Fortunately, some fairly accurate – for the moment – methods exist for detecting where works have been created with the help of AI. In this article, I will look at what tools exist, how they work, as well as why they could be vital for security and for protecting academic and artistic integrity.

Why Is AI Content Detection Important?

As AI-created content becomes more commonplace, its potential to cause disruptive and potentially harmful consequences increases. A prime example is the phenomenon of “deepfakes”: realistic images or videos of real people appearing to do or say things they have never done. There have already been examples of this being used to create pornographic content featuring people without their consent and to put words in the mouths of politicians, including Barack Obama. You can find a video of Trump being arrested (made even before he was) and of Joe Biden singing Baby Shark (which, as far as I know, he has never done!).

Some of this might seem funny, but there’s the potential for it to have damaging consequences for the people involved – or for society at large if it influences democratic processes.

AI has been used to clone human voices to commit fraud. In one case, it was used to attempt to trick a family into believing that their daughter had been kidnapped in order to extort ransom money. In another, a company executive was persuaded to transfer more than $240,000 via a deep-faked voice that he believed to be his boss.

If it’s used by students to cheat on essays and exams, it could damage the integrity of education systems and the reputations of schools and colleges. This could result in students being inadequately prepared for the careers they hope to enter and the devaluation of diplomas and certificates.

All of this highlights the importance of robust countermeasures to educate the public on the dangers of AI and, where possible, to detect or even prevent its misuse. Without this issue being addressed, AI could lead to widespread disinformation, manipulation, and damage. So, what exactly can be done?

Methods for Detecting AI-Generated Content

Fortunately, there are a number of methods available for detecting AI-generated content.

Firstly, there are digital tools that use their own AI algorithms to attempt to determine whether a piece of text, an image, or a video was created using AI.

You can find several AI text detectors freely available online. The AI Content Detector claims to be 97.8% reliable and can examine any piece of text for signs that it wasn’t written by a human. This is done by training the detector on the methods and patterns used by tools like ChatGPT and other Large Language Models when they create text. It then matches this information against the submitted text to attempt to determine if it is natural human writing or AI-created text.

This is possible because, to a computer, AI content is relatively predictable, being based on probabilities. This means that a concept called “perplexity” can be used to work out whether the text uses language that is highly probable or not. If it consistently uses the most probable language, there’s a higher chance it’s created by AI.
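The perplexity idea above can be illustrated with a toy example. Real detectors score text using a large language model’s token probabilities; the sketch below substitutes a tiny unigram word model (an assumption for illustration only, not how any named detector works) so the arithmetic is visible: text built from high-probability words scores lower perplexity than text built from rare ones.

```python
import math
from collections import Counter

def perplexity(tokens, probs):
    # Perplexity = exp(average negative log-probability per token).
    # The more predictable the tokens are to the model, the lower the score.
    avg_nll = -sum(math.log(probs[t]) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

# Toy "language model": unigram probabilities from a 12-word corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())
probs = {word: count / total for word, count in counts.items()}

common = "the sat on".split()  # frequent words -> low perplexity
rare = "cat dog rug".split()   # one-off words  -> high perplexity

print(perplexity(common, probs))  # ≈ 4.76
print(perplexity(rare, probs))    # ≈ 12.0
```

A detector turns this reasoning around: if a whole document consistently scores low perplexity under a large language model, that is evidence the text was generated by (or closely mimics) such a model.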

If you need a high degree of assurance, you can run the same text through multiple AI detectors. Other useful tools are the Writer AI Content Detector and Crossplag.

For detecting Deepfakes, companies including Facebook and Microsoft are collaborating on the Deepfake Detection Challenge. This project regularly releases datasets that can be used to train detection algorithms. It’s also inspired a contest on the collaborative data science portal Kaggle, with users competing to find the most effective algorithms.

Recognizing the threat that AI-generated video and images could pose to national security, military organizations have joined the fight too. The US Defense Advanced Research Projects Agency (DARPA) has created tools that aim to determine whether images have been created or manipulated by AI. One of the tools, known as MediFor, works by comparing the AI-generated image to real-world images, looking for telltale signs such as variations in the effect of lighting and coloring that don’t correspond with reality. Another, known as SemaFor, analyzes the context between pictures and the text captions or news stories accompanying them.

Finally, we shouldn’t overlook the role that human judgment and critical thinking can play in AI content detection. Humans have a sense of “gut instinct” that – while certainly not infallible – can help us when it comes to determining authenticity. Casting a critical eye and applying what we know – is Joe Biden really likely to create a video of himself singing along to Baby Shark? – is essential, rather than delegating all responsibility to machines.

The Future of AI Detection – An Arms Race?

It’s likely we are only witnessing the very early stages of what will be an “arms race” scenario as AI becomes more efficient at creating lifelike content, and the creators of detection tools race to keep up.

This isn’t a battle that will be fought only between technologists. As the implications for society become clearer, governments and citizens’ groups will find they have an important role to play as legislators, educators, and custodians of “the truth.” If we discover that we can no longer trust what we read, see, and hear, our ability to make informed decisions in every walk of life, from politics to science, will be compromised.

Bringing together technological solutions, human judgment, and the informed oversight – and, when necessary, intervention – of regulators and lawmakers will be our best defense against these emerging challenges.


