Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.


ChatGPT: What Are Hallucinations And Why Are They A Problem For AI Systems

22 March 2023

In recent years, the rapid development of artificial intelligence (AI) has led to the rise of sophisticated language models, with OpenAI’s ChatGPT at the forefront. While these powerful tools have revolutionized various industries, they also come with their own set of challenges. One of the most concerning issues is the phenomenon of hallucination. In this article, we will demystify this term, explore why it’s a problem, and discuss ongoing efforts to mitigate it.


What is hallucination in AI?

Hallucination in AI refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context. These outputs often emerge from the AI model's inherent biases, lack of real-world understanding, or training data limitations. In other words, the AI system "hallucinates" information that it has not been explicitly trained on, leading to unreliable or misleading responses.

What are some examples of AI hallucinations?

Here are two examples of what hallucinations in ChatGPT might look like:

User input: "When did Leonardo da Vinci paint the Mona Lisa?" AI-generated response: "Leonardo da Vinci painted the Mona Lisa in 1815." (Incorrect: The Mona Lisa was painted between 1503 and 1506, or perhaps continuing until 1517.)

User input: "Tell me a fact about George Washington." AI-generated response: "George Washington was known for inventing the cotton gin." (Unrelated: Eli Whitney, not George Washington, invented the cotton gin.)

Why is hallucination a problem?

  1. Erosion of trust: When AI systems produce incorrect or misleading information, users may lose trust in the technology, hampering its adoption across various sectors.
  2. Ethical concerns: Hallucinated outputs can potentially perpetuate harmful stereotypes or misinformation, making AI systems ethically problematic.
  3. Impact on decision-making: AI systems are increasingly used to inform critical decisions in fields such as finance, healthcare, and law. Hallucinations can lead to poor choices with serious consequences.
  4. Legal implications: Inaccurate or misleading outputs may expose AI developers and users to potential legal liabilities.

Efforts to address hallucination in AI

There are various ways these models can be improved to reduce hallucinations. These include:

  1. Improved training data: Ensuring that AI systems are trained on diverse, accurate, and contextually relevant datasets can help minimize the occurrence of hallucinations.
  2. Red teaming: AI developers can simulate adversarial scenarios to test the AI system's vulnerability to hallucinations and iteratively improve the model.
  3. Transparency and explainability: Providing users with information on how the AI model works and its limitations can help them understand when to trust the system and when to seek additional verification.
  4. Human-in-the-loop: Incorporating human reviewers to validate the AI system's outputs can mitigate the impact of hallucinations and improve the overall reliability of the technology (a minimal code sketch of this pattern is shown below).
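
To make the last of these ideas concrete, here is a minimal, illustrative sketch of a human-in-the-loop pattern: the model drafts an answer, but a human reviewer must approve or correct it before it is released. It again assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the function names and console-based review step are hypothetical choices for illustration, not a prescribed implementation.

```python
# Illustrative human-in-the-loop sketch: a model-generated draft is held until
# a human reviewer approves or corrects it. Assumes the `openai` Python SDK
# (v1+) and an OPENAI_API_KEY environment variable; names are hypothetical.
from openai import OpenAI

client = OpenAI()


def draft_answer(question: str) -> str:
    """Ask the model for a draft answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return response.choices[0].message.content


def human_review(question: str, draft: str) -> str:
    """Show the draft to a human reviewer, who approves it or supplies a correction."""
    print(f"Question: {question}")
    print(f"Draft answer: {draft}")
    verdict = input("Approve this draft? [y/N] ").strip().lower()
    if verdict == "y":
        return draft
    return input("Enter the corrected answer: ")


if __name__ == "__main__":
    q = "Tell me a fact about George Washington."
    final_answer = human_review(q, draft_answer(q))
    print(f"Released answer: {final_answer}")
```

In a production setting the console prompt would typically be replaced by a review queue or moderation dashboard, but the underlying idea is the same: hold back model output until a person has confirmed it.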

As ChatGPT and similar AI systems become more prevalent, addressing the phenomenon of hallucination is essential for realizing the full potential of these technologies. By understanding the causes of hallucination and investing in research to mitigate its occurrence, AI developers and users can help ensure that these powerful tools are used responsibly and effectively.
