Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’



Is Artificial Intelligence (AI) Dangerous And Should We Regulate It Now?

2 July 2021

Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but a technology in everyday use that affects our lives, there is renewed discussion about how this exciting yet powerful and potentially problematic technology should be regulated. On one side of the issue are those who feel it's premature to begin the discussion; at the other end of the spectrum are those who feel the discussion is already dreadfully behind.

No matter where you stand on the issue, we can all agree that AI is here to stay. So, however challenging the regulatory rabbit hole is to navigate, it is now time to start seriously considering what needs to be in place at national and international levels to regulate AI.

Why do we need artificial intelligence (AI) regulation?

Proponents of AI regulation such as Stephen Hawking feared that AI could destroy humanity if we aren't proactive in avoiding the risks of unfettered AI, such as "powerful autonomous weapons, or new ways for the few to oppress the many." He saw regulation as the key to allowing AI and humankind to co-exist in a peaceful and productive future. Bill Gates is also "concerned about super intelligence" and doesn't "understand why some people are not concerned." The trifecta is completed by Elon Musk, who says we should regulate artificial intelligence "before it's too late." In 2015, 20,000 people, including robotics and AI researchers, intellectuals, activists, Stephen Hawking and Elon Musk, signed an open letter, presented at the International Joint Conference on Artificial Intelligence, that called for the United Nations to ban further development of weaponised AI that could operate "beyond meaningful human control."

Our society is already affected by the explosion of AI algorithms deployed by financial institutions, employers, government agencies, police departments and more. These algorithms can and do make decisions that create significant and serious problems in people's lives. A schoolteacher who had previously received rave performance reviews was fired after her district implemented an algorithm to assess teacher performance; the district couldn't explain the result except to suggest that other teachers were "gaming" the system. An anti-terrorism facial recognition program led to the revocation of an innocent man's driver's license when it confused him with another driver.

Artificial intelligence is maturing quickly, while government and regulatory decisions move at a very slow pace. What Musk and others believe is that now is the time to start debating what AI regulation should look like and how it should work, so that we aren't too far behind when regulation is actually passed. If nothing else, regulatory bodies and oversight agencies should be formed, even if regulation isn't yet instituted, so that they can become properly informed and be prepared to make decisions when necessary.

Why is it premature to regulate artificial intelligence (AI)?

Many people feel it's premature to talk about AI regulation because there is nothing specific that requires regulation yet. Even though there have been tremendous innovations in AI, the field is very much in its infancy. Regulation could stifle innovation in an exploding industry, and Trouva co-founder Alex Loizou believes we need to understand AI's full potential before it is regulated.

One study from Stanford University found “that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

What could artificial intelligence (AI) regulation look like?

In 2018, Britain, along with other European member states, took a first foray into artificial intelligence legislation that allows automated decisions to be challenged. The General Data Protection Regulation (GDPR) represents an initial step towards creating laws around how AI decisions can be challenged, aiming to prevent the perils of profiling and discrimination and to give people the right to find out what logic was involved in decisions made about them, known as the "right to explanation." There is a realisation that if any standard or guideline is to have any power, oversight will need to be assigned to a governing body to ensure that people follow it.

Currently, those debating what AI regulations might look like have considered questions such as the following:

  • AI should not be weaponised.
  • There should be an impenetrable “off-switch” that can be deployed by humans.
  • AI should be governed under the same rules as humans.
  • Manufacturers should agree to abide by general ethical guidelines mandated by international regulation.
  • There should be understanding of how AI logic and decisions are made.
  • Should AI itself be liable if something goes wrong?

These are complex questions with no easy answer.

Do you think we need AI regulation now or is it too early? Give me your thoughts in the comments below.
