Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


Is Artificial Intelligence (AI) Dangerous And Should We Regulate It Now?

2 July 2021

Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but a technology in active use that affects our everyday lives, there is renewed discussion about how this exciting yet powerful and potentially problematic technology should be regulated. On one side of the issue are those who feel it's premature to begin the discussion; at the other end of the spectrum are those who feel the discussion is dreadfully overdue.

No matter where you stand on the issue, we can all agree that AI is here to stay. So regardless of how challenging the regulatory questions are, it is now time to start seriously considering what needs to be in place at a national and international level to regulate AI.

Why do we need artificial intelligence (AI) regulation?

Proponents of AI regulation such as Stephen Hawking fear that AI could destroy humanity if we aren't proactive in avoiding the risks of unfettered AI, such as "powerful autonomous weapons, or new ways for the few to oppress the many." He sees regulation as the key to allowing AI and humankind to co-exist in a peaceful and productive future. Bill Gates is also "concerned about super intelligence" and doesn't "understand why some people are not concerned." The trifecta is completed by Elon Musk, who argues we should regulate artificial intelligence "before it's too late." In 2015, 20,000 people, including robotics and AI researchers, intellectuals, activists, Stephen Hawking and Elon Musk, signed an open letter, presented at the International Conference on Artificial Intelligence, that called for the United Nations to ban further development of weaponised AI that could operate "beyond meaningful control."

Our society is already affected by the explosion of AI algorithms deployed by financial institutions, employers, government systems, police forces and more. These algorithms can and do make decisions that create significant and serious problems in people's lives. A school teacher who had previously received rave performance reviews was fired after her district implemented an algorithm to assess teacher performance; the school couldn't explain why, except to suggest that others were "gaming" the system. An anti-terrorism facial recognition programme revoked the driver's licence of an innocent man after confusing him with another driver.

Artificial intelligence is maturing quickly, while government and regulatory decisions move at a very slow pace. Musk and others believe that now is the time to start debating what AI regulation should look like, so that we aren't too far behind when regulation is actually passed. If nothing else, regulatory bodies and oversight agencies should be formed, even before any regulation is instituted, so that they can become properly informed and be prepared to make decisions when necessary.

Why is it premature to regulate artificial intelligence (AI)?

Many people feel it's premature to talk about AI regulation because nothing specific requires regulation yet. Even though there have been tremendous innovations in the AI world, the field is very much in its infancy. Regulation could stifle innovation in an industry that is exploding, and Trouva co-founder Alex Loizou believes we need to understand AI's full potential before it is regulated.

One study from Stanford University found “that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

What could artificial intelligence (AI) regulation look like?

In 2018, Britain, along with other European member states, took a first foray into artificial intelligence legislation that allows automated decisions to be challenged. The General Data Protection Regulation (GDPR) represents an initial step towards creating laws around how AI decisions can be challenged, guarding against the perils of profiling and discrimination, and giving people the right to find out what logic was involved in decisions made about them, known as the "right to explanation." There is also a realisation that if any standard or guideline is to carry any power, oversight of compliance will need to be assigned to a governing body.

Currently, those that are debating what AI regulations might look like have considered some of the following:

  • AI should not be weaponised.
  • There should be an impenetrable “off-switch” that can be deployed by humans.
  • AI should be governed under the same rules as humans.
  • Manufacturers should agree to abide by general ethical guidelines mandated by international regulation.
  • There should be understanding of how AI logic and decisions are made.
  • It should be established whether AI can be held liable if something goes wrong.

These are complex questions with no easy answer.

Do you think we need AI regulation now or is it too early? Give me your thoughts in the comments below.


