Is Artificial Intelligence (AI) Dangerous And Should We Regulate It Now?

Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but a technology in use and impacting our everyday lives, there is renewed discussion about how this exciting, powerful and potentially problematic technology should be regulated. On one side of the issue are those who feel it's premature to begin the discussion; at the other end of the spectrum are those who feel the discussion is dreadfully overdue.

No matter where you stand on the issue, we can all agree that AI is here to stay. However challenging the regulatory rabbit hole may be, it is now time to start seriously considering what needs to be in place at national and international levels to regulate AI.

Why do we need artificial intelligence (AI) regulation?

Proponents of AI regulation such as Stephen Hawking fear that AI could destroy humanity if we aren't proactive in avoiding the risks of unfettered AI, such as "powerful autonomous weapons, or new ways for the few to oppress the many." He sees regulation as the key to allowing AI and humankind to co-exist in a peaceful and productive future. Bill Gates is also "concerned about super intelligence" and doesn't "understand why some people are not concerned." The trifecta is complete with Elon Musk, who says we should regulate artificial intelligence "before it's too late." In 2015, 20,000 people, including robotics and AI researchers, intellectuals, activists, Stephen Hawking and Elon Musk, signed an open letter, presented at the International Conference on Artificial Intelligence, that called for the United Nations to ban further development of weaponised AI that could operate "beyond meaningful control."

Our society is already affected by the explosion of AI algorithms deployed by financial institutions, employers, government agencies, police forces and more. These algorithms can and do make decisions that create significant and serious problems in people's lives. A school teacher who had previously received rave performance reviews was fired after her district implemented an algorithm to assess teacher performance; the school couldn't explain the result except to suggest that others were "gaming" the system. An anti-terrorism facial recognition program caused an innocent man's driver's license to be revoked when it confused him with another driver.

Artificial intelligence is maturing quickly, while government and regulatory decisions move at a very slow pace. What Musk and others believe is that the time is now to start debating how and what AI regulation will look like, so that we aren't too far behind when regulation is actually passed. If nothing else, regulatory bodies and oversight agencies should form even if regulation isn't instituted immediately, so that they can become properly informed and be prepared to make decisions when necessary.

Why is it premature to regulate artificial intelligence (AI)?

Many people feel it's premature to talk about AI regulation because nothing specific requires regulation yet. Even though there have been tremendous innovations in AI, the field is very much in its infancy. Regulation could stifle innovation in an industry that is exploding, and Trouva co-founder Alex Loizou believes we need to understand AI's full potential before it is regulated.

One study from Stanford University found “that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

What could artificial intelligence (AI) regulation look like?

In 2018, Britain, along with other European member states, took a first foray into artificial intelligence legislation that allows automated decisions to be challenged. The General Data Protection Regulation (GDPR) represents an initial step towards creating laws around how AI decisions can be challenged, guarding against the perils of profiling and discrimination and giving people the right to find out what logic was involved in decisions made about them, known as the "right to explanation." There is a realisation that if any standard or guideline is to have any power, oversight for compliance will need to be assigned to a governing body.

Currently, those that are debating what AI regulations might look like have considered some of the following:

  • AI should not be weaponised.
  • There should be an impenetrable “off-switch” that can be deployed by humans.
  • AI should be governed under the same rules as humans.
  • Manufacturers should agree to abide by general ethical guidelines mandated by international regulation.
  • There should be understanding of how AI logic and decisions are made.
  • Rules for who is liable if something goes wrong.

These are complex questions with no easy answers.

Do you think we need AI regulation now or is it too early? Give me your thoughts in the comments below.

Written by

Bernard Marr

Bernard Marr is a bestselling author, keynote speaker, and advisor to companies and governments. He has worked with and advised many of the world's best-known organisations. LinkedIn has recently ranked Bernard as one of the top 10 Business Influencers in the world (in fact, No 5 - just behind Bill Gates and Richard Branson). He writes on the topics of intelligent business performance for various publications including Forbes, HuffPost, and LinkedIn Pulse. His blogs and SlideShare presentation have millions of readers.
