Is Artificial Intelligence (AI) Dangerous And Should We Regulate It Now?
2 July 2021
Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but a technology in use and impacting our everyday lives, there is renewed discussion about how this exciting, powerful and potentially problematic technology should be regulated. On one side of the issue are those who feel it's premature to begin the discussion; at the other end of the spectrum are those who feel the discussion is already dreadfully behind.
No matter where you stand on the issue, we can all agree that AI is here to stay. However challenging the regulatory rabbit hole may be, it is now time to start seriously considering what needs to be in place at the national and international level to regulate AI.
Why do we need artificial intelligence (AI) regulation?
Proponents of AI regulation such as Stephen Hawking fear that AI could destroy humanity if we aren't proactive in avoiding the risks of unfettered AI, such as "powerful autonomous weapons, or new ways for the few to oppress the many." He sees regulation as the key to allowing AI and humankind to co-exist in a peaceful and productive future. Bill Gates is also "concerned about super intelligence" and doesn't "understand why some people are not concerned." The trifecta is complete with Elon Musk, who states we should regulate artificial intelligence "before it's too late." In 2015, 20,000 people, including robotics and AI researchers, intellectuals, activists, Stephen Hawking and Elon Musk, signed an open letter, presented at the International Joint Conference on Artificial Intelligence, that called for the United Nations to ban further development of weaponised AI that could operate "beyond meaningful control."
Our society is already impacted by the explosion of AI algorithms deployed by financial institutions, employers, government systems, police forces and more. These algorithms can and do make decisions that create significant and serious issues in people's lives. A school teacher who had previously received rave performance reviews was fired after her district implemented an algorithm to assess teacher performance; the district couldn't explain why, other than suggesting that others were "gaming" the system. An anti-terrorism facial recognition program revoked the driver's license of an innocent man when it confused him with another driver.
Artificial intelligence is maturing quickly, while government and regulatory decisions move at a very slow pace. Musk and others believe that now is the time to start debating how and what AI regulation will look like, so that we aren't too far behind when regulation is actually passed. If nothing else, regulatory bodies and oversight agencies should be formed, even if regulation isn't immediately instituted, so that they can become properly informed and be prepared to make decisions when it's necessary.
Why is it premature to regulate artificial intelligence (AI)?
Many people feel it’s premature to talk about AI regulation because there is nothing specific that requires regulation yet. Even though there have been tremendous innovations in AI, the field is very much in its infancy. Regulation could stifle innovation in an industry that is exploding; Trouva co-founder Alex Loizou, for example, believes we need to understand the technology’s full potential before it is regulated.
One study from Stanford University found “that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”
What could artificial intelligence (AI) regulation look like?
In 2018, Britain, along with other European member states, took its first foray into artificial intelligence legislation that allows automated decisions to be challenged. The General Data Protection Regulation (GDPR) represents an initial step toward creating laws around how AI decisions can be challenged, with the aim of preventing the perils of profiling and discrimination and giving people the right to find out what logic was involved in decisions made about them, known as the “right to explanation.” There is a realisation that if any type of standard or guideline is to have any power, oversight will need to be assigned to a governing body to ensure the guidelines are followed.
Currently, those who are debating what AI regulations might look like have considered some of the following:
- AI should not be weaponised.
- There should be an impenetrable “off-switch” that can be deployed by humans.
- AI should be governed under the same rules as humans.
- Manufacturers should agree to abide by general ethical guidelines mandated by international regulation.
- There should be understanding of how AI logic and decisions are made.
- Whether AI should be liable if something goes wrong.
These are complex questions with no easy answers.
Do you think we need AI regulation now or is it too early? Give me your thoughts in the comments below.