The Life And Death Decision AI Robots Will Have To Make

How comfortable are we leaving life and death decisions up to robots? While machines can crunch all the data, humans must still programme how that data is used. That means we need to grapple with these scenarios ourselves in order to instruct machines on how to decide life and death matters. From autonomous cars to drones choosing targets to robotic doctors, we're at the point where many are contemplating the life and death decisions AI robots will have to make.

MIT’s Moral Machine

At first, the decisions we imagine machines needing to make don't seem that troubling. However, researchers at MIT's Media Lab offer a glimpse of the ethical considerations that autonomous vehicles will force us to face through their Moral Machine, the most extensive global ethics study ever conducted. Should an autonomous vehicle break the law to avoid hitting a pedestrian? What if that act puts the car's passengers in danger? Whose life is more important? Does the answer change if the pedestrian was crossing the road illegally? These questions are challenging to answer, and there is rarely consensus on what the moral answer is, especially across different cultures.

Autonomous car decision-making

Although autonomous vehicles are expected to reduce the number of accidents on our roadways by as much as 90% according to a McKinsey & Company report, accidents are still possible, and we need to consider how to programme machines to respond to them. Besides, we need to determine who is responsible for deciding how to programme the vehicles, whether that's consumers, politicians, the market, insurance companies or someone else. If an autonomous car encounters an obstacle in the road, it can respond in a variety of ways, from staying the course and risking a collision to swerving into another lane and hitting a car, killing its passengers. Should the decision about which lane to swerve into change based on who is in the vehicles? Maybe the person who would be killed is a parent or a notable scientist. What if there are children in the car? Perhaps the decision on how to avoid the obstacle should simply be made by a flip of a coin, choosing randomly from the options. These are all dilemmas we need to address as we build and design autonomous systems. Another wrinkle: the decision-making algorithms will also need to account for accidents that cause loss of limbs, diminished mental capacity and other disabilities, not just fatalities.
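To make the coin-flip idea above concrete, here is a minimal, purely illustrative sketch. It is a hypothetical toy, not any real vehicle's logic: the function names and the list of manoeuvres are invented for the example, and the only point it demonstrates is choosing uniformly at random among avoidance options rather than ranking the lives involved.

```python
import random

def choose_maneuver(options):
    """Pick one avoidance manoeuvre uniformly at random.

    This embodies the article's 'flip of a coin' proposal: when every
    option carries some harm, no option is weighted above another.
    """
    return random.choice(options)

# Hypothetical options an obstacle-avoidance system might face.
options = ["stay_course", "swerve_left", "swerve_right"]
maneuver = choose_maneuver(options)
print(maneuver)  # one of the three options, chosen at random
```

The simplicity is the point: a random choice sidesteps the question of whose life matters more, but it also means the outcome is deliberately arbitrary, which many would find just as troubling.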

Military drones deciding targets

With the U.S. Army’s announcement that it's developing drones that can spot and target vehicles and people using artificial intelligence, the prospect of machines deciding who to kill is no longer a storyline from science fiction but soon to be a reality. Currently, drones are controlled and piloted by humans who ultimately make the final decisions about where a bomb is dropped or a missile is fired. International humanitarian law allows "dual-use facilities," those that create products for both civilian and military use, to be attacked. When drones enter combat, would tech companies and their employees be considered fair targets? A key feature of autonomous systems is that they get better over time based on the data and performance feedback they receive. As autonomous drone technology is refined, will we need to set an acceptable limit on that self-development to avoid creating an indiscriminate killing machine?

What are the implications when robo-doc is on call?

Much has been written about the medical breakthroughs artificial intelligence systems can produce for disease diagnosis, personalised treatment plans and drug protocols. The potential for AI to help with challenging cases is extraordinary, but what happens when your human doctor and your robo-doc are not aligned? Do you trust one over the other? Will insurance companies deny coverage if you don't adhere to what the AI system tells you? When should critical medical decisions be delegated to AI algorithms, and who makes the final decision: doctors, patients or machines? As machines get better at medical decision-making, we might hit a point where it's impossible for the programmers or the doctors to understand the machine's reasoning. The more we relinquish our medical knowledge and control to AI, the more difficult it becomes to spot errors in its decision-making.

These are difficult questions to answer, whether you're talking about traffic safety, military operations or what happens in the healthcare system. Just like humans, AI machines will enable great creation while also being capable of devastating destruction. Because machines have no moral compass of their own, humans must thoughtfully consider how to programme humanity and morality into the algorithms.


Written by

Bernard Marr

Bernard Marr is a bestselling author, keynote speaker, and advisor to companies and governments. He has worked with and advised many of the world's best-known organisations. LinkedIn has recently ranked Bernard as one of the top 10 Business Influencers in the world (in fact, No 5, just behind Bill Gates and Richard Branson). He writes on the topics of intelligent business performance for various publications including Forbes, HuffPost, and LinkedIn Pulse. His blogs and SlideShare presentations have millions of readers.
