Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


The Life And Death Decision AI Robots Will Have To Make

2 July 2021

How comfortable are we leaving life-and-death decisions up to robots? While machines can crunch all the data, humans must programme them to use that data. That means we as humans need to grapple with these scenarios ourselves in order to instruct machines on how to decide life-and-death matters. From autonomous cars, to drones choosing which targets to hit, to robotic doctors, we are at the point where many are contemplating the life-and-death decisions AI robots will have to make.

MIT’s Moral Machine

At first, the decisions we imagine machines needing to make don’t seem that troubling. However, through their Moral Machine, the most extensive global ethics study ever conducted, researchers at MIT’s Media Lab give us a glimpse of some of the ethical considerations that will need to be faced once autonomous vehicles are on the road. Should an autonomous vehicle break the law to avoid hitting a pedestrian? What if that act puts the car’s passengers in danger? Whose life is more important? Does the answer change if the pedestrian was crossing the road illegally? These questions are challenging to answer, and there is rarely consensus on the moral answer, especially across different cultures.

Autonomous car decision-making

Although autonomous vehicles are expected to reduce the number of road accidents by as much as 90%, according to a McKinsey & Company report, accidents will still happen, and we need to consider how to programme machines to respond. We also need to determine who is responsible for deciding how the machines are programmed, whether that is consumers, politicians, the market, insurance companies or someone else. If an autonomous car encounters an obstacle in the road, it can respond in a variety of ways, from staying the course and risking a collision to swerving into another lane and hitting a car, possibly killing its passengers. Should the decision about which lane to swerve into change based on who is in the vehicles? Perhaps the person who would be killed is a parent or a notable scientist. What if there are children in the car? Or perhaps the choice between options should simply be made by the flip of a coin, selecting randomly among them. These are all dilemmas we need to address as we build and design autonomous systems. A further wrinkle is that the decision-making algorithms also need to account for accidents that cause not death but loss of limbs, mental capacity or other disabilities.
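To make the dilemma concrete, here is a purely hypothetical toy sketch of the kind of policy a designer would have to write down explicitly. The manoeuvre names, the harm scores and the random tie-break are all illustrative assumptions, not how any real vehicle works; the point is that someone must choose what the scores mean and how ties are broken.

```python
import random

def choose_manoeuvre(options, seed=None):
    """Pick the least-harmful manoeuvre from a hypothetical set of options.

    options: dict mapping a manoeuvre name to an estimated harm score
    (lower means less predicted harm). Ties between equally harmful
    manoeuvres are broken at random -- the 'flip of a coin' approach
    discussed above. Every number here encodes a moral judgement that
    a human designer, not the machine, has made.
    """
    rng = random.Random(seed)
    least_harm = min(options.values())
    candidates = [m for m, harm in options.items() if harm == least_harm]
    return rng.choice(candidates)

# Illustrative scenario: swerving left or right is judged equally
# harmful, staying the course is worse; chance resolves the tie.
choice = choose_manoeuvre(
    {"stay_course": 0.9, "swerve_left": 0.4, "swerve_right": 0.4},
    seed=42,
)
```

Even this trivial sketch forces the uncomfortable questions from the paragraph above: who assigns the harm scores, and is a random tie-break more or less defensible than weighing who is in each vehicle?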

Military drones deciding targets

With the U.S. Army’s announcement that it is developing drones that can spot and target vehicles and people using artificial intelligence, the prospect of machines deciding whom to kill is no longer a science-fiction storyline but soon to be a reality. Currently, drones are controlled and piloted by humans, who ultimately make the final decision about where a bomb is dropped or a missile is fired. International humanitarian law allows “dual-use facilities”, those that create products for both civil and military use, to be attacked. When drones enter combat, would tech companies and their employees be considered fair targets? A key feature of autonomous systems is that they improve over time based on the data and performance feedback they receive. As autonomous drone technology is refined, will we need to define an acceptable stage of self-development to avoid creating a killing machine?

What are the implications when robo-doc is on call?

Much has been written about the medical breakthroughs artificial intelligence systems can produce for disease diagnosis and personalised treatment plans and drug protocols. The potential for AI to help with challenging cases is extraordinary, but what happens when your human doctor and your robo-doc are not aligned? Do you trust one over the other? Will insurance companies deny coverage if you don’t adhere to what the AI system tells you? When should critical medical decisions be delegated to AI algorithms and who ultimately gets the final decision—doctors, patients or machines? As machines get better at medical decision-making, we might hit a point where it’s impossible for the programmers or the doctors to understand the machine’s decision-making process. The more we relinquish our medical knowledge and control to AI, the more difficult it becomes to spot errors in AI decision-making. 

These are difficult questions to answer, whether you’re talking about traffic safety, military operations or what happens in the healthcare system. Just like humans, AI machines will enable great creation while also being capable of devastating destruction. Because machines have no innate moral compass, it falls to humans to consider thoughtfully how to programme humanity and morality into the algorithms.
