Today, artificial intelligence is essential across a wide range of industries, including healthcare, retail, manufacturing, and even government.
But AI also raises ethical challenges, and we need to stay vigilant about these issues to make sure that artificial intelligence isn’t doing more harm than good.
Here are some of the biggest ethical challenges of artificial intelligence.
Biases in AI

We need data to train our artificial intelligence algorithms, and we need to do everything we can to eliminate bias in that data.
The ImageNet database, for example, has far more white faces than non-white faces. When we train our AI algorithms to recognize facial features using a database that doesn’t include the right balance of faces, the algorithm won’t work as well on non-white faces, creating a built-in bias that can have a huge impact.
I believe it’s important that we eliminate as much bias as possible as we train our AI, instead of shrugging our shoulders and assuming that we’re training our AI to accurately reflect our society. That work begins with being aware of the potential for bias in our AI solutions.
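That awareness can start with something as simple as auditing a training set before using it. As a minimal sketch of the idea (the group labels, counts, and warning threshold here are hypothetical, not taken from any real benchmark):

```python
from collections import Counter

def audit_label_balance(labels, warn_ratio=2.0):
    """Flag classes that are over- or under-represented in a training set.

    labels: iterable of class labels (e.g. demographic groups in a face dataset).
    warn_ratio: how far a class may drift from an even share before we warn.
    """
    counts = Counter(labels)
    expected = len(labels) / len(counts)  # even share per class
    warnings = {}
    for label, count in counts.items():
        ratio = count / expected
        if ratio > warn_ratio or ratio < 1 / warn_ratio:
            warnings[label] = round(ratio, 2)
    return warnings

# Hypothetical, deliberately skewed sample: one group dominates the data.
sample = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(audit_label_balance(sample))
```

A real audit would go much further (intersectional groups, label quality, downstream error rates per group), but even this crude check makes a skewed dataset visible before any training run.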
Control and the Morality of AI
As we use more and more artificial intelligence, we are asking machines to make increasingly important decisions.
For example, right now, there is an international convention that dictates the use of autonomous drones. If you have a drone that could potentially fire a missile and kill someone, there needs to be a human in the decision-making loop before the missile is launched. So far, we have gotten around some of the critical control problems of AI with a patchwork of rules and regulations like this.
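That human-in-the-loop rule amounts to a simple approval gate between the autonomous system and any high-stakes action. A sketch of the pattern (the function names and actions are hypothetical illustrations, not any real control system):

```python
def execute_critical_action(action, human_approver):
    """Run a high-stakes action only after an explicit human decision.

    action: description of what the autonomous system wants to do.
    human_approver: callable that returns True only if a human signs off.
    """
    if human_approver(action):
        return f"EXECUTED: {action}"
    return f"BLOCKED: {action} (no human approval)"

# A reviewer policy that rejects everything by default.
always_deny = lambda action: False
print(execute_critical_action("fire missile", always_deny))

# A reviewer policy that approves only an explicitly allowed action.
approve_surveillance_only = lambda action: action == "record video"
print(execute_critical_action("record video", approve_surveillance_only))
```

The design choice worth noticing is that the gate defaults to refusal: nothing happens unless a human affirmatively approves, which is exactly the property the drone convention is meant to guarantee.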
The problem is that AIs increasingly have to make split-second decisions. In high-frequency trading, for example, over 90% of all financial trades are now driven by algorithms, leaving no time to put a human being in the loop for each decision.

The same is true for autonomous cars. They need to react immediately if a child runs out onto the road, so the AI must be in control of the situation. This creates interesting ethical challenges around AI and control.
Privacy

Privacy (and consent) for using data has long been an ethical dilemma of AI. We need data to train AIs, but where does this data come from, and how do we use it? We often assume that all the data comes from adults with full mental capabilities who can make choices for themselves about the use of their data, but that isn’t always the case.
For example, Barbie now has an AI-enabled doll that children can speak to. What does this mean in terms of ethics? There is an algorithm that is collecting data from your child’s conversations with this toy. Where is this data going, and how is it being used?
As we have seen a lot in the news recently, there are also many companies that collect data and sell it to other companies. What are the rules around this kind of data collection, and what legislation might need to be put in place to protect users’ private information?
The Power Balance

Huge companies like Amazon, Facebook, and Google are using artificial intelligence to squash their competitors and become virtually unstoppable in the marketplace. Countries like China also have ambitious AI strategies that are supported by the government. President Putin of Russia has said, “Whoever wins the race in AI will probably become the ruler of the world.”
How do we make sure the monopolies we’re generating are distributing wealth equally and that we don’t have a few countries that race ahead of the rest of the world? Balancing that power is a serious challenge in the world of AI.
Ownership

Who is responsible for some of the things that AIs are creating?
We can now use artificial intelligence to create text, bots, or even deepfake videos that can be misleading. Who owns that material, and what do we do with this kind of fake news if it spreads across the internet?
We also have AIs that can create art and music. When an AI writes a new piece of music, who owns it? Who has the intellectual property rights for it, and should potentially get paid for it?
Environmental Impact

Sometimes we don’t think about the environmental impact of AI. We assume we are simply training an algorithm on data in the cloud and then deploying the result to run recommendation engines on our websites. However, the data centers that run our cloud infrastructure are extremely power-hungry.
Training a single large AI model, for example, can generate around 17 times more carbon emissions than the average American produces in a whole year.
How can we use this energy for the highest good and use AI to solve some of the world’s biggest and most pressing problems? If we are only using artificial intelligence because we can, we might have to reconsider our choices.
Humanity

My final challenge is this: how does AI make us feel as humans? Artificial intelligence has become so fast, powerful, and efficient that it can leave humans feeling inferior. This issue may challenge us to think about what it actually means to be human.
AI will also continue to automate more of our jobs. What will our contribution be, as human beings? I don’t think artificial intelligence will ever replace all our jobs, but AI will augment them. We need to get better at working alongside smart machines so we can manage the transition with dignity and respect for people and technology.
These are some of the key ethical challenges that we all need to think about very carefully when it comes to AI.