Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


How Do We Use Artificial Intelligence Ethically?

20 September 2021

I’m hugely passionate about artificial intelligence (AI), and I’m proud to say that I help companies use AI to do amazing things in the world.

But we must make sure we use AI responsibly, so we can make the world a better place. In this post, I’m going to give you some tips for making sure you apply AI ethically within your organization.


1. Start with education and awareness about AI.

Communicate clearly with people (externally and internally) about what AI can do and its challenges. It is possible to use AI for the wrong reasons, so organizations need to figure out the right purposes for using AI and how to stay within predefined ethical boundaries. Everyone across the organization needs to understand what AI is, how it can be used, and what its ethical challenges are.

2. Be transparent.

This is one of the biggest things I stress with every organization I work with. Every organization needs to be open and honest (both internally and externally) about how they’re using AI.

One of my clients, the Royal Bank of Scotland, wanted to use AI to improve some of the services they provide to their clients. When they began their initiative, they were (and continue to be) transparent and clear with their customers about what data they were collecting, how that data was being used, and what benefits the customers were getting from it.

When I look at the Cambridge Analytica scandal, I feel like a big part of the problem was Facebook’s lack of transparency about how they were using AI and how they were collecting and using their customers’ data. A clear AI communication policy could have prevented many of those problems before they even happened.

Customers need to trust the companies they work with – and that requires full transparency about how AI fits into the company’s overall strategy and how it affects customers.

3. Control for bias.

As much as possible, organizations need to make sure the data they're using is not biased.

For instance, several widely used facial-image datasets have been found to contain far more white faces than non-white faces, so AI systems trained on that data worked better on white faces than on non-white ones.

Creating better data sets and better algorithms is not just an opportunity to use AI ethically – it’s also a way to try to address some racial and gender biases in the world on a larger scale.
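Checking for this kind of imbalance can start very simply: count how many examples each demographic group contributes, and compare the model's accuracy across groups. Here is a minimal sketch in Python; the group names, records, and the model's predictions are all hypothetical stand-ins, not real data.

```python
# A minimal bias-audit sketch: count examples per demographic group and
# compare a model's accuracy across groups. All group labels and records
# below are illustrative assumptions, not a real dataset.
from collections import Counter

# Hypothetical evaluation records: (group, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

def audit(records):
    """Return {group: (example_count, accuracy)} for each group."""
    counts = Counter(g for g, _, _ in records)
    correct = Counter(g for g, y, p in records if y == p)
    return {g: (counts[g], correct[g] / counts[g]) for g in counts}

for group, (n, acc) in audit(records).items():
    print(f"{group}: {n} examples, accuracy {acc:.2f}")
```

A large gap in either the counts or the per-group accuracy is a signal to collect more representative data before deploying the system.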

4. Make it explainable.

Can your artificial intelligence algorithms be explained?

When we use modern AI tools like deep learning, they can be “black boxes” where humans don’t really understand the decision-making processes within their algorithms. Companies feed them data, the AIs learn from that data, and then they make a decision.

But if you use deep learning algorithms to determine who should get healthcare treatment and who shouldn’t, or who should be allowed to go on parole and who shouldn’t, these are enormously consequential decisions with huge implications for individual lives.

It is increasingly important for organizations to understand exactly how the AI makes decisions and be able to explain those systems. A lot of work has recently gone into the development of explainable AIs. We now have ways to better explain even the most complicated deep learning systems, so there’s no excuse for having a continued air of confusion or mystery around your algorithms.
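One of the simplest explainability techniques is sensitivity analysis: treat the model as a black box, nudge each input feature, and see how much the output changes. The sketch below illustrates the idea; the scoring function and feature names are hypothetical stand-ins for a real trained model.

```python
# A minimal explainability sketch: perturbation-based sensitivity analysis.
# `black_box` stands in for an opaque trained model; we only call it, never
# inspect its internals. The features and weights are illustrative.
def black_box(features):
    # Opaque scoring function standing in for a trained model.
    income, debt, age = features
    return 0.5 * income - 0.3 * debt + 0.1 * age

def sensitivity(model, features, names, delta=1.0):
    """Return how much the model's score moves when each feature is
    nudged by `delta`, holding the others fixed."""
    base = model(features)
    effects = {}
    for i, name in enumerate(names):
        perturbed = list(features)
        perturbed[i] += delta
        effects[name] = model(perturbed) - base
    return effects

effects = sensitivity(black_box, [10.0, 5.0, 40.0], ["income", "debt", "age"])
for name, effect in effects.items():
    print(f"{name}: {effect:+.2f}")
```

Features with a larger absolute effect matter more to this particular decision, which gives you a plain-language starting point for explaining the outcome. Production systems typically use more rigorous versions of this idea, such as permutation importance or Shapley-value methods.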

5. Make it inclusive.

At the moment, the field of AI is dominated by white men. We need to make sure the people building the AI systems of the future are as diverse as our world. There has been some progress in bringing more women and people of colour into the field so that the AI you’re building truly represents our society as a whole, but it has to go far further.

6. Follow the rules.

Of course, when it comes to the use of AI, we must adhere to regulation.

We are seeing increasing regulation of AI in Europe and in parts of the US. However, there are still many unregulated areas where organizations must rely on self-regulation. Companies like Google and Microsoft are focusing on using AI for good, and Google has its own self-defined AI principles.

When I work with organizations, we often put together an ethics council for AI that acts as the North Star for AI ethics concerns for that company. Whenever an organization identifies a use case for AI, the ethics council evaluates it for ethical concerns.

The Organization for Economic Co-operation and Development (OECD) was founded in 1961 to stimulate economic progress and today has 38 member countries. The organization created the OECD AI Principles, which are a great starting point for thinking about how your organization can use AI in ways that benefit people and the planet.

Under these principles, AI should be designed in a way that respects laws, human rights, democratic values, and diversity. AI must function in a robust, secure, and safe way, with risks being continuously assessed and managed. Organizations developing AI should be held accountable for the proper functioning of these systems in line with these principles.

The 17 Sustainable Development Goals of the United Nations can also be a great resource for you as you’re establishing your AI use cases.

If the way you're using AI aligns with OECD principles and the UN Sustainable Development Goals, you're probably well on your way to ensuring that you're using AI ethically.
