Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’


4 steps to using AI ethically in your organization

2 July 2021

AI can do incredible things, but just because something is possible doesn’t mean it’s right. There’s enormous potential for backlash against the misuse of AI, and policymakers and regulators will no doubt take an increasing interest in AI. This means it’s vital organizations pursue an ethical use of AI.

Here are four ways to do just that.

  1. Build stakeholder trust

Organizations must be transparent with customers, employees, and other stakeholders about how they’re using AI and data. In the past, some big tech companies have perhaps tried to get away with not telling users what they’re doing, but this is a dangerous path to go down. It’s far better to be upfront about what data you’re gathering, how that data is analyzed, and why you are using this data. And that means telling people in a straightforward, plain English way, not burying the details in long, jargon-heavy terms and conditions that nobody reads. This transparency will be key to building stakeholder trust.

Consent is another important part of building trust: businesses must seek informed consent before gathering people’s data and, wherever possible, allow people to opt out. When doing this, it helps to demonstrate how AI and data add real value for stakeholders – for example, by helping the organization create better products, deliver a smarter service, solve customers’ problems, or make work better for employees. People are far more likely to give consent when they know it will deliver real value for them.

  2. Avoid the “black box problem”

From the satnav I follow in my car to the spellchecker that corrects my typos as I write, more and more decisions are being supported by AI. We all place a lot of trust in these systems, allowing them to direct our activities without really questioning how the technology arrives at its decisions.

And even when we do want to understand how an AI system makes a decision, it’s not always possible to get an explanation. This is known as the “black box problem”: we give the system data, it gives us a response, and we can’t simply look under the hood to see what happens in between. Even AI engineers don’t always fully understand how their own systems work, particularly with very advanced deep learning models. This is a problem because if we can’t understand how advanced AI algorithms make decisions, how can we trust those systems? How can we be sure they’re accurate? How can we predict when they’re likely to fail?

Therefore, organizations should press AI providers for details of how their AI does what it does and, wherever possible, look for AI tools that promote explainability. The good news is that AI providers appear to be grasping the gravity of this problem; for example, in 2019, IBM announced a toolkit of algorithms called AI Explainability 360, designed to help explain the decisions of deep learning AIs. It’s not a magic bullet, but it’s certainly a good start.
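One simple idea behind explainability tooling is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops – a big drop means the “black box” was leaning heavily on that feature. Here is a minimal sketch in plain Python (the `black_box` model and toy data are invented for illustration; this is not the IBM toolkit itself):

```python
import random

# A hypothetical "black box": we can only call it, not inspect it.
# Here it secretly relies entirely on feature 0.
def black_box(row):
    return 1 if row[0] > 0.5 else 0

# Toy dataset: 200 rows of two random features; labels depend on feature 0.
random.seed(42)
data = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)  # perfect by construction

def permutation_importance(feature):
    """Accuracy drop when one feature's values are shuffled across rows."""
    shuffled = [row[:] for row in data]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature] = value
    return baseline - accuracy(shuffled)

for f in range(2):
    print(f"feature {f}: importance {permutation_importance(f):.2f}")
```

Shuffling feature 0 wrecks the accuracy while shuffling feature 1 changes nothing, revealing what the opaque model actually depends on – without ever looking inside it.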

  3. Think critically (and don’t abdicate all decision making to AI)

Research has shown that humans have a worrying tendency to blindly follow automated systems, even when those systems are clearly leading us astray – a phenomenon known as “automation bias.” This means, as more decisions are being driven by AI, the need for humans to think critically about AI systems is more important than ever.

To ensure the ethical and safe use of AI, it’s really important to give people the tools they need to overcome automation bias. Organizations must, therefore, train their people to not blindly follow automated systems. Teams need to be educated about AI and encouraged to question AI decisions (what data is involved, and how decisions are made, etc.). Critical thinking should be prioritized. So, too, should data literacy – the more people understand about data and AI, the better able they are to ask questions about how systems work and what data is being used to support decisions.

  4. Check for biases in your data and algorithms

One of the many advantages of AI is that it has the potential to reduce bias. When decisions are augmented or even automated by AI systems, we can remove some of the baggage that humans bring to the decision-making process.

That’s the idea, anyway. The reality is that an AI algorithm is only as good as the data it’s trained on. If it’s trained on biased data, the AI system will be biased. Let’s say I train a basic AI to predict the next president of the United States, based only on historical data about past presidents. It’s highly likely to predict the next president will be a white man of advancing years! That’s because hefty race and gender biases are built into the training data. The consequences of not addressing biases in data can be serious – inaccurate decisions and loss of reputation and trust, to name just a few. Some consequences could be far graver; just imagine what would happen if patient treatment decisions were based on biased or incomplete datasets.
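The “president” thought experiment can be made concrete in a few lines: a naive majority-class “model” trained on skewed historical data can only reproduce the skew. The toy training list below is invented for illustration:

```python
from collections import Counter

# Hypothetical, deliberately skewed "historical" training data
# (this toy list is invented for illustration).
history = ["white male"] * 44 + ["black male"] * 1

def train_majority_model(examples):
    """A naive 'model' that simply memorises the most common outcome."""
    most_common, _ = Counter(examples).most_common(1)[0]
    return lambda: most_common

predict = train_majority_model(history)
print(predict())  # the model can only echo the bias in its training data
```

However sophisticated the algorithm, if the signal in the data is a historical bias, that bias is exactly what gets learned.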

Biased data is usually the result of unintentional bias stemming from a lack of representation – meaning it’s probably an inherent systemic bias rather than any one individual’s prejudices rearing their head. The most obvious way to avoid these inherent biases is to look for under- or over-representation in the data and algorithms being used. Granted, it takes an expert eye to examine data and AI algorithms in any real depth, but that doesn’t let organizations off the hook. Instead, organizations must ask these questions of their AI providers, rather than blindly trusting that data and algorithms are unbiased. Where necessary, additional data may be needed to correct over- or under-representation in datasets.
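A first-pass representation check is easy to automate: compare each group’s share of the dataset against a reference share (for example, census figures) and flag any large gaps. A hedged sketch, with the group labels, reference shares, and 5% threshold all invented for illustration:

```python
from collections import Counter

# Hypothetical dataset of group labels and reference population shares
# (both invented for illustration).
samples = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
reference_shares = {"A": 0.50, "B": 0.30, "C": 0.20}

def representation_gaps(labels, reference, threshold=0.05):
    """Return groups whose dataset share differs from the reference
    share by more than `threshold` (absolute difference)."""
    counts = Counter(labels)
    total = len(labels)
    gaps = {}
    for group, expected in reference.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > threshold:
            gaps[group] = round(actual - expected, 3)
    return gaps

print(representation_gaps(samples, reference_shares))
# positive gap = over-represented, negative gap = under-represented
```

A check like this won’t catch every form of bias, but it turns “look for under- or over-representation” into a concrete, repeatable test that can run every time the training data changes.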

AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

