4 steps to using AI ethically in your organization
2 July 2021
AI can do incredible things, but just because something is possible doesn’t mean it’s right. There’s enormous potential for backlash against the misuse of AI, and policymakers and regulators will no doubt take an increasing interest in the technology. This means it’s vital that organizations pursue the ethical use of AI.
Here are four ways to do just that.
1. Build stakeholder trust
Organizations must be transparent with customers, employees, and other stakeholders about how they’re using AI and data. In the past, some big tech companies have perhaps tried to get away with not telling users what they’re doing, but this is a dangerous path to go down. It’s far better to be upfront about what data you’re gathering, how that data is analyzed, and why you are using this data. And that means telling people in a straightforward, plain English way, not burying the details in long, jargon-heavy terms and conditions that nobody reads. This transparency will be key to building stakeholder trust.
Consent is another important part of building trust: businesses must seek informed consent before gathering people’s data and, wherever possible, allow people to opt out. When doing this, it helps to demonstrate how AI and data add real value for stakeholders – for example, by helping the organization create better products, deliver a smarter service, solve customers’ problems, make work better for employees, and so on. People are far more likely to give consent when they know it will deliver real value for them.
2. Avoid the “black box problem”
From the satnav I follow in my car to the spellchecker that corrects my typos as I write, more and more decisions are being supported by AI. We all place a lot of trust in these systems, allowing them to direct our activities without really questioning how the technology arrives at its decisions.
And even when we do want to understand how an AI system makes a decision, it’s not always possible to get an explanation. This is known as the “black box problem”: we can’t always understand exactly how an AI works. We give the system data, and it gives us a response, but we can’t simply look under the hood and see what goes on in there. Even AI engineers don’t always understand how their own systems reach a decision, particularly with very advanced deep learning models. This is a problem because if we can’t understand how advanced AI algorithms make decisions, how can we trust those systems? How can we be sure they are accurate? How can we predict when they’re likely to fail?
Therefore, organizations must question AI providers about how their AI does what it does and, wherever possible, look for AI tools that promote explainability. The good news is, AI providers appear to be grasping the gravity of this problem; for example, in 2019, IBM announced AI Explainability 360, an open-source toolkit of algorithms designed to help explain the decisions of machine learning models. It’s not a magic bullet, but it’s certainly a good start.
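One family of explainability techniques probes a black-box model from the outside: perturb one input at a time and watch how the output moves. Here is a minimal sketch of that idea in Python – the scoring function, weights, and applicant data are all invented for illustration, and a simple deterministic rotation stands in for the repeated random shuffles that real permutation-importance tools use:

```python
# Toy "black box": a loan-scoring model we treat as opaque.
# The model, its weights, and the data below are all hypothetical.
def black_box_score(income, debt, age):
    return 0.6 * income - 0.3 * debt + 0.01 * age

applicants = [
    {"income": 50, "debt": 10, "age": 30},
    {"income": 30, "debt": 20, "age": 45},
    {"income": 80, "debt": 5,  "age": 52},
    {"income": 40, "debt": 15, "age": 28},
]

def feature_sensitivity(records, feature):
    """Scramble one feature's values across records (here, a simple
    rotation; real permutation importance repeats random shuffles)
    and measure how much the model's outputs move on average.
    A large shift means the model leans heavily on that feature."""
    baseline = [black_box_score(**r) for r in records]
    values = [r[feature] for r in records]
    rotated = values[1:] + values[:1]  # deterministic stand-in for a shuffle
    perturbed = [dict(r, **{feature: v}) for r, v in zip(records, rotated)]
    scrambled = [black_box_score(**r) for r in perturbed]
    return sum(abs(a - b) for a, b in zip(baseline, scrambled)) / len(records)

for feature in ("income", "debt", "age"):
    print(feature, round(feature_sensitivity(applicants, feature), 2))
```

On this toy model, the score’s sensitivity to income dwarfs its sensitivity to age – exactly the kind of insight a stakeholder can then question. Production toolkits implement far more robust versions of this idea.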
3. Think critically (and don’t abdicate all decision-making to AI)
Research has shown that humans have a worrying tendency to blindly follow automated systems, even when those systems are clearly leading us astray – a phenomenon known as “automation bias.” This means that, as more decisions are driven by AI, it’s more important than ever for humans to think critically about AI systems.
To ensure the ethical and safe use of AI, it’s important to give people the tools they need to overcome automation bias. Organizations must, therefore, train their people not to blindly follow automated systems. Teams need to be educated about AI and encouraged to question AI decisions (what data is involved, how decisions are made, and so on). Critical thinking should be prioritized. So, too, should data literacy – the more people understand about data and AI, the better able they are to ask questions about how systems work and what data is being used to support decisions.
4. Check for biases in your data and algorithms
One of the many advantages of AI is that it has the potential to reduce bias. When decisions are augmented or even automated by AI systems, we can remove some of the baggage that humans bring to the decision-making process.
That’s the idea, anyway. The reality is that an AI algorithm is only as good as the data it’s trained on. If it’s trained on biased data, then the AI system will be biased. Let’s say I train a basic AI to predict the next president of the United States based only on historical data of past presidents. It’s highly likely to predict the next president will be a white man of advancing years! That’s because there are hefty race and gender biases built into the training data. The consequences of not addressing biases in data can be serious – inaccurate decisions and loss of reputation and trust, to name just a few. Some consequences could be far graver; just imagine what would happen if patient treatment decisions were based on biased or incomplete datasets.
Biased data is usually the result of an unintentional bias based on a lack of representation – meaning it’s probably an inherent systemic bias rather than any one individual’s prejudices rearing their head. The most obvious way to avoid these inherent biases is to look for under- or over-representation in the data and algorithms being used. Granted, it takes an expert eye to examine data and AI algorithms in any real depth, but that doesn’t let organizations off the hook. Instead, organizations must ask these questions of their AI providers, rather than blindly trusting that data and AI algorithms are unbiased. Where necessary, additional data may be needed to correct over- or under-representation in datasets.
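As a concrete illustration of that representation check, a first pass can be as simple as comparing category frequencies in the training data against expected population shares. The records and benchmark figures below are entirely made up for illustration:

```python
from collections import Counter

# Hypothetical training records for a hiring model; in practice
# these would come from your real dataset.
training_records = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]

# Assumed benchmark shares for the population the model will serve.
expected_share = {"male": 0.5, "female": 0.5}

def representation_gaps(records, field, expected, tolerance=0.1):
    """Return categories whose share of the data deviates from the
    expected share by more than `tolerance`, as (actual, expected)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for category, exp in expected.items():
        actual = counts.get(category, 0) / total
        if abs(actual - exp) > tolerance:
            gaps[category] = (actual, exp)
    return gaps

# Flags males as over-represented (0.8 vs 0.5) and females as
# under-represented (0.2 vs 0.5).
print(representation_gaps(training_records, "gender", expected_share))
```

A check like this won’t catch every bias – proxies and interactions need deeper auditing – but it makes the most obvious representation gaps visible before a model is ever trained.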
AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.
Where to go from here
If you would like to know more about AI, check out my articles on:
- Are Alexa And Siri Considered AI?
- How To Put AI Into A Business To Accelerate Performance?
- What Is The Impact Of Artificial Intelligence (AI) On Society?
Or browse the Artificial Intelligence & Machine Learning section to find the content that matters most to you.