4 steps to using AI ethically in your organization
2 July 2021
AI can do incredible things, but just because something is possible doesn’t mean it’s right. There’s enormous potential for backlash against the misuse of AI, and policymakers and regulators will no doubt take an increasing interest in how the technology is used. This means it’s vital that organizations pursue the ethical use of AI.
Here are four ways to do just that.
1. Build stakeholder trust
Organizations must be transparent with customers, employees, and other stakeholders about how they’re using AI and data. In the past, some big tech companies have perhaps tried to get away with not telling users what they’re doing, but this is a dangerous path to go down. It’s far better to be upfront about what data you’re gathering, how that data is analyzed, and why you are using this data. And that means telling people in a straightforward, plain English way, not burying the details in long, jargon-heavy terms and conditions that nobody reads. This transparency will be key to building stakeholder trust.
Consent is another important part of building trust: businesses must seek informed consent before gathering people’s data and, wherever possible, allow people to opt out. When doing this, it helps to demonstrate how AI and data add real value for stakeholders – for example, by helping the organization create better products, deliver a smarter service, solve customers’ problems, make work better for employees, and so on. People are far more likely to give consent when they know it will deliver real value for them.
2. Avoid the “black box problem”
From the satnav I follow in my car to the spellchecker that corrects my typos as I write, more and more decisions are being supported by AI. We all place a lot of trust in these systems, allowing them to direct our activities without really questioning how the technology arrives at its decisions.
And even when we do want to understand how an AI system makes a decision, it’s not always possible to get an explanation. This is known as the “black box problem”: we can’t always understand exactly how an AI works. We give the system data, and it gives us a response, but we can’t simply look under the hood and see what goes on in there. Even AI engineers don’t always understand how their own systems work, particularly with very advanced deep learning models. This is a problem because if we can’t understand how advanced AI algorithms make decisions, how can we trust those systems? How can we be sure they are accurate? How can we predict when they’re likely to fail?
Therefore, organizations must press AI providers for details of how their AI does what it does and, wherever possible, look for AI tools that promote explainability. The good news is that AI providers appear to be grasping the gravity of this problem; for example, in 2019, IBM announced a toolkit of algorithms called AI Explainability 360, designed to help explain the decisions of deep learning models. It’s not a magic bullet, but it’s certainly a good start.
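The idea of probing a black box from the outside can be sketched in a few lines of code. This is a hypothetical illustration, not IBM’s toolkit: the “model” below is a stand-in scoring function whose internals we pretend we cannot see, and the technique shown is simple perturbation-based feature importance – nudge each input and watch how the output moves.

```python
# A minimal sketch of one explainability technique: perturbation-based
# feature importance. The "black box" here is a stand-in scoring function;
# in practice it would be a trained model whose internals we can't inspect.
# All names and numbers are invented for illustration.

def black_box_score(applicant):
    # Opaque model: we only observe inputs and outputs.
    income, debt, years_employed = applicant
    return 0.5 * income - 0.8 * debt + 0.2 * years_employed

def feature_importance(model, example, feature_names, delta=1.0):
    """Estimate each feature's influence by nudging it by `delta`
    and measuring how much the model's output changes."""
    baseline = model(example)
    importances = {}
    for i, name in enumerate(feature_names):
        perturbed = list(example)
        perturbed[i] += delta
        importances[name] = model(tuple(perturbed)) - baseline
    return importances

applicant = (50.0, 20.0, 5.0)  # income, debt, years employed
print(feature_importance(black_box_score, applicant,
                         ["income", "debt", "years_employed"]))
```

Even this crude probe tells a stakeholder something useful: which inputs push the decision, and in which direction – exactly the kind of question organizations should be able to answer about any system they deploy.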
3. Think critically (and don’t abdicate all decision making to AI)
Research has shown that humans have a worrying tendency to blindly follow automated systems, even when those systems are clearly leading us astray – a phenomenon known as “automation bias.” As more and more decisions are driven by AI, the need for humans to think critically about these systems is greater than ever.
To ensure the ethical and safe use of AI, it’s vital to give people the tools they need to overcome automation bias. Organizations must, therefore, train their people not to blindly follow automated systems. Teams need to be educated about AI and encouraged to question its decisions – what data is involved, how decisions are made, and so on. Critical thinking should be prioritized. So, too, should data literacy – the more people understand about data and AI, the better able they are to ask questions about how systems work and what data is being used to support decisions.
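One practical way of not abdicating all decision making to AI can be sketched as confidence-threshold triage: act on the AI’s recommendation only when it is confident, and route everything else to a human reviewer. Everything below is an assumption for illustration – the model output format, the 0.9 threshold, and the routing labels are invented, not any particular product’s behavior.

```python
# A minimal sketch of a guard against automation bias: instead of acting
# on every AI recommendation, route low-confidence decisions to a human
# reviewer. Threshold and labels are invented for illustration.

def triage(prediction, confidence, threshold=0.9):
    """Accept the AI's decision only when it is confident;
    otherwise escalate the case for human review."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

decisions = [
    ("approve_loan", 0.97),
    ("deny_loan", 0.62),   # uncertain: a human should look at this one
    ("approve_loan", 0.91),
]

for prediction, confidence in decisions:
    route, _ = triage(prediction, confidence)
    print(f"{prediction} ({confidence:.2f}) -> {route}")
```

The design choice here is the point: the system is built so that humans are structurally in the loop, rather than relying on individual employees remembering to second-guess the machine.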
4. Check for biases in your data and algorithms
One of the many advantages of AI is that it has the potential to reduce bias. When decisions are augmented or even automated by AI systems, we can remove some of the baggage that humans bring to the decision-making process.
That’s the idea, anyway. The reality is that an AI algorithm is only as good as the data it’s trained on. If it’s trained on biased data, the AI system will be biased. Let’s say I train a basic AI to predict the next president of the United States based only on historical data about past presidents. It’s highly likely to predict that the next president will be a white man of advancing years! That’s because hefty race and gender biases are built into the training data. The consequences of not addressing biases in data can be serious – inaccurate decisions and loss of reputation and trust, to name just a few. Some consequences could be far graver; just imagine what would happen if patient treatment decisions were based on biased or incomplete datasets.
Biased data is usually the result of unintentional bias stemming from a lack of representation – in other words, an inherent systemic bias rather than any one individual’s prejudices rearing their head. The most obvious way to avoid these inherent biases is to look for under- or over-representation in the data and algorithms being used. Granted, it takes an expert eye to examine data and AI algorithms in any real depth, but that doesn’t let organizations off the hook. Organizations must ask these questions of their AI providers, rather than blindly trusting that data and algorithms are unbiased. Where necessary, additional data may be needed to correct over- or under-representation in datasets.
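A basic representation check of the kind described above can be sketched as follows: compare each group’s share of the training data against its share of a reference population, and flag any group that deviates by more than a tolerance. The data, population shares, and 10% tolerance are all invented for illustration – real audits would use properly sourced demographics and domain-appropriate thresholds.

```python
# A minimal sketch of a representation check on training data.
# Groups, shares, and tolerance are toy values for illustration.
from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.10):
    """Flag groups whose share of the training data deviates from
    their share of the reference population by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    flags = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flags[group] = (observed, expected)
    return flags

# Toy version of the presidents example: the training set is ~100% men,
# while the population is roughly half women.
training = ["man"] * 45
population = {"man": 0.49, "woman": 0.51}  # rough, illustrative shares

print(representation_gaps(training, population))
```

Running this on the presidents-style dataset flags both groups immediately – which is exactly the signal that tells you the model will inherit the historical bias rather than reflect the population it is meant to serve.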
AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.
Where to go from here
If you would like to know more, check out my articles on:
- Are Alexa And Siri Considered AI?
- How To Put AI Into A Business To Accelerate Performance?
- What Is The Impact Of Artificial Intelligence (AI) On Society?
Or browse the Artificial Intelligence & Machine Learning section to find the topics that matter most to you.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.