Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.


Building Responsible AI: How To Combat Bias And Promote Equity

3 June 2024

AI has the power to be hugely transformative, both in business and in the way we live our lives. But as we know, with great power comes great responsibility!

When it comes to AI, it’s the responsibility of those who build it and those who use it to ensure it’s done in a way that minimizes the risk of causing harm. And one of the most pressing challenges here is reducing the damage caused by bias in AI.

Here, I'll give an overview of what we mean when we talk about AI bias and discuss examples where it has caused real problems. Then I'll look at some of the steps that businesses and individuals can take to ensure we're using AI in a responsible and trustworthy way.


What Is AI Bias And Why Is It A Problem?

AI bias can stem either from bias present in the data or from bias present in the algorithms that process that data.

Bias often occurs in data when it is collected in a non-representative way – for example, by failing to ensure an adequate balance between genders, age groups or ethnic groups in the sample. It can be introduced into algorithms when they are coded in a way that inadvertently favors certain outcomes or overlooks critical factors.

The danger with AI is that it is designed to work at scale, making huge numbers of predictions based on vast amounts of data. Because of this, the effect of just a small amount of bias present in either the data or the algorithms can quickly be magnified exponentially.

For example, image search algorithms are more likely to show pictures of white men when asked for people in high-paying professions (as they do – see the next section for examples!). If an image generator is then trained on the results of those searches, it will in turn be more likely to create an image of a white man when asked for a picture of a CEO, doctor or lawyer.

When AI is used to make decisions or predictions that affect humans, the results of this can be severe and far-reaching.

For example, bias present in HR systems used to automate processes in hiring and recruitment can perpetuate existing unfair or discriminatory behaviors.

Bias present in systems used in financial services to automate lending decisions or risk assessment can unfairly affect people's ability to access money.

Biases in systems used in healthcare to assist with diagnosing illnesses or creating personalized treatments can lead to misdiagnosis and worsen healthcare outcomes.

When this happens, trust in the use of AI technologies is damaged, making it difficult for companies and organizations to put this potentially world-changing technology to work in ways that could do a lot of good!

Examples Of AI Bias

There have been many occasions where bias in AI has created real-world problems. These include a recruitment tool developed by Amazon to help rate candidates for software engineering roles, which was found to discriminate against women. Because the system was trained on applications that had historically come mostly from men, it learned to downgrade the ratings of female applicants. This led to the system being scrapped.

And online education provider iTutorGroup paid hundreds of thousands of dollars to settle a discrimination lawsuit after its hiring algorithms were found to discriminate against older applicants, automatically rejecting applications from women aged over 55 and men aged over 60.

Facial recognition software that uses AI algorithms to identify people from video and photographs has also been found to be more likely to misidentify people from ethnic minorities, leading to its use in law enforcement being banned or heavily restricted in many jurisdictions, including across the European Union.

Additionally, a system known as COMPAS, used by courts in the US to predict the likelihood of criminals reoffending, was also found to be racially biased. According to an investigation by ProPublica, it overestimated the likelihood of black people reoffending.

Google's algorithms have been accused of bias, too. Searching for terms like CEO is disproportionately likely to return an image of a white male, and researchers at Carnegie Mellon University found that Google's system for displaying job ads showed vacancies for high-paying jobs to men more frequently than to women.

And in healthcare, an algorithm used to predict the future healthcare needs of patients was found to underestimate the needs of black patients compared to white patients, because historical spending on their healthcare had been lower, reflecting ongoing systemic inequality.

How Do We Fix This?

There are important steps that everyone involved in AI – whether building it or using it – should take to ensure they aren't doing so in an irresponsible way.

Firstly, it's important to ensure that all the proper checks and guardrails are in place when collecting data. It should be done in a representative way, balanced by age, gender, race and any other critical factor that could lead to bias.
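As a minimal sketch of what such a check might look like in practice – assuming the training records sit in a pandas DataFrame with a hypothetical "gender" column, and with purely illustrative reference proportions – you could compare each group's share of the sample against the population you expect the system to serve:

```python
import pandas as pd

# Hypothetical training data with a demographic column (illustrative only)
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "F"],
})

# Assumed reference proportions for the population being modelled
reference = {"F": 0.50, "M": 0.50}

def check_balance(data: pd.DataFrame, column: str, expected: dict, tolerance: float = 0.05) -> None:
    """Flag any group whose share of the sample drifts from its expected share."""
    observed = data[column].value_counts(normalize=True)
    for group, expected_share in expected.items():
        actual_share = observed.get(group, 0.0)
        if abs(actual_share - expected_share) > tolerance:
            print(f"WARNING: {column}={group} makes up {actual_share:.0%} of the sample, "
                  f"expected roughly {expected_share:.0%}")

check_balance(df, "gender", reference)
```

In a real pipeline, the same kind of check would run over every sensitive attribute before the data ever reaches model training.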

Human oversight is critical, too, in order to pick up on erroneous decisions before action is taken based on them. Many of the examples highlighted above were only spotted later by third-party investigators, increasing the harm that was caused as well as the financial impact and reputational damage to the organizations involved.
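One simple pattern for building that oversight in – sketched below under the assumption of a model that reports a confidence score alongside each decision; the threshold and field names are hypothetical – is to act automatically only on high-confidence, low-risk outcomes and route everything else to a human reviewer:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    outcome: str        # e.g. "approve" or "decline"
    confidence: float   # model's confidence in the outcome, 0.0 to 1.0

REVIEW_THRESHOLD = 0.90  # assumed cut-off; tune to your risk appetite

def route(decision: Decision) -> str:
    """Act automatically only on high-confidence approvals; send everything else to a human."""
    if decision.outcome == "approve" and decision.confidence >= REVIEW_THRESHOLD:
        return "auto"          # low-risk, high-confidence: proceed automatically
    return "human_review"      # adverse or low-confidence decisions get human eyes first

# Example: an adverse decision is held back for review rather than actioned immediately
print(route(Decision("A-1042", "decline", 0.97)))  # -> human_review
```

Routing every adverse outcome to a reviewer is one design choice among many, but it ensures the decisions most likely to harm someone are always seen by a person before they take effect.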

Algorithms and models should be regularly audited and tested. Tools like IBM's AI Fairness 360 and Google's What-If Tool can be used to examine and measure the behavior of machine learning models.
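Two of the checks such tools report – statistical parity difference and disparate impact – are simple enough to sketch by hand. The toy data below (a hypothetical binary "selected" outcome split by gender) is purely illustrative; libraries like AI Fairness 360 compute these and many other metrics out of the box:

```python
import pandas as pd

# Hypothetical model outputs: 1 = favourable outcome (e.g. shortlisted), 0 = not
results = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "selected": [1,   0,   0,   1,   1,   1,   0,   1,   1,   0],
})

def selection_rate(df: pd.DataFrame, group: str) -> float:
    """Proportion of a group receiving the favourable outcome."""
    return df.loc[df["gender"] == group, "selected"].mean()

rate_f = selection_rate(results, "F")  # unprivileged group in this toy example
rate_m = selection_rate(results, "M")  # privileged group in this toy example

# Statistical parity difference: ideally close to 0
print("Statistical parity difference:", rate_f - rate_m)

# Disparate impact ratio: values below ~0.8 are a common warning sign (the "four-fifths rule")
print("Disparate impact:", rate_f / rate_m)
```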

And finally, it’s crucial to ensure that data and engineering teams are themselves diverse, as this makes a variety of perspectives and experiences available during design, development and testing.

With AI impacting our daily lives in more and more ways, everyone involved has a part to play in creating fairer and more equitable systems. Failing to do so now will damage trust in the potential for AI to do good, set us up for costly and embarrassing mistakes further down the line, and have severe ramifications for marginalized and vulnerable fellow humans.
