Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.


Artificial Intelligence Has A Problem With Bias, Here’s How To Tackle It

2 July 2021

One of the problems in society that AI decision-making was meant to solve was bias. After all, aren’t computers less likely to have inherent views on, for example, race, gender, and sexuality?

Well, that was true back in the days when, as a general rule, computers could only do what we told them. The rollout of machine learning, made possible by the explosion of Big Data and the emergence of affordable computers with enough processing power to handle it, has changed all that.

In the old days, the term “garbage in, garbage out” concisely summed up the importance of high-quality data. When you give computers the wrong information to work with, the results they come up with are unlikely to be helpful.

Back then, this was mostly a problem for computer programmers and analysts. Today, when computers are routinely making decisions about whether we are invited to job interviews, eligible for a mortgage, or a candidate for surveillance by law enforcement and security services, it’s a problem for everybody.

In possibly the highest-profile example of getting this wrong so far, a study found that an AI algorithm used by parole authorities in the US to predict the likelihood of criminals reoffending was biased against black people.

Exactly how this came about is unknown – the workings of the proprietary algorithms have not been made available for independent auditing. But the ProPublica study found that the system overestimated the likelihood of black offenders going on to commit further crimes after completing their sentence, while underestimating the likelihood of white offenders doing the same.

Biased AI systems are likely to become an increasingly widespread problem as artificial intelligence moves out of the data science labs and into the real world. The “democratisation of AI” undoubtedly has the potential to do a lot of good, by putting intelligent, self-learning software in the hands of us all.

But there’s also a very real danger that, without proper training in data evaluation and in spotting the potential for bias in data, vulnerable groups in society could be harmed, or have their rights infringed, by biased AI.

AI may prove to be the solution to this problem, as well as its cause. Researchers at IBM are working on automated bias-detection algorithms, trained to mimic the anti-bias processes humans use when making decisions, in order to mitigate our own inbuilt biases.

This includes evaluating the consistency with which we (or machines) make decisions. If different solutions are chosen for two problems whose fundamentals are essentially the same, the difference may indicate bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.
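
To make the idea concrete, here is a minimal sketch in Python of one such consistency check, assuming a scikit-learn-style classifier and a binary protected attribute in the data. It illustrates the general technique only – IBM’s actual algorithms are proprietary – and every name and dataset below is invented for the example.

```python
# A minimal consistency check: if two cases differ only in a non-fundamental
# variable (here, a binary protected attribute in column 0), a consistent
# decision-maker should reach the same decision for both.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def decision_flip_rate(model, X, protected_col=0):
    """Fraction of cases where flipping the protected attribute alone
    changes the model's decision; higher values suggest the model relies
    on that attribute rather than on the fundamentals."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return np.mean(model.predict(X) != model.predict(X_flipped))

# Synthetic example: the true outcome depends only on columns 1 and 2,
# so a consistent model should show a flip rate near zero.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(2000, 5)).astype(float)
y = ((X[:, 1] + X[:, 2]) > 1).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)
print(f"Decision flip rate: {decision_flip_rate(model, X):.1%}")
```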

While this is interesting and vital work, the potential for bias to derail drives for equality and fairness runs deeper, to levels which may not be so easy to fix with algorithms.

I spoke to Dr. Rumman Chowdhury, Accenture’s lead for responsible AI, who explained that there may be situations where data and algorithms are clean, but societal biases may still throw a spanner in the works.

She said, “With societal bias, you can have perfect data and a perfect model, but we have an imperfect world.”

“Think about the use of AI in hiring … you use all of your historical data to train a model on who should be hired and why. Then you parse their resume or look at people’s faces while they’re interviewing.

“But you’re assuming that the only reason people are hired and promoted is pure meritocracy, and we actually know that not to be true.

“So, in this case, there’s nothing wrong with the data, and there’s nothing wrong with the model, what’s wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm.”

In very simplified terms, an algorithm might pick a white, middle-aged man to fill a vacancy because other white, middle-aged men were previously hired for the same position and subsequently promoted. This would overlook the possibility that those men were hired and promoted not because they were good at the job, but because they were white, middle-aged men.

Chowdhury lists three specific steps which organisations can take to minimise the risk of perpetuating societal biases.

The first is to look at the algorithms themselves and ensure that nothing about the way they are coded perpetuates bias. This is particularly necessary when AI is constantly making predictions which are out of step with reality (as seems to be the case with the US parole example mentioned above).
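
As an illustration of what such a check can look like in practice, the sketch below compares false positive rates across two groups – the kind of audit at the heart of the ProPublica analysis. The data and function names are invented for the example.

```python
# A minimal group-wise error audit. A false positive here means someone
# predicted to reoffend who in fact did not; a large gap between groups
# means predictions are out of step with reality in a group-dependent way.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0  # people who did not actually reoffend
    return y_pred[negatives].mean() if negatives.any() else float("nan")

def audit_by_group(y_true, y_pred, group):
    for g in np.unique(group):
        mask = group == g
        fpr = false_positive_rate(y_true[mask], y_pred[mask])
        print(f"group {g}: false positive rate = {fpr:.1%}")

# Synthetic, deliberately skewed data: the scorer flags group 1 as
# high risk more often, regardless of the actual outcome.
rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=5000)
y_true = rng.integers(0, 2, size=5000)
y_pred = ((rng.random(5000) < 0.2) | ((group == 1) & (rng.random(5000) < 0.3))).astype(int)
audit_by_group(y_true, y_pred, group)
```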

Second is to consider ways in which AI itself can help to mitigate the risk of biased data – IBM’s bias-detection algorithms could play a part here.
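
IBM has not published the internals of those algorithms, but one widely documented technique in the same spirit is reweighing (after Kamiran and Calders): weight the training examples so that group membership and outcome look statistically independent to the learner. A sketch, under that assumption:

```python
# Reweighing, sketched for illustration: each (group, label) combination
# gets weight P(group) * P(label) / P(group, label), so combinations that
# historical data over- or under-represents are balanced before training.
import numpy as np

def reweighing_weights(group, y):
    weights = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            observed = mask.mean()
            expected = (group == g).mean() * (y == c).mean()
            weights[mask] = expected / observed if observed > 0 else 0.0
    return weights

# The result can be passed as sample_weight to most scikit-learn estimators,
# e.g. model.fit(X, y, sample_weight=reweighing_weights(group, y)).
```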

Thirdly, we must “make sure our house is in order – we can’t expect an AI algorithm that has been trained on data that comes from society to be better than society – unless we’ve explicitly designed it to be.”

This leads on to the question of how AI should be regulated: who will be responsible for setting the parameters within which AI operates, for teaching machines what counts as valid data to learn from, and for identifying where inbuilt societal biases could limit AI’s ability to make decisions that are both valuable and ethical.

Tech leaders including Google, Facebook and Apple jointly formed the Partnership on AI in 2016 to encourage research on the ethics of AI, including issues of bias. Part of the partnership’s work involves informing legislators, but this “top-down” approach may not produce solutions to every problem, and may even stifle innovation.

Chowdhury says: “What we don’t want … is every AI project at a company having to be judged by some governance group – that’s not going to make projects go forward. I call that model ‘police patrol’, where we have the police going around trying to stop and arrest criminals. That doesn’t create a good culture for ethical behaviour.”

Neither should the burden of regulation and enforcement be placed solely on the front line – the data scientists themselves – argues Chowdhury.

“Yes, the data scientist plays a role, the AI researcher plays a role, but at a corporation, there are many moving parts. We put a lot of responsibility on the data scientist … but they shouldn’t shoulder all of it.”

Basically, if society is at a stage where we are ready to democratise AI, by making it available to all, then we need to be ready to democratise the oversight and regulation of AI ethics.

Chowdhury refers to this concept as the Fire Warden model. “Think about how if there’s a fire in your building right now, everyone knows what to do – you all meet outside at a pre-arranged location, someone will raise the alarm – you won’t put out the fire, but you’ve been educated on how to respond.

“That’s what I want to see in the governance of AI systems, everybody has a role to play, everyone’s roles are a bit different, but everyone understands how to raise ethical issues.”

Crucially, this will only work if there is faith that someone will put out the fire – no one would bother calling the fire brigade if they believed it lacked the ability or motivation to do the job. Some top-down regulation will undoubtedly be a necessary part of tackling the issue of AI bias.

But building a culture of reporting and accountability throughout an organisation means there will be a far greater chance of spotting and halting bias in data, algorithms or systems before it is perpetuated and becomes harmful.

