And more importantly, are we so enthusiastic about turning towards technology – particularly AI – that we overlook the societal and human problems it can cause?
This is the argument made by Meredith Broussard in her latest book More Than a Glitch – Confronting Race, Gender and Ability Bias in Tech.
The book is the latest of a number of recent investigations into issues around bias and the wider social implications of our rush to embrace AI, joining other important works such as Weapons of Math Destruction by Cathy O’Neil, Safiya Noble’s Algorithms of Oppression, and Broussard’s own Artificial Unintelligence.
Broussard recently joined me on my podcast to discuss some of the ideas she puts forward, as well as her advice for business leaders interested in working with AI or adopting it in their organizations.
At the heart of her argument is the concept of “technochauvinism” – a belief that technological solutions are always superior to social or other methods of driving change.
What Is Technochauvinism?
In the book, Broussard refers to the example of the “stair-climbing machine,” often proposed by technologists and engineers as an innovation that could improve the lives of disabled people.
“Designers like to create things … because it’s cool – let’s engineer this novel solution.
“But if you actually ask somebody who uses a wheelchair … they will generally say no – ‘it looks scary.’ ‘It doesn’t look like it’s going to work.’ They will say, ‘I’d rather have a ramp or an elevator.’
“Then you realize, there’s this really simple solution that works really well, and we don’t need to add in a lot of extreme computational technology; we can just build a ramp.
“So until we’ve made the world really accessible, let’s not overengineer the solutions.”
Broussard says that this concept – and many others like it – is an example of a “disability dongle.” The term, described succinctly in this blog post, refers to an idea put forward by a (usually) able-bodied engineer that appeals to our love of a technological “quick fix” over the complex, structural, societal change that is really needed.
The counter to the technochauvinistic mindset, Broussard suggests, is often simply choosing the right tool for the job – without assuming that this will always be the most advanced technology or the most sophisticated data-crunching algorithm.
Broussard tells me, “We kind of have this idea that somehow technology solutions are going to be superior to others. And this is itself a kind of bias … sometimes the right tool is something simple, like a book … it’s not a competition, one is not inherently better than the other.”
Mathematically and Socially Fair
Another fascinating idea Broussard explores is the difference between mathematical and social fairness. When we use computers to assist with challenges around equality and fairness, what we are most often presented with is a mathematical solution.
A simple explanation: “A story that I think illustrates this concept – it’s about a cookie. When I was little, my brother and I would argue about who gets the last cookie.”
Ask a computer to solve this simple but pressing problem, and there is one obvious answer – each kid gets half a cookie.
“But in the real world, when you split a cookie in half, what happens is you get a big half and a little half. And then we’d fight over who has the bigger half.”
The solution, she suggests, lies in socially constructed negotiation and compromise.
“So, if I wanted the big half, I would say, you give me the big half, and I’ll let you choose the TV show we watch after dinner.
“Mathematically fair decisions and socially fair decisions are not the same … this explains why we run into problems when we try to make socially fair decisions with computers.”
The takeaway is that we should use computers to solve mathematically oriented problems and not rely on them too heavily when it comes to societal challenges.
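Broussard’s cookie story can be sketched in a few lines of Python. This is a hypothetical toy, not anything from the book: it contrasts the computer’s “fair” answer with what actually happens when a physical cookie is broken, where one half always comes out bigger.

```python
import random


def mathematically_fair_split(cookie=1.0):
    """A computer's answer: divide the last cookie exactly in half."""
    return cookie / 2, cookie / 2


def real_world_split(cookie=1.0, wobble=0.1):
    """A physical break is never exact: one 'half' is a bit bigger."""
    big = cookie / 2 + random.uniform(0, wobble)
    return big, cookie - big


a, b = mathematically_fair_split()
print(a == b)  # True: mathematically fair

big, small = real_world_split()
print(big >= small)  # True: the real break leaves a bigger half to fight over
```

The mathematically fair answer is trivial to compute, but it does not survive contact with a real cookie – the negotiation Broussard describes (“you take the big half, I choose the TV show”) lives entirely outside the computation.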
AI and Human Jobs
A similar principle emerges when we think about the question of how computers will be used to replace human workers. As a writer and journalist, Broussard’s own profession is one that’s commonly regarded as being threatened by the emergence of applications like ChatGPT. After all, if they can quickly and easily generate articles, essays, and even entire books from a simple prompt, who needs authors?
However, as anyone who has tried to use ChatGPT to write a book – or even an essay of any sophistication – will quickly tell you, that threat has been somewhat overstated.
Although initially impressive, AI-generated content still lacks many essential human qualities – most crucially, any real ability to generate new ideas or truly creative thoughts. This is because all it really does is regurgitate language and ideas found in its training data.
“If you’re the kind of person in a position to replace workers with generative AI, you’re in for a nasty shock,” Broussard tells me.
“AI is mediocre. Mediocre writing is absolutely useful for a lot of situations … and it seems like it’s going to be incredibly useful and flexible … one of the things you quickly realize when you use generative AI for a while is that it’s kind of boring … it just gives you the same thing over and over again … that’s not what you want to be giving your customers.”
Her thoughts echo my own belief that AI is not a replacement for creativity – it’s a tool that allows humans to enhance their own creative skills and put them to work more effectively.
The Dangers of AI
One aspect of AI that Broussard finds particularly worrying, however, is computer vision – and specifically, the way it differs in its treatment of people according to their race, gender and other factors.
“Facial recognition is biased based on skin tone,” she tells me.
“It’s generally better at recognizing light skin than dark skin, better at men than women … it doesn’t recognize trans and non-binary folk at all.”
This has caused problems when AI-powered computer vision systems have been deployed for facial recognition in policing and public surveillance. In several cases, police use of the technology has been found to be unlawful and unethical, leading to bans in some jurisdictions.
Broussard says, “We should not be using facial recognition at all in policing. It’s disproportionately weaponized against people of color and communities that are already disproportionately policed.
“We’re not going to achieve justice if we keep using these powerful technologies that work very poorly and have a disproportionate impact on certain groups.”
More Important Than Fire?
“AI is nifty, generative AI especially is a lot of fun to play with, but it’s not going to transform the entire world. It’s going to change a few things; it’s not the invention of fire.”
Broussard is alluding to comments made by Google CEO Sundar Pichai a few years back when he described AI as "more profound than fire or electricity or anything we've done in the past."
It’s a refreshingly down-to-earth counterpoint to the views I often hear as someone who works closely with companies in the business of selling AI, as well as companies whose reputations are built around the changes it can achieve.
My own experience and observations lead me to be somewhat more excited and optimistic about the upside than Broussard is. But that doesn’t make me any less cautious or concerned about the downside.
Broussard points to the work of organizations, institutions and campaign groups, including the Algorithmic Justice League, Equal AI, and NYU's Center for Critical Race and Digital Studies, as voices that will play a crucial role in the ongoing development of AI.
Rounding off our conversation, she tells me, “The thing that concerns me is when the conversations about AI do not focus on the real harms being experienced by real people … because if you’re trying to, say, put in biometric locks on people’s apartments or office doors, people with darker skin are … not going to be able to get into their apartments or offices as easily as other people.
“And that seems discriminatory and unnecessary; why not just use a key?”