Machines are getting better and better at doing jobs that could traditionally only be done by humans. This is largely thanks to advances in machine learning, which have given us machines capable of using data to make decisions. As they are trained on more data, in simulated or real-world situations, they do this with increasing proficiency. This is what we’ve come to refer to as artificial intelligence (AI): the closest we’ve so far come to creating machines capable of learning, thinking, and deciding.
So is this unprecedented situation going to result in widespread human redundancy, with the associated damage and disruption to society that this would seem to entail? There are certainly some who think so. On the other hand, some believe it will lead to a new paradigm in human work and productivity, where machines take care of all the dirty, boring, and dangerous jobs, leaving us free to spend time on more rewarding creative, fun or social pursuits.
As always, the truth is likely to be somewhere in the middle. We are already seeing robots threatening the livelihoods of those in some low-skill roles – just take a look at Amazon’s cashier-less stores, or plans by McDonald’s to introduce AI-powered drive-thrus. For a more general prediction, the World Economic Forum estimates that while up to 85 million jobs will be lost to AI and automation by 2025, 97 million will be created by the opportunities that AI brings in the same period.
Lawyers, accountants, doctors, computer programmers, web designers, writers, and geological technicians are among the countless professions where computers now perform tasks that were previously done only by people. But I think it’s unlikely that many people in these roles have actually been made redundant and replaced by AI algorithms. This is because current trends indicate a firm belief in the business world that skilled subject-matter experts working in partnership with sophisticated technological tools is the best recipe for success.
This was a point made by Ingrid Verschuren, head of data strategy at Dow Jones, when I spoke to her about the subject recently. Humans are the real “machine” that drives AI. In nearly all cases, they are responsible for choosing the data that's used to train the algorithms, and they specify the outcomes that they want AI to achieve. That could be writing advertising copy that's most likely to lead to sales, or looking at population heat maps to understand where pandemics are likely to break out. Machines simply predict the course of action that’s most likely to lead to the optimal outcome within the parameters we give them.
Verschuren’s own experience serves as a great example. When she joined Dow Jones in the nineties, her first role involved reading and manually tagging content – news articles – for indexing in digital content management systems. Three years after she started, the job had been entirely automated. Her own role evolved into overseeing AI systems capable of processing close to one million articles per day – far more than could ever have been processed when the job was done entirely by humans, even with a workforce ten times the size.
She tells me, “We have artificial intelligence, and we have human expertise, and when we put them together, we call that combination 'authentic intelligence.’ Both parts are equally important.”
AI involves drawing insights from the correlation of many different datasets. In the case of one of the functions Verschuren oversees – the detection of potentially fraudulent financial transactions – this involves over 500 different datasets. AI simply isn’t yet at the stage – and quite possibly won’t be for many years – where it can decide for itself which datasets to include and which are not relevant. Nor can it always evaluate datasets for problems such as bias. If bad research methodology was used when putting a dataset together – such as omitting data from an under-represented segment of the population – then the results won’t be a true reflection of reality. "Garbage in, garbage out," in other words. Tasks like this require oversight by people who know the subject inside out.
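The "garbage in, garbage out" problem is easy to demonstrate. Here is a minimal, entirely hypothetical sketch in Python: a population is made up of two segments with very different behavior, and a carelessly collected sample that omits the smaller segment produces a badly skewed estimate. The segment sizes and values are invented purely for illustration.

```python
import random

random.seed(42)

# Hypothetical population with two segments that behave very differently.
segment_a = [random.gauss(100, 10) for _ in range(900)]  # 90% of population
segment_b = [random.gauss(400, 50) for _ in range(100)]  # 10% of population

full_population = segment_a + segment_b
true_mean = sum(full_population) / len(full_population)

# A badly built dataset that omits the under-represented segment entirely:
biased_sample = segment_a
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"mean over full population: {true_mean:.1f}")
print(f"mean over biased sample:   {biased_mean:.1f}")
# The biased estimate badly undershoots the true figure, and any model
# trained on the biased sample inherits the same distortion.
```

No algorithm downstream can recover the missing segment; only a person who knows the population can spot that it was never sampled in the first place.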
Here’s another good example from Verschuren’s own experience. Part of her team’s job involves overseeing systems that raise alerts when banks and other financial institutions might be at risk of doing business with people on international sanctions lists – a critical piece of compliance. On one occasion, an AI system unexpectedly cleared a number of transactions that an analyst expected would have been rejected because of the names involved. A check of the data used to inform the decision – a list of sanctioned people – suggested those names had been removed from the list. At this stage, the analyst, acting on a gut instinct that something was wrong, stepped in and manually verified the removal with the data provider. It turned out that the names had been removed erroneously. Machines alone would be very unlikely to provide this level of oversight, and a potentially expensive and dangerous mistake was avoided.
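The safeguard the analyst applied by instinct can also be built into a screening pipeline. The sketch below is a hypothetical illustration, not Dow Jones's actual system: names on the current sanctions list are blocked, but names that have unexpectedly disappeared from a previous version of the list are routed to a human for verification rather than silently cleared. All names here are invented.

```python
# Hypothetical sanctions screen with a human-in-the-loop escalation path.
PREVIOUS_SANCTIONS_LIST = {"acme holdings", "ivan example"}
CURRENT_SANCTIONS_LIST = {"acme holdings"}  # "ivan example" has dropped off

def screen_transaction(counterparty: str) -> str:
    """Return 'blocked', 'cleared', or 'manual_review' for a counterparty."""
    name = counterparty.lower()
    if name in CURRENT_SANCTIONS_LIST:
        return "blocked"
    # A fully automated system would clear everything else. Instead, a name
    # recently removed from the list goes to an analyst, who can confirm the
    # removal with the data provider before any transaction is cleared.
    if name in PREVIOUS_SANCTIONS_LIST:
        return "manual_review"
    return "cleared"

print(screen_transaction("Acme Holdings"))  # blocked
print(screen_transaction("Ivan Example"))   # manual_review
print(screen_transaction("Safe Supplies"))  # cleared
```

The design choice is the point: the system's default for a surprising change in its input data is to ask a person, not to proceed.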
Of course, people with technical skills are essential too. Building a culture that allows technical experts and subject-matter experts to collaborate, encouraging each group to challenge the processes and conclusions of the other, is key to building teams that will be innovative and productive in the age of smart machines.
So, what does this all mean for business today? One of the most important takeaways is that it’s more important than ever to invest in people – from making the right hires, people capable of working alongside AI technology, to upskilling existing workforces to use the new generation of tools that are becoming available.
A high level of critical thinking, as well as a dash of gut instinct – as demonstrated by the analyst in the example above – will clearly be essential in roles that involve working alongside, or overseeing, intelligent machine systems.
Perhaps most importantly, it's critical to build data-friendly cultures, where we expect decisions to be based on data, while at the same time having the confidence to challenge data insights or processes when there's a human reason to do so – such as when our intuition, imagination, or compassion tells us there's something a machine may have overlooked.
For more on the topic of data and AI, sign up for my newsletter or check out the new edition of my book ‘Data Strategy: How To Profit From A World Of Big Data, Analytics And Artificial Intelligence’.
You can watch my full interview with Ingrid Verschuren, head of data strategy at Dow Jones, here, where we dive deeper into the concept of “authentic intelligence” and ways in which AI will affect financial services in the future: