Computers have helped us to calculate the vastness of space and the minute details of subatomic particles. When it comes to counting and calculating, or following logical yes/no algorithms, computers outperform humans thanks to the signals moving through their circuitry at close to the speed of light. But we generally don’t consider them “intelligent” because, traditionally, computers haven’t been able to do anything by themselves without first being taught (programmed) by us.
So far, even if a computer had access to all of the information in the world, it couldn’t do anything “smart” with it. It could find us a picture of a cat – but only because we had told it that certain pictures contain cats. In other words, ask it to find a picture of a cat and it will return a picture which it has been told is of a cat.
This has several implications which limit its helpfulness – not least that a large amount of human time has to be spent telling it what every picture contains. The data (pictures) need to pass through a human bottleneck, where they are labelled, before the computer can, with lightning-quick precision, identify them as cat pictures and show them to us when we ask.
While this works well enough if we are just searching for cat pictures on Google to pass our time, if we want to do something more advanced – such as monitor a live video feed and tell us when a cat wanders in front of the camera – it’s not so great.
It is problems like this which machine learning is trying to solve. At its most simple, machine learning is about teaching computers to learn in the same way we do: by interpreting data from the world around us, classifying it and learning from successes and failures. Machine learning is, in fact, a subset of artificial intelligence – arguably its leading edge.
How did machine learning come about?
Building algorithms capable of doing this, using the binary “yes” and “no” logic of computers, is the foundation of machine learning – a phrase probably first used in serious research by Arthur Samuel at IBM during the 1950s. Samuel’s earliest experiments involved teaching machines to learn to play checkers.
As knowledge – something to draw insight from and a basis for making decisions – is integral to learning, these early computers were severely handicapped by the lack of data at their disposal. Without today’s digital technology for capturing and storing information from the analogue world, machines could only learn from data fed in slowly via punch cards and, later, magnetic tape and other storage media.
Today things are a little different – thanks to the rollout of the internet, the proliferation of data-gathering mobile phones and other devices, and the adoption of online, connected technology in industry, we have more data than we know how to deal with.
No human brain can hope to process even a fraction of the digital information it has available. But, with its lightning speed and infallible binary logic, could a computer?
The neural net and deep learning
The idea that it can is one half of what is driving the world-changing breakthroughs we are seeing today. The other half is the “brain” of machine learning: as well as simply ingesting data, a machine has to process it in order to learn.
Over the years, several different frameworks have been tried for building algorithms designed to let machines deal with data in the same way humans do. These often drew on the field of statistics, employing methods such as linear regression and sampling to assign probabilities to various outcomes, and therefore to make predictions.
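To make the statistical approach concrete, here is a minimal sketch of linear regression – fitting a straight line to observed data points and using it to predict a new outcome. The data and variable names are illustrative inventions, not from any real dataset.

```python
# A toy linear regression: fit the line y = slope * x + intercept to
# observed points using the classic least-squares formulas, then use
# that line to predict an outcome we haven't seen yet.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by the variance of x.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Made-up example: hours of sunshine vs. ice creams sold.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # exactly linear: y = 2x + 1

slope, intercept = fit_line(xs, ys)
print(slope, intercept)         # -> 2.0 1.0
print(slope * 6 + intercept)    # prediction for x = 6 -> 13.0
```

Real statistical learning works on noisy data, of course – the fitted line then becomes a best guess rather than an exact match, which is where the “assigning probabilities” the text mentions comes in.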
However, the framework which has, in recent years, overtaken all others in popularity by consistently proving its usefulness and adaptability, is the artificial neural network.
By throwing neuroscience into the mix, researchers found it was possible to build computer models that function more like a human brain than anything previously developed. Artificial neural networks, like real brains, are formed from connected “neurons”, each capable of carrying out a data-related task – such as recognising something (or failing to), matching one piece of information to another, and answering a question about the relationship between them.
Each neuron is capable of passing on the results of its work to a neighbouring neuron, which can then process it further. Because the network is capable of changing and adapting based on the data that passes through it, so as to more efficiently deal with the next bit of data it comes across, it can be thought of as “learning”, in much the same way as our brains do.
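A single artificial “neuron” of the kind described above can be sketched in a few lines: it weights its inputs, sums them, and squashes the result through an activation function before passing it on. The weights here are arbitrary illustrative numbers – in a real network they would be adjusted as the network “learns”.

```python
# A minimal sketch of one artificial neuron: weighted sum of inputs
# plus a bias, passed through a sigmoid activation so the output is
# always between 0 and 1 and can be handed to a neighbouring neuron.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, squashed by a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Example: a neuron with made-up weights receiving two input signals.
out = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
print(out)   # a value between 0 and 1, ready for the next neuron
```

Learning, in this picture, is the process of nudging the weights and bias so the neuron’s outputs get closer to the right answers over time.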
“Deep learning” – another hot topic buzzword – is simply machine learning carried out with “deep” neural nets. These are built by stacking many layers of neurons on top of each other, passing information down through a tangled web of algorithms to enable a more complex simulation of human learning. Due to the increasing power and falling price of computer processors, machines with enough grunt to run these networks are becoming increasingly affordable.
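The “stacking” that makes a network deep can be sketched as repeatedly feeding one layer’s outputs in as the next layer’s inputs. All the weights below are arbitrary illustrative numbers, not a trained model.

```python
# A toy "deep" network: layers of neurons stacked so that each layer's
# outputs become the next layer's inputs -- the "deep" in deep learning.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weight_rows, biases):
    """One layer of neurons; each row of weights defines one neuron."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weight_rows, biases)]

def deep_net(inputs, layers):
    """Pass the data down through every layer in turn."""
    for weight_rows, biases in layers:
        inputs = layer(inputs, weight_rows, biases)
    return inputs

# Three stacked layers: 2 inputs -> 3 neurons -> 3 neurons -> 1 output.
net = [
    ([[0.2, -0.5], [0.7, 0.1], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.5, 0.5, 0.5], [-0.2, 0.3, 0.1], [0.4, -0.4, 0.2]], [0.0, 0.0, 0.0]),
    ([[1.0, -1.0, 0.5]], [0.0]),
]
print(deep_net([0.9, 0.3], net))   # a single score between 0 and 1
```

A production network might have millions of neurons and hundreds of layers – which is why the falling price of processing power mentioned above matters so much.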
What can be done with machine learning?
The application of machine learning to society and industry is leading to advancements across many fields of human endeavour.
For example, in medicine, machine learning is being applied to genomic data to help doctors understand, and predict, how cancer spreads, meaning more effective treatments can be developed.
Data from deep space is being collected here on Earth through huge radio telescopes – and after being analysed with machine learning, is helping us to unlock the secrets of black holes.
In retail, machine learning matches shoppers with products they want to buy online, and in the bricks-and-mortar world it allows shop assistants to personalise the service they offer their customers.
In the war against terror and extremism, machine learning is used to predict the behaviour of those wanting to harm the innocent.
In our day-to-day lives, machine learning now powers Google’s search and image algorithms, more accurately matching us with the information we need, at the time we need it.
Machine learning also allows computers to understand and communicate with us in human language – a field known as natural language processing (NLP) – which has led to breakthroughs in translation technology and in the voice-controlled devices we increasingly use every day, including Amazon’s Echo.
Without a doubt, machine learning is proving itself to be a technology with far-reaching transformative powers. The science fiction dream of robots capable of working alongside us and augmenting our own inventiveness and imagination with their flawless logic and superhuman speed is no longer a dream – it is becoming a reality in many fields. Machine learning is the key which has unlocked it, and its potential future applications are almost unlimited.
Bernard Marr is an internationally bestselling author, futurist, keynote speaker, and strategic advisor to companies and governments. He advises and coaches many of the world’s best-known organisations on strategy, digital transformation and business performance. LinkedIn has recently ranked Bernard as one of the top 5 business influencers in the world and the No 1 influencer in the UK. He has authored 16 best-selling books, is a frequent contributor to the World Economic Forum and writes a regular column for Forbes. Every day Bernard actively engages his almost 2 million social media followers and shares content that reaches millions of readers.