Computers have helped us to calculate the vastness of space and the minute details of subatomic particles. When it comes to counting and calculating, or following logical yes/no algorithms, computers outperform humans thanks to the electrical signals moving through their circuitry at close to the speed of light. But we generally don’t consider them “intelligent” because, traditionally, computers haven’t been able to do anything by themselves, without being taught (programmed) by us first.
So far, even if a computer had access to all of the information in the world, it couldn’t do anything “smart” with it. It could find us a picture of a cat – but only because we had told it that certain pictures contain cats. In other words, ask it to find a picture of a cat and it will return with a picture which it has been told is of a cat.
This has several implications that limit its helpfulness – not least that a large amount of human time has to be spent telling the computer what every picture contains. The data (pictures) must pass through a human bottleneck, where they are labelled, before the computer can, with lightning-quick precision, identify them as cat pictures and show them to us when we ask.
While this works well enough if we are just searching for cat pictures on Google to pass the time, it’s not so great if we want to do something more advanced – such as monitor a live video feed and be told when a cat wanders in front of the camera.
It is problems like this that machine learning is trying to solve. At its simplest, machine learning is about teaching computers to learn in the same way we do: by interpreting data from the world around us, classifying it and learning from successes and failures. In fact, machine learning is a subset of artificial intelligence – arguably its leading edge.
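To make that idea concrete, here is a minimal sketch of a program that learns from labelled examples rather than being given explicit rules. It classifies a new data point by finding its nearest labelled neighbour – one of the simplest machine learning techniques. The feature values (imagined as measurements of animals in a picture) and labels are entirely hypothetical, chosen purely for illustration:

```python
import math

# Hypothetical labelled examples: ((feature_1, feature_2), label).
# In a real system these might be measurements extracted from images.
training_data = [
    ((25.0, 30.0), "cat"),
    ((28.0, 33.0), "cat"),
    ((60.0, 55.0), "dog"),
    ((65.0, 60.0), "dog"),
]

def classify(features):
    """Label a new point by copying the label of its nearest neighbour."""
    def distance_to(example):
        (x, y), _label = example
        return math.hypot(x - features[0], y - features[1])
    _nearest_features, label = min(training_data, key=distance_to)
    return label

print(classify((27.0, 31.0)))  # falls among the "cat" examples
print(classify((62.0, 58.0)))  # falls among the "dog" examples
```

Notice that nothing in `classify` mentions cats or dogs: the program’s behaviour comes entirely from the labelled data it was given, which is the essential shift machine learning makes.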
How did machine learning come about?
Building algorithms capable of doing this, using the binary “yes” and “no” logic of computers, is the foundation of machine learning – a phrase probably first used in serious research by Arthur Samuel at IBM during the 1950s. Samuel’s earliest experiments involved teaching machines to play checkers.
As knowledge – something to draw insight from and a basis for making decisions – is integral to learning, these early computers were severely handicapped by the lack of data at their disposal. Without all of the digital technology we have today to capture and store information from the analogue world, machines could only learn from data fed in slowly via punch cards and, later, magnetic tape.
Bernard Marr is an internationally bestselling author, futurist, keynote speaker, and strategic advisor to companies and governments. He advises and coaches many of the world’s best-known organisations on strategy, digital transformation and business performance. LinkedIn has recently ranked Bernard as one of the top 5 business influencers in the world and the No 1 influencer in the UK. He has authored 16 best-selling books, is a frequent contributor to the World Economic Forum and writes a regular column for Forbes. Every day Bernard actively engages his almost 2 million social media followers and shares content that reaches millions of readers.