What Is The Difference Between Supervised And Unsupervised Machine Learning?
2 July 2021
In recent articles I have looked at some of the terminology being used to describe high-level Artificial Intelligence concepts – specifically machine learning and deep learning.
Supervised and unsupervised learning describe two ways in which machines – algorithms – can be set loose on a data set and expected to ‘learn’ something useful from it.
Supervised Machine Learning
Today, supervised machine learning is by far the more common approach across a wide range of industry use cases. The fundamental difference is that with supervised learning, the correct output for each example is already known – just as it is when a student learns from an instructor. All that remains is to work out the process needed to get from input to output. This is usually the case when an algorithm is being ‘taught’ from a training data set. If the algorithm produces results that differ widely from those the training data says should be expected, the instructor can step in and guide it back to the right path.
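The learn-from-known-answers loop described above can be sketched in a few lines of plain Python. This is an invented illustration, not any particular library's API: here the ‘learning’ is a simple nearest-neighbour lookup, and the data and function names are made up for the example.

```python
# Labelled training data: each input comes with the output we already know.
training_data = [
    ([1.0, 1.2], "small"),
    ([0.8, 1.1], "small"),
    ([9.0, 8.5], "large"),
    ([8.7, 9.2], "large"),
]

def predict(point):
    # 1-nearest-neighbour: answer with the label of the closest known example.
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: squared_distance(ex[0], point))
    return label

# The 'instructor' check: every prediction is compared against a known answer.
for features, expected in training_data:
    assert predict(features) == expected
```

Because the right answers are attached to the training inputs, any prediction that drifts away from them is immediately visible and correctable – that feedback loop is what makes the process ‘supervised’.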
Unsupervised Machine Learning
Unsupervised machine learning is a more complex process which has been put to use in a far smaller number of applications so far. But this is where a lot of the excitement over the future of Artificial Intelligence (AI) stems from. When people talk about computers learning to ‘teach themselves’, rather than us having to teach them (one of the principles of machine learning), they are often alluding to unsupervised learning processes.
In unsupervised learning, there is no labelled training data set, and the outcomes are unknown in advance. Essentially the AI goes into the problem blind, guided only by the patterns it can find in the input itself. Remarkable as it seems, unsupervised machine learning can solve complex problems using just the raw input data and the logical operations that all computer systems are built on – no reference data at all.
Example: Difference Between Supervised And Unsupervised Machine Learning
Here’s a very simple example. Say we have a digital image showing a number of coloured geometric shapes which we need to match into groups according to their classification and colour (a common problem in machine learning image recognition applications).
With supervised learning, it’s a fairly straightforward procedure. We simply teach the computer that shapes with four equal sides are known as squares, and shapes with eight sides are known as octagons. We also tell it that if the light given off by a pixel registers certain values, we classify it as ‘red’, and another set of values as ‘blue’.
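In code, those taught rules might look like the following hypothetical sketch – the labels and thresholds are supplied by us, the ‘instructor’, rather than discovered by the machine (the function names and the simplified colour rule are invented for illustration):

```python
def classify_shape(num_sides):
    # The 'instructor' supplies the names: four sides is a square, eight an octagon.
    return {4: "square", 8: "octagon"}.get(num_sides, "unknown")

def classify_colour(rgb):
    # Simplified rule: whichever of the red/blue channels is stronger wins.
    red, _green, blue = rgb
    return "red" if red > blue else "blue"
```

Every category the system can output here was named in advance by a human – which is exactly what distinguishes the supervised version of the task.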
With unsupervised learning, things become a little trickier. The algorithm has the same input data – in our example, digital images of geometric shapes in different colours – and the same problem: to sort them into groups.
It then works with what it can learn from this information: that the task is one of classification, and that some shapes resemble others to varying degrees – sharing, perhaps, the same number of sides, or matching pixel values indicating their colour.
It can’t know that we will call this object a square, or an octagon, but it will recognise other objects with roughly the same characteristics, group them together and assign its own label to them, which it can also apply – with a degree of probability – to other similar shapes.
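That grouping-without-names behaviour can be sketched with a tiny one-dimensional k-means, a standard clustering method. This is a minimal illustration under invented data – the algorithm sees only side counts, never the words ‘square’ or ‘octagon’, and ends up inventing its own labels:

```python
# Unlabelled input: only the number of sides of each shape, no names attached.
sides = [4, 4, 4, 8, 8, 4, 8]

# A tiny 1-D k-means sketch: the algorithm discovers two groups by itself.
centroids = [float(min(sides)), float(max(sides))]
for _ in range(10):
    groups = [[], []]
    for s in sides:
        nearest = min((0, 1), key=lambda i: abs(s - centroids[i]))
        groups[nearest].append(s)
    centroids = [sum(g) / len(g) if g else centroids[i]
                 for i, g in enumerate(groups)]

def label(s):
    # The machine's own label: 'group-0'/'group-1', not 'square'/'octagon'.
    return f"group-{min((0, 1), key=lambda i: abs(s - centroids[i]))}"
```

A new, roughly octagon-like shape (say, seven visible sides) would be assigned to the same group as the octagons – the ‘degree of probability’ matching described above.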
Technically there is no right or wrong answer – the AI simply learns that certain shapes belong together, to a degree of probability. Machine learning makes mistakes – take a look at the footage of DeepMind's system learning to master the video game Breakout through trial and error (strictly speaking, reinforcement learning rather than unsupervised learning). But like us, its strength lies in its ability to learn from its mistakes and make better-educated estimations next time.
Towards Generalised AI
As you can see, creating unsupervised learning applications requires more work at the outset, to show the algorithm how to carry out this automated classification. But once that is done, in theory it will continue to teach itself as it processes more input data, becoming increasingly efficient at sorting shapes. Image recognition is by no means the only application; in fact, it is likely that unsupervised learning will eventually lead to the development of ‘generalised AI’ – applications capable of teaching themselves to do many different tasks rather than specialising in one function.
So how is this done? The basic functions are generally methods drawn from the academic field of statistics, such as clustering, anomaly detection and probability estimation. More recently, as demonstrated by Google's AI research group DeepMind, knowledge from the field of neuroscience has been applied to the problem of classifying unlabelled data. Artificial neural networks (usually just called “neural networks” in computing) aim to mimic the thought and decision-making processes of the human brain. New breakthroughs in biological neuroscience often yield results that also push forward the boundaries of computational neuroscience.
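One of those statistical methods, anomaly detection, is easy to sketch: flag any value that sits unusually far from the rest, with no labels needed. This is a hypothetical example using Python's standard `statistics` module; the readings and the two-standard-deviation threshold are invented for illustration.

```python
import statistics

def find_anomalies(values, threshold=2.0):
    # Flag values more than `threshold` standard deviations from the mean.
    mean = statistics.mean(values)
    spread = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * spread]

# Five ordinary sensor readings and one obvious outlier.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0]
```

Nothing told the function which reading was ‘wrong’; the oddity emerges purely from the shape of the data, which is the unsupervised idea in miniature.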
Semi-Supervised Machine Learning
In reality, many problems require a solution that falls somewhere between the two extremes discussed here. Often, the reference data needed to solve the problem exists, but in an incomplete or inaccurate state. Semi-supervised learning solutions are deployed here: they draw on reference data when it is available, and use unsupervised techniques to make ‘best guesses’ when it comes to filling in the gaps.
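A common semi-supervised pattern is self-training: use the few labelled examples you have to guess labels for the rest, then fold those guesses back into the reference data. The sketch below is an invented toy (temperature readings labelled ‘cold’ or ‘hot’), not a production technique:

```python
# Partially labelled data: a few examples have known outputs, most do not.
labelled   = [(1.0, "cold"), (30.0, "hot")]
unlabelled = [2.5, 28.0, 31.5, 0.5]

def best_guess(value):
    # Fill the gap with the label of the nearest labelled example.
    _, guess = min(labelled, key=lambda ex: abs(ex[0] - value))
    return guess

# Self-training: each best guess becomes new reference data for later guesses.
for value in unlabelled:
    labelled.append((value, best_guess(value)))
```

The labelled examples play the supervised role; the gap-filling for unlabelled values is the unsupervised ‘best guess’ the paragraph above describes.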
I hope this has served as a useful introduction to two different methods machines are using to become more intelligent, and ultimately useful. In particular, semi-supervised and unsupervised learning are likely to yield interesting results when robots advance to the stage where they can give us their objective, unbiased insights into how we work, and how the world around us fits together.