What Is The Difference Between Supervised And Unsupervised Machine Learning?
2 July 2021
In recent articles I have looked at some of the terminology being used to describe high-level Artificial Intelligence concepts – specifically machine learning and deep learning.
Supervised and unsupervised learning describe two ways in which machines – algorithms – can be set loose on a data set and expected to ‘learn’ something useful from it.
Supervised Machine Learning
Today, supervised machine learning is by far the more common across a wide range of industry use cases. The fundamental difference is that with supervised learning, the correct output for each training example is already known – just like when a student is learning from an instructor. All that needs to be done is to work out the process necessary to get from your input to your output. This is usually the case when an algorithm is being ‘taught’ from a training data set. If the algorithms produce results markedly different from those the training data says should be expected, the instructor can step in and guide them back to the right path.
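A minimal sketch of this workflow, using a toy nearest-neighbour classifier (the shape features, values and labels below are invented purely for illustration): each training example pairs an input with its known, correct output, and new inputs are classified by finding the closest labelled example.

```python
# Supervised learning in miniature: every training example carries a
# known label, and the algorithm maps new inputs to those labels.

def nearest_neighbour(train, features):
    """Classify `features` by the closest labelled training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: distance(ex[0], features))[1]

# Training set: (number_of_sides, side_length_ratio) -> known label
training_data = [
    ((4, 1.0), "square"),
    ((4, 0.5), "rectangle"),
    ((8, 1.0), "octagon"),
]

print(nearest_neighbour(training_data, (4, 0.9)))  # square
print(nearest_neighbour(training_data, (8, 1.1)))  # octagon
```

Because the labels are given up front, an instructor can spot and correct misclassifications simply by comparing the output against the known answers.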
Unsupervised Machine Learning
Unsupervised machine learning is a more complex process which has been put to use in a far smaller number of applications so far. But this is where a lot of the excitement over the future of Artificial Intelligence (AI) stems from. When people talk about computers learning to ‘teach themselves’, rather than us having to teach them (one of the principles of machine learning), they are often alluding to unsupervised learning processes.
In unsupervised learning, there is no labelled training data set and the outcomes are unknown in advance. Essentially the AI goes into the problem blind, with only the structure of the input data itself to guide it. Remarkable as it seems, unsupervised machine learning is the ability to find patterns in complex problems using just the input data – no reference labels at all.
Example: Difference Between Supervised And Unsupervised Machine Learning
Here’s a very simple example. Say we have a digital image showing a number of coloured geometric shapes which we need to match into groups according to their classification and colour (a common problem in machine learning image recognition applications).
With supervised learning, it’s a fairly straightforward procedure. We simply teach the computer that shapes with four equal sides are known as squares, and shapes with eight sides are known as octagons. We also tell it that if a pixel’s colour values fall within one range, we classify it as ‘red’, and within another range as ‘blue’.
With unsupervised learning, things become a little trickier. The algorithm has the same input data – in our example, digital images showing geometric shapes, in different colours, and the same problem, which is to sort them into groups.
It then uses what it can infer from this information – that the problem is one of classification, and that some of the shapes resemble others to varying degrees, perhaps sharing the same number of sides or matching digital markers indicating the colour.
It can’t know that we will call this object a square, or an octagon, but it will recognise other objects with roughly the same characteristics, group them together and assign its own label to them, which it can also apply – with a degree of probability – to other similar shapes.
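The grouping step described above can be sketched with k-means clustering, a standard unsupervised technique. Here it runs on a single invented feature, the number of sides detected per shape; the algorithm groups similar values together without ever being told what the groups are called.

```python
# Unsupervised grouping in miniature: k-means clustering with two
# clusters, naively initialised at the extremes of the data.

def kmeans_1d(values, iterations=10):
    centroids = [min(values), max(values)]
    for _ in range(iterations):
        clusters = [[], []]
        for v in values:
            # assign each value to its nearest centroid
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

sides = [4, 4, 8, 8, 4, 8]  # sides counted from each detected shape
groups = kmeans_1d(sides)
# The clusters carry no names: the algorithm never learns the words
# "square" or "octagon", only that these values belong together.
print(groups)  # [[4, 4, 4], [8, 8, 8]]
```

The output is two anonymous groups; it is up to us (or a later labelling step) to decide that one cluster contains squares and the other octagons.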
Technically there is no right or wrong answer – the algorithm simply learns, with a degree of probability, that certain shapes belong together. Machine learning makes mistakes – take a look at this video of DeepMind’s system learning to master the video game Breakout through trial and error (strictly speaking, reinforcement learning rather than unsupervised learning). But like us, its strength lies in its ability to learn from its mistakes and make better-informed estimates next time.
Towards Generalised AI
As you can see, creating unsupervised learning applications requires more work at the outset to set up this kind of advanced, automated classification. But once that is done, in theory the system will continue to teach itself as it reads more input data, becoming increasingly efficient at sorting shapes. Image recognition is by no means the only application; in fact, it is likely that unsupervised learning will eventually lead to the development of ‘generalised AI’ applications, capable of teaching themselves how to do many different tasks rather than specialising in one function.
So how is this done? Well, the basic functions are generally methods drawn from the academic field of statistics, such as clustering, anomaly detection and probability estimation. More recently, as demonstrated by Google’s AI development group DeepMind, knowledge from the field of neuroscience has been applied to the problem of classifying unlabelled data. Artificial neural networks (usually just called “neural networks” in computing) aim to mimic the thought and decision-making processes of the human brain. New breakthroughs in biological neuroscience often bear results which also push forward the boundaries of computational neuroscience.
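One of the statistical techniques just mentioned, anomaly detection, can be sketched very simply: flag any value that lies more than a chosen number of standard deviations from the mean. The sensor-style readings and the threshold below are invented for illustration.

```python
# Anomaly detection in miniature: a value is anomalous if it sits
# more than `threshold` standard deviations away from the mean.
import statistics

def anomalies(values, threshold=2.0):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

readings = [10.1, 9.8, 10.2, 10.0, 9.9, 25.0, 10.3]
print(anomalies(readings))  # the 25.0 reading stands out
```

No labels are needed: the data's own distribution defines what counts as unusual, which is exactly what makes this an unsupervised technique.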
Semi-Supervised Machine Learning
In reality, many problems require a solution that falls somewhere between the two extremes discussed here. Often, the reference data needed to solve the problem exists but is incomplete or inaccurate. Semi-supervised learning solutions are deployed in these cases: they use the reference data where it’s available and fall back on unsupervised techniques to make ‘best guesses’ where labels are missing.
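One common semi-supervised pattern is self-training: start from a small labelled set, guess labels for the unlabelled points (here via the nearest labelled neighbour on a single invented feature), and fold the most confident guesses back into the training data. Everything below – data, feature and helper names – is a hypothetical sketch, not a standard library API.

```python
# Self-training in miniature: scarce labelled data plus unlabelled
# data, with 'best guesses' filling in the gaps one point at a time.

def self_train(labelled, unlabelled, rounds=3):
    labelled = list(labelled)
    remaining = list(unlabelled)
    for _ in range(rounds):
        if not remaining:
            break
        # pick the unlabelled point closest to any labelled example
        # (the most confident guess)
        best = min(remaining,
                   key=lambda x: min(abs(x - v) for v, _ in labelled))
        # copy the label of its nearest labelled neighbour
        _, label = min(labelled, key=lambda ex: abs(best - ex[0]))
        labelled.append((best, label))
        remaining.remove(best)
    return labelled

seed = [(4.0, "square"), (8.0, "octagon")]  # scarce reference data
unlabelled = [4.2, 7.9, 4.1]                # gaps to fill
print(self_train(seed, unlabelled))
```

Each round grows the labelled set, so later guesses can lean on earlier ones – which is also why a bad early guess can propagate, a well-known risk of self-training.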
I hope this has served as a useful introduction to two different methods machines are using to become more intelligent, and ultimately useful. In particular, semi-supervised and unsupervised learning are likely to yield interesting results when robots advance to the stage where they can give us their objective, unbiased insights into how we work, and how the world around us fits together.