A Short History Of Deep Learning — Everyone Should Read
2 July 2021
Deep learning is a topic that is making big waves at the moment. It is a branch of machine learning (another hot topic) that uses algorithms to, for example, recognise objects and understand human speech. Scientists have used deep learning algorithms with multiple processing layers (hence “deep”) to build better models from large quantities of unlabelled data, such as photos with no description, voice recordings or videos on YouTube.

It’s one kind of supervised machine learning, in which a computer is given a training set of examples to learn a function; each example is a pair of an input and the function’s output for that input.
Very simply: if we give the computer a picture of a cat and a picture of a ball, and show it which one is the cat, we can then ask it to decide whether subsequent pictures are cats. The computer compares each new image to its training set and gives an answer. Today’s algorithms can also learn unsupervised; that is, they can find structure in data without labelled examples.
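To make this concrete, here is a minimal sketch in Python of the cat-versus-ball example as supervised learning. The two-number feature representation and the nearest-neighbour comparison rule are illustrative assumptions for this sketch; real systems learn far richer representations from raw pixels.

```python
# A toy "compare to the training set" classifier (1-nearest-neighbour).
# Each image is reduced to two made-up features for illustration,
# e.g. (roundness, ear_pointiness); real systems learn features from pixels.
import math

# Training set: (features, label) pairs, one per labelled example
training_set = [
    ((0.2, 0.9), "cat"),
    ((0.95, 0.05), "ball"),
]

def classify(features):
    """Label a new example with the label of its nearest training example."""
    nearest = min(training_set,
                  key=lambda example: math.dist(example[0], features))
    return nearest[1]

print(classify((0.3, 0.8)))  # -> cat
print(classify((0.9, 0.1)))  # -> ball
```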
Of course, the more complex the task, the bigger the training set has to be. Google’s voice recognition algorithms operate with a massive training set — yet it’s not nearly big enough to predict every possible word or phrase or question you could put to it.
But it’s getting there. Deep learning is responsible for recent advances in computer vision, speech recognition, natural language processing, and audio recognition.
Deep learning is based on the concept of artificial neural networks: computational systems loosely modelled on the way the human brain functions. And so, our brief history of deep learning must start with those neural networks.
1943: Warren McCulloch and Walter Pitts create a computational model for neural networks based on mathematics and algorithms called threshold logic.
1958: Frank Rosenblatt creates the perceptron, a pattern-recognition algorithm based on a two-layer neural network that learns using simple addition and subtraction (a minimal sketch of its learning rule appears after this timeline). He also proposed additional layers with mathematical notation, but these wouldn’t be realised until 1975.
1980: Kunihiko Fukushima proposes the Neocognitron, a hierarchical, multilayered artificial neural network that has been used for handwriting recognition and other pattern recognition problems.
1989: Scientists create algorithms that use deep neural networks, but training times are measured in days, making the systems impractical for real-world use.
1992: Juyang Weng publishes Cresceptron, a method for performing 3-D object recognition automatically from cluttered scenes.
Mid-2000s: The term “deep learning” begins to gain popularity after a paper by Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered neural network could be pre-trained one layer at a time.
2009: The NIPS Workshop on Deep Learning for Speech Recognition finds that with a large enough data set, neural networks don’t need pre-training, and error rates drop significantly.
2012: Artificial pattern-recognition algorithms achieve human-level performance on certain tasks. And Google’s deep learning algorithm discovers cats.
2014: Google buys UK artificial intelligence startup DeepMind for £400m.
2015: Facebook puts its deep learning technology, called DeepFace, into operation to automatically tag and identify Facebook users in photographs. The algorithms perform face recognition using deep networks with 120 million parameters.
2016: Google DeepMind’s algorithm AlphaGo masters the complex board game Go and beats professional Go player Lee Sedol at a highly publicised match in Seoul.
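As promised above, here is a minimal sketch of Rosenblatt’s perceptron learning rule in Python, trained on the logical AND function. The toy data, learning rate, and epoch count are illustrative assumptions; the point is that the weight updates really are just simple addition and subtraction.

```python
# Minimal perceptron sketch: a threshold unit whose weights are
# corrected by simple addition and subtraction when it misclassifies.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Threshold activation: fire only if the weighted sum is positive
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if total > 0 else 0
            # On a mistake, nudge each weight up or down by lr * input
            error = target - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND: only the input (1, 1) should produce 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
for inputs, target in data:
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, "->", 1 if total > 0 else 0)  # matches each target
```

A single unit like this can only separate classes with a straight line, which is why the additional layers Rosenblatt proposed mattered so much.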
The promise of deep learning is not that computers will start to think like humans. That’s a bit like asking an apple to become an orange. Rather, it demonstrates that given a large enough data set, fast enough processors, and a sophisticated enough algorithm, computers can begin to accomplish tasks that used to be completely left in the realm of human perception — like recognising cat videos on the web (and other, perhaps more useful purposes).