A Short History Of Deep Learning — Everyone Should Read
2 July 2021
Deep learning is a topic that is making big waves at the moment. It is essentially a branch of machine learning (another hot topic) that uses algorithms to, for example, recognise objects and understand human speech. Scientists have used deep learning algorithms with multiple processing layers (hence “deep”) to build better models from large quantities of unlabelled data (such as photos with no description, voice recordings or videos on YouTube).
In practice, much of it is supervised machine learning, in which a computer is given a training set of examples from which to learn a function; each example is a pair of an input and the corresponding output of that function.
Very simply: if we give the computer a picture of a cat and a picture of a ball, and show it which one is the cat, we can then ask it to decide whether subsequent pictures are cats. The computer compares each new image to its training set and gives an answer. Today’s algorithms can also do some of this unsupervised; that is, they don’t need every decision to be pre-programmed.
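To make the idea of input-output training pairs concrete, here is a minimal sketch in Python using scikit-learn; the library choice and the three toy “features” standing in for each picture are my own assumptions for illustration, since a real deep learning system would learn directly from the raw pixels.

```python
# A minimal sketch of supervised learning on a toy cat-vs-ball problem.
# The feature vectors below are made up for illustration only.
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: each example is an (input, output) pair.
# Inputs are toy feature vectors, e.g. [roundness, furriness, has_ears].
X_train = [
    [0.2, 0.9, 1.0],  # a cat
    [0.3, 0.8, 1.0],  # another cat
    [0.9, 0.1, 0.0],  # a ball
    [0.8, 0.2, 0.0],  # another ball
]
y_train = ["cat", "cat", "ball", "ball"]

# A small multi-layer neural network learns the mapping from inputs to labels.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Given a new, unlabelled example, the model decides whether it is a cat.
print(model.predict([[0.25, 0.85, 1.0]]))  # most likely ['cat'] for this toy data
```

The point of the sketch is only the shape of the problem: labelled examples go in, and the trained model then labels new, unseen inputs on its own.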
Of course, the more complex the task, the bigger the training set has to be. Google’s voice recognition algorithms operate with a massive training set — yet it’s not nearly big enough to predict every possible word or phrase or question you could put to it.
But it’s getting there. Deep learning is responsible for recent advances in computer vision, speech recognition, natural language processing, and audio recognition.
Deep learning is based on the concept of artificial neural networks, or computational systems that mimic the way the human brain functions. And so, our brief history of deep learning must start with those neural networks.
1943: Warren McCulloch and Walter Pitts create a computational model for neural networks based on mathematics and algorithms called threshold logic.
1958: Frank Rosenblatt creates the perceptron, an algorithm for pattern recognition based on a two-layer computer neural network using simple addition and subtraction (a minimal sketch of this learning rule appears after this timeline). He also proposed additional layers with mathematical notations, but these wouldn’t be realised until 1975.
1980: Kunihiko Fukushima proposes the Neocognitron, a hierarchical, multilayered artificial neural network that has been used for handwriting recognition and other pattern recognition problems.
1989: Scientists were able to create algorithms that used deep neural networks, but training times for the systems were measured in days, making them impractical for real-world use.
1992: Juyang Weng publishes Cresceptron, a method for performing 3-D object recognition automatically from cluttered scenes.
Mid-2000s: The term “deep learning” begins to gain popularity after a paper by Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered neural network could be pre-trained one layer at a time.
2009: Researchers at the NIPS Workshop on Deep Learning for Speech Recognition find that, with a large enough data set, neural networks don’t need pre-training, and error rates drop significantly.
2012: Artificial pattern-recognition algorithms achieve human-level performance on certain tasks. And Google’s deep learning algorithm discovers cats.
2014: Google buys UK artificial intelligence startup DeepMind for £400m.
2015: Facebook puts its deep learning technology, called DeepFace, into operation to automatically identify and tag Facebook users in photographs. The system performs face recognition to a very high standard using deep networks that take 120 million parameters into account.
2016: Google DeepMind’s algorithm AlphaGo masters the complex board game Go and beats the professional Go player Lee Sedol at a highly publicised tournament in Seoul.
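For a feel of how simple the earliest of these ideas was, here is a minimal sketch of the perceptron learning rule from the 1958 entry above, written in Python with NumPy on a small made-up data set (the data and the function name are purely illustrative, not from the article): the network nudges its weights by nothing more than adding or subtracting the inputs whenever it gets an example wrong.

```python
# A toy version of Rosenblatt's perceptron learning rule.
import numpy as np

def train_perceptron(X, y, epochs=10):
    """Learn weights w and bias b so that (np.dot(w, x) + b > 0) matches the 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            if prediction != target:
                # Misclassified: nudge the weights towards (or away from) xi.
                step = target - prediction   # +1 or -1
                w += step * xi               # just addition or subtraction of the input
                b += step
    return w, b

# Hypothetical data: points above the line x1 + x2 = 1 are labelled 1.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 0.8]])
y = np.array([0, 1, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)
```

Running it prints the learned weights and bias, which together define the straight line the perceptron uses to separate the two classes.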
The promise of deep learning is not that computers will start to think like humans. That’s a bit like asking an apple to become an orange. Rather, it demonstrates that given a large enough data set, fast enough processors, and a sophisticated enough algorithm, computers can begin to accomplish tasks that used to be completely left in the realm of human perception — like recognising cat videos on the web (and other, perhaps more useful purposes).