The Most Significant AI Milestones So Far
2 July 2021
Many people assume artificial intelligence is a new technology, but the concept has been around for a long time. There have been several significant milestones in its history that helped catapult artificial intelligence capabilities over the years. Here we look at some of the key milestones achieved so far.
You might be surprised to discover that artificial intelligence (AI) really isn’t all that new. The idea that machines might one day think and make decisions was contemplated as early as 1637 by philosopher and mathematician René Descartes in his Discourse on the Method. He was also among the first to draw the distinction between what today we call specialized (or narrow) AI, where machines learn to perform one specific task, and general AI, where machines can adapt to any job. Although many people see AI as a relatively new field, it already has a rich history of important milestones. Here we highlight some of the most significant.
AI Before the Term Was Coined – Early 20th Century
In the early 20th century, the concepts that would eventually become AI took shape in the minds of science fiction writers and scientists. In 1927, the sci-fi film Metropolis featured an artificially intelligent robot, and in 1950 Isaac Asimov published his visionary short-story collection I, Robot, which introduced the Three Laws of Robotics and imagined a computer that could answer questions by storing human knowledge. In 1943, Warren McCulloch and Walter Pitts introduced the idea that logical functions could be carried out by networks of artificial neurons, the forerunners of today’s artificial neural networks (ANNs), in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.”
Also in 1943, Alan Turing, who would go on to devise the Turing test for judging whether a machine has reached human-like intelligence, and neurologist William Grey Walter, who built the first autonomous robots, the “tortoises” Elmer and Elsie, were grappling with the challenge of intelligent machines.
The Dartmouth Conference – 1956
The term “artificial intelligence” was first used at a 1956 summer workshop organized by Professor John McCarthy at Dartmouth College. The event brought together experts in machine learning and neural networks to generate new ideas and debate how to tackle AI; alongside neural networks, topics such as computer vision and natural language processing were on the agenda.
The Chatbot ELIZA – 1966
Before Alexa and Siri were even a glimmer in their developers’ imaginations, there was ELIZA, the world’s first chatbot. An early implementation of natural language processing, ELIZA was created at MIT by Joseph Weizenbaum in 1966. ELIZA couldn’t speak, but she communicated through text.
AI Becomes Useful – 1980
By the late 1960s, some of the luster had worn off AI: millions of dollars had been invested, yet the field was not living up to its lofty predictions. But by the 1980s, this “AI winter” was over. The XCON expert system from Digital Equipment Corporation was credited with saving the company $40 million a year from 1980 to 1986. This was a significant milestone because it showed that AI wasn’t just a cool technological feat; it had important real-world applications. These systems had a narrow focus, programmed to solve one particular problem, and businesses began to understand how AI could improve operations and save money. In fact, by 1985 companies were investing around $1 billion a year in AI systems.
Principles of Probability – 1988
Before IBM researchers published “A Statistical Approach to Language Translation” in 1988, AI was largely rules-driven. The paper, which tackled automated translation between French and English, marked the beginning of AI systems learning from data using the probabilities of different outcomes rather than being programmed with hand-written rules. This approach better mimics how the human brain learns and is still how machine learning works today: machines take a trial-and-error approach and adjust their future behavior based on the feedback they receive.
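To make the shift from rules to probabilities concrete, here is a minimal, hypothetical sketch in Python. It is not IBM’s actual model, and the word pairs are made up: instead of hand-coding a rule such as “maison means house,” the program simply counts which English word most often appears alongside a French word in example data and picks the most probable one.

```python
# A toy illustration of probability-based translation (not IBM's 1988 system).
from collections import Counter, defaultdict

# Tiny, made-up aligned word pairs standing in for a real parallel corpus.
aligned_pairs = [
    ("maison", "house"), ("maison", "house"), ("maison", "home"),
    ("chat", "cat"), ("chat", "cat"), ("chien", "dog"),
]

counts = defaultdict(Counter)
for french, english in aligned_pairs:
    counts[french][english] += 1          # count how often each pairing is seen

def translate(french_word):
    """Return the English word with the highest estimated probability."""
    options = counts[french_word]
    total = sum(options.values())
    best, best_count = options.most_common(1)[0]
    return best, best_count / total        # the word and its probability

print(translate("maison"))   # ('house', 0.666...) -- learned from data, not rules
```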
Internet – 1991
The World Wide Web went live in 1991, when CERN researcher Tim Berners-Lee put the world’s first website online using the hypertext transfer protocol (HTTP) he had developed. That made it possible for people to connect and share data no matter who or where they were. Since data is the fuel of artificial intelligence, there’s little doubt that AI has progressed to where it is today thanks in part to Berners-Lee’s work.
Chess and AI – 1997
Another undeniable milestone for AI came in 1997, when world chess champion Garry Kasparov was defeated by IBM’s Deep Blue supercomputer. This was a win that allowed the general population, not just those close to the AI industry, to grasp how rapidly computers were developing. In this case, Deep Blue won by using its high-speed capabilities (evaluating around 200 million positions a second) to calculate every possible option by brute force rather than analyzing gameplay the way a human would.
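For readers curious what “calculating every possible option” looks like in code, here is a minimal, illustrative sketch of brute-force game-tree search (minimax) on a made-up toy game. It is not Deep Blue’s engine, just the general idea of exhaustively evaluating future positions.

```python
# A toy minimax search: exhaustively explore every branch of a small game tree.
def minimax(node, maximizing):
    """Leaves are position scores; inner nodes are lists of possible moves."""
    if isinstance(node, (int, float)):      # leaf position: return its evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game: each list is a choice point, numbers are position scores.
toy_game = [[3, 5], [2, 9], [0, 7]]
print(minimax(toy_game, maximizing=True))   # best guaranteed score for the first player: 3
```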
5 Autonomous Vehicles Complete the DARPA Grand Challenge – 2005
When the DARPA Grand Challenge first ran in 2004, no autonomous vehicle completed the off-road course, well over 100 miles long, through the Mojave Desert. In 2005, five vehicles made it! The race helped spur the development of autonomous driving technology.
AI Wins Jeopardy! – 2011
In 2011, IBM’s Watson took on human Jeopardy! champions and walked away with the $1 million first prize. This was significant because earlier machine-versus-human contests, such as the Kasparov chess match, had relied on raw computing power; in Jeopardy!, Watson had to compete in a language-based, creative-thinking game.
Deep Learning on Display – 2012
AI learned to recognize pictures of cats in 2012. In a collaboration between Stanford and Google, documented in the paper “Building High-Level Features Using Large Scale Unsupervised Learning” by a team including Jeff Dean and Andrew Ng, unsupervised learning was demonstrated at scale. Before this, data typically had to be manually labeled before it could be used to train AI. With unsupervised learning, an artificial neural network could be set loose on raw data: in this case, the machines processed 10 million unlabeled images taken from YouTube videos and learned to identify cats on their own. This ability to learn from unlabeled data accelerated the pace of AI development and opened up tremendous possibilities for what machines could help us do in the future.
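As an illustration of the underlying idea, here is a minimal sketch of unsupervised learning: grouping unlabeled data points so that structure emerges without any human labels. It assumes NumPy and scikit-learn are available and uses made-up data; it is not the Google/Stanford system, which trained a large neural network on video frames.

```python
# Unsupervised learning in miniature: cluster unlabeled points into groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two made-up clouds of 2-D "feature vectors" standing in for image features.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(100, 2))
data = np.vstack([group_a, group_b])        # note: no labels anywhere

# The algorithm discovers the two groups on its own, with no labeled examples.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.labels_[:5], model.labels_[-5:])  # points from each cloud share a cluster id
```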
Insightful Vision – 2015
In 2015, the annual ImageNet challenge showed that machines could outperform humans at recognizing and describing objects across 1,000 categories. Image recognition had long been a major challenge for AI: from the contest’s start in 2010 to 2015, the winning algorithm’s accuracy rose from 71.8% to 97.3%.
Gaming Capabilities Grow – 2016
AlphaGo, created by DeepMind (now a Google subsidiary), defeated Go world champion Lee Sedol four games to one in a five-game match in 2016. The sheer number of possible variations makes brute force impractical: there are roughly 130,000 possible positions after the first two moves in Go, compared with about 400 in chess. To win, AlphaGo used deep neural networks to study the game and then kept learning as it played.
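For anyone who wants to sanity-check those figures, here is a rough back-of-envelope calculation (ours, not DeepMind’s) counting the positions reachable after each side has made one move, ignoring symmetry.

```python
# Why brute force works for chess openings but not for Go.
chess_openings = 20 * 20      # 20 legal first moves for White, 20 replies for Black
go_openings = 361 * 360       # 361 empty intersections, then 360 left for the reply
print(chess_openings, go_openings)   # 400 vs 129,960
```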
On the Road with Autonomous Vehicles – 2018
2018 was a significant milestone for autonomous vehicles because they hit public roads with Waymo’s self-driving taxi service in Phoenix, Arizona. And it wasn’t just for testing: around 400 paying riders were driven by the driverless cars to work and school within a roughly 100-square-mile area, with human safety drivers on board who could step in if necessary.
2019 and Beyond
In the coming years, we can expect further development and refinement of the breakthrough technologies AI has already demonstrated, from autonomous vehicles on land, at sea and in the air to ever more capable chatbots. As AI’s proficiency in natural language processing and generation grows, we will “speak” to even more algorithms than we already do (and might even mistake them for humans). In response to the COVID-19 crisis, we will see new AI applications supporting contactless delivery, cleaning and more. And, of course, there may be applications we haven’t even dreamed up yet.
What do you think the next notable AI milestone will be?
Where to go from here
If you would like to know more about artificial intelligence, check out my articles on:
- The Most Amazing Artificial Intelligence Milestones So Far
- Key Milestones Of Waymo – Google’s Self-Driving Cars
- What is AI?
- What Are The Negative Impacts Of Artificial Intelligence (AI)?
Or browse the Artificial Intelligence & Machine Learning section to find the topics that matter most to you.