Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.


The Most Significant AI Milestones So Far

2 July 2021

Many people assume artificial intelligence is a new technology, but the concept has been around for a long time, and a series of significant milestones has propelled its capabilities forward over the years. Here we look at some of the key milestones achieved so far.

You might be surprised to discover that artificial intelligence (AI) really isn’t all that new. The concept that one day machines would be capable of thinking and making decisions was first contemplated by the philosopher and mathematician René Descartes in 1637 in his book Discourse on the Method. He was also the first to identify the distinction between what today is called specialized AI, where machines learn how to perform one specific task, and general AI, where machines can adapt to any job. Although many people might see AI as a relatively new field, it already has a long and very interesting history of important milestones. Here we highlight some of the most significant.

AI Before the Term Was Coined – Early 20th Century

In the early 20th century, the concepts that would ultimately result in AI started out in the minds of science fiction writers and scientists. In 1927, the sci-fi film Metropolis was released, featuring an artificially intelligent robot, and in 1950 Isaac Asimov published his visionary collection of short stories, I, Robot. Asimov envisioned the Three Laws of Robotics and a computer that could answer questions because it could store human knowledge. In 1943, a collaboration between Warren McCulloch and Walter Pitts introduced the idea that logical functions could be carried out by networks of artificial neurons (what today are known as artificial neural networks, or ANNs) in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.”

Also in 1943, Alan Turing, who would go on to devise the Turing test for judging whether a machine has reached human-like intelligence, and neurologist William Grey Walter, who created the first autonomous robots, the “tortoises” Elmer and Elsie, began to collaborate on the idea of intelligent machines.

The Dartmouth Conference – 1956

The term artificial intelligence was first used at a summer workshop organized by Professor John McCarthy at Dartmouth College. The event brought together experts in machine learning and neural networks to generate new ideas and debate how to tackle AI. In addition to neural networks, computer vision, natural language processing and more were on the agenda that summer.

The Chatbot ELIZA – 1966

Before Alexa and Siri were even a figment of their developers’ imaginations, there was ELIZA, the world’s first chatbot. An early implementation of natural language processing, ELIZA was created at MIT by Joseph Weizenbaum. ELIZA couldn’t speak, but it could hold a conversation through text.

AI Becomes Useful – 1980

By the late 1960s, the luster of AI had worn off a bit: millions of dollars had been invested, and the field was still not living up to the lofty predictions people had made for it. But by the 1980s, the “AI winter” was over. The XCON expert system from Digital Equipment Corporation was credited with saving the company $40 million annually from 1980 to 1986. This was a significant milestone because it showed that AI wasn’t just a cool technological feat but had important real-world applications. These applications had a narrower focus, programming AI to solve one particular problem. Businesses began to understand the impact AI could have on their operations and the money it could save; by 1985, companies were investing around $1 billion a year in AI systems.

Principles of Probability – 1988

Prior to IBM researchers publishing “A Statistical Approach to Language Translation” in 1988, AI was very rules-driven. This paper, which considered automated translation between French and English, marked the beginning of AI that learns the probabilities of various outcomes from data rather than being trained via hand-written rules. This strategy better mimics the processes of the human brain and is still the way much machine learning happens today: the machine takes a trial-and-error approach, adjusting its future actions based on the feedback it receives from its current action.
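
To make that trial-and-error idea concrete, here is a minimal sketch in Python. It is a toy illustration, not anything from the IBM paper: the agent keeps a running estimate of how good each action is and nudges that estimate up or down based on feedback.

```python
import random

# A toy sketch of trial-and-error learning (hypothetical, not from the
# IBM paper): the agent keeps a running estimate of each action's payoff
# and updates it from feedback instead of following hand-written rules.
def trial_and_error(actions, get_feedback, rounds=1000, epsilon=0.1, lr=0.1):
    estimates = {a: 0.0 for a in actions}
    for _ in range(rounds):
        # Mostly pick the best-looking action, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(estimates, key=estimates.get)
        reward = get_feedback(action)  # feedback from the environment
        # Nudge the estimate toward the observed reward.
        estimates[action] += lr * (reward - estimates[action])
    return estimates

# Toy environment: action "b" pays off more often than action "a".
payoffs = {"a": 0.3, "b": 0.7}
learned = trial_and_error(["a", "b"],
                          lambda a: 1.0 if random.random() < payoffs[a] else 0.0)
print(learned)  # the estimate for "b" should end up higher
```

Run repeatedly, the estimates converge toward each action’s true payoff, so the agent learns from outcomes rather than from rules.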

Internet – 1991

The World Wide Web launched in 1991, when CERN researcher Tim Berners-Lee put the world’s first website online, built on his hypertext transfer protocol (HTTP). The web made it possible to connect and share data no matter who or where you are. Since data is the fuel for artificial intelligence, there’s little doubt that AI has progressed to where it is today thanks to Berners-Lee’s work.

Chess and AI – 1997

Another undoubted milestone for AI came when world chess champion Garry Kasparov was defeated by IBM’s Deep Blue supercomputer. This was a win that allowed the general population, not just those close to the AI industry, to grasp the rapid development and evolution of computers. In this case, Deep Blue won by using its high-speed capabilities (it could evaluate 200 million positions a second) to calculate every possible option rather than analyzing the game the way a human player would.
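
Deep Blue’s real search combined pruning, handcrafted evaluation and custom hardware, but the brute-force core, exhaustively scoring every line of play, can be sketched with a classic minimax search on a toy game (a simplified illustration, not IBM’s code):

```python
# Minimal minimax: exhaustively search every line of play and score the
# outcome, the brute-force idea behind Deep Blue's search (hugely
# simplified; the real system added pruning, heuristics and chess hardware).
def minimax(pile, maximizing):
    """Toy game: players alternately take 1 or 2 counters; whoever takes
    the last counter wins. Returns +1 if the maximizing player wins with
    perfect play, -1 otherwise."""
    if pile == 0:
        # The previous player took the last counter and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))  # 1: the first player can force a win
print(minimax(6, True))  # -1: piles divisible by 3 are lost for the mover
```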

Five Autonomous Vehicles Complete the DARPA Grand Challenge – 2005

When the DARPA Grand Challenge was first run in 2004, not a single autonomous vehicle completed the off-road course through the Mojave Desert. In 2005, five vehicles made it! The race helped spur the development of autonomous driving technology.

AI Wins Jeopardy! – 2011

In 2011, IBM’s Watson took on champion human Jeopardy! players and ended up winning the $1 million prize. This was significant because prior challenges against humans, such as the Kasparov chess match, had relied on the machine’s stellar computing power, whereas in Jeopardy! Watson had to compete in a language-based, creative-thinking game.

Deep Learning on Display – 2012

AI learned to recognize pictures of cats in 2012. In this collaboration between Stanford and Google, documented in the paper “Building High-Level Features Using Large Scale Unsupervised Learning” by a team including Jeff Dean and Andrew Ng, unsupervised learning was accomplished at scale. Prior to this development, data needed to be manually labeled before it could be used to train AI. With unsupervised learning, demonstrated here by machines identifying cats, an artificial neural network could be set to work on raw data. In this case, the machines processed 10 million unlabeled images taken from YouTube videos to learn what a cat looks like. This ability to learn from unlabeled data accelerated the pace of AI development and opened up tremendous possibilities for what machines could help us do in the future.
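
The Google and Stanford system used a huge autoencoder network, but the essence of unsupervised learning, finding structure in unlabeled data, can be illustrated with a far simpler algorithm such as k-means clustering (a toy sketch, not the method from the paper):

```python
import numpy as np

# Toy k-means clustering: group unlabeled points into k clusters with no
# labels supplied, a much simpler cousin of the unsupervised network in
# the Google/Stanford cat paper.
def kmeans(points, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # Assign every point to its nearest center.
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# Two obvious blobs of unlabeled 2-D data; kmeans discovers them unaided.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
centers, labels = kmeans(data, k=2)
print(np.round(centers, 1))  # approximately [[0, 0], [5, 5]]
```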

Insightful Vision – 2015

In 2015, the annual ImageNet challenge showed that machines could outperform humans at recognizing and describing images drawn from 1,000 object categories. Image recognition had long been a major challenge for AI: from the contest’s beginning in 2010 to 2015, the winning algorithm’s accuracy rose from 71.8% to 97.3%.

Gaming Capabilities Grow – 2016

AlphaGo, created by DeepMind, now a Google subsidiary, defeated Lee Sedol, one of the world’s best Go players, four games to one in 2016. The sheer number of variations in the game makes brute force impractical (there are more than 100,000 possible combinations for the first two moves in Go, compared with 400 in chess). To win, AlphaGo used neural networks to study the game and then to keep learning as it played.
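
A quick back-of-the-envelope calculation shows why brute force is hopeless in Go: the game tree grows roughly as b^d for branching factor b and search depth d, and Go’s branching factor is an order of magnitude larger than chess’s. The figures below use commonly cited approximations:

```python
# Back-of-the-envelope game-tree growth, using typical average branching
# factors (roughly 35 for chess, 250 for Go; both are approximations).
for depth in (2, 4, 6):
    chess = 35 ** depth
    go = 250 ** depth
    print(f"depth {depth}: chess ~{chess:.1e} positions, Go ~{go:.1e} positions")

# Opening pairs: 20 * 20 = 400 in chess vs 361 * 360 = 129,960 in Go,
# the "more than 100,000 vs 400" comparison in the text.
print(20 * 20, 361 * 360)
```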

On the Road with Autonomous Vehicles – 2018

2018 was a significant milestone for autonomous vehicles because they hit the road commercially, thanks to Waymo’s self-driving taxi service in Phoenix, Arizona. And it wasn’t just for testing: around 400 people paid to be driven to work and school by the driverless cars within a 100-square-mile area, with human co-pilots on board who could step in if necessary.

2019 and Beyond

In the coming years, we can expect further development and refinement of breakthrough technologies that AI has already demonstrated, such as self-driving vehicles on land, at sea and in the air, and increasingly capable chatbots. Thanks to AI’s proficiency in natural language generation and processing, we can expect to “speak” to even more algorithms in the future than we already do now (and we might even mistake them for humans). In response to the COVID crisis, we will see new AI applications supporting contactless delivery, cleaning and more. And, of course, there may be applications we haven’t even dreamed up yet.

What do you think the next notable AI milestone will be? 


