Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest book is ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’



Machine Learning In Practice: How Does Amazon’s Alexa Really Work?

2 July 2021

“Alexa, what’s the weather going to be like today?”

It has taken decades for scientists to understand natural human speech well enough for voice-activated interfaces such as Alexa, Amazon’s natural language processing system, to be successfully accepted by consumers. Alexa is the voice that talks to users of Amazon’s Echo products, including the Echo, Dot and Tap, as well as Amazon Fire TV and various third-party products. Since 2012, when the patent was filed for what would ultimately become Amazon’s artificial intelligence system Alexa, there has been tremendous growth in its capabilities, and the credit for that growth goes to machine learning.

For something that we do every day without giving it any thought, conversation between machines and humans is complex. So, how did Amazon and others in the space such as Google, Apple and Microsoft crack the code?

ABCs of Alexa

Over 30 million smart speakers were sold globally last year, and this number is expected to grow to nearly 60 million this year. While Amazon remains the industry leader in smart speakers, selling about 20 million devices last year, others (especially Google) are also growing and starting to catch up. There are nuances to each, but let’s look “under the hood” of an Echo to see how Alexa works.

While there is some capability contained in the Echo cylinder itself, such as speakers, a microphone and a small computer that can wake the system and blink its lights to let you know it’s activated, the real work happens once it sends whatever you have said to the cloud, where it is interpreted by Alexa Voice Services (AVS).

So, when you ask Alexa, “What’s the weather going to be like today?”, the device records your voice. That recording is sent over the internet to Amazon’s Alexa Voice Services, which parses it into commands it understands. The system then sends the relevant output back to your device. When you ask about the weather, an audio file is sent back and Alexa tells you the forecast, all without you having any idea there was any back and forth between systems. What that of course means is that if you lose your internet connection, Alexa stops working.
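The round trip described above can be sketched in a few lines of code. This is purely an illustration of the flow, not Amazon’s actual AVS API; every function and message here is a made-up stand-in.

```python
# Illustrative sketch of the Echo-to-cloud round trip. All names here are
# hypothetical -- they do not correspond to the real Alexa Voice Services API.

def wake_word_detected(audio_frame: str) -> bool:
    # The small on-device computer only listens for the wake word.
    return "alexa" in audio_frame.lower()

def send_to_cloud(recording: str) -> dict:
    # Stand-in for the network call to the cloud service, which parses the
    # recording into an intent and returns a spoken response.
    if "weather" in recording:
        return {"intent": "GetWeather", "speech": "Today will be sunny."}
    return {"intent": "Unknown", "speech": "Sorry, I didn't catch that."}

def handle_utterance(recording: str, online: bool = True) -> str:
    # Without an internet connection the device cannot reach the cloud,
    # which is why Alexa stops working offline.
    if not online:
        return "I'm having trouble connecting to the internet."
    return send_to_cloud(recording)["speech"]
```

The key point the sketch captures is that almost nothing intelligent happens on the device; it records, ships audio to the cloud, and plays back whatever audio comes back.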

The skills Echo has out of the box are impressive to most of us, but Amazon also gives approved developers free access to Alexa Voice Services so they can create new Alexa skills that augment the system’s skill set, much as Apple did with the App Store. As a result of this openness, the list of skills Alexa can help with (currently over 30,000) continues to grow rapidly. Users can, of course, purchase products from Amazon, but they can also order pizza from Domino’s, hail a ride from Uber or Lyft, control their light fixtures, make a payment through the Capital One skill, get wine pairings for dinner and so much more.
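At its core, a custom skill is a small piece of code that maps a recognised intent to a response. The toy handler below loosely mirrors the shape of that idea; the intent names, slot fields and request format are invented for illustration and are not the real Alexa Skills Kit interface.

```python
# Toy sketch of a skill handler: an intent name plus "slots" (parameters
# extracted from speech) come in, and a spoken response goes out.
# Intent and slot names here are hypothetical examples.

def handle_skill_request(request: dict) -> dict:
    intent = request.get("intent", {}).get("name")
    slots = request.get("intent", {}).get("slots", {})
    if intent == "OrderPizzaIntent":
        size = slots.get("size", "medium")
        return {"speech": f"Ordering a {size} pizza."}
    if intent == "WinePairingIntent":
        dish = slots.get("dish", "dinner")
        return {"speech": f"A Pinot Noir pairs well with {dish}."}
    # Unrecognised intents fall through to a polite refusal.
    return {"speech": "Sorry, I can't help with that yet."}
```

The design is deliberately open-ended: because the cloud handles all the speech recognition, a developer only needs to supply this intent-to-response mapping, which is what lets the skill catalogue grow so quickly.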

Constantly learning from human data

Data and machine learning are the foundation of Alexa’s power, and that foundation only gets stronger as its popularity, and the amount of data it gathers, increases. Every time Alexa misinterprets your request, that data is used to make the system smarter the next time around. Machine learning is the reason for the rapid improvement in the capabilities of voice-activated user interfaces. For example, Google’s speech recognition reduced its error rate dramatically within a year; it now recognises 19 out of 20 words it hears. Understanding natural human speech is a gargantuan problem, and we now have the computing power at our disposal to make these systems better the more we use them.
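A claim like "recognises 19 out of 20 words" is a word error rate of 5%. The snippet below shows a simplified version of that calculation; real word error rate also counts inserted and deleted words via edit distance, but this substitution-only form is enough to see where the number comes from.

```python
# Simplified word error rate: the fraction of reference words the
# recogniser got wrong. Assumes equal-length word lists (substitutions
# only); a full WER uses edit distance to also count insertions/deletions.

def simple_wer(reference_words: list, recognised_words: list) -> float:
    errors = sum(ref != hyp for ref, hyp in zip(reference_words, recognised_words))
    return errors / len(reference_words)

# 19 of 20 words correct -> 1 error in 20 -> 0.05, i.e. a 5% error rate.
reference  = ["the"] * 20
recognised = ["the"] * 19 + ["a"]
```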

The challenges of natural language generation and processing

As a subset of artificial intelligence, natural language generation (NLG) is the ability to produce natural-sounding written and verbal responses from data that is fed into a computer system. Human language is quite complex, but today’s natural language generation capabilities are becoming very sophisticated. Think of NLG as a writer that turns data into language that can be communicated.

Natural language processing (NLP) is the reader that takes the language created by NLG and consumes it. Advances in this technology have allowed dramatic growth in intelligent personal assistants such as Alexa.
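The writer/reader split between NLG and NLP can be made concrete with a deliberately tiny example: one function turns structured weather data into a sentence, and the other reads a sentence of that form back into data. Both are toy implementations that assume a fixed sentence template; production systems are vastly more flexible.

```python
# Toy illustration of the NLG/NLP split described above.
# Assumes the fixed template "Expect <condition> with a high of <N> degrees."

def generate_forecast(data: dict) -> str:
    # NLG: the "writer" turns structured data into language.
    return f"Expect {data['condition']} with a high of {data['high']} degrees."

def parse_forecast(sentence: str) -> dict:
    # NLP: the "reader" turns that language back into structured data.
    words = sentence.rstrip(".").split()
    return {"condition": words[1], "high": int(words[words.index("of") + 1])}
```

The hard part, of course, is that real speech does not follow a template, which is exactly the challenge the next paragraphs describe.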

Voice-based AI is so appealing because it holds the promise of supporting us in a way that is natural to humans; no swiping or typing necessary. That is also why it is such a technical challenge to build. Just think about how nonlinear your typical conversation is.

When people talk, they interrupt themselves, change topics or repeat themselves, use body language to add meaning, and draw on a wide variety of words whose meanings depend on context. It’s like a parent trying to understand the vernacular of teens, but much, much more complicated.

Amazon continues to have an army of specialists in addition to a cadre of machines on the task of making Alexa and Alexa Voice Services even better. Their goal is to make spoken language a user interface that is as natural as talking to another human being. I can’t wait to see what’s in store next. 


