The AI Technology Stack: 4 Key Layers Of Technologies Used For Artificial Intelligence
2 July 2021
After existing in the dreams of science fiction authors for centuries, in recent years artificial intelligence (AI) has quickly started to become a reality.
The computer processing power available today, combined with the explosion in the amount of data available to us in a digital world, means smart, self-teaching machines are now commonplace. However, they are often hidden away behind services or web interfaces, where we may not even notice them unless we know what we’re looking for!
But behind the scenes at Google, Facebook, Netflix or any of the hundreds of organisations which have deployed this revolutionary technology, vast data warehouses and lightning-fast processing units crunch through huge volumes of information to make this a reality. So, here’s an overview of the tech that goes into the natural language processing, image recognition, recommendation and prediction engines used in today’s cutting-edge AI.
Data collection
AI is dependent on the data that is gathered. Just as our brains take in huge amounts of information from the world around us and use it to make observations and draw conclusions, AI can’t function without information.
In the AI tech stack, this can come from a number of places. Thanks to the ongoing rollout of the Internet of Things, millions of devices worldwide are connected and able to talk to each other, from industrial-scale machinery to the smart phones we carry everywhere we go. The data collection layer of an AI stack is composed of software that interfaces with these devices, as well as web-based services which supply third-party data, from marketing databases containing contact information to news, weather and social media APIs. Virtual personal assistants allow data to be collected from human speech too – natural language recognition converts spoken words into data, whether that is background conversation or commands issued directly to a machine.
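To make that concrete, here is a minimal Python sketch of what this collection layer might look like, pulling one reading from an IoT gateway and one from a third-party weather service. The URLs and field names are hypothetical placeholders, not real services:

```python
import requests

# Hypothetical endpoints - replace with your own device gateway and data provider.
SENSOR_API = "https://iot-gateway.example.com/api/v1/sensors/temperature"
WEATHER_API = "https://weather-provider.example.com/api/current"

def collect_readings(city: str) -> dict:
    """Pull one reading from an IoT gateway and one from a third-party weather API."""
    sensor = requests.get(SENSOR_API, timeout=10).json()
    weather = requests.get(WEATHER_API, params={"city": city}, timeout=10).json()
    # Combine both sources into a single record, ready to hand on to the storage layer.
    return {
        "sensor_temp": sensor.get("value"),
        "outside_temp": weather.get("temp"),
        "city": city,
    }

if __name__ == "__main__":
    print(collect_readings("London"))
```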
Data storage
Once you’ve collected data, or set up streams so it is pouring into your AI-enabled organisation in real-time, you need somewhere to put it. Because AI data is usually Big Data, it needs a lot of storage space, and that storage needs to be accessible very quickly.
Often this is where cloud technology will play a leading role. Some organisations have the capability and resources to establish their own distributed data centres, using technology such as Hadoop or Spark, which can cope with the vast amount of information. Often, however, third-party cloud infrastructure – such as Amazon Web Services or Microsoft Azure – provides a more suitable solution. Storage can be scaled up or down as it is needed, saving money, and these platforms also provide a host of methods for integrating with analytics services.
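As a rough illustration, landing a collected record in cloud object storage can be as simple as the sketch below, which uses the AWS SDK for Python to write a JSON file to Amazon S3. The bucket name is a placeholder, and it assumes AWS credentials are already configured on the machine:

```python
import json
from datetime import datetime, timezone

import boto3  # AWS SDK for Python; assumes credentials are already configured locally

s3 = boto3.client("s3")

def store_record(record: dict, bucket: str = "my-ai-data-lake") -> None:
    """Write one collected record to cloud object storage as a timestamped JSON object."""
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    # A very simple naming scheme - real pipelines usually partition by date and source.
    key = f"raw/{timestamp}.json"
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(record).encode("utf-8"))

if __name__ == "__main__":
    store_record({"sensor_temp": 21.4, "outside_temp": 18.0, "city": "London"})
```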
Data processing and analytics
This is probably what most people consider to be the most important element when they talk about artificial intelligence – though without the rest of the stack (collection, storage and output) any insights are going to be severely limited.
AI processing takes in machine learning, deep learning, image recognition, natural language processing, sentiment analytics, recommendation engines – all the hot topic buzzwords we’re used to hearing when organisations are waxing lyrical on the subject of how smart and cognitive their technology is.
These algorithms are often provided in the form of services which are either accessed through a third party API, deployed on a public or private cloud or run “on the metal” in a private data centre, data lake or, in the case of edge analytics, at the point of data collection itself (for example, within sensor or data capture hardware).
The power, flexibility and self-learning capabilities of these algorithms are what really differentiate the current wave of artificial intelligence from what has come before – together with the increase in the amount of data available. Today the increase in raw power comes from the deployment of GPUs – processors originally designed for the very heavy-duty task of generating sophisticated computer visuals. Their mathematical prowess makes them ideal for repurposing as data-crunchers. A new wave of processing units specifically designed for handling AI-related tasks should provide a further leap in AI performance in the very near future.
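To illustrate the GPU point, the short sketch below uses the open-source PyTorch library to run a workload on a GPU when one is available and fall back to the CPU otherwise. The tiny model and random data are just stand-ins for a real deep learning workload:

```python
import torch
from torch import nn

# Use a GPU if one is present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny stand-in model - real workloads would be deep networks with millions of parameters.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1)).to(device)

# A batch of random "sensor readings", moved onto the same device as the model.
batch = torch.randn(32, 4, device=device)
predictions = model(batch)

print(f"Ran on {device}, produced {predictions.shape[0]} predictions")
```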
Data output and reporting
If the aim of your AI strategy is to get machines working more efficiently and effectively together (perhaps for predictive maintenance purposes, or minimising power or resource usage) then this will be technology which communicates the insights from your operational AI processing to the systems which will benefit from it. Other insights may be intended for humans to take action on – for example, sales assistants using handheld terminals to read insights and recommendations relating to customers who are standing in front of them. In some cases the output may be in the form of charts, graphics and dashboards. Virtual personal assistant technology, such as Apple’s Siri and Microsoft’s Cortana, can often play a role here too: it uses natural language generation to convert digital information into human language, which alongside visuals is the most easily understood and acted-upon form of data output for a human.
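As a very simple illustration of that last point, the sketch below turns a numeric predictive-maintenance score into a plain-English sentence using basic templating. Genuine natural language generation systems are far more sophisticated, but the principle of converting digital output into language a human can act on is the same; the machine ID and thresholds here are made up for the example:

```python
def describe_prediction(machine_id: str, failure_probability: float) -> str:
    """Convert a predictive-maintenance score into a sentence a human can act on."""
    if failure_probability >= 0.8:
        urgency = "is very likely to fail soon and should be inspected immediately"
    elif failure_probability >= 0.5:
        urgency = "shows early signs of wear and should be scheduled for maintenance"
    else:
        urgency = "is operating normally"
    return f"Machine {machine_id} {urgency} (failure risk: {failure_probability:.0%})."

print(describe_prediction("PUMP-07", 0.83))
```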
Where to go from here
If you would like to know more about AI and machine learning, check out my articles on:
- How Is Big Data Transforming Business?
- What is Spark – An easy explanation for absolutely anyone
- What is Hadoop – An easy explanation for absolutely anyone
- What is a data lake? A super-simple explanation for anyone
- The 6 best Hadoop vendors for your big data project
Or browse the Artificial Intelligence & Machine Learning section or AI use case library of this site to find more articles and many practical examples.