Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.



Simple Explanations Of Key Artificial Intelligence (AI) Terminology Everyone Should Understand

2 July 2021

Just as science-fiction authors have always predicted, Artificial Intelligence (AI) is increasingly becoming an everyday part of our lives. From personal assistants such as Siri or Cortana to cutting-edge applications in healthcare and across industry, self-learning machines have arrived and are busy helping us find new ways to solve problems.

As the predictions and dreams of yesterday’s futurologists have solidified into today’s tools and technology, a sometimes confusing lexicon has sprung up around the subject. As these new words and phrases are often attempts to outline the fundamental thinking behind this ongoing robotic revolution, understanding them is key to getting to grips with what AIs actually are, where they’ve come from, what they want, and most importantly, how you can put them to work yourself!

So, here’s my brief guide to some of the most common and some of the latest terminology being used when discussing cutting-edge AI, in alphabetical order.


AlphaGo

AlphaGo is an AI which became the first computer program to beat a professional human player at the board game Go. Game playing has often been a field in which computer scientists have sought to prove that machines can outperform humans. However, earlier applications such as chess computers are not considered “true” AI today because they don’t really learn – they simply rely on brute force to consider every permutation of a structured dataset (all of the possible moves in a game of chess). AlphaGo uses deep learning to refine its algorithms based on the results of historical games and on simulated games it plays against itself. This means it can genuinely be said to learn, and it comes closer to what we consider “true” (human-like) intelligence.

Artificial Intelligence

This is the original, catch-all term for “machines which can think”, first conceived by philosophers and storytellers in ancient times. Technological advancement has brought them closer to reality and also redefined what we consider “intelligence” when it comes to machines. Rather than walking, talking automatons, today’s AIs are more likely to take the form of discrete computer code dedicated to handling a particular task in an intelligent way.

Big Data

Big data is the “fuel” of AI: the raw information from which machines extract knowledge, understanding and, eventually, wisdom. AI platforms leverage the huge volume, variety and velocity of information available in today’s digitized world to learn faster and make increasingly well-informed recommendations and decisions.

Cognitive computing

Cognitive computing refers to the processes by which computers think and learn, as well as the development of those processes. These give rise to artificial intelligence, machine learning, deep learning and all other technologies which involve simulating human thought and decision-making. In practice, the term is often used synonymously with modern, application-focused AI.

Deep learning

This is a subfield of machine learning (see below) which uses many layers of artificial neural networks to process data in increasingly complex ways. This means that classification (sorting data into sets) can be done more precisely and that pattern recognition becomes more sophisticated – two of the most useful fundamental tasks that AI carries out today, which makes deep learning a cutting-edge and very active field of research. Neural networks made up of many layers stacked on top of each other in this way are known as deep neural networks.
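To make this a little more concrete, here is a minimal Python sketch using scikit-learn (my own choice of library for illustration, not one tied to any particular product), in which three stacked hidden layers form a simple deep neural network for a classification task:

```python
# A minimal sketch of a deep neural network classifier.
# Each entry in hidden_layer_sizes is one stacked layer of artificial neurons.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for a real classification problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers stacked on top of each other make the network "deep".
model = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Test accuracy:", model.score(X_test, y_test))
```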

Generalized AI

Generalized AI is a concept – widely thought to still be some way off – of a machine which can carry out any job it is told to do. An android such as those seen in Star Trek or Blade Runner, which could be given a mop and told to clean a floor, or handed a weapon and told to defend against attacking Klingons, would be an archetypal example. While advances such as machine learning and deep neural networks suggest it is something we may achieve in the future, the majority of AI research currently focuses on creating applied or specialized AIs (see below).

Image recognition

Teaching machines to recognise and classify objects visually – by feeding them visual data – is an important foundation of AI, because visual information is so valuable to humans and AI seeks to emulate human thought processes. Using either cameras or raw image data such as picture or video files, computers are taught to classify images according to what they depict, using pattern recognition to identify key features. Advances in machine learning have greatly improved computers’ ability to do this, as they can now teach themselves from vast image databases, steadily improving the accuracy of their results.
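As a simple illustration, here is a short Python sketch (using scikit-learn’s bundled handwritten-digit images – my own choice of example data) that teaches a classifier to recognise digits from their pixel patterns:

```python
# A small illustration of image classification: labelling 8x8 images of
# handwritten digits based on the pixel patterns the model has learned.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                      # 1,797 labelled 8x8 pixel images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

classifier = SVC()                          # learns the pixel patterns for each digit
classifier.fit(X_train, y_train)

print("Accuracy on unseen images:", classifier.score(X_test, y_test))
```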

Machine learning

Machine learning is often used synonymously with AI these days, but there is an important distinction. While AI applies to the entire concept of “thinking” machines, from sci-fi robots to the self-learning computer code being developed by business and academia today, machine learning (ML) is the practical implementation that is generating the biggest breakthroughs in the real world. At its most basic, it is technology designed around the principle that rather than having to teach machines to carry out every task, we should simply be able to feed them data and allow them to work out the rules for themselves. This is done through a process of simulated trial and error, in which machines crunch datasets through algorithms that adapt, based on what they learn from the data, in order to process subsequent data more efficiently.
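Here is a toy Python sketch of that principle (the data and the “rule” being learned are invented purely for illustration): instead of writing the rule ourselves, we feed the machine examples and let it work the rule out from the data.

```python
# Instead of hand-coding a rule, we give the machine labelled examples
# and let it learn the rule for itself.
from sklearn.tree import DecisionTreeClassifier

temperatures = [[5], [12], [18], [24], [26], [31], [35]]        # input data
labels = ["cold", "cold", "cold", "cold", "hot", "hot", "hot"]  # known answers

model = DecisionTreeClassifier()
model.fit(temperatures, labels)        # the rule is learned from the data

print(model.predict([[22], [29]]))     # expected: ['cold' 'hot']
```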

Natural Language Processing

Natural Language Processing (NLP) technology is concerned with building machines which can understand human speech patterns. Because spoken communication comes far more naturally to us than writing computer code, it makes sense for machines, with their superior processing power, to adapt to us by understanding and speaking our language, rather than us adapting to them! Due to the huge variance in human languages and the ways they are used, machine learning is employed to pick out patterns, tonal variances and colloquial or non-literal uses of language, and to interpret what we are trying to express. ML-derived NLP can be seen or heard in action in virtual assistants such as Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa.
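As a very rough sketch of how that pattern-spotting looks in code (the phrases and labels below are made up, and real NLP systems are far more sophisticated), here is a tiny bag-of-words model that learns to tell positive phrases from negative ones:

```python
# A toy sketch of machine learning picking up language patterns:
# a bag-of-words model that learns to separate positive from negative phrases.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

phrases = ["I love this", "great service", "absolutely wonderful",
           "I hate this", "terrible service", "absolutely awful"]
sentiment = ["positive", "positive", "positive",
             "negative", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(phrases, sentiment)

print(model.predict(["wonderful service", "awful experience"]))
# expected: ['positive' 'negative']
```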

Neural networks

Neural networks are algorithmic models structured as hierarchical networks of nodes which pass information (data) between themselves, extracting more and more precise meaning and value from it as it moves along the chain. Their complex, interconnected nature allows data to be processed far more comprehensively than traditional, linear algorithms allow, enabling them to produce more insightful output from big, messy and unstructured datasets.
The more precise and correct term, artificial neural networks (ANNs), is often shortened simply to “neural networks”, borrowing the name of the systems of biological neurons in the animal brain which machine learning attempts to emulate.
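For the curious, here is a bare-bones Python sketch of data passing through two layers of nodes (the weights here are random, purely to show the structure; a real network would learn them from data):

```python
# A minimal artificial neural network forward pass in NumPy,
# showing data flowing through layers of nodes.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_nodes):
    """One layer of nodes: weight the inputs, sum them, apply an activation."""
    weights = rng.normal(size=(inputs.shape[0], n_nodes))   # random for illustration
    biases = np.zeros(n_nodes)
    return np.maximum(0, inputs @ weights + biases)          # ReLU activation

x = np.array([0.2, 0.7, 0.1])      # raw input data (e.g. three measurements)
hidden = layer(x, n_nodes=4)       # first layer extracts simple features
output = layer(hidden, n_nodes=2)  # next layer combines them into higher-level ones

print(output)
```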

Specialized AI

The form of AI becoming commonplace in business, scientific research and our everyday lives – usually in the form of applications designed to carry out one specific task in an increasingly efficient way. This could be anything from giving you tips on improving your fitness by monitoring exercise patterns, to predicting when machinery will break down on a production line, to spotting genetic indicators of illness in a human gene sequence.

Supervised Learning

Supervised learning is a term used for machine learning processes where the output of the algorithm is checked, and the results are fed back to the computer so that it knows how accurate they are. It can then use this knowledge to increase the probability that it will return an acceptably accurate result next time around. As a simple example, imagine an AI fraud detection algorithm designed to flag suspicious transactions at a bank. In supervised learning, data is matched against previously labelled outcomes to look for patterns in financial transactions, such as their point of origin, size or the time of day they take place, which may indicate that they are suspicious. As new suspicious transactions are identified, the algorithm adapts, “learning” that other features of the newly identified suspicious transactions may also be indicators of fraud. In this way, a supervised learning system can come to identify fraud from characteristics that were not highlighted as indicators of fraud in its initial training data.
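Returning to the fraud example, a minimal Python sketch of supervised learning might look like this (the transaction features and labels below are invented purely for illustration):

```python
# A supervised model trained on transactions a human has already labelled
# as genuine (0) or suspicious (1).
from sklearn.ensemble import RandomForestClassifier

# [amount, hour_of_day] with known outcomes as the labels
transactions = [[25, 14], [60, 10], [15, 9], [3200, 3], [2800, 2], [4100, 4]]
labels       = [0,        0,        0,       1,         1,         1]

model = RandomForestClassifier(random_state=0)
model.fit(transactions, labels)               # learning is guided by the labels

print(model.predict([[40, 13], [3900, 3]]))   # expected: [0 1] (genuine, suspicious)
```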

Unsupervised Learning

Unsupervised learning is the flip side of the coin from supervised learning, and involves giving computers the ability to recognise and classify data with increasing accuracy without needing a human, or initial training data, to check whether the results are right or wrong. In unsupervised learning the algorithm only ever sees the input data, and it classifies it according to patterns it recognises from other input data it has previously processed. This is generally done through a statistical process known as clustering, where objects (financial transactions, to continue the example above) are grouped together according to the qualities and attributes they share. This approach to the problem of data classification has tremendous potential for developing machines which more closely emulate our own thought and decision-making processes, but it also requires huge amounts of processing power compared to supervised learning.
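Sketching the same fraud example without any labels at all, a clustering algorithm such as k-means simply groups the transactions by the patterns it finds (again, the features and the choice of two clusters are assumptions made for illustration):

```python
# The same transaction data, but with no labels: k-means clustering groups
# the transactions purely by the patterns it finds in the numbers.
from sklearn.cluster import KMeans

transactions = [[25, 14], [60, 10], [15, 9], [3200, 3], [2800, 2], [4100, 4]]

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(transactions)
print(clusters)   # e.g. [0 0 0 1 1 1] -- two groups found without any guidance
```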



