Artificial intelligence is everywhere at the moment, thanks mostly to the huge viral success of generative AI apps. In particular, ChatGPT has become the app with the fastest-growing userbase of all time, causing much debate about what this technology is capable of and what it may be capable of in the future.
Of course, as with anything that causes so much excitement, there’s quite a bit of misunderstanding and misinformation floating around. And with a lot of money on the table, it’s not unusual to see some quite grand claims about what it can do or how clever it actually is! So here’s a rundown of five of the most common misconceptions I regularly come across when it comes to AI and machine learning today.
AI is Intelligent
Intelligence is a property inherent to living creatures – it allows us to learn, communicate, understand, empathize, and make decisions. AI is an attempt to simulate that or create similar results using a machine, but it is still only that – a mechanical simulation that may appear to produce some of the same results as natural intelligence.
What we are usually referring to when we talk about AI today – particularly when we are referring to its use by businesses and in online applications – is machine learning. This is a form of AI that uses algorithms trained on data to become increasingly good at performing a particular task. This might be playing a game, recognizing images, translating languages, driving a car, answering questions … or any one of numerous other tasks that can be completed by a machine if it’s given the right information. Because they only carry out the particular task they are trained to do, these are all examples of what is known as specialized AI.
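The idea of "algorithms trained on data" can be made concrete with a toy sketch. This is not how systems like ChatGPT work internally; it is a deliberately minimal, invented example of specialized AI – a model that learns one narrow task (predicting an output number from an input number) by fitting a line to examples, and can do nothing else.

```python
# A toy "specialized AI": it learns a single narrow task
# (predict y from x) by fitting a straight line to examples.
# Outside that task, it is useless -- the limitation of
# specialized AI described above.

def train(examples):
    """Fit y = slope * x by ordinary least squares (no intercept)."""
    numerator = sum(x * y for x, y in examples)
    denominator = sum(x * x for x, _ in examples)
    return numerator / denominator  # the learned "model" is one number

def predict(slope, x):
    return slope * x

# Training data: the hidden task is "double the input".
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
slope = train(data)
print(predict(slope, 10))  # -> 20.0
```

Feed it different examples and it will learn a different rule – the behavior comes from the data, not from anything hand-coded about the task itself.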
An AI that was truly intelligent in the same way as a human (though far faster) and capable of doing any kind of task we can do would be known as general AI. However, we are still a long way from achieving this.
AI is Expensive and Difficult to Implement
AI has traditionally been expensive and available only to large corporations or well-funded research institutions. This is because it costs money to gather, cleanse, and store all the data machines need in order to learn to make decisions, and to provide the compute power necessary to process it. The cost of training ChatGPT, for example, has been estimated at around $5 million, and training the larger models expected to emerge in the future will cost far more.
However, most businesses and organizations that can benefit from using AI have no need to train their own models, and certainly not models on the scale of ChatGPT. The availability of AI services via cloud platforms means they can be accessed and used at low cost and without specialist knowledge or technical skills. Unlike five years ago, AI-driven learning and decision-making are now available to domain experts rather than only to AI and data experts, creating a "democratizing" effect that is enabling far more businesses and organizations to reap the benefits.
AI is Going To Take Jobs From Humans
It's probably inevitable that some human jobs will be taken over by machines that are simply able to perform them more quickly, accurately, and cost-effectively than humans. This has been the case following every other major industrial revolution – mechanization, electrification, digitization, and now automation.
However, it will also create more jobs – and what's more, they are likely to be better paid and more rewarding than the ones that are lost. In 2020, the World Economic Forum released a report estimating that while 85 million jobs would be displaced by automation by 2025 – including in manufacturing, insurance underwriting, customer service, data entry, and long-haul truck driving – 97 million new opportunities would be created.
AI is Neutral and Unbiased
AI comes from machines, so it’s understandable that people unfamiliar with the way it works would assume it always takes a fair and balanced stance, free from bias. Unfortunately, this isn't true. AI algorithms only “know” anything at all because they are trained on data, and that data is often created or curated by humans. This means that, particularly with larger datasets, it’s almost inevitable that some human biases will creep in and affect the output of the algorithms. AI is only as good as the data it's trained on – as the warning commonly given about any computer system puts it, "garbage in = garbage out." Academics in the field of AI research have been warning for some time that bias is one of the primary dangers we have to watch out for in a world where computers can make decisions for us, and a great deal of research into AI ethics is concerned with ensuring that the risks of bias contained in data are eliminated or minimized.
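To see how "garbage in = garbage out" plays out, consider a deliberately naive, invented sketch (not any real system): a model trained on biased historical hiring decisions that simply learns to repeat the pattern it was shown. The group labels and decisions below are hypothetical.

```python
# Illustration of "garbage in = garbage out": a naive model
# trained on biased historical decisions reproduces the bias.

from collections import defaultdict

def train(records):
    """Learn, for each group, the most common past decision."""
    counts = defaultdict(lambda: {"hire": 0, "reject": 0})
    for group, decision in records:
        counts[group][decision] += 1
    return {g: max(c, key=c.get) for g, c in counts.items()}

# Biased training data: group B was mostly rejected in the past.
history = [
    ("A", "hire"), ("A", "hire"), ("A", "reject"),
    ("B", "reject"), ("B", "reject"), ("B", "hire"),
]
model = train(history)
print(model)  # -> {'A': 'hire', 'B': 'reject'}
```

Nothing in the code mentions favoring one group over another – the unfairness comes entirely from the data, which is why curating training data is such a focus of AI ethics work.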
AI Will Take Over The World and Enslave Humans
This has been the premise of a number of popular and very entertaining science fiction stories, including The Matrix and Terminator franchises, among many others. However, the idea may not be entirely far-fetched – at least, some very famous and clever people have said this may be the case, including Elon Musk, Stephen Hawking, and Bill Gates!
The truth is that no one knows where AI will eventually lead, and to a large extent, it will come down to how we as humans develop, implement, and regulate it. This is why ethics and oversight are hugely important elements of the work being put into understanding and creating AI today. Today’s most advanced AIs, such as ChatGPT, do not pose any existential threat to us as a species because they simply don’t have the capacity to cause us harm or to act in ways other than how they have been programmed, which is to help us with basic tasks involving information. Nor do they have the instinct for self-preservation that usually motivates machines to turn against us in science-fiction stories – simply because it hasn’t been programmed into them.