Very few subjects in science and technology are causing as much excitement right now as artificial intelligence (AI). In many cases this is with good reason, as some of the world’s brightest minds have said that its potential to revolutionise all aspects of our lives is unprecedented.
On the other hand, as with anything new, there are certainly snake-oil salesmen looking to make a quick buck on the basis of promises which can’t (yet) be truly met. And there are others, often with vested interests, with plenty of motive for spreading fear and distrust.
So here is a run-through of some basic misconceptions, and frequently peddled mistruths, which often come up when the subject is discussed, as well as reasons why you shouldn’t necessarily buy into them.
AI is going to replace all jobs
It’s certainly true that the advent of AI and automation has the potential to seriously disrupt labour – and in many situations it is already doing just that. However, seeing this as a straightforward transfer of labour from humans to machines is a vast over-simplification.
Previous industrial revolutions have certainly transformed the employment landscape, such as the mass shift from agricultural work to factories during the nineteenth century. The number of jobs (adjusted for the rapid growth in population) has generally stayed consistent, though. And despite what doom-mongers have said, there’s very little actual evidence to suggest that mass unemployment or widespread redundancy of human workforces is likely. In fact, it is just as possible that a more productive economy, brought about by the increased efficiency and reduction of waste that automation promises, will give us more options for spending our time on productive, income-generating pursuits.
In the short-term, employers are generally looking at AI technology as a method of augmenting human workforces, and enabling them to work in newer and smarter ways.
Only low-skilled and manual workers will be replaced by AI and automation
This is certainly a fallacy. Already, AI-equipped robots and machinery are carrying out work generally reserved for the most highly trained and professional members of society, such as doctors and lawyers. True, a lot of their focus has been on reducing the “drudgery” of day-to-day aspects of the work. For example, in the legal field, AI is used to scan thousands of documents at lightning speed, drawing out the points which may be relevant to an ongoing case. In medicine, machine learning algorithms assess images such as scans and x-rays, looking for early warning signs of disease, which they are proving highly competent at spotting.

Both fields, however, as well as many other professions, involve a combination of routine, though technically complex, procedures – which are likely to be taken up by machines – and “human touch” procedures. For a lawyer this could be presenting arguments in court in a way that will convince a jury; in medicine, it could be breaking bad news to a patient in the most considerate and helpful way. These aspects of the job are less likely to be automated, but members of their respective professions could find they have more time for them – and therefore become more competent at them – if the mundane drudgery is routinely automated.
Super-intelligent computers will become better than humans at doing anything we can do
Broadly speaking, AI applications are split into two groups – specialised and generalised. Specialised AIs – ones focused on performing one job, or working in one field, and becoming increasingly good at it – are a fact of life today – the legal and medical applications mentioned above are good examples.
Generalised AIs, on the other hand – those which are capable of applying themselves to a number of different tasks, just as human or natural intelligences are – are somewhat further off. This is why, although we may regularly come across AIs which are better than humans at one particular task, it is likely to be a while before we come face-to-face with robots in the mould of Star Trek’s Data – essentially super-humans who can beat us at pretty much anything.
Artificial intelligence will quickly overtake and outpace human intelligence
This is a misconception brought about by picturing intelligence as a linear scale – say, from one to ten – and imagining that animals score at the lower end, humans at the higher end, and super-smart machines at the top of the scale.
In reality, intelligence is measured in many different dimensions. In some of them (for example, speed of calculation or capacity for recall) computers already far outpace us, while in others, such as creative ability, emotional intelligence (for instance empathy) and strategic thinking, they are still nowhere near us and aren’t likely to be any time soon.
AI will lead to the destruction or enslavement of the human race by superior robotic beings
This one comes straight out of any number of sci-fi scenarios – The Terminator and The Matrix are probably the most frequently cited! However, some voices which have proven themselves to be worth listening to in the past – such as physicist Stephen Hawking and tech entrepreneur Elon Musk – have made it very clear they believe the danger is real.
The fact is, though, that setting aside the distant future – where indeed anything is possible – a great number of boundaries would have to be broken down, and allowances made by society, before this scenario would even be possible. Right now, it’s highly unlikely anyone would think about building or deploying an autonomous machine with the potential to “make up its mind” to hurt and turn against its human creators. Although drones and security robots designed to detect threats, and even take autonomous action to neutralise them, have been developed, they have yet to be widely deployed, and deploying them would be likely to provoke widespread public condemnation.

The hypothetical scenario tends to be that robots either develop self-preservation instincts, or re-interpret commands to protect or preserve human life to mean that humans should be taken under robotic control. As it is unlikely that anyone would build machines with the facilities to carry out these actions autonomously, this is unlikely to be an immediate problem. Could it happen in the future? It’s a possibility, but if you’re going to worry about science fiction threats, then it’s just as likely that invading aliens will get to us first.
Bernard Marr is a bestselling author, keynote speaker, and advisor to companies and governments. He has worked with and advised many of the world’s best-known organisations. LinkedIn has recently ranked Bernard as one of the top 10 Business Influencers in the world (in fact, No 5 – just behind Bill Gates and Richard Branson). He writes on the topics of intelligent business performance for various publications including Forbes, HuffPost, and LinkedIn Pulse. His blogs and SlideShare presentations have millions of readers.