The 7 Most Dangerous Technology Trends Everyone Should Know About
2 July 2021
As we enter new frontiers with the latest technology trends and enjoy the many positive impacts and benefits they can have on the way we work, play and live, we must always be mindful of, and prepare for, the possible negative impacts and potential misuse of the technology. Here are seven of the most dangerous technology trends:
1. Drone Swarms
The British, Chinese, and United States armed forces are testing how interconnected, cooperative drones could be used in military operations. Inspired by swarms of insects working together, drone swarms could revolutionise future conflicts, whether by overwhelming enemy sensors with sheer numbers or by covering a large search-and-rescue area efficiently. The difference between swarms and the way militaries use drones today is that a swarm could organise itself based on the situation and on interactions among its members to accomplish a goal. While this technology is still in the experimentation stage, a swarm smart enough to coordinate its own behaviour is moving closer to reality. Aside from the positive benefits of drone swarms, minimising casualties (at least for the offence) and achieving search-and-rescue objectives more efficiently, the thought of weapon-equipped machines that can "think" for themselves is fodder for nightmares. Despite the negative possibilities, there seems little doubt that swarm technology will eventually be deployed in future conflicts.
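The core idea of self-organisation through local interactions can be illustrated with a toy simulation. This is a deliberately simplified sketch, not any real military system: each simulated drone repeatedly moves toward the average position of the others, and the group converges on a rendezvous point with no central controller telling anyone where to go. All numbers and parameters here are invented for the example.

```python
def step(positions, weight=0.5):
    """Move every drone partway toward the average position of the others."""
    n = len(positions)
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Average position of the other swarm members (local interaction only).
        ox = sum(p[0] for j, p in enumerate(positions) if j != i) / (n - 1)
        oy = sum(p[1] for j, p in enumerate(positions) if j != i) / (n - 1)
        new_positions.append((x + weight * (ox - x), y + weight * (oy - y)))
    return new_positions

# Four drones starting at the corners of a square.
swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(20):
    swarm = step(swarm)

# After repeated local interactions the drones cluster around one point.
centre = swarm[0]
spread = max(abs(x - centre[0]) + abs(y - centre[1]) for x, y in swarm)
print(round(centre[0], 2), round(centre[1], 2), spread < 1e-6)  # prints: 5.0 5.0 True
```

No drone knows the destination in advance; the rendezvous point emerges from the interactions, which is precisely what makes swarm behaviour both powerful and hard to predict.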
2. Spying Smart Home Devices
For smart home devices to respond to queries and be as useful as possible, they need to listen to and track information about you and your regular habits. When you added an Echo to your room as a radio and alarm clock (or any other smart device connected to the internet), you also allowed a spy into your home. All the information smart devices collect about your habits, such as your viewing history on Netflix; where you live and what route you take home, so Google can tell you how to avoid traffic; and what time you typically arrive home, so your smart thermostat can set your family room to the temperature you prefer, is stored in the cloud. Of course, this information makes your life more convenient, but there is also the potential for abuse. In theory, virtual assistant devices listen for a "wake word" before they activate, but there are instances when a device thinks it heard the wake word and begins recording. Any smart device in your home, including gaming consoles and smart TVs, could be an entry point for abuse of your personal information. There are some defensive strategies, such as covering up cameras, turning off devices when not needed and muting microphones, but none of them is foolproof.
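The false-trigger problem can be sketched in a few lines. This toy illustration is not any vendor's real code: it assumes a hypothetical wake word and similarity threshold, and simply compares incoming (already transcribed) words against the wake word, showing how a close-sounding word can cross the threshold and start an unintended recording.

```python
from difflib import SequenceMatcher

WAKE_WORD = "alexa"   # hypothetical wake word for the example
THRESHOLD = 0.7       # hypothetical similarity cut-off

def is_wake_word(heard: str) -> bool:
    """Return True when the heard word is 'close enough' to the wake word."""
    similarity = SequenceMatcher(None, WAKE_WORD, heard.lower()).ratio()
    return similarity >= THRESHOLD

print(is_wake_word("Alexa"))    # intended trigger: True
print(is_wake_word("Alexis"))   # near-miss that also triggers recording: True
print(is_wake_word("weather"))  # unrelated word, ignored: False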
3. Facial Recognition
There are some incredibly useful applications for facial recognition, but it can just as easily be used for sinister purposes. China stands accused of using facial recognition technology for surveillance and racial profiling. Not only do China's cameras spot jaywalkers, but they have also been used to monitor and control the Uighur Muslims who live in the country. Russia's cameras scan the streets for "people of interest", and there are reports that Israel tracks Palestinians inside the West Bank. In addition to tracking people without their knowledge, facial recognition is plagued by bias. When an algorithm is trained on a dataset that isn't diverse, it is less accurate and misidentifies people more often.
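Why unrepresentative training data produces unequal error rates can be shown with a toy simulation. This is a deliberately simplified sketch, not a real face-recognition pipeline: "faces" are random 2D points, the "recogniser" is a nearest-neighbour match against a training gallery, and the gallery contains many examples of group A but only a handful of group B, so group-B faces are matched against far sparser data. All numbers are invented for the example.

```python
import random

random.seed(42)

def make_faces(centre, n, spread=1.0):
    """Synthetic 'face embeddings' scattered around a group centre."""
    return [(centre[0] + random.gauss(0, spread),
             centre[1] + random.gauss(0, spread)) for _ in range(n)]

def nearest_label(face, gallery):
    """1-nearest-neighbour match against the labelled training gallery."""
    return min(gallery, key=lambda item: (face[0] - item[0][0]) ** 2 +
                                         (face[1] - item[0][1]) ** 2)[1]

A_CENTRE, B_CENTRE = (0.0, 0.0), (3.0, 3.0)
# Imbalanced training gallery: 200 group-A faces, only 3 group-B faces.
gallery = ([(f, "A") for f in make_faces(A_CENTRE, 200)] +
           [(f, "B") for f in make_faces(B_CENTRE, 3)])

def error_rate(label, centre, trials=500):
    tests = make_faces(centre, trials)
    return sum(nearest_label(f, gallery) != label for f in tests) / trials

a_err = error_rate("A", A_CENTRE)
b_err = error_rate("B", B_CENTRE)
print("group A error rate:", a_err)
print("group B error rate:", b_err)
```

Running this, the under-represented group is misidentified noticeably more often than the well-represented one, which is the same pattern audits have found in commercial facial recognition systems.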
4. AI Cloning
With the support of artificial intelligence (AI), all that's needed to create a clone of someone's voice is a snippet of audio. Similarly, AI can take several photos or videos of a person and then create an entirely new, cloned video that appears to be an original. It has become quite easy for AI to create an artificial YOU, and the results are so convincing that our brains have trouble differentiating between what is real and what is cloned. Deepfake technology, which uses facial mapping, machine learning, and artificial intelligence to create representations of real people doing and saying things they never did, is now targeting "ordinary" people. Celebrities used to be the most susceptible to becoming victims of deepfake technology because there was abundant video and audio of them with which to train the algorithms. However, the technology has advanced to the point that it doesn't require as much raw data to create a convincing fake video, and there are far more images and videos of ordinary people available on the internet and social media channels.
5. Ransomware, AI and Bot-enabled Blackmailing and Hacking
When high-powered technology falls into the wrong hands, it can be used to devastating effect for criminal, immoral, and malicious activities. Ransomware, where malware prevents access to a computer system until a ransom is paid, is on the rise, according to the Cybersecurity and Infrastructure Security Agency (CISA). Artificial intelligence can automate tasks to get them done more efficiently. When those tasks involve spear phishing, sending fake emails to trick people into giving up their private information, the negative impact can be extraordinary. Once the software is built, there is little-to-no cost to repeating the task over and over. AI can quickly and efficiently blackmail people or hack into systems. Although AI plays a significant role in combating malware and other threats, it is also being used by cybercriminals to perpetrate crimes.
6. Smart Dust
Microelectromechanical systems (MEMS), the size of a grain of salt, contain sensors, communication mechanisms, autonomous power supplies, and cameras. Also called motes, this smart dust has a plethora of positive uses in healthcare, security, and more, but it would be frightening if controlled for malicious pursuits. While spying on a known enemy with smart dust might fall into the positive column, invading a private citizen's privacy would be just as easy.
7. Fake News Bots
GROVER is one AI system capable of writing a fake news article from nothing more than a headline. AI systems such as GROVER can create articles more believable than those written by humans. OpenAI, a research organisation co-founded by Elon Musk, created "deepfakes for text" that produce news stories and works of fiction so convincing that the organisation initially decided not to release the research publicly, to prevent dangerous misuse of the technology. When fake articles are promoted and shared as true, there can be serious ramifications for individuals, businesses, and governments.
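By way of illustration only, here is a toy Markov-chain text generator: it learns which words tend to follow which in a tiny sample corpus and strings new sentences together from those statistics. Systems such as GROVER and OpenAI's text models use vastly larger neural networks rather than simple word-pair counts, but the underlying idea of generating plausible text from learned statistical patterns is the same. The corpus and parameters here are invented for the example.

```python
import random
from collections import defaultdict

corpus = ("the markets rallied today as investors cheered the news "
          "the markets fell today as investors feared the news").split()

# Learn which words follow each word in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length, seed=1):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

Even this crude version produces grammatical-looking fragments that were never in the source text; scale the same principle up to billions of learned patterns and the output becomes hard to distinguish from human writing.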
Along with the positive uses of today’s technology, there is no doubt that it can be very dangerous in the wrong hands.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.