Progress is sure to throw a few curveballs – for example, if I were writing an equivalent list in 2010, while I like to think I might have had the foresight to predict that AI and the “app economy” would be a big part of everyday life by 2020, I might not have anticipated how big a deal cloud and “everything-as-a-service” would become.
With that in mind, here’s an overview of what I feel will be the most important trends throughout this decade. Individually they all have the potential to be as transformational as anything we’ve seen so far. But as with cloud, AI, and IoT over the past ten years, the truly revolutionary developments will be seen when they are applied together to push the boundaries of what we can do with technology.
Ubiquitous computing

Ubiquitous computing is a paradigm under which computers are no longer discrete objects applied to particular tasks but are built into just about everything we use in order to make those things better at their jobs. Cloud computing, edge computing, IoT, and wearables are all trends that fit this concept. Ubiquitous computing is about creating IT strategies that deploy all of these capabilities in tandem to go beyond what’s possible with each one individually. Throwing AI and cognitive computing capabilities into the mix means all elements of the system will learn from each other, creating streams of data and insights that will impact many areas of our lives. Cisco predicts that by 2030 there will be 500 billion devices connected to the internet – more than 50 for every person on the planet!
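As a sanity check on that forecast, the arithmetic is straightforward. A quick sketch – note that the 2030 population figure here is my own assumption, based on UN projections of roughly 8.5 billion people:

```python
# Rough sanity check on Cisco's 500-billion-connected-devices forecast.
# The 2030 world population is an assumed figure (~8.5 billion, per UN projections).
connected_devices = 500e9
world_population_2030 = 8.5e9  # assumption, not from the source

devices_per_person = connected_devices / world_population_2030
print(f"{devices_per_person:.0f} devices per person")  # prints "59 devices per person"
```

On those assumptions, the figure works out closer to 60 devices per person than 50, but either way it illustrates the sheer scale of the connected world ahead.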
Connected and smart everything
Ubiquitous computing refers to the paradigm – the ecosystem and infrastructure – but we can expect continued development on the hardware side as well, as more and more devices become networkable and capable of communicating and sharing data. Smartphones will probably remain most people’s interface with the digital world for some time yet – they are light, portable, and economical on power. But miniaturization and advances in user interface design will change the way we use many other devices alongside the smartwatches, TVs, cars, kitchen appliances, and toilets that are already on the market. The merging of smart products with smart services that extend their usefulness will be another strong trend – for example, health insurance provider Vitality uses smart devices such as watches and exercise equipment to track and assess its customers’ progress toward healthier lifestyles, rewarding them when they hit targets such as becoming more active. Other products focus on workplace wellness and performance, monitoring stress and activity levels to give insights that can improve employee health and productivity.
Datafication

Increasingly, every aspect of the world we live in, including our own actions and even what we say, can be captured and stored as digital information (data). That data is then available for us to analyze and use as the foundation of models and simulations that help us better understand the world and ourselves. This trend has been underway since we first developed the ability to store digital data, but it has accelerated greatly in recent decades as more and more of our activity has moved online. In the coming decade, it will accelerate further still, thanks to other technologies covered here, such as AI and ubiquitous computing, which will allow us to do more with that data than ever before and therefore increase its value. Some people find this very scary, and they are by no means wrong to do so: used irresponsibly or against our best interests, it could be very damaging. However, it also lets us work more efficiently, develop new cures for diseases, learn and understand more, and tackle large-scale societal and environmental challenges such as poverty, disease, and climate change.
Artificial intelligence (AI)

Described by Google CEO Sundar Pichai as “the most profound technology that humanity will ever develop,” AI is a field where we have only started to scratch the surface of what’s possible. Nevertheless, it already underlies much of our activity, from searching for information online to shopping, socializing on networks like Facebook, and interacting with chatbots and voice assistants. In business and industry, it’s used to automate everything from email marketing campaigns to HR policies to industrial machinery. Over the next decade, breakthroughs in our understanding of the mechanisms of AI, as well as ever-increasing network and processing speeds, mean it will become even more useful and ubiquitous. AI is fueled by data and provides the algorithmic control mechanisms for the networks of computers, robots, and smart devices we talked about previously – a great example of how these trends converge and reinforce each other. One possible direction over the next ten years is a move toward algorithms in the category of “generalized” or “strong” AI – able to adapt and carry out many different jobs – rather than being good at one particular task, as is the case with the “specialized” or “weak” AI that’s mostly in use today.
Some examples of the most advanced AI in use today include the “language model” GPT-3, developed by OpenAI, which can create text that is virtually indistinguishable from text written by humans, as well as write computer code by itself. Another is the Perlmutter supercomputer, powered by Nvidia GPUs, at the US National Energy Research Scientific Computing Center, which will be used to create the most detailed and accurate map of the universe ever made.
Extended reality (XR)

Extended reality (XR) covers virtual reality (VR) and augmented reality (AR), and although they are different technologies, they share a common purpose: to bring the digital and physical worlds we operate in closer together. VR allows us to navigate digital spaces using the tools we’re most naturally adept at operating – our eyes, ears, and hands (and sometimes our legs, too!). AR brings the datafication and flexibility of the digital realm into the physical world, allowing us to overlay computer graphics (or any data) onto our actual, physical environment. Both will play an ever-more prominent part in our lives as the decade moves forward. They are already used extensively by the military for training and simulation – for example, pilots can practice dogfighting against simulated opponents that appear to be flying in the air with them, and bomb disposal experts can locate and deactivate simulated explosives in real-life environments. NASA uses a full-body VR suit that provides haptic (pressure) feedback, known as the Teslasuit, in astronaut training. Usage is quickly expanding into more mundane, everyday areas of industry, and the market for VR equipment in business is forecast to grow from $829 million in 2018 to $4.26 billion by 2023.
I believe that by 2030 it will be perfectly normal for people to work, play, and socialize in fully virtual environments that simulate all five of the senses we experience in the real world. The communications company Ericsson goes further, predicting that by this time virtual environments will be available that are indistinguishable from real life. And Facebook is developing a platform called Horizon, which allows users to build and share online worlds where they can work on collaborative projects or just hang out. This is likely to cause big changes in society – some people will undoubtedly prefer to spend their time in virtual environments and spend as little time in reality as possible. But it will also make training for many jobs cheaper and safer, bring people together, and open up new outlets for our creativity.
Blockchain technology

In an increasingly digitized world, how can we be sure that any piece of digital information is authentic, up to date, and accessible only by the right people? Over the last decade, blockchains – cryptographically secured, distributed databases – have emerged as a solution to this problem. Current use cases are often related to finance, such as Bitcoin, and initiatives by the likes of Visa, Mastercard, many banks, and even an increasing number of governments.
Over the coming decade, though, we will see this technology applied across many more use cases. A blockchain can be useful anywhere records need to be kept that are accessible and easy to add to, but also secure and traceable. Blockchains are often described as “immutable” because, unlike with a traditional database or sales ledger, it is practically impossible for anyone to alter the information they contain without detection, thanks to the power of cryptography. They have the potential to shake up everything from medical record-keeping to supply chain tracking, royalty payments, food provenance, and legal documentation. Perhaps even more significantly, they will become intertwined with the internet of things (IoT) – offering a secure and tamper-evident way of tracking interactions between software and machines that will take place far too quickly for humans to maintain oversight. In a future where 500 billion devices are networked and talking to each other, it will be essential to have a secure method of ensuring traceability and accountability in machine-to-machine communication, and many believe blockchain will provide that solution.
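That tamper-evidence comes from cryptographic hashing: each block stores the hash of the block before it, so changing any record invalidates every hash that follows. A minimal illustrative sketch using Python’s standard hashlib – real blockchains add consensus mechanisms, digital signatures, and distribution across many machines, none of which is shown here:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically (sorted keys give stable JSON).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, record: str) -> None:
    # Each new block embeds the hash of the previous one, linking the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev})

def verify(chain: list) -> bool:
    # The chain is valid only if every stored prev_hash matches a fresh recomputation.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "shipment received at warehouse")
add_block(chain, "shipment dispatched to retailer")
print(verify(chain))           # prints True
chain[0]["record"] = "forged"  # tamper with an early record...
print(verify(chain))           # prints False: the stored hashes no longer match
```

The point is that tampering is not prevented so much as made immediately detectable – which is exactly the property machine-to-machine records will need.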
3D and 4D printing

3D printing technologies have already brought us houses that can be constructed in a matter of hours and, more controversially, untraceable 3D-printed guns. It’s even possible to buy 3D food printers that can create entire meals from ingredients compressed into capsules. This is just the beginning of what will be possible with the manufacturing techniques known as “additive manufacturing.” Experiments have led to the creation of 3D-printed living tissue for use in medical testing, which has also been successfully implanted into animals, where it generated new functioning tissue and blood vessels. This opens up the possibility of 3D-printed organs being created for transplant into human patients. An experiment that saw 3D-printed ovaries implanted into mice at the Northwestern University Feinberg School of Medicine in Chicago has also raised hopes that the technology may provide new treatments for infertility. 3D-printed vehicles – including an electric tricycle made entirely from waste products – will benefit the environment by reducing the carbon emissions created by shipping new vehicles around the world between manufacturers and distributors.
Further down the road, we will see the emergence of “4D” printing – a term that’s being used for objects created through an additive process that are designed to change and adapt. Simple examples that exist today include shoes and clothing that are “programmed” to adapt to the shape of their wearer’s bodies. More futuristic potential use cases include materials that can repair themselves when they become worn out or damaged.
Genomics and synthetic biology
Our increasing understanding of the mechanisms behind genes and the human genome is ushering in a new age of healthcare where treatments and medication can be personalized for individuals. As well as this, gene-editing technology potentially holds the key to eradicating a huge number of diseases that blight lives across the world.
Biotechnology has advanced to the point where it’s now possible to alter the DNA encoded within a cell, influencing the characteristics (known as the phenotype) its descendants will have when the cell replicates. Amazingly, this is done by physically cutting DNA strands that measure one 40,000th of the width of a human hair. One technology developed to do this – known as CRISPR-Cas9 – has been shown to be potentially useful in fixing a common mutation in the MYBPC3 gene, which is strongly correlated with a form of heart disease that may affect as many as one in every 500 adults. It has also been shown to be effective in fixing a mutation in cells from beagle dogs which, when present in humans, is believed to be linked to Duchenne muscular dystrophy. The human genome was first sequenced almost 20 years ago, in 2003. Today, projects such as the Earth Microbiome Project aim to catalog the genomes of all microbial life on the planet, and at-home genetic testing is increasingly popular and affordable, with kits costing less than $100. Developing a more in-depth understanding of the ways in which living things grow and change is likely to be at the core of many of the most significant scientific advances of the next decade.
Nanotechnology

Nanotechnology is essentially construction on a tiny scale. Typically, nanotech works at scales around one billion times smaller than the meters we are used to measuring in – the prefix “nano” literally means one-billionth, but in practice it is used to refer to any form of construction involving very, very small things! Nanoparticles of chemical compounds, including titanium dioxide and zinc oxide, are often added to sunscreen because of their ability to block UV rays. Carbon and silicon nanofibers are added to clothes and furniture to make the materials more resilient and waterproof, and are also used in protective coatings for car paintwork.
Over the next decade, nanotechnologists expect the focus of their work to shift from inert materials to machines and devices. Pharma giant GSK is developing what it calls “electroceuticals” – tiny devices, measured in nanometers, that can be implanted into the body to monitor organ functions and even attempt to improve them or repair damage by stimulating nerve clusters. Smaller also generally means more energy-efficient, so tiny machines could be created that power themselves entirely from solar energy or other renewable sources. Self-healing materials are also under development that can repair themselves when they become worn or cracked – potentially game-changing when it comes to spacecraft, vehicle, or even bridge and road design.
New energy solutions
Burning oil, coal, and gas at the rate we've been doing for the last 100 years is not going to be sustainable for the next 100 years. At the same time, challenges exist with renewable energy sources such as solar, hydro, and wind, such as transporting the power to areas where it can't be efficiently generated. Nuclear power brings its own set of problems, such as what to do with waste by-products that could be dangerously unstable for thousands of years.
By 2023, it’s expected that 30% of the electricity we use will be generated from renewable sources, which can include biofuels and biomass generated from agricultural and animal waste products. Other potential sources include space-based solar power, where sunlight is collected by satellites before it enters our atmosphere. This has the advantage of generating energy that can be transferred to places that don’t get enough direct sunlight to make terrestrial solar panels operate efficiently. Some are betting that hydrogen – the most abundant element in the universe – will be the energy source of the future, with electricity from renewable sources used to produce “green” hydrogen through electrolysis, which can then be used as fuel. Further down the road, nuclear fusion offers the promise of a potentially unlimited source of energy, generated via the same processes that power the sun and stars. Engineering challenges mean that, at the moment, this isn’t viable, but experiments are ongoing, and recent breakthroughs suggest it may only be a matter of time before it becomes a reality.
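To put “green” hydrogen in perspective, a rough back-of-envelope sketch helps. Hydrogen’s higher heating value is about 39.4 kWh per kilogram; the electrolyzer efficiency used below is my own illustrative assumption, as real systems vary:

```python
# Back-of-envelope energy cost of "green" hydrogen via electrolysis.
# Hydrogen's higher heating value (HHV) is ~39.4 kWh/kg; the 70% efficiency
# is an assumed, illustrative figure - real electrolyzers range roughly 60-80%.
HHV_KWH_PER_KG = 39.4
electrolyzer_efficiency = 0.70  # assumption, not from the source

electricity_per_kg = HHV_KWH_PER_KG / electrolyzer_efficiency
print(f"~{electricity_per_kg:.0f} kWh of electricity per kg of H2")  # prints "~56 kWh of electricity per kg of H2"
```

In other words, on these assumptions each kilogram of green hydrogen consumes more than 50 kWh of renewable electricity – which is why cheap, abundant renewable generation is the prerequisite for a hydrogen economy.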