If three “buzzword” topics have been responsible for an outsized share of hype over the past few years, they are digital twins, generative AI, and the metaverse.
However, one field in which they are undoubtedly generating more than just hot air is gaming and 3D design, where companies like Unity Technologies and Epic Games are pulling the strings connecting these hot technology topics.
The Epic Games and Unity platforms are best known for powering many of the most popular video games in history, but they are also widely used for creating experiential 3D designs, virtual reality environments, and all manner of simulations for the industrial and leisure markets.
To discuss this convergence and the potential it offers for democratizing access to real-time 3D design, I recently took the chance to talk to Marc Whitten, SVP and GM Create at Unity.
We spoke about some of the ways that artificial intelligence (AI) – in particular, the newly emerging class of generative AI apps like ChatGPT and Stable Diffusion – will soon make it easier for anyone to create digital twin simulations and 3D interactive experiences and environments. This has the potential to revolutionize the many industries that have already enthusiastically adopted these technologies – including gaming, automotive, manufacturing, and healthcare.
But the potential doesn’t stop there. As well as democratizing the creative process, the benefits will undoubtedly also be experienced during “runtime” – when users will be able to interact with simulations, digital twins, and immersive environments using natural language, pulling out the information and insights they need in real time.
He told me, "When you connect [3D simulated environments] with natural language and AI-based tools, you can literally ask …’hey computer, what’s happening on the shop floor?' or 'What are the top three issues I need to be paying attention to right now?’”
In this theoretical situation, a supervisor is overseeing the operation of their premises or facility via a real-time 3D graphical representation. This could be a factory, retail premises, or a sports or entertainment facility such as a theme park or stadium. Viewing on a screen, via a virtual reality (VR) headset, or even as images superimposed on top of the real-world view with augmented reality (AR) glasses, they can watch predictions for what will happen next play out in front of them in real time.
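To make the “what are the top three issues?” idea concrete, here is a minimal, entirely hypothetical sketch of the kind of logic that could sit behind such a query. The station names, sensor metric, and thresholds are all invented for illustration – this is not Unity's API, just a toy model of a live-data view surfacing its most urgent anomalies on demand.

```python
from dataclasses import dataclass

# Hypothetical "live data" view of a shop floor: each station reports
# a sensor reading, and a query surfaces the stations most over limit.

@dataclass
class Station:
    name: str
    temperature_c: float  # latest sensor reading
    limit_c: float        # alert threshold for this station

def top_issues(stations, n=3):
    """Return the n stations furthest over their temperature limit."""
    over = [(s.temperature_c - s.limit_c, s) for s in stations]
    over = [(delta, s) for delta, s in over if delta > 0]
    over.sort(key=lambda pair: pair[0], reverse=True)
    return [f"{s.name}: {delta:.1f}C over limit" for delta, s in over[:n]]

floor = [
    Station("Press A", 82.0, 75.0),
    Station("Press B", 70.0, 75.0),
    Station("Oven 1", 240.0, 220.0),
]
print(top_issues(floor))  # Oven 1 and Press A are over their limits
```

In a real deployment, a natural-language layer would translate the spoken question into a structured query like this one, and the answer would be rendered back into the 3D scene.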
Could this – rather than the whimsical 3D worlds of Meta’s Horizon Worlds or Decentraland – be the true vision of how the much-talked-about metaverse will play out?
“The term is overhyped almost to the point of meaninglessness,” Whitten tells me.
“It started being used to cover everything … what people are really looking to do is think about the next generation of these experiences where 3D plays a role. The ability for anyone to interact, whether that’s in a group together or in a richer environment … it’s why I prefer terms like ‘digital twin’ … they [better describe] what a particular company is trying to do to get value out of the technology.”
Increasingly, businesses and industries are entering the world of gaming to help them achieve this vision – because that's where the expertise is.
In the nineties, 3D graphics technology evolved to the point where video games began to move away from the two-dimensional, Pac-Man-style bitmapped images that characterized their first two decades of existence.
Since then, game developers, using tools like Unity and Unreal Engine, have pushed the boundaries when it comes to creating realistic, simulated worlds. Now this expertise is being tapped by companies like those behind the new Vancouver Airport, which built a realistic 3D simulation before picking up a single construction tool, and by automobile manufacturers such as Daimler, which adopted the technology for the configurator apps buyers use to pick and choose options for new vehicles.
Moving beyond these existing applications, Whitten likes to look towards a future where all industries are transforming their static, flat data into 3D, real-time models in order to reshape their value chain from the ground up.
His view is that this maturation will involve enterprises starting out with a “representative” digital twin – where a 3D environment is recognizable as the environment, process, or system that it is modeling. Then they will progress to a “connected” digital twin and on to a “live data” twin. This describes a level of maturity where the simulation is informed by data collected from the real world by sensors and scanners, enabling the model to be updated as it happens.
He refers to the next and final (in this model) stage of maturity as the “predictive twin." This is where the simulation and data modeling is sufficiently sophisticated that the operator can effectively peek into the future.
He says, "That's part of the beauty of connecting these things through a real-time engine. Real-time 3D means it's physical, so you can run the clock forward and say, given these conditions, let's simulate the next time period. Based on the real data coming in, plus simulated data going forward.
“What does this tell us that we should take action on? That type of information flowing through the enterprise will allow everyone to make much richer decisions faster.”
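The “run the clock forward” idea from the quote above can be reduced to a toy example: take the latest real reading and its observed trend, then extrapolate to predict when a limit would be crossed. A production predictive twin would simulate far richer physics; this sketch, with invented numbers, just shows the shape of the calculation.

```python
def minutes_until_limit(current_c, rate_c_per_min, limit_c):
    """Extrapolate the latest sensor trend forward in time.

    Returns the minutes until limit_c is crossed, 0.0 if it already
    has been, or None if the current trend never reaches it.
    """
    if current_c >= limit_c:
        return 0.0
    if rate_c_per_min <= 0:
        return None
    return (limit_c - current_c) / rate_c_per_min

# Live reading: a station is at 212C, climbing 0.8C per minute,
# with an alert limit of 220C (all values illustrative).
eta = minutes_until_limit(212.0, 0.8, 220.0)
print(f"Limit reached in {eta:.0f} minutes")  # 10 minutes
```

The operator acts on the prediction – reroute work, schedule maintenance – before the real-world event ever happens, which is the decision-making advantage Whitten describes.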
Back to the world of gaming – Whitten sees applications for generative AI and natural language technology that promise to create richer and more immersive experiences for players of the future.
Once again, he talks about benefits at both the creative stage – where generative AI will reduce the workload associated with creating content and environments by allowing designers simply to describe what they want, rather than painstakingly build everything with 3D modeling tools – and at runtime, where gamers will interact with non-player characters (NPCs) empowered with seemingly realistic intelligence and conversational abilities. In other words – no more identikit guards patrolling the medieval towns or space stations of our gaming environments.
However, bringing this exciting vision to life would mean developing ways of building AI processing into the game loop in an economically viable way.
“Because games might have 100 million people playing them – and if they all have to hit the cloud to do AI and work out what an NPC is going to say, it would cost too much money.”
This is a problem Unity has been working to solve, and edge computing is likely to form part of the solution – with smaller-scale neural networks running on players’ own devices. These may not match the raw capability of GPT-4, but they should be enough to make today’s NPCs look like the wooden toys our grandparents played with.
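One plausible pattern for keeping AI inside the game loop affordably is a tiered responder: routine dialogue is handled entirely on-device, and only unusual requests would be escalated to a larger model. The sketch below is purely illustrative – the intents, lines, and escalation stub are invented, not a description of Unity's actual approach.

```python
import random

# Illustrative on-device NPC responder: common intents are answered
# locally, so millions of players aren't each making cloud round trips.
LOCAL_LINES = {
    "greeting": ["Well met, traveller.", "Quiet night on the walls."],
    "rumor": ["They say the mill's been haunted since spring."],
}

def npc_reply(intent, rng=random):
    """Answer routine intents from the local table; anything else is
    escalated (stubbed here) to a larger model elsewhere."""
    if intent in LOCAL_LINES:
        return rng.choice(LOCAL_LINES[intent])
    return "[escalate to larger model]"

print(npc_reply("greeting"))       # a canned local line
print(npc_reply("quest_details"))  # falls through to the larger model
```

A shipped system would replace the lookup table with a small on-device neural network, but the economic logic is the same: keep the cheap, frequent cases local.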
Overall, Whitten tells me he is hugely excited about the impact that these three cutting-edge technology trends will have on gaming as well as industry at large and the wider economy.
He tells me, "The mission at Unity is we believe the world is better with more creators in it … we’ll understand things we never had a chance to do before, and those things will feel more alive and real because we can infuse them with AI … and that will lead to extraordinary things.”
You can click here to watch my complete conversation with Marc Whitten, SVP and GM Create at Unity, where we dive deeper into the convergence of generative AI, digital twins, and the metaverse and the impact he predicts it will have across many industries.