Written by

Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.

Bernard’s latest books are ‘Future Skills’, ‘The Future Internet’, ‘Business Trends in Practice’ and ‘Generative AI in Practice’.

The Latest AI Trends & The Omniverse

2 July 2021

NVIDIA GPU Technology Conference (GTC) gets underway in just a few days, and as this year’s event is a fully virtual affair, anyone can attend from the comfort of their home. This means that even those who couldn’t fit a five-day trip to California into their schedule can spend time with leaders in the field of Artificial Intelligence (AI) to experience and learn about the revolutionary impact it’s having on our lives.


NVIDIA is a world leader in computer graphics hardware, and its products were originally developed for gaming before they were put to use providing the processing horsepower required for AI. The machine learning algorithms and deep neural networks that power today’s AI rely on powerful hardware to carry out the millions of calculations every second needed for machines to “think.”

Since its transition into an AI pioneer, the company and its products and services have become a key part of today's ecosystem of leading-edge technologies – from creating simulations of the real world inside the virtual world to the internet of things (IoT), collaborative working environments, and augmented and virtual reality (AR/VR, sometimes collectively known as extended reality, or XR).

It is no coincidence that all of these technologies – each potentially world-changing in its own right – have matured to a point where their potential to drive change is tremendous. They have evolved in parallel, with many crossovers throughout the timeline of their development, where advances in one have jump-started ideas and experiments in the others.

Take AI and AR, for example. Outwardly, the experience of AR is about putting on a headset or holding up a smartphone and seeing a world where the digital and the physical co-exist. But the technology powering AR is rooted in AI, with computer vision built on deep learning algorithms carrying out the work of identifying real-world objects and combining what the user sees in front of them with computer-generated images. Now it's believed that the benefits will run both ways, with AR and VR enabling new ways for humans to work with smart machines. It's proposed that they will even help with our ability to "look inside" what is often called the "black box" of AI (so called not because it's indestructible, like a black box on an aircraft, but because it's very hard to see what's going on inside it).
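To make that link concrete, here is a minimal sketch (my own illustration, not NVIDIA's or any AR vendor's code) of the kind of deep-learning object detection an AR pipeline rests on: a pretrained model identifies real-world objects in a single camera frame, and an AR engine could then anchor digital content to those detections. The model choice and the file name camera_frame.jpg are assumptions made purely for illustration.

```python
# Illustrative sketch only: pretrained object detection of the kind AR systems
# build on. A Faster R-CNN detector from torchvision finds objects in a frame;
# an AR engine would then anchor digital overlays to the detected boxes.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a detector pretrained on COCO (the model choice is an assumption for illustration).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Stand-in for a single frame from a live AR camera feed (hypothetical file name).
frame = Image.open("camera_frame.jpg").convert("RGB")

with torch.no_grad():
    prediction = model([to_tensor(frame)])[0]

# Keep only confident detections; each box marks a real-world object that an
# AR layer could attach digital content to.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(int(label), [round(v, 1) for v in box.tolist()], float(score))
```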

By giving us new perspectives on the data and algorithms behind the scenes – which, due to their complexity, can be very hard for a human to take in via a flat, 2D representation – we will make AI more understandable and explainable, and therefore more useful.

Today, much AI processing is carried out in the cloud, and thousands of organizations all over the world benefit from being able to run intelligent platforms and applications, both as products and services in themselves that can be sold or rented to customers, and behind the scenes to empower their own processes and operations. Very often, it will be an NVIDIA GPU that's doing the heavy lifting for them. More recently, as well as hardware, NVIDIA has established itself as a creator of software and applications that let organizations bring together all the technologies they need as they move through the stages of digital transformation. AI, IoT, and XR lead on to the next generation of technological advances that will shape our world in the future, such as high-powered and quantum computing, robotics, and genomics.

Among its most interesting offerings, from my point of view, is the NVIDIA Omniverse platform. Recently released into open beta, Omniverse is, at its heart, a tool for collaboration, built during and for a time when teams are more widely dispersed and remote than ever. Centered on the highly powerful and flexible Universal Scene Description (USD) format developed by Pixar, it is a collaborative environment targeted at workloads involving 3D graphics and design.

The need to coordinate remote teams and give them environments that recreate the opportunities for creative teamworking and culture-building inherent to real shared workspaces was on the agenda before the Covid-19 pandemic struck. NVIDIA’s general manager of Omniverse and head of developer relations (and former CTO at Lucasfilm), Richard Kerris, told me, “Challenges around working collaboratively around different locations go far beyond the past year … at Lucasfilm, we were working with people in Singapore, Europe … all over the world … and the biggest challenge we had was how to work together. Even though we had fairly similar pipelines, we had different software, different types of workflows; it was always a major challenge.”

In fact, Kerris tells me, there was an entire team of nine people known as the global pipeline that existed solely to manage the geographically diverse teams.

Work carried out within the global ecosystem of cutting-edge 3D graphics and animation eventually led to the creation of open file formats for sharing data, including the Pixar USD format that forms the backbone of Omniverse. The 3D worlds it can create are used for everything from film and game development to architecture, engineering, and urban planning. Anything involving 3D visualizations that are sufficiently complex to require a team to put together will find a home on Omniverse. As the name suggests, it is intended as a framework that will support any number of different worlds (or applications), unifying them with a common language to describe them and allowing them to work within a consistent set of rules.
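To give a sense of what that "common language" looks like in practice, here is a minimal sketch using Pixar's open-source USD Python API (the pxr module). It simply authors a small scene to a .usda file that any USD-aware tool can open; the file names are illustrative assumptions, and this is not Omniverse-specific code.

```python
# Minimal sketch of authoring a USD scene with Pixar's open-source pxr API.
# The resulting .usda file is a plain-text scene description that any USD-aware
# application can open, which is what makes USD useful as a shared "language"
# for 3D collaboration.
from pxr import Usd, UsdGeom

# Create a new stage (the file name is illustrative).
stage = Usd.Stage.CreateNew("shared_scene.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

# Define a root transform and a simple sphere beneath it.
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()

# A colleague's edits could be composed in non-destructively as a sublayer,
# which is the mechanism that lets dispersed teams contribute to one scene:
# stage.GetRootLayer().subLayerPaths.append("colleague_edits.usda")
```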

For example, an engineering team in the real world may be geographically dispersed, but if they're building something in collaboration, at least they have the benefit of knowing the laws of physics will be the same in Paris as they are in Peru. However, if they are simulating their build in a digital environment, then the laws might very well differ from simulation to simulation, depending on the tools used and the environment the simulation is built in. Omniverse aims to provide a persistent and repeatable experience through the use of the USD format, NVIDIA's Material Definition Language (MDL), and advanced simulation capabilities. MDL allows materials and objects to be defined so that they behave in the same way, even across different applications, when they are connected through the Omniverse platform. USD and MDL alone aren't enough, though – it's simulation that makes digital twins and high-fidelity collaboration possible. Omniverse's simulation capabilities provide a 'ground truth' representation of 3D models, meaning they obey the laws of physics as they would in the real world.
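As a rough illustration of what a physics-obeying, "ground truth" simulation means, here is a minimal sketch using the open-source PyBullet engine (my own stand-in, not Omniverse's simulator): an object is dropped under real-world gravity and the simulation is stepped forward, so its behavior matches what would happen in reality regardless of where the team members are sitting.

```python
# Rough illustration (PyBullet, not Omniverse's simulator): a simple physics
# simulation in which a falling object obeys gravity, as it would in reality.
import pybullet as p

p.connect(p.DIRECT)        # headless physics simulation, no GUI
p.setGravity(0, 0, -9.81)  # real-world gravity in m/s^2

# A static ground plane and a unit-mass sphere starting 5 metres above it.
plane_shape = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane_shape)

ball_shape = p.createCollisionShape(p.GEOM_SPHERE, radius=0.5)
ball = p.createMultiBody(baseMass=1.0, baseCollisionShapeIndex=ball_shape,
                         basePosition=[0, 0, 5])

# Step the simulation for one second at PyBullet's default 240 Hz timestep.
for _ in range(240):
    p.stepSimulation()

position, _ = p.getBasePositionAndOrientation(ball)
print("Ball height after 1 s:", round(position[2], 2))  # it has fallen, as physics dictates
p.disconnect()
```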

As Kerris told me, “We believe anything that’s been built will be visualized, and anything that moves will end up being autonomous … and that anything that’s autonomous will end up being simulated … we believe there are huge opportunities for Omniverse … manufacturing has a huge need for visualization, not just of the products they’re building but the factories they’re building them in. It costs tens of millions to retrofit a car factory from one car model to the other because you have to make decisions about where machines go, who needs access to them … if you can factor all that in in a ‘digital twin’ environment, you’ve only got to configure it once, and you’re going to save a lot of money. Time is money, and being accurate on those decisions is essential.”

Omniverse is heavily driven by AI, with many built-in AI functions, including physics and weather simulations, features such as "Audio2Face" that constructs facial animations from recorded speech, and AI pose estimation that allows a fully animated avatar to be created from a webcam feed. It's also enabled for AR and VR, meaning environments and objects created in Omniverse can be experienced in immersive 3D while wearing glasses or a headset. As such, it represents a perfect amalgamation of the distinct but closely linked technology trends mentioned earlier in this article.

One customer that has adopted Omniverse into its workflow is architectural firm KPF, renowned for designing iconic and striking structures such as the CUNY research center in New York, 52 Lime Street in London and the China Resources Tower in Shenzhen.

KPF director Cobus Bothma said that, thanks to Omniverse, the firm has been able to build simulations that let it experience its projects from the point of view of users before a single brick has been laid.

He told me, “When we started working with NVIDIA and testing the software, we looked at how quickly we could get the design models into Omniverse and really understand the fidelity of the geometry. We started to understand … how accurate it could be … we find designers are very caught up in the process of the technology before they get to the process of design, and we wanted to kind of flip that around – they need to be experienced with the process of design, with tech supporting that.

“We’re at the point now with Omniverse that we can get the geometry in, get the rendering fidelity – now we want the emotive side, and the experience of design.”

One potential use for Omniverse and other technology that augments the abilities of creatives like those on his teams, Bothma tells me, is mining insights into people's relationships with the built environment, through social media, for example.

“There’s an alleyway in Covent Garden that we know people take and post a lot of photographs of – so we know straight away that there’s an attraction that’s going to draw people down this alleyway. If we want to draw people another way, we know we have to generate a fingerprint that’s just as attractive,” he tells me.

“We work on projects where the clients challenge us, they show us a street with 10,000 people passing by every hour, but a short way away, there’s a street where there’s no one. They want us to get people involved with the other side of the neighborhood, driving them to markets or smaller shops … this is something we’re learning at the moment mainly, but in the future, we want to apply it.”

Bothma will be one of those speaking at GTC next week, and as anyone can attend free of charge, I'd highly recommend dropping into his session for some fascinating insights into how technology will impact his fields of design and creativity, both today and in the near future.

And while you’re there, there’s an awful lot more you should check out, too! This includes keynotes from legends in the field of AI, including Turing Award winners Geoffrey Hinton of Google Brain and the Vector Institute, Yoshua Bengio, and Yann LeCun. There are over 1,600 sessions and talks that can be attended completely free of charge, covering every tech-related subject from life simulation to 5G, decarbonization, AI-powered healthcare, edge computing, and the mission to unlock the potential of Africa’s one million AI developers.

On top of that, all the key players in the field of AI today, including Google, Microsoft, IBM, Dell, Lenovo, and HP will be present, discussing and demonstrating their latest developments, pilot projects, and deployments, as well as sharing thoughts on the journey that technology will take us on over the next 10 years.

There’s also involvement from many of the most promising and innovative start-ups, including many of the 7,000-plus that are involved with NVIDIA’s Inception accelerator program. It’s this combination of academic excellence, industry giants, and the up-and-coming stars of tomorrow that makes the event unique, Greg Estes, VP for corporate marketing and developer programs at NVIDIA, tells me.

“You’ve got the researchers, you’ve got the core developers, you’ve got the ecosystem of all the OEMs and others that are selling them, government agencies, everything coming together in one place.”

“I don’t know that there are other conferences that bring that … we’re more or less unique … you have amazing research conferences for AI, and all the researchers go, but you don’t get the rest of the ecosystem. Then you get vendor shows, but the researchers won’t be there … I think GTC is quite unique in being able to bring this.”

In line with most industry events over the last year (and probably most of the next year or two), GTC is a wholly virtual affair. This has positives and negatives, Estes says.

“Everyone who regularly attends conferences misses the ‘happenstance’ – running into somebody in the hallways or when someone introduces you to a colleague. But there are a lot of advantages. First of all, you can have a lot more people.”

More than 100,000 people have already registered for the event, far more than would be expected to attend if it were held in person. With all the content available both live and on-demand, a virtual event means that no one has to choose between two sessions scheduled for the same time. Additionally, the net can be cast far further and wider when it comes to securing speakers and celebrity guests, who do not have to make such a big commitment if they can host their sessions simply by logging in from home.

In fact, the benefits of a virtualized event are so apparent that NVIDIA is very likely to continue using the format when we are fully rid of Covid-19, alongside in-person events.

“I don’t think we will ever have a purely physical event again – digital is the way things are going,” says Estes.

NVIDIA GTC opens with the keynote from the company’s CEO and founder, Jensen Huang, on April 12 and runs until April 16. You can sign up on the GTC website to see all of the speakers mentioned in this article, as well as more than 2,200 other speakers, sessions, and workshops.


