HPE Empowers Users with Generative AI through Advanced Computing & Cloud Solutions
30 October 2023
Decades of experience providing high-performance computing (HPC) and storage solutions to industry mean HPE is well-positioned to provide the infrastructure its clients need for their transformative AI initiatives.
The global technology giant’s aim is to enable its customers to create their own "private AI cloud," covering the spectrum of AI use cases from autonomous driving to large language models (LLMs) and bioscience.
Recently, I sat down to chat with Mark Armstrong, HPE’s VP and GM of AI for the EMEA region. We discussed how organizations that are at the stage where they are ready to scale artificial intelligence (AI) deployments can benefit from partnering with infrastructure providers, as well as the challenges they face. Here are some of the highlights of our chat, which you can watch in full here.
Optimizing Workload Management
In any enterprise AI deployment, getting the infrastructure and architecture elements right is always critical. This is generally where HPE starts – as Armstrong tells me: “The first aspect of it is ensuring that we work really closely with our customers to determine what the right architecture is … and compute the workloads that they need. And honestly, I think we have the strongest team globally in terms of the ability to understand how the technology works … and optimize that to the applications.”
Along with its depth of AI expertise, HPE’s long history working in the sphere of HPC leaves it uniquely placed to offer assistance:
“If you look at the workloads that are required for generative AI, those requirements are very similar to high-performance compute – that’s why we’ve had success in the past couple of years, servicing this new and upcoming generative AI market,” Armstrong tells me.
After all, HPE established its reputation by building and deploying some of the world’s most advanced supercomputers, such as its Cray and Apollo series.
It has also created high-performance storage solutions of the kind required to feed real-time, streaming data into machine learning algorithms.
Despite all of this technology, though, Armstrong tells me that the linchpin of the organization's strategy for helping clients optimize their workloads is its customer-centric approach: ensuring solutions are rigorously tailored to the needs of each individual customer.
“Taking the experience that we gain from deploying these massive systems helps us to ensure that we architect the right solutions for customers that want to solve for generative AI,” he says.
Generative AI for Critical Business Functions
An example of this partnership approach in action is HPE's collaboration with Aleph Alpha. Building on HPE's Apollo 6500 Gen10 Plus HPC platform and deploying the HPE Machine Learning Development Environment helped the German startup create the sort of explainable, auditable AI solutions its own customers – private companies as well as government organizations – need today.
In this case, the concept of "data sovereignty" was vital. The idea was to offer a solution that enables sensitive data to be processed and acted on without compromising either privacy or the commercial value of the data. This means that professional clients – lawyers or healthcare professionals, for example – can benefit from the accessible analytics made possible by generative AI.
“Aleph Alpha has an aspiration to lead the creation of next-generation AI … and they’re demonstrating that with an incredible strategy to make LLM models available through all of the major European languages. HPE underpins that vision with the compute architecture and solutioning that they need,” Armstrong tells me.
Other noteworthy partnerships include work done with Oracle Red Bull Racing to assist with the design and simulation of Formula One cars and with Volvo through its Zenseact autonomous driving subsidiary.
Flexible Models
A further pivotal aspect of HPE’s strategy in this field is its commitment to delivering flexible procurement and usage models.
“We couple [HPE’s] capabilities with the availability for our customers to procure these systems and use them in many different ways,” Armstrong tells me.
This includes both capital expenditure and a number of as-a-service models, including LLM-as-a-service accessed via the GreenLake platform. It all plays into HPE's vision of enabling its customers to create their own private AI cloud.
“We’re about ensuring that customers can consume this generative AI in a way that suits their business,” says Armstrong.
This gives HPE's customers a range of options for purchasing and integrating AI infrastructure, which can be adapted to whatever use case or scale is necessary.
None of this is new to HPE, of course, which has offered everything from supercomputers to printer ink refills “as-a-service” for decades. Extending these models into its generative AI strategy shows that it considers this breakthrough technology one of its core offerings today.
How Pivotal Will Generative AI Be Going Forward?
Armstrong tells me that key to HPE’s strategy is the idea that generative AI will vastly simplify the process of businesses leveraging their data to create highly bespoke services for clients.
He says, “I would suggest that we’ll see major innovations coming out of pretty well all industries in the coming few years with respect to what generative AI can deliver.”
This means more models tuned to deliver company-specific or industry-specific outcomes, as well as a greater understanding of how data can be kept secure while it's used to deliver very customer-specific services.
Armstrong says, “I think that will be a significant future step, and we’ll see that emerging – we’re already starting to see it emerge, but I think we’ll start to see it more over the next 24 months.”
You can click here to see my full interview with Mark Armstrong, VP and GM of AI for the EMEA region at Hewlett Packard Enterprise.
You can read more about future tech and business trends in my books, The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society, Future Skills: The 20 Skills And Competencies Everyone Needs To Succeed In A Digital World and Business Trends in Practice, which won the 2022 Business Book of the Year award. And don’t forget to subscribe to my newsletter and follow me on X (Twitter), LinkedIn, and YouTube for more on the future trends in business and technology.
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity.
He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world.
Bernard’s latest book is ‘Generative AI in Practice’.