From AI Momentum to Reality: HPE on Building the AI Factory
AI is moving fast, but most organizations are not ready for what it takes to deploy it at scale. HPE’s Trish Damkroger explains why infrastructure, not models, is now the limiting factor and how the “AI factory” is redefining enterprise execution.
AI innovation is not slowing down, but enterprises are struggling to operationalize it.
At NVIDIA GTC, Ryan Shrout sits down with Trish Damkroger, SVP and GM of HPC & AI Infrastructure Solutions at HPE, to unpack what it actually takes to move from AI ambition to real-world deployment.
As enterprises scale beyond experimentation, the challenge is shifting fast. Training, fine-tuning, and inference workloads are exposing cracks in infrastructure, forcing a rethink of how systems are designed, integrated, and operated.
HPE’s answer is the “AI factory” model: tightly integrated infrastructure, software, and models built for repeatable, production-grade outcomes. Backed by decades of HPC expertise, from Cray supercomputing to liquid cooling, these systems are engineered for dense compute, converged workloads, and enterprise scale.
At the same time, AI and simulation are merging, accelerating discovery while raising the bar for performance and flexibility. The focus is shifting from experimentation to execution, where time-to-value and uptime define success.
Key Takeaways
🔹 AI deployment is significantly more complex than initial experimentation
🔹 The “AI factory” model is driving standardized, scalable infrastructure
🔹 HPC expertise, including liquid cooling, is foundational to modern AI systems
🔹 AI and simulation workloads are converging on shared infrastructure
🔹 Time to AI value and uptime are now primary enterprise metrics
🔹 Infrastructure strategy is becoming central to AI ROI
Learn more about HPE’s Enterprise AI Solutions.
Subscribe to our YouTube channel for more insights from NVIDIA GTC 2026.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Trish Damkroger:
People don't associate HPC and AI, but it's the same infrastructure, folks. But we don't need to tell them that, right? Because HPC is scary. And for some reason, AI isn't, you know, whatever.
Ryan Shrout:
Hi, everybody. Welcome to Six Five On The Road. I'm Ryan Shrout, here at GTC 2026 at the HPE booth. And we're going to talk about this continued migration from AI experimentation into AI deployment. And there's really nobody better to talk to about that than Trish Damkroger. You are the SVP and GM of HPC and AI at HPE, right?
Trish Damkroger:
Correct.
Ryan Shrout:
Thank you for joining us. Really appreciate it.
Trish Damkroger:
My pleasure.
Ryan Shrout:
So you and I actually spoke fairly recently, in December. We were in Barcelona at HPE Discover. But it's always interesting to me, because this industry moves so fast.
Trish Damkroger:
It does.
Ryan Shrout:
And in the span of three months, anything can change, right? And so I want to get some of your perspective on that, especially as we go through the questions, talking about maybe what's different or what's changed or what's new. The first area of interest to me is that continued migration from experimentation and ideas about AI infrastructure to actually deploying it. And I think even in our experience, when we work with customers and end users that are trying to deploy it, they're finding that it's way more complex than they expected it to be. And that's whether or not they're doing fine-tuning or inference or some kind of solution deployment. So I'm just curious, from your perspective, where do you see those types of challenges creep in, that complexity? Where does it kind of first pop up?
Trish Damkroger:
So I do think, I mean, one, the customers I deal with are really those ones that are looking at the dense compute, scale-up architectures, that have gone all in on AI, right? So your tier ones, your neoclouds, your large enterprises that have procured the infrastructure in order to go from generative AI to agentic AI, and that journey that we're all on. And I think for the enterprises themselves, you know, even within HPE, it's: where's that return on investment? So we're still looking at where that is. Now, I will tell you, I mean, I finally caved and bought the professional versions of some of these. You know, I won't...
Ryan Shrout:
The tools, yes.
Trish Damkroger:
Yes, exactly. And I use that all the time.
Ryan Shrout:
Yeah.
Trish Damkroger:
I mean, I don't know how I survived without it. It was like, how did we survive without our cell phones before, right? Right. So I do know that there is a lot of opportunity for improvement and efficiencies, and we're seeing that. And I was just meeting with some customers before this, and they were talking about exactly this. We have this AI factory where we're helping people. We can bring it to you, so that basically you'll get the infrastructure and the full software stack, so that we can make it easy for you, along with those models on top.
Ryan Shrout:
And that kind of gets to my next question. We continue to hear that term, AI factory. And when I think about it, what does that actually mean? It's like the industrialization of AI infrastructure. How do you make it faster and more easily deployable, but also still at a massive scale, right? When I think of a factory, you're thinking of something enormous like that. So is that how you're helping them move faster, to build out this package? You're kind of providing these ready-made solutions, or customizable solutions?
Trish Damkroger:
Exactly. So, you know, instead of the raw infrastructure, or the bare metal as we used to call it in my world, it's really putting that whole software stack on and allowing you to have the models that you need already developed for you, tuned to your data, so that you can get using them. And we're working with a consulting company to actually go to market.
Ryan Shrout:
Now, one of the advantages, or unique capabilities, that I think HPE brings is its history and legacy in HPC and supercomputing. You've got Cray; HPE is regularly in the top 100, 50, 20 of the supercomputer lists. I'm curious how that expertise translates, or has helped give HPE an advantage, in working with customers on these massive AI deployments.
Trish Damkroger:
So, it's a great question, because I think people don't associate HPC and AI, but it's the same infrastructure, folks. But we don't need to tell them that, right? Because HPC is scary, and for some reason AI isn't, you know, whatever. So I think that, yes, the whole liquid cooling piece. We've been doing it for 50 years; the Cray-1 was liquid cooled. The dense architecture, how do you network? You know, NVL72, we've been doing that, look at the Cray-1 design. That's a good point, yeah. About 50 years ago. So it is the same basic architecture to do a dense compute platform that they're doing for AI.
Ryan Shrout:
Is there anything that you've seen specifically that used to be HPC- or supercomputing-specific, let's say a deployment model or an infrastructure piece, that is now just regularly applied to AI? Is it the networking piece of it?
Trish Damkroger:
Well, I think it's broader than that. I mean, for one, we are taking advantage now because of the supply chain. So with liquid cooling, as a great example, it's harder to get these components because everybody wants them. But now we have options, where before it was a niche market, and so the supply chain was much more difficult; we were building it as we went. So that has been great, and I would say that that's everywhere. And networking: the whole move to Ethernet now is again where people are going, where you really used to have to be very much on InfiniBand or, you know, some other form of specialized high-performance networking. So I think it's good for us, because it's giving us a lot more options out there, more partners to work with. But it's hard for us because, coming from Cray, you know, we always were the innovators. And now we need to take advantage of that, our partner ecosystem.
Ryan Shrout:
Yeah, that's a great point. Another area that I find really interesting, that has maybe cycled back into it, is AI for the sciences, right? We talked about HPC being scary, but that's one of the initial primary verticals that HPC was used for, and now we see AI being applied to those sciences. I'm curious, when we start to see those simulation and AI workloads running at the same time, or on the same system, or sharing hardware, does anything change in how you design that infrastructure, or how you would design those systems, knowing that you're applying it to both of those workload categories?
Trish Damkroger:
So our traditional customers still need our floating-point 64, right, for the modeling and simulation workflows. And so we do a design with NVIDIA, just an NVL4 versus the 72, right, for those traditional customers. And that's what we're selling to a lot of the AI factories, or to our Department of Energy laboratories, because they want to do both, as you say, and they need to go from one to the other, and they need to have a full workflow where they're sending it back and forth. So I would say we're doing that. But AI for science, I mean, this is the Genesis Mission, that, you know, with the Department of Energy. I'm super excited about the grand challenges they're doing, because they're going to do large science models, like large language models. And this is going to revolutionize what we've been able to do.
Ryan Shrout:
What is unique about that system or that design that they're putting together that maybe is differentiated from other AI factory deployments or AI supercomputer deployments?
Trish Damkroger:
So they're going to use a lot of the same infrastructure, and the same stack; it's more what the end result is going to be used for. Because sometimes science is not funded broadly by the commercial entities. So this is where the Department of Energy, or the US government, can really make a difference, and make sure that we can do that leapfrog in the sciences, not just for commercial interests.
Ryan Shrout:
Yes, which again is the historical context of what these government labs and HPC structures were designed for. I think at the end of the day, though, and I'm sure you see this all the time, what customers actually care about is the outcome, right? How are you helping me get to whatever my end state will be? So I have maybe two questions on this. How much do you help define what those outcomes should be, what those results should be? Because I feel like a lot of customers are still at the point where they know they want AI, they know they need an AI strategy. Jensen's on stage telling everybody every CEO is going to be judged based on their token rate, and these different things, right? And so are they coming to HPE to learn what is the art of the possible?
Trish Damkroger:
Somewhat. I mean, I think Jensen, and NVIDIA, have done a great job at telling everybody the importance. I will also tell you, I have a number of CIOs that go, do not even mention tokens to me, they're costing me a fortune, you know? So I think it's: how do we make those tokens useful? And that's the most important conversation. And HPE has some great in-house examples of how we are doing that. Like I said, we're working with consulting firms to help people. We're not going to be there day in and day out, always.
Ryan Shrout:
Yeah. The other part of that is, when they do come to you with the outcomes that they want to see, and they want to partner with HPE to go drive that, the time to AI value is probably the most important metric, right? Yes. The difference between getting set up and running in six months versus six weeks is dramatic. So how do you have that conversation? What are the important pieces, from an HPE perspective, to accelerate that?
Trish Damkroger:
Yeah, I think it's really important. It's not only just time to deploy, so it's: when can I get this system? Because if my competitor has it before me, they're using it, they're making money before I am. So that's one thing: how do you get it there as fast as possible? But then it's also uptime, and I think one of the differentiators that HPE brings is our services. I mean, we've been servicing these dense platforms for 50 years, right? And we often are finding issues, let's just say, with some of our vendors' equipment, and helping them solve them so our customer can be successful, getting it deployed and running as fast as possible. I mean, they want 100% uptime. And you know how challenging that can be.
Ryan Shrout:
So time to AI value, but also sustaining that AI value and ROI, is a critical piece of it.
Trish Damkroger:
Right.
Ryan Shrout:
Well, I'm really excited to see what continues to happen. Maybe we'll talk again in three months.
Trish Damkroger:
Probably. And then it will be a whole new thing.
Ryan Shrout:
It'll be a totally different conversation. Thank you for joining us. I really appreciate it. It's always great to talk with you. And thank you for joining us at Six Five On The Road at GTC 2026. Make sure you check us out on social media, follow us on our YouTube channel, and find all of our content at SixFiveMedia.com. See you soon.