AI-Native by Design: How HPE Is Building the Next Era of Compute - Six Five On The Road
The Six Five team unpacks HPE’s AI-native infrastructure strategy, exploring how the company is evolving its platforms, compute, and architecture for scalable enterprise AI—from edge to cloud, and from training to distributed inference.
What will it really take for enterprise infrastructure to keep up with AI at scale?
From HPE Discover Barcelona 2025, hosts David Nicholson, Will Townsend, and Ryan Shrout break down Hewlett Packard Enterprise (HPE)’s AI-native infrastructure strategy—spanning next-gen compute, accelerators, storage, and platform integration. They explore how HPE is tackling power density, cooling, and the shift from edge to cloud, and how GreenLake’s full-stack platform is shaping scalable, efficient AI for the enterprise.
Key Takeaways Include:
🔹Defining AI-native Infrastructure: HPE’s vision for differentiated architecture that addresses unique AI workflow needs in 2025 versus traditional compute.
🔹Optimizing for Inference at Scale: How HPE is enabling distributed AI inferencing across edge and cloud environments.
🔹Overcoming Deployment Barriers: The critical role of hardware choices, power and cooling solutions, and architectural integration in addressing efficiency and scalability.
🔹GreenLake’s Full-Stack Approach: Why having an integrated data-to-deployment platform is essential for moving enterprise AI from proof-of-concept to production.
🔹Scalable AI Design Principles: The importance of accelerators, interconnects, storage, and networking in supporting reliable, future-proof AI infrastructure.
Learn more at HPE.
Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Ryan Shrout:
Welcome everyone to HPE Discover Barcelona 2025. We are here with Six Five On The Road. I'm Ryan Shrout, joined by my good friends Will and Dave. We're going to talk through some of the interesting AI-centric topics, not that everything we've talked about isn't already AI-centric, but this is really diving into HPE's AI-native-by-design mentality that they've been talking about all year. That theme is really interesting to me. AI native by design is something they started talking about in the March time frame, when they did their server refreshes and put all of this out into the world. But I'm curious: there's still this sentiment that it is different from traditional compute, right? That it's somehow fundamentally different. I tend to take the side that it's just more high-performance compute and just a unique workload, but I'm curious what you guys have taken away from the show thus far.
Will Townsend:
I'll let you take the compute piece, Dave, and I'm going to take the network.

David Nicholson:
Yeah, sure. I mean, it's a never-ending question whether it's simply a quantitative difference or an actual qualitative difference, for a lot of these things, going from legacy compute through AI to quantum. Is it really just a question of quantity or of quality? I think it's interesting: designed for AI, AI first, not retrofitted, designed AI native. There's a company that I think partners with HPE called Unify Apps. They talk about it from the perspective of AI nativity, and I kind of like that AI nativity phrase. It's a similar story, where what they're saying is, look, you need to organize your enterprise around data in such a way that these new models can reason over that data. Otherwise, you're going to be siloed, and all of your legacy baggage is just going to be baggage.
Ryan Shrout:
Sure. I think that HPE's strategy is a sound one. It's a bit of a revenge of the infrastructure folks that I really enjoy. And what about on the networking side?
Will Townsend:
So, you know, when you talk about networking for AI, you've got to have very low latency and very fast throughput to move all this data around from cloud to campus to edge. And what I really like about HPE's strategy is bringing Juniper into the fold. Juniper has always been very focused on AI-native networking, and you've seen the evolution of Ethernet from scale up to scale out to scale across in the data center fabrics being launched as both companies come together. From my perspective, another thing we're going to see transpire is a unified AIOps platform. That was announced this week at Discover Barcelona, where the company is going to leverage the best of Aruba Central and the best of Juniper Mist to really strengthen what they're doing from the AIOps perspective. So from my perspective, I really like what I'm seeing from the company.
Ryan Shrout:
On the networking side, it's interesting because there's been, I think over the last 12 months, a shift from a heavy focus on training as the AI workload of importance to inference as the AI workload of importance, and it puts different pressures, I guess I would say, on networking and on compute, right? How do you think, and I'll start on the networking side, does the networking have to look different now because we're doing inferencing versus training, or because the emphasis has shifted?
Will Townsend:
So from my perspective, AI is moving hybrid. I think both of you guys would agree with this. Hybrid being large language models hosted in clouds and smaller language models at network edges, as well as within enterprises themselves. And so you've got to think about having consistent connectivity and throughput to support that continuum, right? You're beginning to see, sort of industry-wide, a lot of infrastructure companies focusing on how you bring compute closer to the data creation points and how you improve connectivity as well to make all of that work. I also believe that one of 5G's killer applications longer term, especially in OT environments, could be supporting these edge AI use cases when you don't have a wired connection.
Ryan Shrout:
Anything fundamentally changing on the compute side, though, as we go from training to inference? Or is it just that inference can be applied to a broader range of solutions, right? So it's kind of more opportunities for the AI story to fit in.
David Nicholson:
Yes. So I think things change because inference becomes much more decentralized over time. A lot more of it will be happening at the edge. We've talked about this at length. I think the interesting talking points around the network that we need to consider have to do with the fact that nearly nothing we're doing in the era of AI is self-contained in a single server or a small cluster of servers, like it might have been in the past. Think about running something like an Oracle 9i RAC.
Ryan Shrout:
It's highly distributed, right?
David Nicholson:
Yeah, and so there's nearly no workload that doesn't traverse this Ethernet network. So in this era, whether it's inference or training, you have all of this traffic traversing the network. Broadcom makes the point constantly that this is the era of connectivity; connectivity is more important than it has ever been. But the other underlying thing on the networking story is the fact that this is open networking, in the sense that we're talking Ethernet, Ultra Ethernet. It's not necessarily InfiniBand. And yes, NVIDIA has embraced the idea of, yeah, we're Ethernet too. Think about that. They only do that because they feel like they have to, because otherwise they would say, no, buy our whole stack.
Will Townsend:
Because there's more margin in it for them, right?
David Nicholson:
Yeah. And then, you know, InfiniBand, from my perspective, you'll probably still see as an interconnect for, I don't know, supercomputing, predicting the weather, that sort of thing. But look at what the Ultra Ethernet Consortium has done to really advance the Ethernet standard and address the things I was just mentioning about scale up, scale out, and scale across, connecting data centers and massive clusters.
Ryan Shrout:
It's impressive, and Ethernet has also been around for almost 50 years. When you think about that, guys, that's an eternity in the tech industry, right?
David Nicholson:
One of the other things that stood out to me walking around the show floor is this idea of power density and cooling. You see its criticality and importance for all of these implementations; it's something that can be a real barrier to enterprise deployment. Maybe you only have a certain amount of power and a certain amount of cooling capability in a given facility without significant capital expenditures. How do you think HPE has described its story around being able to address that? I mean, they showed Cray and Helios servers, right? These are ultra high-end, high-performance computing solutions, but then you also need small blades that can go in edge deployments that are airflow limited and cooling limited. It's a pretty broad spectrum that they've been able to provide. I sort of think of this as akin to the air-cooled Porsche era of the 993 moving to the water-cooled 996. There are people who will look back and lament the passing of the air-cooled vehicle, but the fact of the matter is, if you have that much horsepower, you need DLC. You need direct liquid cooling. And I think HPE is doing a really good job of managing that transition, because they offer air-cooled, water-cooled, and sort of hybrid approaches as they move in that direction.
Ryan Shrout:
And I constantly forget that they have this history, this legacy, because of Cray in the most significant high-performance computing environments. Later today, we're gonna talk to Trish Damkroger, who runs HPC at HPE, and that should be an interesting conversation. So I think they're doing a good job of managing that transition and helping people through it. But the biggest constraint on AI today is power. Power, power, power. It's crazy. They've also talked a lot about GreenLake, their GreenLake platform, and helping enterprises move from POC to production. I'm curious: how is that stance different than it was maybe a year ago in HPE's vision?
Will Townsend:
Well, so at Discover in Las Vegas, the company launched GreenLake Intelligence, right? That's their agentic AI framework, and Aruba is the first out of the chute with what they're doing around orchestration and automation for networking. There wasn't a lot of talk about that evolution at this event; I do expect that when we're all together in Las Vegas next year, we'll learn more about it. But for me, that's one of the biggest changes in developing and maturing GreenLake. There's always been a NaaS, network-as-a-service, solution, and adoption has been somewhat long-tail, I think, for networking as a service in general. I do really like what I'm seeing with GreenLake Intelligence, and it speaks volumes that Aruba is sort of first out of the gate within that agentic framework.
Ryan Shrout:
Yeah, people want to figure out how to get real value out of AI. Well, use AI to do sophisticated automation of operational tasks, gain efficiencies there. That's going to be the biggest win for most enterprises around the globe. It's not going to be inventing something crazy that no one has seen before. It's going to be optimizing things that they're already doing. Alright, well thanks guys for the time. I know we've got a lot of great interviews and discussions still to be had here later today. So that's it for this quick segment at HPE Discover Barcelona 2025. Stay tuned for more.