Unlocking the Power of Open Source Innovation: Insights from Red Hat and HPE - Six Five On The Road at HPE Discover Las Vegas 2025

Ryan King, Global Head of AI and Infrastructure Ecosystem at Red Hat, joins hosts David Nicholson and Keith Townsend to explore how Red Hat and HPE are driving innovation through open source technologies.

How is open source innovation not just keeping pace with, but actively driving the future of enterprise IT? 

Uncover the answers from #HPEDiscover Las Vegas 2025! Join David Nicholson and Keith Townsend from The Futurum Group as they host Ryan King, Global Head of AI and Infrastructure Ecosystem at Red Hat. They discuss how Red Hat and HPE are collaborating to shape customer innovation and strategy through their open-source offerings, particularly in areas such as AI, virtualization, and hybrid cloud solutions.

Key takeaways include:

🔹Red Hat's AI Core Philosophy: A profound look into Red Hat's enterprise AI strategy following recent announcements at Red Hat Summit, including RHEL AI and OpenShift AI, demonstrating their approach to solving critical customer challenges.

🔹Upstream Catalysts for Enterprise Solutions: Insights into the pivotal role of upstream innovation in Red Hat's strategy, highlighting the meticulous evaluation and seamless integration of cutting-edge projects, such as vLLM, into robust enterprise offerings.

🔹Accelerating AI with Strategic Alliances: Discover Red Hat's forward-looking AI initiatives, emphasizing powerful collaborations with partners such as HPE to democratize AI and amplify its impact for enterprises.

🔹Redefining Virtualization with Open Source: An examination of the significant shift away from traditional hypervisors, showcasing how HPE GreenLake and RHEL 10 are instrumental in driving these transformative changes in virtualization practices.

Learn more at Red Hat and HPE.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Disclaimer: Six Five On The Road at HPE Discover Las Vegas 2025 is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

David Nicholson: Welcome to Six Five On the Road here at HPE Discover Las Vegas 2025. I'm Dave Nicholson. I'm here with my esteemed co-host Keith Townsend and Ryan. Tell us who you are.

Ryan King: Ryan King. I manage our global ecosystem for infrastructure and AI at Red Hat.

David Nicholson: Good to have you. Good to have you here. You mentioned AI just now. AI is all the buzz.

Ryan King: Yeah.

David Nicholson: Talk about the ecosystem. That includes Red Hat and HPE and your thoughts around AI these days.

Ryan King: So, a little of my personal story, if you don't mind.

David Nicholson: Yeah.

Ryan King: I've been at Red Hat for 15 years now, and before that I was at Intel for 10 years. I'm kind of a hardware person in the software world. And so from my roots, I knew kind of what was happening in the hardware space. And I saw what was happening with AI back in, like, the 2015 to 2017 period. And when I saw deep learning come on the scene, I was the person at Red Hat that said we really should start working with Nvidia, because all this deep learning stuff is really heavy matrix math, and these models are getting very big. And so I live in the Bay Area, so I took our corp dev people and we went down and we started working with Nvidia very early. So the work you see in, like, the GPU Operator is something that we wrote with Nvidia and turned over to them. So all of this stuff to kind of lay down the foundation of Kubernetes and AI was a personal project of mine, a passion project at Red Hat, which has turned into a very close partnership with HPE and Nvidia.

David Nicholson: So early on they were accusing you of messing around with video game stuff?

Ryan King: Yeah, totally. Yeah. It was like Back to the Future, where Doc Brown's like, Ronald Reagan, the actor? And Nvidia, the graphics company? And I'm like, yeah, seriously. And they had such a hook, and I'd seen what they'd done. They're so innovative and so fast paced. I was like, they're gonna do very well here. They know how to turn on a dime. They know how to build layers of technology and make it available, and it's proven out.

David Nicholson: And so the AI landscape as far as Red Hat is concerned, now, where are we?

Ryan King: Well, I personally... so there's a couple things that have happened. I've been saying to people, this is the first time I've really been like a software architect type person. I know I'm a hardware person, but I could see how you can fully implement enterprise-scale AI with the agentic turn. Because we went from models into generative and, you know, now into agentic. So we were creating a platform to build and run AI. And so on the build side, with the agentic capabilities in OpenShift AI, around MCP. You heard a little bit about that? Yep. We're adopting Llama Stack as a standard. Now you're seeing these standard interfaces, and so now you have those primitives that you can build agentic applications on top of, providing all the interfaces back into your enterprise world. MCP is a crazy standard. How fast that has taken root. I don't think I've ever seen anything go so quickly in terms of all the different services exposed. And that's building. And then running. We acquired a company, Neural Magic. Have you heard of Neural Magic before?

David Nicholson: I haven't.

Ryan King: Neural Magic. So they are the upstream folks that maintain vLLM. So you have the core contributors and the committers to that project. So that makes Red Hat now the top contributor and committer to the vLLM project. And so when you run AI, you have to have a runtime for it. So you can think of the Linux kernel as running on CPUs; you can think of vLLM as running on GPUs. And so that community figured out that, hey, you don't want to page memory in and out all the time. Efficiency in running models is super important. So they did all the work to say, here's how you can do that with paged attention and really cool techniques to make memory and models run faster. And so the community's come around that. And that's what Red Hat's good at, right?
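The paged memory idea Ryan describes can be illustrated with a toy sketch: instead of reserving one contiguous buffer per sequence, K/V entries are stored in fixed-size blocks handed out on demand, indexed through a per-sequence block table. This is an illustration of the concept only, not vLLM's actual implementation (vLLM does this on GPU with far more machinery; all names here are invented for the example).

```python
BLOCK_SIZE = 4  # tokens per block (illustrative; vLLM defaults differ)

class PagedKVCache:
    """Toy block-based KV cache in the spirit of paged attention."""

    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}   # seq_id -> list of physical block ids
        self.seq_lens = {}       # seq_id -> number of tokens cached
        self.storage = {}        # (block_id, offset) -> (key, value)

    def append(self, seq_id, key, value):
        """Cache one token's K/V pair, allocating a new block if needed."""
        table = self.block_tables.setdefault(seq_id, [])
        n = self.seq_lens.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:          # current block full, or none yet
            if not self.free_blocks:
                raise MemoryError("cache exhausted")
            table.append(self.free_blocks.pop())
        self.storage[(table[-1], n % BLOCK_SIZE)] = (key, value)
        self.seq_lens[seq_id] = n + 1

    def gather(self, seq_id):
        """Walk the block table to reassemble the sequence's K/V entries."""
        table = self.block_tables[seq_id]
        return [self.storage[(table[i // BLOCK_SIZE], i % BLOCK_SIZE)]
                for i in range(self.seq_lens.get(seq_id, 0))]

    def release(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)
```

The payoff is the `release` path: short sequences never tie up memory sized for long ones, and freed blocks are immediately reusable by other requests, which is the efficiency point made above.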

David Nicholson: Yeah.

Ryan King: And so with Neural Magic coming into Red Hat, it actually brings back Brian Stevens, who is Red Hat's former CTO, was Google Cloud CTO, and is now back at Red Hat as our AI CTO. Like, we just know where communities get formed and what the purpose of those communities are. And so vLLM is really becoming the standard for inference, which is how you run AI in production. So now Red Hat's in the middle of... we always had inference in our product, but now we're in the middle of the conversation for day-zero model performance and new accelerators. And there are so many models, right? And there's, you know, choice in accelerators, whether it's the variety Nvidia offers or alternatives that people want to explore. We're now deep in those conversations as a company. We're back to our roots of infrastructure.

Keith Townsend: So you bring up a really interesting topic in vLLM and Red Hat's unique role. A lot of times, especially when it comes to open source projects, enterprise architects and CTOs rely on Red Hat to sift upstream innovation and contribution. So how are you looking at evaluating something like a vLLM and then releasing that to your customer base that wants the reliability of Red Hat?

Ryan King: So, tons done in the community, and that's always ongoing. And the best thing about that is we try to bring in different people to be part of that community and the stewardship of those projects, and that's what makes them successful long term. And that brings all the innovation from every angle. We take that and we create a downstream from that that is essentially a hardened version for the enterprise. And then we maintain that over a series: a latest, which is one month; a, you know, stable version; and then up to 18 months for our long-term support. And that's the enterprise support they expect from us. We provide that support inside of OpenShift AI, we provide it in RHEL AI. And now the new exciting thing is Red Hat AI Inference Server. And so that is just a container with the inference engine in it. And we support that on, you know, it could be on OpenShift, it could be on RHEL, but we actually support it on any distro. So we're now kind of saying, hey, now you can have an inference engine you can run anywhere on any distribution. So that's the commercial end of it. But, you know, it all starts in the community.

David Nicholson: So looking forward, what are the kinds of things that we should be looking out for from Red Hat's perspective? What are you rolling out in the AI space? And, how does that work with partners like HPE? How do you manage that?

Ryan King: So we do a lot of work with HPE foundationally for all the infrastructure and availability on our platform. The RHEL 10 launch, the OpenShift updates, the new Red Hat AI Inference Server. There's a lot behind the scenes in just engineering to harden everything and make it enterprise ready. So that keeps going with HPE around AI. You asked about what's interesting and coming, so I'll get around to it. These are a lot of new acronyms, always new words all the time. My wife just goes, what did you just say? Anyway, llm-d is a new project we announced at the Summit. And so what that is is distributed inference. And so we're using the Kubernetes primitives to distribute inference across a number of nodes and GPUs. And so the foundation for this is the KV cache, the key-value cache. There are routers being built into it. Nvidia is contributing to it with NIXL, which is cross-node communication. We have core supporters like CoreWeave and Google and a number of others that are coming to say, not just how do you run inference on a single node, but now how do you distribute it? Because there's a lot of optimization that can be done across different nodes, different GPUs, and different stages of a model, you know, prefill and all of those types of work that need to be done at inference. So, you think about it: in the future it'll be a cluster of servers running inference, and it will know how to load the model, it'll know how to distribute it, and distribute the load across models, across users, in real time. And that's the...
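One piece of the distributed-inference picture described here, KV-cache-aware routing, can be sketched in a few lines: requests whose prompts share a prefix are steered to the worker that already holds that prefix's cached state, falling back to the least-loaded worker on a miss. The class and field names below are invented for illustration; this is not llm-d's actual API or routing algorithm.

```python
class CacheAwareRouter:
    """Toy router: prefer the worker that already cached a prompt's prefix."""

    def __init__(self, workers, prefix_len=8):
        self.prefix_len = prefix_len
        self.load = {w: 0 for w in workers}   # in-flight requests per worker
        self.prefix_owner = {}                # prompt prefix -> worker

    def route(self, prompt):
        prefix = prompt[: self.prefix_len]
        worker = self.prefix_owner.get(prefix)
        if worker is None:                    # cache miss: pick least loaded
            worker = min(self.load, key=self.load.get)
            self.prefix_owner[prefix] = worker
        self.load[worker] += 1
        return worker

    def complete(self, worker):
        """Mark one request on this worker as finished."""
        self.load[worker] -= 1
```

The design choice this illustrates: reusing a warm KV cache avoids redoing prefill work, so affinity can beat pure load balancing, which is exactly the cross-node optimization opportunity Ryan points at.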

David Nicholson: It's a new FLA for you there. Four-letter acronym. Yeah, llm-d.

Ryan King: There's a hyphen in there, so you could say it's three with a plus.

David Nicholson: Or a minus. Okay, three plus one.

Keith Townsend: Yeah, it's my job and I still can't keep up with all of this. And I think this leads into my next question. You know, you folks have a wonderful relationship with HPE, you just mentioned that, and HPE has a fabulous relationship with Nvidia. And Nvidia is the gorilla in the room, especially for training, but even for inferencing they're starting to build some capabilities there. How are you folks working with Nvidia to help shape what's coming down into customers' environments to help them adopt this technology?

Ryan King: Well, I mean, I just mentioned their contribution to llm-d, and so distributed inference is one capability. Also, at Computex, Jensen announced their enterprise AI reference architecture. So the word enterprise there, that's really Red Hat, right? So, personal pride: I got to draw the slide that he put up there. So he's like... all I got was this red hat. But, you know, that's awesome. But the idea is, how do we actually deliver enterprise agentic AI factories? So that's one area of collaboration with them. If you dig into that further, you can look at what they're doing with NIM and what we're doing with them there. That's a nice encapsulated way of delivering a model with a runtime for a particular use case. They're looking at how we're doing things with MCP and Llama Stack and saying, hey, it's good to see enterprise support for those foundational platform capabilities too. So we have a very strong better-together story with Nvidia that represents customers and upstream.

David Nicholson: So what about in the virtualization space? It's funny to refer to traditional hypervisors, 'cause Keith and I are like, are we that old? Yeah, are we that old? But a lot of things have happened in the last few years in the virtualization space. You've got RHEL 10, you've got HPE with GreenLake. I mean, what are you doing together on that front, helping people navigate what's potentially a big change?

Ryan King: Yeah, I mean, we have a broad view, which has played well for us. We were doing GreenLake with them in the container space. Certain things happened in the market with traditional hypervisors that gave us a fresh look with a number of customers. And so over the past 18 months, we've seen a number of customers say, we're going to go Kubernetes as our foundation for virt. And so with OpenShift Virtualization now, they can use OpenShift to run containers and virt and AI. And so that's the broad view. An analogy I like to use: if you have a car you like and you have a problem with it, you're like, got to get a new car. If it's in the moment, you just go, get me another, you know, Honda Accord, whatever, right? But if you're considering what your future is, you're going to look at what you actually need now. And I think when people do that, they've made some really strong considerations and have started to pivot towards OpenShift, because they see that OpenShift can provide that single control plane. They're operations people. They're like, hey, I can pick up containers and AI, because AI is all Kubernetes, right? And they're like, maybe I'm expanding my skill set now. They also look at it, funny enough... if you look at, like, El Capitan over there, if you've been over to see that thing, you've seen the water chiller on the side of it. These things have heat. And so they're looking at the full rack of servers: here's my virt footprint. They can actually reduce the power footprint for that to make room for what they want to do in AI. So from an infrastructure and an operational standpoint, we look very strategic in terms of what people are doing. And they couple that with GreenLake. So HPE brings in the pay-as-you-go, which is a big demand signal for us together.
And GreenLake has turned from, you know, the start of a business and kind of a technology venture between the companies into a very repeatable anchor for our growth between the two companies right now, for where we see customer growth between Red Hat and HPE.

David Nicholson: Yeah, I can relate to the car analogy. I'm actually in the process of spending a ridiculous amount of money to update a 2013 car with CarPlay.

Ryan King: Oh yeah, it's worth it, I think.

David Nicholson: You know, hardware, software, it's kind of all the same.

Keith Townsend: So, Ryan, my last question for you, kind of keying off what you just said, with AI, Kubernetes, and VMs in one platform: it gets me thinking about the platform engineering team, and its ability to now serve up AI alongside the development experience around both VMs and containers.

Ryan King: Yeah. And I think it's beautiful, right? In terms of its simplicity, in some ways. You're still going to have these GPU boxes that are doing the AI portion of it, but you're also going to say, hey, now I'm going to use MCP to call back to these other applications that are going to serve up answers to those agents, right? So you're starting to get a more comprehensive viewpoint of the whole thing. So it could be a containerized application, it can be virtualized, it could be, you know, an agent calling RAG somewhere. But it all starts to look like one infrastructure. And that's kind of the moment we're going through right now.
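The pattern Ryan describes, agents calling back into enterprise applications through a standard tool interface, can be sketched as a tiny registry-and-dispatch loop. In an MCP deployment these tools would be MCP servers fronting real systems; the registry, tool names, and the stand-in inventory lookup below are all invented for illustration and are not the MCP wire protocol.

```python
class ToolRegistry:
    """Toy tool registry: agents dispatch named calls to registered backends."""

    def __init__(self):
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, name, **kwargs):
        if name not in self.tools:
            raise KeyError(f"no tool named {name!r}")
        return self.tools[name](**kwargs)

# A stand-in "enterprise app" exposed as a tool: a hypothetical inventory lookup.
def lookup_inventory(sku):
    stock = {"RHEL-10": 42, "OCP-4": 17}
    return {"sku": sku, "on_hand": stock.get(sku, 0)}

registry = ToolRegistry()
registry.register("inventory.lookup", lookup_inventory)
```

Whether the backend behind a tool is a container, a VM, or a RAG pipeline is invisible to the agent, which is the "one infrastructure" point being made.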

David Nicholson: Ryan King from Red Hat, thanks for joining us.

Ryan King: Thank you.

David Nicholson: Pleasure. And for myself, Dave Nicholson and my colleague here, Keith Townsend. Thanks for joining us here for this edition of Six Five On The Road from HPE Discover Las Vegas 2025. Stay tuned for more action coming.
