AI Compute is Driving the Need for Thermal Cooling Solutions - Six Five On The Road at Dell Tech World 2025

Andrew Pack and Tim Shedd from Dell Technologies discuss the necessity of efficient thermal solutions in data centers, focusing on the innovative eRDHx cooling system and the shift towards liquid cooling.

As AI and advanced workloads escalate data center power demands, efficient cooling becomes a critical imperative. 🧊

Andrew Pack, Product Planner, AI & Dense Compute Cooling Infrastructure, and Tim Shedd, Senior Distinguished Engineer, both of Dell Technologies, share details behind Dell's cooling solutions with hosts Patrick Moorhead and Daniel Newman at Dell Tech World 2025. This isn't just about fans and air; it's about a fundamental reimagining of thermal management designed to keep next-gen infrastructure cool.

Highlights include:

🔹 eRDHx: A New Era for Cooling: The conversation highlighted the transformative impact of Dell's newly announced eRDHx, a novel solution designed to simplify the adoption of Direct Liquid Cooling (DLC) and significantly enhance data center efficiency.

🔹 Navigating Future Cooling Challenges: Guests shared the major thermal challenges customers face over the next 5-10 years, particularly with the escalating power demands of AI and dense compute environments.

🔹 Powering Extreme Workloads: Insights were shared on the specific AI workloads driving these extreme power demands, underscoring the urgent need for innovative thermal solutions to maintain performance and reliability.

🔹 Making Advanced Cooling Accessible: The guests also outlined Dell's strategic approach to making DLC, eRDHx, and other advanced cooling technologies more manageable and accessible for customers, addressing common hesitations and the evolving role of air cooling.

Learn more at Dell Technologies.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel so you never miss an episode.


Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Patrick Moorhead: The Six Five is On The Road here at Dell Tech World in Las Vegas. Daniel, it's been quite the show. I mean, we've seen innovations in hardware, in software, and in services to kind of wrap a bow around everything.

Daniel Newman: Yeah, it's been a really good week. We're about halfway into the event now. We've really covered all the bases here. We had some great conversations, Pat. We've gotten deep into the technical weeds. We had Michael Dell come in and really paint the big picture. Enjoyed a great conversation with Jeff Clarke today. And every one of these conversations brings a little something different. But what we do know for sure is this AI theme. Bringing it to life, bringing the customer to the front has been a big focus here at Dell Technologies World.

Patrick Moorhead: Yeah, Dan, one of the big themes throughout all our coverage, our research notes and stuff like that, is the challenge of cooling the data center. Right? I mean, it's hard enough to create a brand new data center from scratch just to do GPUs, but the reality is, enterprises just can't control-alt-delete and start over. So you have to get a little bit more innovative when it comes to cooling these giant racks of GPUs and storage. So why don't we dive in here? We have Andrew and Tim from Dell. How are you, gentlemen?

Andrew Pack: Doing great. Thank you for having us here.

Patrick Moorhead: Welcome to the Six Five. It looks like GPUs and ASICs are keeping you guys busy.

Andrew Pack: To say the least. Yeah, that's for sure.

Daniel Newman: So let's talk a little bit about eRDHx. Andrew, I'll start with you. Tell us a little bit about what was newly announced and sort of how it's helping to transform and meet these data center efficiency needs.

Andrew Pack: Yeah, I'd love to talk to that. The E stands for enclosed, and otherwise it's the same as a traditional rear door heat exchanger that you'd see in the industry today. But what's unique is, because of that enclosed aspect, we can run this with warmer water temperatures. Now, when we talk about data center cooling efficiency, warmer water temperature has a huge impact on being able to save energy and save cost, and that's what the enclosed rear door heat exchanger is really delivering for customers. As for what it's supposed to help alleviate: as we talk about these direct liquid cooled systems and these rack scale, 100-plus-kilowatt racks, there's still a lot of heat coming off as air. What the enclosed rear door heat exchanger does is eliminate the need for chilled water to capture that hot air. We can do that with warmer water, capture the hot air, isolate it from the data center, really drive efficiencies, and drive savings back to the customer.
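
For intuition on what capturing that hot air with warmer water involves, here is a minimal back-of-the-envelope sketch in Python. The heat load matches the 80 kW air-side figure cited later in the conversation, but the 10 degree C water temperature rise is an illustrative assumption, not a Dell eRDHx specification.

```python
# Back-of-the-envelope: how much water flow does it take to carry away
# a given air-side heat load? Illustrative only -- the temperature rise
# below is an assumption, not a Dell eRDHx specification.

CP_WATER = 4186  # specific heat of water, J/(kg*K)

def water_flow_lpm(heat_load_kw: float, delta_t_c: float) -> float:
    """Liters per minute of water needed to absorb heat_load_kw
    while warming by delta_t_c degrees C (Q = m_dot * cp * dT)."""
    kg_per_s = heat_load_kw * 1000 / (CP_WATER * delta_t_c)
    return kg_per_s * 60  # water is ~1 kg per liter

# 80 kW of air-side heat with an assumed 10 C rise across the door:
print(f"{water_flow_lpm(80, 10):.0f} L/min")  # ~115 L/min
```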

Patrick Moorhead: Yeah. So Tim, do you have anything to add on that in terms of, let's say, the specific numbers or how efficient is this?

Tim Shedd: Yeah, certainly. So the importance of raising the water temperature may not be completely clear. If we have to keep water cool, like for air conditioning systems, we typically cool the water down to 45 or 50 degrees Fahrenheit. That means in almost every place in the United States, we've got to use refrigeration to cool down the water that then flows through heat exchangers that then cool buildings like this one, and that takes a ton of energy. Now, for a building like this, I don't know the exact number, but maybe they need a few hundred kilowatts of cooling. That's one rack. If I multiply that by a thousand racks, that's a lot of energy spent. So what Andrew just said about raising the water temperature up to 32 degrees C, that's about 90 degrees Fahrenheit. That means that now I can just use the outside temperature. I don't need chillers to do the cooling. It saves a ton of energy, and it's a lot cheaper for our customers. And we've designed this to be future proof for what we see coming in the near future. We can cool up to 80 kW of air cooled electronics, and if we combine that with DLC and other things that we'll probably talk about, we can make that rack capable of handling nearly 500 kilowatts of heat, which is phenomenal. One rack, you know, about a meter deep and a little less than a meter wide: 500 kilowatts.
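
As a quick sanity check on those figures, the sketch below is just arithmetic on the numbers Tim cites, not a published spec:

```python
# Sanity-checking the figures from the conversation.

def c_to_f(c: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

print(c_to_f(32))  # 89.6 -- i.e., roughly 90 F supply water, warm enough
                   # for outside-air cooling in most climates, versus the
                   # 45-50 F chilled water that requires refrigeration

# Rack heat budget as described: up to 80 kW pulled out of the air by the
# enclosed rear door, with direct liquid cooling carrying the remainder
# of a ~500 kW rack.
air_side_kw, rack_total_kw = 80, 500
print(f"DLC share: ~{rack_total_kw - air_side_kw} kW")  # ~420 kW via liquid
```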

Andrew Pack: And if I might add, it will provide about a 60% energy savings over comparable deployments for any given customer.

Patrick Moorhead: So it's really a heterogeneous data center. And I'm curious, do certain workloads drive extreme power and thermal challenges more than others? I mean, it seems like training would light up everything, including not just the GPUs or the accelerators, but also the storage and the network. Everything would seem to get hot. Or is there some baseline workload where it doesn't vary?

Tim Shedd: Yeah, I'll jump in there. So absolutely, training is one of the most intense workloads, especially because if I've built one of these really large clusters to do training, basically all of it is operating at once to build those models. Most of our customers, to be honest, aren't doing that. They're building out specific smaller clusters that are doing inferencing. But with these, especially with these new reasoning models, they're lighting up not necessarily the entire rack for each query, but several GPUs at a time, full on. And if I'm Lowe's, like we heard yesterday, and I've got 300,000 customers all pinging that at once, I'm lighting up a data center, right? Maybe it's not 100% all the time, but it's pretty close to full load as long as I'm doing that. So inferencing, we're seeing, can also bring pretty intense workloads. So this AI era, which is bringing huge benefits, where we're seeing a lot of value add and our customers are seeing a lot of value add, is in fact one of those workloads that's really lighting up the data center, power-wise.

Patrick Moorhead: Got you.

Daniel Newman: So obviously agents, which we're hearing a lot about, are inference at scale in many ways, right? There's little bits of test-time compute and other things that'll likely take place, that'll optimize and continuously tune. But is it all AI? I mean, is the demand really all AI? Obviously I have a thesis, but it just feels to me like all this thermal work, all the data center design, all this energy demand is pretty straightforward: it's just AI. More training, more inference, agentic, physical, and so on as it evolves.

Andrew Pack: I'm glad you asked that, because AI has certainly been an accelerator for all of this discussion about advanced cooling and alternative cooling. But these are things that we've been doing for nearly a decade with liquid cooling, because there have been workloads for longer than that which have required it. High performance computing is very intensive. So it started with the CPUs getting hotter, needing that advanced cooling. Now we're adding the GPUs, as we've seen over the last several years with our partners. And I think as we move into the future, we'll see additional components needing that additional cooling, all because these workloads are advancing so rapidly and requiring more and more compute, storage, memory, networking, all of that.

Patrick Moorhead: That makes sense. So whether it's DLC or the new rear door heat exchanger, it all sounds hard. But how does this actually make it easy for customers? It seems a little hard to me, unless they don't have to go fish for a third-party solution. Is it fully integrated by you?

Andrew Pack: Yeah, I'll take that one first. So Dell's integrated rack series of products are fully integrated; just as the name sounds, they're integrated racks. Dell will take on the burden of handling 90% of the work here. We'll take your orders, whatever the servers are that you want, rack and stack them, and we'll even test those in our factories with liquid cooling, with networking, all cabled up. And then we'll literally ship that entire rack to customers, roll it into their data centers, help set it up, get it all connected to power and cooling, do that last-mile testing, hand over the keys, and they're off and running. So it really is a turnkey solution for our customers, to really drive a lower-friction model, because it is new. DLC, alternative cooling, advanced cooling are all very new. But as we break it down into pieces, we can really help give it to customers in ways that they can digest and consume easily.

Daniel Newman: Yeah, so Tim, you know, and I'd like to hear from both of you as we sort of wrap up this conversation, but we're hearing about liquid cooling. It kind of seems to be the future as we get more density in these racks and more heat. And then of course, getting to room-temperature or even warm water cooling seems to be yet another goal. But not everything's going big. I mean, some of the announcements here at Dell Tech World focused on, you know, smaller enterprise deployments, things that can be done in a small enterprise data center.

Patrick Moorhead: These are going to be RTX thousand cards.

Daniel Newman: Likely air cooled in many cases. So, you know, if there's some hesitancy about liquid cooling, you know, how do you kind of talk about air cooling? It still has its place, right?

Tim Shedd: Absolutely. Air cooling is actually going to be core to our business for years and years to come. We need liquid cooling to enable the dense compute that will enable many of these applications. But on the ground, especially out at the edge, it's air. And air is actually a great cooling fluid; you just need fans to push it through, and fans use power. So we are also working in that space. Our engineers, some of whom are in our booth today, are literally still getting into the weeds with our fan suppliers, tweaking fan angles and so on. Through Dell's engineering, we've increased the efficiency of the fans in the servers from the teens, percentage-wise, to close to 50% efficiency. That means not only are we moving more air, we're using the same or less power to move that air through. So cooling is a holistic activity for us. It brings value to the customer, because the more power we free up from the compute, the more value they get out of every dollar they spend on their data center, right? Or their data closet, whatever it is. It's really important to us.
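
To see why fan efficiency matters that much: ideal fan power scales as airflow times pressure rise divided by efficiency, so going from the mid-teens to around 50% efficiency cuts fan power by roughly two-thirds for the same airflow. Here is a minimal sketch; the airflow and pressure values are illustrative assumptions, not Dell measurements:

```python
# Ideal fan power: P = Q * dP / eta
# (airflow in m^3/s, pressure rise in Pa, eta = overall fan efficiency).
# The Q and DP values below are illustrative, not Dell measurements.

def fan_power_w(q_m3s: float, dp_pa: float, eta: float) -> float:
    """Electrical power to move q_m3s of air against dp_pa at efficiency eta."""
    return q_m3s * dp_pa / eta

Q, DP = 0.5, 250  # assumed: 0.5 m^3/s of airflow against 250 Pa of back pressure

for eta in (0.15, 0.50):  # "teens of percent" vs. "close to 50%"
    print(f"eta={eta:.0%}: {fan_power_w(Q, DP, eta):.0f} W")
# eta=15%: ~833 W; eta=50%: 250 W -- same airflow for roughly a third of the power
```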

Andrew Pack: Yeah. Just to reiterate some of the things that Tim said there, I think it's important to emphasize that Dell's innovation engine is firing on all cylinders for air cooling. It hasn't slowed down because DLC is on the scene. We've certainly ramped up the DLC innovation engine as well; you've seen the innovations, the announcements we've made. I think air cooling has a place for a long time, for years to come, as Tim said. But I think there's another element to this question, where there is a lot of hesitancy about adopting DLC because of the unknowns. We talked about how easy it is to adopt. And I think there's also the element that, if you step back and look at the efficiencies you can drive with DLC, that is another impetus that can often offset.

Patrick Moorhead: Right.

Andrew Pack: Those costs that you're going to be incurring with air cooling. Every customer is going to be different, and that's why we're going to be ready to address the whole range, the whole spectrum of needs from our customers.

Patrick Moorhead: That makes sense.

Daniel Newman: Andrew and Tim, I want to thank you both so much. We got to wrap this one up here, but thanks for joining The Six Five here at Dell Technologies World 2025. Very interesting stuff. We gotta deal with the thermals, Pat. You got to keep the racks cool. And we're going to scale these things to thousands, tens of thousands.

Patrick Moorhead: It's literally hot technology.

Daniel Newman: So this is the hottest tech.

Andrew Pack: We're keeping it cool though, guys. Keeping it cool.

Patrick Moorhead: There we go.

Daniel Newman: All right, everybody, thank you so much for joining us here at Dell Technologies World 2025. We're the Six Five On The Road. We're going to take a little break. We'll be back with you soon.
