Why Rack-Scale Architecture Matters: Preparing Data Centers for the Next Wave of AI – Six Five On The Road
David Schmidt, Sr. Director Product Management at Dell Technologies, joins hosts to discuss why rack-scale architecture is critical for data centers adapting to AI demands, with insights on operational priorities, cooling, and deployment lessons.
How are leading data centers keeping pace with the explosive demands of AI—and what does the shift to rack-scale architecture mean for the future of infrastructure?
From SC25, host David Nicholson is joined by Dell Technologies' David Schmidt, Sr. Director of Product Management, to unpack how data centers are evolving for the next wave of AI growth. They break down why rack-scale systems are becoming essential as power, cooling, and operational complexity spike; how liquid cooling and CDUs are reshaping high-density environments; and what’s driving the move from traditional rack builds to fully integrated, validated rack solutions. The discussion wraps with clear, actionable priorities CIOs and IT leaders should focus on over the next 12–18 months to stay ahead of accelerating AI workloads.
Key Takeaways Include:
🔹Rack-scale architecture as a catalyst for operational simplicity and scalability: Treating the rack as an integrated system streamlines deployment and management, accommodating AI’s unprecedented demands.
🔹The evolving necessity of liquid cooling and CDUs in high-density AI clusters: Liquid cooling and cooling distribution units (CDUs) are becoming essential as data center power and heat footprints increase.
🔹Lessons from early rack-scale deployments: Customers adopting rack-scale approaches report both expected efficiencies and some surprises as they scale implementations.
🔹Strategic priorities for IT leaders facing accelerated AI growth: The group shares forward-looking guidance for CIOs and infrastructure leaders to future-proof their data centers over the next 12–18 months.
Learn more at Dell Technologies.
Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
David Nicholson: Welcome to Six Five On The Road, coming to you from SC25 in St. Louis, Missouri. This is the ultimate supercomputing conference, and this year it just happens to be held in the home of the supercomputing organization itself. So welcome to St. Louis, Missouri. And I've got a fascinating conversation on deck here with Mr. David Schmidt, Senior Director of Product Management at Dell, specifically focusing on something that's very, very interesting in the age of AI and, of course, HPC: this idea of rack-scale architecture. Welcome, David. Good to see you.
David Schmidt: Thank you, sir.
David Nicholson: It's good to be here. When we start talking about rack scale, first of all, what does that mean? And then let's get into why that's important, why people are looking at things that way.
David Schmidt: So if you think about traditional compute infrastructure, we have been doing this for quite a long time. You and I were going way back just a moment ago: 19-inch rack-and-stack servers. That's part of my portfolio as a product management leader, designing those for enterprises. And that involves the customer going through the deployment lifecycle: the rack and stack, the unboxing, deploying at day zero, and then the ongoing maintenance of that architecture until it's time to do it all over again. What we've seen now, with the onset of AI deployments and AI architectures and the desire to get as dense as possible with GPUs, as dense as possible with CPUs, delivering north of 50 kilowatts and now into the hundreds, going to 200 kilowatts per rack, is that this is driving other needs within the infrastructure, like liquid cooling and other means of removing heat. And so we've been talking to customers for many, many years now about the next phase of that entire lifecycle I just walked you through, and that's what rack scale really is. We've been doing this for quite a long time, helping our customers be successful with those types of deployments. How do we deliver an architecture that can be completely racked and stacked before it even gets inside the customer's environment? How do we design that, make sure it has the right components, make sure it has the right cooling components inside of it, manifolds, cooling distribution units, and basically build all of that so our customers don't have to? And then when it goes on site, it's ready to be connected to power, connected to the liquid networks, and then they're off and running and they're in production.
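To give a rough sense of why density at that level pushes operators toward liquid, here is a back-of-the-envelope sketch comparing the airflow and the water flow needed to remove 200 kilowatts from a single rack. The temperature rises and fluid properties below are our own illustrative assumptions, not Dell figures:

```python
# Back-of-the-envelope sketch (illustrative assumptions, not Dell figures):
# why a ~200 kW rack is hard to cool with air alone.

RACK_POWER_W = 200_000          # assumed rack load, watts

# Air cooling: Q = m_dot * c_p * dT
AIR_CP = 1005                   # J/(kg*K), specific heat of air
AIR_DENSITY = 1.2               # kg/m^3, roughly room conditions
AIR_DT = 15                     # K, assumed inlet-to-exhaust temperature rise

air_mass_flow = RACK_POWER_W / (AIR_CP * AIR_DT)        # kg/s
air_volume_flow = air_mass_flow / AIR_DENSITY           # m^3/s
cfm = air_volume_flow * 2118.88                         # cubic feet per minute

# Direct-to-chip water cooling with the same heat load
WATER_CP = 4186                 # J/(kg*K)
WATER_DT = 10                   # K, assumed supply-to-return rise
water_mass_flow = RACK_POWER_W / (WATER_CP * WATER_DT)  # kg/s, ~1 kg ~= 1 L
lpm = water_mass_flow * 60                              # liters per minute

print(f"Air needed:   ~{cfm:,.0f} CFM per rack")
print(f"Water needed: ~{lpm:,.0f} L/min per rack")
```

With those assumptions, the same heat load takes tens of thousands of CFM of air but only a few hundred liters per minute of water, which is the basic physics behind the shift Schmidt describes.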
David Nicholson: So the idea of having something pre-configured, shipping a crate with everything pre-configured for convenience's sake, that's not a completely new idea. But is the real pivot point here the dawn of liquid cooling, because of the level of complexity that it introduces? It's one thing for me to say, just go ahead and ship me the servers, I'll unbox them, I'll cable them up, I'll slide them into the rack enclosures. Now, when you start talking about interconnected liquid cooling, it feels like the complexity goes up dramatically. Is it fair to say that this is a significant pivot?
David Schmidt: It is fair. And it's fair to also say that this wasn't the first time we were racking and stacking, because you're right, we had custom factory integration; we've been racking and stacking for customers and shipping it out that way for quite some time. But the question is, why are you using liquid? You're using liquid to remove heat. Why are you generating heat? You have an extreme amount of power. So the hand-in-hand response to your question is: it's really both. It's about the power and the power network that you're using, or the power distribution design that you're using, as well as that liquid element. One begets the other, right? And so we had to look at, number one, how do we build a standards-based architecture for our racks? And that's where we embraced ORV3 standards. We looked at the power bus bar and what that could deliver.
David Nicholson: And so ORV, yeah, ORV, just help with the acronym.
David Schmidt: Yeah, Open Rack Version 3, right, open rack. So standards-based. We felt like that was important. We didn't want to go drive yet another custom rack-scale design into the industry. There are plenty of folks attempting to do that; they were already doing it in the HPC, exascale, and supercomputing environments. We wanted to be standards-based. So we came out, about a year and a half ago now, with the integrated rack scale solutions and our IR7000 rack. And it was really about disaggregating the power components, changing that design from traditional power supply units inside of a server to a power bus bar in the back of the rack. Then you just blind-mate into that bus bar and you're getting DC power direct to the systems. And then you're blind-mating into the liquid connections, an inflow and an outflow; those are also blind-mated. So now you have a design where you can just slide the server in, blind-mated into your power, blind-mated into your cooling, and then you've got your switch connectivity and you've basically got it solved.
David Nicholson: Does implementing sort of rack scale, units of scale, change the way that folks in operations manage their environment? Or does it not really make a difference because it's all kind of virtualized anyway? How does it change things operationally, if so?
David Schmidt: That's a great question. It changes a couple of things. Number one, you've got this new thing, this new liquid thing, in your environment that you have to manage. And so you're thinking more holistically about how power is utilized, because power, as we're all reading and as we've seen in the industry, is at a premium right now. So you're monitoring your power usage. You're monitoring your liquid environment. All of that has to be managed centrally, through what used to be just basic server management. Is the server healthy? Do I deploy firmware updates? How do I get that done? Now you're doing that not only with the server, but with the CDU, the cooling distribution unit, which is a pump that's moving liquid throughout the rack. You're doing it with the power shelf, making sure that's healthy and online. And so we're talking this week at Supercomputing about all of those aspects. How do you manage that entire rack environment? Because it is a bit of a shift from traditional enterprise data centers. They're used to just kind of plugging into power and then going into the virtualization instances that you talk about, but if you can't get that stuff right at the beginning, then it doesn't matter what you're running as a workload; you don't have a healthy infrastructure on which to run those workloads.
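As a rough illustration of what that kind of centralized monitoring can look like in practice, the sketch below polls rack power draw through a Redfish-style management endpoint. The host address, credentials, and exact resource layout are placeholders, and a real deployment would also track CDU pump status and power-shelf health through whatever interfaces those components expose:

```python
# Minimal sketch: polling chassis power draw from a Redfish-style
# management endpoint. Host, credentials, and paths are placeholders.
import requests

BMC = "https://10.0.0.10"          # hypothetical management controller
AUTH = ("monitor", "password")     # placeholder credentials

def chassis_power_watts(session: requests.Session) -> dict:
    """Return {chassis_id: watts} for every chassis the endpoint reports."""
    readings = {}
    root = session.get(f"{BMC}/redfish/v1/Chassis", verify=False).json()
    for member in root.get("Members", []):
        chassis_url = f"{BMC}{member['@odata.id']}"
        power = session.get(f"{chassis_url}/Power", verify=False).json()
        for ctl in power.get("PowerControl", []):
            watts = ctl.get("PowerConsumedWatts")
            if watts is not None:
                readings[member["@odata.id"]] = watts
    return readings

with requests.Session() as s:
    s.auth = AUTH
    for chassis, watts in chassis_power_watts(s).items():
        print(f"{chassis}: {watts} W")
```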
David Nicholson: Yeah, you mentioned density, power requirements, heat dissipation requirements. And as we move into the future, it looks like increasingly we are in a liquid-cooled world. But let's just say, hypothetically for a moment, you're a CIO who grew up watching movies where someone in a bathtub had a toaster tossed into the bathtub with them. And so when you think about mixing electricity and liquid, you think, I don't want to do that. Are there folks who are sort of holding off on liquid as long as they can? And if so, at what point is it like, no, you're going liquid and you're never going back?
David Schmidt: I'm going to carry the toaster in the bathtub with me the rest of the day. Fantastic. A fun analogy, but I actually think it's a little bit different. The IT executives that we're talking to look at it as just another way to remove heat from their data center, and they're trusting the industry to go provide the right designs. Are there lots of different folks here at Supercomputing showing off different technologies? Yes, there are. That's why, if I humbly say so myself, a company like Dell is critical in these designs, because customers trust us to go get all of those aspects that I just mentioned right, and then to go do the right CDU designs, whether it's our designs, the Dell RCDU that we just announced, literally today, or whether it's partnering with other CDU vendors in the industry. They're trusting us to deliver that outcome to them. And just like you trust liquid to run through your home and your AC unit to remove heat, just like it runs through your car and your radiator, I love those analogies because we've been relying on them for years. I think that's where we are right now. And the different angle is that the hesitation comes from the power. It's really about, do I have enough power to drive the type of infrastructure that will require liquid? And if they don't have the power, they're turning back to us for air-cooled designs. Those aren't going away anytime soon. So they have to manage both, right?
David Nicholson: Yeah, yeah. No, it makes sense. I have to imagine that rack-scale integration can ease a lot of those concerns. Absolutely. And there's no question that some people do have those concerns, whether they are rational or not. But there's sort of an inverse relationship between how many of these fittings you're going to do yourself versus saying, no, no, no, Dell has it taken care of. Do it for me. Have it all configured. Which, from a rack-scale deployment perspective, makes perfect sense. So what are the things that have maybe surprised you, or that you've learned from earlier rack-scale deployments, as this has evolved? What are some of the lessons you've learned?
David Schmidt: There's no one-size-fits-all, right? We have many different customers of all shapes and sizes, and they're all at different points in the journey. I know that we all look at really large-scale AI deployments, and they make headline news, but there are many, many customers embracing this at, I would say, the higher end of enterprise organizations. So they're embracing it, but they all have different needs; they're all slightly different in the design. We've been able to standardize those building-block elements that I spoke to earlier, but we're still moving at a very fast pace to help customers in all of these different scenarios around the world. And so that's been, I wouldn't say surprising, that's been maybe somewhat anticipated. The surprising part is the pace at which it's come. I think we were all wondering maybe two years ago what that inflection point was going to look like, and now we're kind of on the other side of that inflection point. We know where the liquid deployments are going to be prioritized, we know where traditional air-cooled deployments are going to be prioritized, and so we have a good handle on it, but the pace has been a very surprising thing to us.
David Nicholson: I had a chance to visit your presence within the show. To say booth would be a disservice to Dell Town. And one of the really cool things that was front and center was this demonstration of an environment where you had the option to have a rack-scale system where, of course, you've got direct liquid cooling for the processors and other things in the system, but also the option to have this giant cooling door on the back, where the cabinet now becomes a sealed element unto itself, so that all of the other residual heat being dissipated goes into this other system. And I was told that you have the option of doing it either way. So it's either, hey, my environment is perfectly fine, I've got the ability to cool the air that's in the data center, you just go ahead and cool the GPUs and the CPUs and I'll be fine; versus, no, no, no, I don't have extra capacity for this new stuff, I want you to bring this in as a standalone system. How new is that, and can you gauge the appetite for it? Or is this just simply another example of, look, Dell is going to have seven horses in the race and maybe there are only nine horses in the race, so chances are very good?
David Schmidt: I would say it's very much of high interest. We have customers that are already planning for that design. What that is is an enclosed rear door heat exchanger. With a traditional rear door heat exchanger, you're still expelling some air in a direct-to-chip liquid cooling environment, because you're cooling the chip but then blowing air over the rest of the components. So you're cooling that air before it goes into the data center, but you're still having to manage that air and that airflow. What we've designed with our CTO team and our engineering team, all top-notch thermal engineers and mechanical engineers, is a way to capture that air in the back, recool it, and recirculate it back to the front of the system; it's basically what we refer to as 100% heat capture. In a lot of places, like in Europe, that's a requirement, a regulatory requirement. It's something that they value, because I don't have to worry about any type of air getting expelled into the data center; I'm managing that air. And then, of course, I'm using some liquid in that rear door to remove some heat. But it actually lets you improve your operating costs. It lets you run at warmer water temps. It saves on the need for different cooling components within the data center. And then we've modeled out some really interesting claims about how much money you can save with, say, a 100-megawatt data center. You can return a significant amount of money, like tens of millions of dollars, back into your environment. You can choose to either say, I've saved this much money on power and I'm going to pocket those savings, or I'm going to turn those power-saving dollars into more infrastructure. And in some cases, you can do something like 80 more racks in a 100-megawatt data center if you're using that ERDHX design.
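To show how a claim like that can pencil out, here is some illustrative arithmetic. The PUE values and per-rack load below are our own assumptions, not Dell's published model, but with those inputs the freed-up power lands in the same ballpark as the roughly 80 extra racks Schmidt mentions:

```python
# Illustrative arithmetic only -- assumed PUE and per-rack figures chosen to
# show how "more racks from the same facility power" can pencil out.

FACILITY_POWER_MW = 100        # total facility power budget
PUE_BASELINE = 1.25            # assumed: direct-to-chip plus room air cooling
PUE_WITH_ERDH = 1.15           # assumed: ~100% heat capture, warmer water
RACK_POWER_KW = 85             # assumed average IT load per rack

it_power_baseline = FACILITY_POWER_MW / PUE_BASELINE   # MW available to IT
it_power_erdh = FACILITY_POWER_MW / PUE_WITH_ERDH
freed_mw = it_power_erdh - it_power_baseline

extra_racks = freed_mw * 1000 / RACK_POWER_KW
print(f"IT power freed: ~{freed_mw:.1f} MW -> ~{extra_racks:.0f} extra racks")
```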
David Nicholson: Yeah. Speaking of power, what is the state of the art in terms of the amount of power that one rack can consume? And by the way, these racks can be full of gear, can be over 3,000 pounds, and much taller than what most of us think of as a standard data center rack. But nevertheless, how much power can one of those consume? I know it's not a race to the maximum, but it's sort of amazing that 10 or 20 kilowatts in a rack used to be a lot. Where are we now?
David Schmidt: I can't help but give you the maximum, because I'm super proud of what the team's designed. That's the cool number. Yeah, 480 kilowatts is what we're designing for with that IR7000 design. We're not there today; we're at about the halfway point or below with some of the designs coming out now. But we wanted to build, and that's another key component of rack scale, a future-proof architecture, so that in another two years we're not having to introduce a different type of architecture for customers. We knew that this had a certain amount of life in it as a rack-scale design that they could actually count on, so that when it was time to refresh that AI environment, we had a consistent design.
David Nicholson: Does the physics dictate that it's impossible to have one megawatt coming through that form factor at some point in the future, or is the assumption that somehow we're going to be able to dissipate heat more efficiently? It's one thing to make the chips faster, but frankly a lot of that just means more power in. The physics associated with dissipating that heat is a little more straightforward, arguably. I mean, is there a limit? Do you think this 480 number is going to be something that's with us for a long time?
David Schmidt: So let me say it this way. There are designs, and there are conversations, around what one megawatt to a rack looks like, right? At that point, you're measuring liquid flow in terms of fire hoses. Your increment of measure is the number of fire hoses; it's seven fire hoses' worth of flow running through the system. So we haven't hit a peak yet. I'm sure someone out there has a theoretical maximum. But at that point, you are looking at maybe a larger architecture to accommodate that. That's maybe the next horizon, after we cross this one, that you'll start thinking of and talking about. But that's going to be for very high density, obviously, where you have the power availability to drive a megawatt to your rack, right? Entire data centers used to be measured in one megawatt, just like your 20-kilowatt example. It's out there. There are conversations happening. And we have great design partners with our different vendors, whether it's NVIDIA, AMD, or Intel, and then obviously our infrastructure partners. We're working with all of them to discuss what the future looks like.
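For a sense of the flow arithmetic behind that image, the sketch below works out how many liters per minute a one-megawatt rack would need at a few assumed supply-to-return temperature rises, with an assumed per-hose flow rate. Both numbers are assumptions on our part, but at a small temperature rise the result lands in the neighborhood of the "seven fire hoses" Schmidt jokes about:

```python
# Rough sketch of the coolant-flow arithmetic for a 1 MW rack. The
# temperature rises and the "one fire hose" flow figure are assumptions;
# the point is that flow scales linearly with rack power and inversely
# with how much you let the water warm up.

RACK_POWER_W = 1_000_000       # 1 MW rack
WATER_CP = 4186                # J/(kg*K), specific heat of water
FIRE_HOSE_LPM = 500            # assumed flow of a single handline, L/min

for delta_t in (4, 6, 10):     # assumed supply-to-return rise, kelvin
    kg_per_s = RACK_POWER_W / (WATER_CP * delta_t)
    lpm = kg_per_s * 60        # ~1 kg of water ~= 1 liter
    print(f"dT={delta_t:>2} K: ~{lpm:,.0f} L/min "
          f"(~{lpm / FIRE_HOSE_LPM:.1f} fire hoses)")
```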
David Nicholson: Yeah. Yeah. OK. So we'll leave it to John Roese and his team to make those decisions about whether, just because we can do it, we should do it, in terms of the fire hoses and the megawatt.
David Schmidt: There will be somebody that attempts it, and we'll be right there with them; we think we're great partners to all of the customers that are attempting it.
David Nicholson: I'll bring a 12-pack and watch. I want to watch from a safe distance. But on that subject, when you think about an IT leader, the prototypical CIO, what do you think they should be prioritizing as they're looking 12, 18 months into the future? We've gone from this era of fear of missing out to fear of screwing up, fear of painting oneself into a corner; fear because they were about to make a decision for $50 million last week, but somehow they just couldn't get the administrative stuff done in time, and now they're thankful, because something came out in the news that proved that would have been the wrong way to go. Things are changing so fast, and people are terrified of immediate obsolescence. What are the foundational things that you would counsel people to focus on? These are people who want to keep their jobs, David.
David Schmidt: People with families. No pressure. There are enterprise environments where, like I said earlier, power is at a premium, and it's a precious resource that is now in the conversation in a way that maybe it wasn't three, four, five years ago. What I counsel enterprise customers is: look at your traditional enterprise environment and look at what you can free up by upgrading it. We have quite a bit of infrastructure out there that is at least four or five years old. Look at how you can consolidate that down. You can often get seven-to-one ratios, five-to-one ratios. Can you consolidate that down? Then you're banking power savings, you're banking rack savings, because it's not a one-to-one ramp-up of the power consumed. A 2,400-watt server is taking the place of seven, maybe eight, 800- or 900-watt servers. The math works out in that customer's favor to go consolidate their traditional enterprise footprint. Then they're playing with house money in terms of how they need to pivot so that they make the right investment. And then again, if I humbly say so, we've got the right expertise in the field to go be a partner to that customer and help them understand what the right thing to bank on is for the type of outcome they're trying to achieve. Our AI Factory is set up to do exactly that and say, do you need the really high-end, liquid-cooled type of deployments, or do you need something that is a handful, maybe four RTX 6000 Pro GPUs, that can handle inferencing models, because that's the kind of outcome you're trying to deliver? So we think we're really great partners in that regard. We're going to help them figure it out, but we're also going to help them modernize their infrastructure so that they have the capacity to go achieve those things. That's the way to do it.
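The consolidation math Schmidt describes is simple to sketch with assumed wattages (the figures below are illustrative, in the spirit of the 7:1 and 5:1 ratios he cites, not measured Dell numbers):

```python
# Simple consolidation arithmetic with assumed wattages -- illustrative only.

OLD_SERVER_W = 850        # assumed draw of an aging server
NEW_SERVER_W = 2400       # assumed draw of a modern, denser replacement
RATIO = 7                 # assumed consolidation ratio (7 old -> 1 new)

old_fleet_w = RATIO * OLD_SERVER_W
saved_w = old_fleet_w - NEW_SERVER_W
print(f"{RATIO} old servers: {old_fleet_w} W")
print(f"1 new server:  {NEW_SERVER_W} W")
print(f"Power banked per consolidation: {saved_w} W "
      f"({saved_w / old_fleet_w:.0%} reduction)")
```

Even though the new server draws more per box, retiring several older machines nets out to banked power and rack space, which is the "house money" Schmidt refers to.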
David Nicholson: Fantastic. David Schmidt, a classic example of product management, juggling the science and the business aspects of technology and trying to come out with a sane, right-up-the-middle solution, which right now is clearly rack scale as an increment of scale in a lot of cases. Thanks for joining us here at SC25 for Six Five On The Road. I'm Dave Nicholson.