Building for AI at Scale: Infrastructure Trends from Lenovo & Digital Realty - Six Five On The Road

Flynn Maloy, VP & CMO of ISG Marketing at Lenovo, and Rick Moore, Global Head of Platform at Digital Realty, join Patrick Moorhead and Daniel Newman to discuss the impact of AI at scale on data centers, sustainability, and the evolution of colocation environments.

How are enterprises adapting their infrastructure strategies to operationalize AI at scale, while balancing performance, sustainability, and flexibility in modern data centers?

From Supercomputing 2025, hosts Patrick Moorhead and Daniel Newman are joined by Lenovo’s Flynn Maloy, VP & CMO of ISG Marketing, and Digital Realty’s Rick Moore, Global Head of Platform, for a conversation on the evolving demands of AI-driven infrastructure. The discussion centers on how enterprises are transitioning from AI experimentation to widespread deployment, the resulting pressures on data center environments, and how both Lenovo and Digital Realty are enabling organizations to scale AI workloads securely and sustainably.

Key Takeaways Include:

🔹AI is moving from pilots to production: This shift is creating HPC-level compute, networking, and storage demands across every industry—and raising the bar for scalable, enterprise-grade infrastructure.

🔹Data center density and power/cooling demands are rising: How this is reshaping data center strategy, accelerating adoption of colocation, modular builds, and purpose-designed deployment models that support AI growth without full redesigns.

🔹Digital Realty’s Innovation Lab: The DRIL is becoming a critical proving ground—allowing enterprises to test, validate, and optimize AI architectures before broad rollout, reducing risk and time-to-value.

🔹Sustainability has become non-negotiable: Both Lenovo and Digital Realty are prioritizing energy-efficient designs, liquid cooling, and operational best practices for high-density AI and HPC environments.

🔹The future of AI infrastructure is flexible, high-density, and sustainable: Colocation ecosystems will enable rapid deployment, efficient scaling, and secure global operations.

Learn more at Lenovo.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Patrick Moorhead:

The Six Five is On The Road here in St. Louis at Supercomputing 2025. Daniel, the themes have been very consistent here on the show floor, and also in talking to different customers out there. Now, enterprises have hit this stage where the biggest ones have done multiple POCs. And a lot of them, you know, not all of them hit, but you have to take some risks to get the downstream benefit. A lot of them have hit, but they haven't scaled everywhere throughout the enterprise.

Daniel Newman:

Yeah, well, first of all, just here at Supercomputing, it's great to see the energy, the enthusiasm, the excitement, the momentum, that's all still going on. You know, there was a bit of a transformation of the show that took place following the ChatGPT moment, but that wasn't really what the show was about. We're seeing this convergence of HPC and AI, so symbiotic in terms of these things. But to your point, we're also seeing the show get a bit of an enterprise flair. Remember MWC when it became that way? It used to be a mobile show, and then it became an enterprise show. We're seeing the exact same thing happen now. As enterprise AI is accelerating, this is becoming not just a high performance computing, research, and academia show, this is a center of AI. By the way, it brought a lot of energy to St. Louis.

Patrick Moorhead:

Totally. Two companies that are at the forefront of this revolution are Lenovo and Digital Realty. So, hey, let's talk, guys, about what's going on in enterprise AI. Flynn, you've been on the show before. Thank you for coming back. Rick, welcome to the show, first time on the pod. Thanks. Good to be here. Welcome to the show. Thanks.

Daniel Newman:

So, Flynn, I'm going to start with you. I'm going to give Rick a chance to witness goodness. You and I talk about this all the time. We've gone from experimentation to kind of small-scale POC to now people want to take deployments enterprise-wide. Big moment for AI. But it's also going to create all kinds of new demands. You got energy demands, you got network demands, you got infra demands, support demands. Talk a little bit about what you're seeing and kind of what are the new demands that are being created as we see AI scale into enterprise.

Flynn Maloy:

Well, it's good to be here, guys, and good to see you. I mean, exactly to your point about this show blowing up, everybody is looking to this community, the HPC community, on how do you deal with the power? How do you deal with space? AI is driving all of these constraints. And I think the whole market's agreeing that we're just on the verge of a real big build-out in enterprise AI. And to answer your question, I think everybody also agrees that it's going to be a hybrid environment. It's not just going to run out of the clouds. You're going to have some AI workloads out of the clouds, but you're also going to want to bring the AI workloads to where the data is: out to the private space, the private data center, to the colocation, and all the way to the edge. And that creates a whole other set of constraints. As well, the initial build-out of technology for the last two years, the big CapEx spend, has all been about training AI, right? It's all about building models, huge CapEx out of the clouds to build these models. As we transition from building the models into using the models, that's going to be AI inferencing. That's a different, less mature set of technologies. You've got memory, latency, security, energy issues. So as it comes out of the clouds, it starts landing in the private data center space and at the edge, and you've got to solve for a new set of technologies, a new set of stacks. That's what I think the industry is doing, and that's what makes the partnership with Digital Realty so legit, because as this comes out of the clouds, you can't handle all of it on your own.

Daniel Newman:

And training is super bursty, while inference is going to be always on, which is super great for you. Actually, great for both of you. Yeah.

Rick Moore:

You know, it's interesting the requirements are scale out requirements as opposed to the scale up that we've seen with training clusters. And as you see that start to happen, it's very much what Flynn said. It's a very hybrid environment. Some of that can happen with air cooled environments in a data center. Some of that requires liquid, the latest liquid cooled advancements. So it's hybrid in that sense as well from a cooling and thermal engineering perspective, not just cloud versus on-prem.

Patrick Moorhead:

So Rick, it's funny, I've seen the memes. One says you can't do AI in your current data center. The other one says, you know, you have to increase your CapEx by 40% and go get loans to be able to pay for it. But I think you've come up with a unique solution that bridges the gap for customers looking to scale without having to redo their entire data center estates.

Rick Moore:

Yeah, yeah, you know, there's an element of modular growth that is critically important in the way this infrastructure is designed. Because these aren't sort of rip and replace infrastructure activities. As AI grows, customers have got to be able to grow those deployments in the data center as well. And what that requires is a repeatable reference architecture inside of a data center that says, here's what AI deployments look like, here's what they might grow to, here's what liquid cooling requirements will look like down the road. But it's not just liquid cooling, it's things like networking, it's power delivery to the rack, which becomes very complicated inside of a data center. You're looking at 400 volt, 800 volt power delivery, sometimes at the rack level even, and that's becoming increasingly important. This market's being driven by such innovation at this point that we have to be able to build and grow with our customers in a way that's not harmful, in a way that doesn't force them to rip and replace expensive infrastructure. And those modular design approaches, those reference architectures that we've built with Lenovo have helped us to make sure that customers can grow on sort of their timeline and as their workloads begin to scale.

Daniel Newman:

And that's such a great point, because so much of the concern around the AI build-out is about the time between, say, GPU generations and server generations. You know, with CPUs, there was kind of a digestion period where you would upgrade, but it was a very organized process of going to the next generation. And with this, it's like, well, do I just rip out my Hopper-generation GPUs? Some people think we're just going to rip them out and throw them all away, but I think it still needs to be very orderly how this happens, and you guys need to work together on it. How do these companies that are making massive investments make sure that they're able to upgrade and offer the best, but at the same time decide which workloads get deprecated onto older equipment that's more than capable of handling them, so that we're not just tossing this stuff out and breaking the whole model? And I think some of what you're trying to do, Rick, is with your Digital Realty Innovation Lab. I think you guys call it DRIL. That's, you know, only semi-ironic.

Flynn Maloy:

High five on that.

Daniel Newman:

Semi-ironic in the energy-consuming era of AI. You call it DRIL. I don't know, we're in Texas. Drill, baby, drill. That's an acronym. Hey, I'm all for it. I love what's going on in Texas right now. But talk a little bit about the Innovation Lab and how it's helping, because some of it we just talked about with the timeline and upgrading, but it's also helping companies build POCs faster and then get them to scale faster.

Rick Moore:

Yeah, you know, we're excited about the DRIL. We launched it a few months ago with Lenovo and a few other partners that operate at various points in the stack. And to me, the DRIL is all about taking some of the fear out of AI deployments, reducing that risk so that customers can come in with real-world scenarios, using real infrastructure, real workloads, real challenges and constraints that they're going to face, and do that in a risk-free environment. Bring that into a laboratory so that they can start to run against synthetic or production data, whatever they choose. And then they have a path, because, for starters, something you said triggered a thought: the infrastructure roadmap has become critical. And if you're not planning that roadmap now, you're going to be caught behind the eight ball down the road. So the DRIL lab, and we've announced some expansions recently, we're going to take it to new locations on our footprint. That's really helped our customers take some of that risk out of deploying AI in a real-world environment, so they can benchmark, they can run network performance tests, they can put their hands on liquid-cooled infrastructure. Neptune is class-leading, and they can start to see what that looks like and feels like before they go out and make those significant investments in a production environment. And then there are considerations to make sure that the offload from a lab out into production is seamless, not something that is, as we said, rip and replace and difficult.

Flynn Maloy:

And that's back to your first question: what are those impediments to enterprises speeding up? Not having that level of de-risking, understanding, and touching before you do it. That's what's really slowed this down. So this really is an accelerator for everyone. I also want to mention the tech. The next generation of tech matters too. The first generation was about simultaneous upload of data to train your models. When inference comes, that's throughput, right? You look at the new generation of chips that are focused on inferencing, it's a very different set of tech. And so, can you use it for both? How are you going to use it for both? That's all part of the roadmap. Getting a sense of what you need to do, that's where you're able to put those labs into production.

Patrick Moorhead:

10 days ago, I did a CIO roundtable in London, and a lot of them brought up sustainability. And there are always two vectors of how people talk about sustainability. One is: it's good for the planet, and, oh, by the way, it saves a lot of money. The other vector is: hey, it saves us a lot of money, and, oh, by the way, it's good for the environment. But it's all very similar. And with energy costs going up, with the giant power-sucking sound of GPUs and TPUs, and with talk of full racks as the smallest unit of AI composability, Lenovo has the ability to figure this out for clients and give them a better glide path to meet their needs. Can you talk about maybe what Lenovo has done, but more importantly, what are you doing now to optimize sustainability?

Flynn Maloy:

At this show, two and a half days in, I've had a dozen-plus customer meetings just for me alone, and over and over again, that is the subject. Power is the bottleneck; it's a constraint. And for some of them, it's just straight up: we need to reduce our power footprint for our shareholders, for our commitment to our employees and our customers. Others are: I've got 250, I can't go over 250. How am I going to bring in that new AI stuff when I've got this power constraint? So being able to work with companies, we've got a number of tool sets, as does, by the way, Digital Realty, because it's not just the tech itself, it's the facility it's sitting in, where you can sit down and say, all right, how much capacity are you trying to get? We can get you 40 kilowatts back, and then you can either drive that with performance or you can bank it. It used to be, where's the data? Now it's, where's the power? That's what's driving so many conversations. So sustainability, to reduce energy as well as to increase your performance and address the needs of your business, is where technologies like Neptune liquid cooling come in. As you mentioned, it's industry-leading; we're in our sixth generation. You know, it's important for everyone to know liquid cooling is not all equal out in the world. We're proud of the fact that we drive 40% less energy. It's all copper, it uses warm water, not a chiller before the rack. It is the most sustainable solution out there, and it's proven. And if you want to de-risk that, that's a choice to lean into. We've got energy-optimizing modular designs of our tech. And then as well, partnering with Digital Realty, they've got fantastic tools to measure not just what's coming out of the technology and the computer, but how it's being used in the facility, how you return it to the facility.

Rick Moore:

What's the effect on the overall water supply, power consumption, those sorts of things. We really think that measuring that impact for infrastructure before you put it into production is critically important, so you can have an understanding from a sustainability perspective. But you brought up a really interesting point: that and operational efficiency are closely interrelated, and we believe you can't separate one from the other. Liquid cooling, by way of example, is incredibly efficient. It gives us a whole next generation of opportunities to make data centers more efficient. It comes with a unique set of challenges that we're investing in. We've built out over 140 data centers across our footprint that are liquid cooling enabled. That's by far the largest in the business, and we're aggressively investing to continue that growth. That's really important to us from a sustainability and an operational efficiency perspective, because that's the way our customers are thinking about it.

Flynn Maloy:

So if your data center or your partner isn't liquid-cooled, but you want the technology, that's where our sales teams are working together, our partner teams are working together, to deliver those solutions as quickly as we can for our customers.

Daniel Newman:

Yeah, you kind of almost got to the punch of what I was going to ask you to kind of wrap this all up. But basically, the higher density deployments are going to require a rethinking within the enterprise of how they're going to deploy it. It sounds to me like there's a role that co-location is going to play that's significant to basically enable enterprises to do this. I mean, what do you see? What's the next evolution that you see for co-location?

Flynn Maloy:

Well, I think, and Rick talked about it earlier, it's the modularity built into the designs of today, as well as getting ahead on energy, and getting ahead on where the power is going to come from in the first place. All of these things mean working with the colo community, where you're out there planning way ahead, road mapping, and then partnering with the technology. I mean, the DRIL isn't just to showcase to our customers, although it is that, but it's also a chance for our engineers and your engineers to work together to build that next-generation value prop for colocation plus OEM partnership, to deliver the AI solutions of tomorrow. So that's where it's going. And it's not just a drop and play. You've got to put the design teams and the engineering teams together in order to build out a better solution for the future.

Rick Moore:

Yeah, and that's an important point. To do it one time in one location, anybody can do that. To be able to stamp that, so to speak, or make that a repeatable reference architecture that customers around the world can deploy, that's a much more significant challenge. And that's why we're working together to build these designs and architectures inside of a data center so that the infrastructure is no longer a constraint. But in order to do that, if there's one headline from our perspective, it's you have to bring in that infrastructure provider early on in that process. Because there's such an incredible, and you referred to it earlier, such an incredible supply side conversation that's happening around the world right now for energy and for space in certain data proximate locations. And if you can't do that in multiple locations in a repeatable fashion for customers, then you're limiting and inhibiting their growth. And we don't want to do that.

Daniel Newman:

Turbines and concrete and, as I like to say, all hands on deck. Everything that can be made will be sold and will be implemented and will be built. It's an exciting time. It's a great time to be in the digital realty space, both literally and metaphorically, and a great time to be selling AI infrastructure to the world. Flynn, Rick, thank you both. Thanks, guys.

Rick Moore:

Thank you. I appreciate it.

Daniel Newman:

We are on the road here at Supercomputing 2025 in St. Louis, Missouri. Hit that subscribe button. Join us for all of our coverage here at the event. And of course, all the great coverage on The Six Five each and every week. But for this episode for Patrick Moorhead and myself, it's time to say goodbye. We'll see you all later.
