
Scaling the Next Wave of AI: Infrastructure Insights from Lenovo - Six Five On The Road

Vlad Rozanovich, VP of ISG Product Group at Lenovo, joins Patrick and Daniel to discuss infrastructure innovations, sustainable scaling, and operational breakthroughs as AI transforms the data center landscape.

How can enterprises scale AI and HPC infrastructure as performance, efficiency, and sustainability demands accelerate?

From Supercomputing 2025, hosts Patrick Moorhead and Daniel Newman are joined by Lenovo's Vlad Rozanovich, SVP of ISG, to dig into what it really takes to scale advanced AI environments. They unpack how organizations are redesigning their data centers for AI and HPC growth, the technology partnerships powering the next wave of innovation, and the operational, regulatory, and sustainability pressures shaping next-gen infrastructure strategies.

Key Takeaways Include:

🔹 Scaling AI + HPC at new levels: Lenovo’s work at SC25 highlights the integration of cutting-edge NVIDIA GPUs, liquid-cooling advancements, and deployment best practices tailored for high-density AI workloads.

🔹 Designing for real customer constraints: Power availability, cooling requirements, and accelerated time-to-value are driving more customized, collaborative infrastructure design with Lenovo’s global customer base.

🔹 Sustainability + compliance front-and-center: Lenovo is proactively addressing environmental and regulatory requirements as AI data centers expand—ensuring high-density compute can scale responsibly and globally.

🔹 Provable outcomes, not promises: Customers are seeing measurable improvements in performance, energy efficiency, and operational throughput thanks to Lenovo’s next-gen architectures and deployment models.

Learn more at Lenovo.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.


Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Patrick Moorhead:

The Six Five is On The Road here in St. Louis for the Supercomputing 2025 show. Dan, it's been a great show so far. And imagine that, a lot of talk about AI, but also staying true to a lot of compute, a lot of networking, and a lot of data.

Daniel Newman:

Yeah, it's buzzing here, Pat. I mean, Supercomputing has exploded over the last few years. We've seen it grow exponentially with AI. I mean, a city like St. Louis can barely handle this event anymore. They're probably gonna have to move this thing to Las Vegas pretty soon, because literally everyone on the planet has eyes on what's going on here with high-performance computing, because it is really the original. Yeah.

Patrick Moorhead:

Yeah, and there are challenges with scaling AI. I mean, there's nothing new about that. And one company that's front and center in helping its customers deal with that complexity is Lenovo. And we have brought back Vlad, great to see you. Thanks, Pat, you too. We're really interested in how you're helping your customers here.

Vlad Rozanovich:

Yeah, Pat, Dan, so good to see you guys and thanks for being here at Supercomputing with Lenovo. You know, this is an amazing show. We've been doing this show for over a dozen years. In fact, I've personally been doing this show for probably close to 15 years. And I remember the first time I came here, it was government national labs, and some research sites on the university side. And what you've seen, you've seen this amazing transformation of high-performance computing to AI, just like you guys said. But what's amazing for us here at this show, even as big as it is this year here in St. Louis, one of the things we're seeing is that there's a diversity of customers. We're seeing your traditional HPC shops doing things like physics codes and fluid dynamics and EDA. But now, on the other side, you're seeing these AI shops. And you see big CSPs and neoclouds actually using this show because of some of the techniques that have come from a supercomputing background, but then you're also seeing AI for scientific research, which is really a combination of both. Sure. And so it really is why we're here from a Lenovo standpoint: we want to service all three of those aspects of what I consider HPC-to-AI supercomputing, and there's no better place to do it.

Daniel Newman:

You've kind of, you know, indicated all the momentum here, and you've been here quite a while. But from a Lenovo standpoint, what is Lenovo's role in terms of enabling HPC and AI?

Vlad Rozanovich:

Yeah, Dan, there's a couple things that we see from a Lenovo standpoint. If I start with that neocloud space, you know, these are amazing corporations, companies like N-Scale and Boost Run that Lenovo has worked with. And they're coming into a new market, a new market where they are satisfying demand for supercomputing-like hardware with big capital reserves. And how do they look at some of these application stacks? A company like Boost Run will actually come to Lenovo and really do a full soup-to-nuts type of evaluation and deployment. They need help with services for deployment on-prem. They look at things like water cooling and how Lenovo can be there to install some of our warm-water liquid cooling for our Neptune products. They look to us to give recommendations, based on parameter sizes for maybe some of these large language models, on which GPUs they should be looking at. What's really important about that neocloud space is this time-to-first-token type of capability. And those types of companies are coming to Lenovo to ensure that when they're ready with their money, whether it's a CapEx or an OpEx type component, they have this time to first token: here is where we're going to execute, and here's where our customers are going to be able to start generating value. So that's one of the things that we're seeing in the neocloud space, which has been a really big focus for us. On the HPC side, hey, we've been doing Neptune liquid cooling for 10 years. 10 years. And so many of the customers, the institutions that I see here at Supercomputing, they're coming to Lenovo to really try to understand, hey, what are some of the new innovations? Because rack power density is going up, CPU power is going up, GPU power is going up, and they want to come to Lenovo to understand, how do I make sure I factor in all these components to get the right building block based on my power, my cooling, and the workloads I'm running?

Patrick Moorhead:

So on the neocloud side, is it as simple as saying it's the frontier models driving the demand, or is there something more on the double click? And maybe also comment on, you know, traditional HPC as well. What's driving all this demand?

Vlad Rozanovich:

Yeah, let me hit the first one first with traditional HPC. I think what's driving this demand right now is that, because of AI and how fast things are moving, there's a renewed focus in HPC to say, hey, we need to make sure we are ready to go: how do we perform more complex operations, how do we become more efficient in our data center, and how do we actually produce AI-like results, which has now been set as the standard through some of these AI-type workloads. Now, is that more of a cost per flop? Oh, it's always, it is. It's cost per flop per watt. Got it. And that's a really important part of that HPC equation, versus the neocloud equation, where that per-watt sometimes is a don't-care because of the way they're servicing their large language models and frontier models.
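Editor's aside: the "cost per flop per watt" figure of merit Vlad mentions can be sketched with back-of-the-envelope arithmetic. The numbers below are entirely hypothetical, purely to illustrate how the metric is computed, and are not figures from Lenovo or the episode:

```python
# Hypothetical system, for illustration only: $4M price tag,
# 10 PFLOP/s peak throughput, 800 kW power draw.
cost_usd = 4_000_000
peak_flops = 10e15   # 10 PFLOP/s
power_w = 800_000    # 800 kW

flops_per_watt = peak_flops / power_w   # energy efficiency
cost_per_flops = cost_usd / peak_flops  # capital cost per FLOP/s

print(f"{flops_per_watt / 1e9:.1f} GFLOP/s per watt")   # 12.5
print(f"${cost_per_flops * 1e12:.0f} per TFLOP/s peak")  # $400
```

The HPC buyer optimizes both ratios at once; the neocloud buyer, as Vlad notes, often treats the per-watt term as secondary to raw time to first token.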

Patrick Moorhead:

And is it more about tops in the Neo clouds?

Vlad Rozanovich:

I think it's more about, I mentioned it before, time to first token. And so it really is, it's that total performance as fast as you can get it based off of the new silicon that's coming out of people like NVIDIA and AMD.

Daniel Newman:

So in terms of, you know, your particular infrastructure, right? I mean, here we are, the show floor is filled with companies that are also in many cases trying to say they offer something similar. What is unique about what Lenovo is doing that's basically driving choice and making Lenovo the pick to help with AI infrastructure?

Vlad Rozanovich:

Well, Dan, here's one of the things we see, right? The permutations in the AI space and the high-performance computing space are growing by the day. You have x86 CPUs from Intel and AMD, and you have Arm CPUs. Now you have GPU components from AMD and NVIDIA. Now you start seeing accelerators compounded on top of that, and NPUs and TPUs. And what's interesting is that so many of these customers here are trying to figure out what they want to do, but they can't evaluate everything. So what is unique from a Lenovo perspective is that we're trying to guide customers, based on their workload and what their parameter sizes may be, on here is what you should be looking at. Now, the other thing we give them, it really is this edge-to-cloud capacity that Lenovo has had. We've had so many deployments of things like AI at the edge, with customers like Kroger doing machine vision for theft detection, running with some of our unique AI Innovators ISV partners, all the way to giant eight-way racks that are operating at 150 kilowatts and moving up. And so the expertise that we give customers, Dan, it's exactly that. It's the edge AI, the edge HPC, all the way up to these really monster kinds of AI factories.

Patrick Moorhead:

I really love the Kroger example. And I do appreciate it, listen, I think we all love the tech. I want to give these DLC pipes a big hug. You know, I mean, like maybe get a picture of it. We can do that. But in the end, it's about driving real results. And that's either in the scientific world, the enterprise world, or whatever new world, the physical world at some point. So talk about some of those installations and some of the downstream benefits that they're seeing.

Vlad Rozanovich:

Yeah, no, Pat. And first of all, I'd love to see you both give, you know, those liquid cooling pipes a hug. It may be on Dan's Only Dans page, but I'm glad that's coming out. Thank you. Yeah. See that. And, you know, he's got the drip on the shoes, too. So if you haven't seen these shoes, you know, if you want to see more of them, go to his. I mean, when you don't have the content, you have to go with the style. I think he's got it. He's got it. Pat, one of the things I'll tell you is that when I start looking at how we are helping guide customers today, the amount of power capacity that is required for some of these AI data centers or high-performance compute data centers, I have had 30 meetings just in the last two days. And some of the conversation is, hey, how much power do you have coming into your data center? What is your cooling infrastructure for that data center? And then most importantly, what are your networking and storage requirements for that data center as well? And so this is somewhere where, from a Lenovo perspective, we will help those customers to say, let's define the workload, let's define the business outcome, let's understand, are you using a colo like Digital Realty? Or are you putting it on-prem for latency, sovereignty, or in some cases cost reasons? And if you're putting it on-prem, that's where Lenovo could come in and really do an entire hybrid cloud analysis and an installation service to really help them along the way. I love it. Hybrid cloud analysis, that's cool.

Daniel Newman:

Hugging pipes and clouds with great drip.

Patrick Moorhead:

They come together.

Daniel Newman:

I love it. Do you have teenagers? I do. Okay. The word drip, it's like, that's such a tell. Like, there's no way that was your work, right? I'm there. I have them too. Well, actually now they're all in their twenties. Anyways, getting old, Vlad. I know, I know. So with all this going on, another thing is there's a lot of attention being taken. You started to kind of talk about, you know, cooling and sustainability a little bit. Regulation is going to start to creep up. When you've got this many gigawatts, people are concerned about the grid, the capacity here in the U.S. and everywhere else. They're concerned about rising electricity rates for people near these data centers. So how can Lenovo help ensure that AI compute continues to scale, we get all the economic benefits, but we do it sustainably and within a regulatory framework?

Vlad Rozanovich:

Yeah. And Dan, you're hearing some crazy things in the industry today. You're hearing about micro nuclear deployments because the power grids are getting taxed. You're also seeing a lot of, hey, I don't want that data center in my backyard either, right? That's been hitting the headlines across the US and across other countries around the world. You know, we see interesting regulatory things, you know, old Bitcoin miners up in the Nordics that have power, capacity, space, cooling. And those are the ones that are taking advantage of their environmental conditions, which is good. It's good that they're actually utilizing that type of environment. What we also see is, how do we help a customer measure their PUE? How do we help a customer look at things like, instead of cold water intake on a water cooling solution like some of our competitors do, what Lenovo does is we actually use warm water. This way you don't have to chill it on the front side, which is actually using more electricity. So if you don't need to use the electricity to chill it, and you can actually run it with a warm water loop, it saves on your overall energy efficiency. We're also using copper piping across all of our tubes. Some of our competitors will use PVC, which has a factor for potential cracking over time. And the last thing you want is a liquid event in your data center. So this is something where sustainability, efficiency, liquid cooling, either direct open loop to chip or full-scale rack as we're producing in some of our Neptune N1380 chassis. Those are some of the areas where we're helping customers look at sustainability and efficiency across their compute deployments. Yeah, it's a big topic.
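Editor's aside: the PUE metric Vlad references is power usage effectiveness, defined as total facility power divided by IT equipment power. A minimal sketch of the arithmetic, with hypothetical numbers (not Lenovo or customer figures) to show why removing the chiller load lowers the ratio:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 is the theoretical ideal; chillers, pumps, fans, and power
    conversion losses all push it higher."""
    return total_facility_kw / it_equipment_kw

# Hypothetical comparison: a chilled-water plant carries the extra
# chiller load on top of the IT load, while a warm-water loop avoids it.
chilled = pue(total_facility_kw=1500.0, it_equipment_kw=1000.0)
warm = pue(total_facility_kw=1150.0, it_equipment_kw=1000.0)
print(f"chilled-water PUE: {chilled:.2f}")  # 1.50
print(f"warm-water PUE: {warm:.2f}")        # 1.15
```

Same IT load in both cases; only the facility overhead, and therefore the PUE, changes.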

Daniel Newman:

It's one that we're certainly going to be watching very closely, Pat. In the end, energy does seem to hold the key. I mean, every day there's kind of a new, what is the real threshold? Is it chips? Is it the ODMs and how fast you can build servers? Is it the amount of energy available? Is it the regulatory environment? Is it security and compliance? So we've got a lot of risk. One thing, you know, just for fun here at the end, Vlad, you know, Pat and I get asked this all the time. There's a certain company, a close partner of yours, that's going to report earnings today while we're here. And there's a lot of this kind of, oh, AI is a bubble. You're out every day in sales. Do any of the conversations you have insinuate to you that this demand and this enthusiasm isn't real?

Vlad Rozanovich:

You know what, I'll tell you, Dan, I meet with a lot of customers, a lot of CIOs, hyperscalers, neoclouds. Enterprises. Enterprises. Government. And both here at Supercomputing and other events where I'm meeting some of these customers, I've asked the question, hey, are we in a bubble? And I have had so many people come back to me and say, Vlad, I wake up in the morning thinking this is a bubble, but then I look at the amount of compute that is driving some of these new requirements in the business sense. We have just hit the large language model training part of this AI transition. What happens next is, how do we look at enterprise AI? How do we look at HPC AI? That is going to drive a new increase in opportunity and usage. And some of those customers who I ask, is it a bubble? They say, every morning we think it might be a bubble. And they go to bed saying, I need to build 3x the capacity of what I planned for. And that has not slowed down. What we're also seeing is, customers are signing three-, four-year commitments to this product, to data centers. People are building data centers. They're going to fill them with something. And so I don't think this is a bubble, because as soon as we get to the point of actually utilizing this for inference, boy, I think you're actually going to see even another spike before a plateau. Inference is pervasive.

Daniel Newman:

It's always on. It's the data. You know, we were at an event yesterday where they said like 90% of the enterprise data is still behind the firewall. And what I've heard another CEO say is something along the lines of 99% of enterprise data hasn't touched AI yet. And so we're in the earliest innings. You heard it here. Vlad said it. I didn't say it. I say it all the time. Not a bubble. Pat says maybe a bubble. I'm just kidding. I don't actually know. Well, you have to check out another episode of Six Five. Pat and I debate that pretty regularly. But Vlad, a lot of fun sitting here at Supercomputing, chatting with you. Thanks. It's always great to see you guys. I love what you do.

Patrick Moorhead:

Thank you.

Daniel Newman:

Thank you. And thank you everybody for being part of this episode of The Six Five on the Road. We are here at Supercomputing 2025 in St. Louis. This place is jammed wall to wall. Subscribe, be part of all of our content here at The Six Five. But for this episode, it's time to say goodbye. See you all later.
