Case Studies in Action: How Real-World Enterprises Are Deploying Hybrid AI Today - Six Five On The Road

Robert Daigle of Lenovo joins Patrick Moorhead and Daniel Newman on Six Five On The Road to discuss how enterprises are deploying hybrid AI in production and what determines whether those initiatives continue to scale.

Hybrid AI isn’t theoretical anymore. It’s running inside enterprises and delivering value.

From Lenovo Tech World in Las Vegas, Six Five On The Road shifts the focus on Hybrid AI from promises to proof. Patrick Moorhead and Daniel Newman are joined by Robert Daigle, Director of Strategic Product Management, ISG at Lenovo, to look at how large enterprises are actually moving AI from pilots into production.

Rather than speculating about what AI could do, the session focuses on what enterprises are already putting into production. Robert points to where AI is delivering measurable value today, which use cases continue to scale, and where organizations still struggle when moving from pilots to live operations. As deployments spread across edge, data center, and cloud environments, architectural and operational choices emerge as the factors shaping performance, reliability, and time-to-value.

Key Takeaways Include:

🔷 Production AI is already delivering value: Enterprises are seeing results where AI is tightly aligned to specific business outcomes, not broad experimentation.

🔷 Moving beyond pilots requires clarity: Organizations stall when objectives, ownership, and operating models are not clearly defined from the start.

🔷 Workload placement drives performance: Decisions across edge, data center, and cloud materially affect latency, cost, and reliability at scale.

🔷 Architecture choices compound over time: Early infrastructure decisions have an outsized impact on speed to value and long-term scalability.

🔷 Momentum is an organizational challenge: Teams that align technology, economics, and operations are better positioned to keep AI initiatives moving forward.

Learn more at Lenovo.

Watch the full video at sixfivemedia.com and be sure to subscribe to our YouTube channel so you never miss an episode.


Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Patrick Moorhead:
The Six Five is On The Road at Lenovo Tech World at the iconic Sphere in Las Vegas. You can see and feel the action as everybody comes in before the big tent event headlined by YY, Lenovo's CEO.

Daniel Newman: 

Yeah, it's going to be a big one here, Pat, and that's why there were lines pretty much wrapped around the Sphere. Yes. All the way from the tip of the Sphere to the Sphere.

Patrick Moorhead: 

I mean, we barely got in for the pregame.

Daniel Newman: 

Not going to lie, I did have to pull my celebrity card out. There we go. And to get us in. And it was a good thing I was with you, because they didn't have any idea who I was.

Patrick Moorhead: 

You didn't have to show the concepts or anything.

Daniel Newman: 

I didn't do any of that. But yeah, it's going to be a big one. We're going to, you know, we've got to be expecting a lot. Because like you said, YY, CEO of Lenovo, is kicking it off. But, you know, we got Jensen. We got Lisa. We got Cristiano. We got Lip-Bu. We got, you know, CEO, CEO, CEO, CEO of some of the biggest chip companies, now system companies, software. What are they all? AI companies on the planet.

Patrick Moorhead: 

Yeah, so obviously AI is this tremendous force all the way from pocket to cloud and everything in between. And you know, Daniel, as analysts, we like to talk about the ethereal nature of it. But in the end, the people who have to deploy hybrid AI actually have to get the work done and turn vision into plans into reality. And here we have Robert from Lenovo to chat about that. Robert, welcome back to the show.

Robert Daigle: 

Thank you so much for having me back on. I really appreciate being here and talking about my favorite topic of artificial intelligence. Yes. It's what I've lived and breathed for the past decade of my life. It's amazing. You were in it before it was cool. I was. I was doing AI before AI was a buzzword, before it was a household name. Wait, wait, wait. You said a decade? Over a decade. Yeah.

Daniel Newman: 

So back in the old machine learning days. Yeah. Wasn't AI around even when you were a kid? Yeah, 1960s. Yeah. I mean you were like 20? Yeah, like when Pattern 20. Stop it, stop it. Anyways, but yeah.

Robert Daigle: 

You know, some of the first algorithms were actually designed back in the 50s, before the first personal computer ever fired up. There we go. So AI has been around for a long time, but machine learning really took off about a decade ago, and over the past few years, of course, everyone's focused on generative AI.

Daniel Newman: So the fact that it's been around 70 years now makes it really concerning that some of these pilots are not fully deployed yet. No, in all seriousness, though, let's talk about that. You're paying a lot of attention working with the customers. What's delivering value? What are the use cases that you're seeing go into production and deliver value to customers? On the other hand, where are you seeing things fall short and not moving beyond pilots?

Robert Daigle: 

This is a really interesting topic, and we get firsthand experience because my team's the one that's actually working through proofs of concept and deployments with customers and helping them along this journey. And what's really interesting, there's some low-hanging fruit that we see driving mass adoption. Things like code development are being completely transformed today with generative and agentic AI. I mean, we've heard of vibe coding. That is the future of code development, and people are moving at an extremely fast pace to adopt it. And that's where you see the fear of missing out coming into play, whether you're doing code development, customer service support, or call center transformation. All of these are prominent use cases, even marketing content generation. These are some of the most common use cases, and these are the ones where people have a fear of missing out, that if we don't adopt, we're going to be left behind. So I think that's the low-hanging fruit that we see a lot of customers starting with. And then I think where they struggle is really in one of two places. The first, and the biggest one, is scale. They get a good POC working, they get a really cool demo or use case that they can showcase that provides a lot of value, and then they just have a failure to scale. There are a number of reasons why that I'll unpack, whether it's doing the right change management, kind of the people aspect of it, or the data aspect of it. Data readiness is a big part of that. Data management is a big part of that. And I think those are probably the two biggest challenges, along with not picking the right infrastructure architecture. And a lot of what we're seeing today is people shifting to a hybrid approach to AI adoption. So those are some of the biggest reasons we see people having a failure to launch or a failure to scale their AI initiatives.

Patrick Moorhead: 

Yeah, so what is it over the past couple of years, right, that has really accelerated the experiments, the POC, to actually scaling it? Is it experience? Is it new technology? Is it everybody had to go through a try it and fail? What is moving this ahead so quickly here?

Robert Daigle: Well, I think going back to the fear of being left behind, I think that's one of the big driving factors. For a lot of organizations, if you had a big code development team before, and now you're able to improve their efficiency by 50%, which is what we're seeing realistically from some of these use cases, that's a huge advantage that you have in the market. It has real implications on how quickly you're able to develop your applications, or a startup could come in with half the size of your team and be able to provide a new rival offering. So that competition element, I think, is one of the biggest drivers of why we have to speed up adoption of AI. And going back to what I said about some of the challenges, the companies that are putting the right infrastructure in place, they're building the right foundation: their data management strategy, the right infrastructure, a hybrid strategy. Those are the ones that we're seeing be successful, being able to go from POC to production really quickly and to create this factory of iterating new AI initiatives. So they get the first one up and going, then work on the second one, third one, fourth one. And they can do that if they build the right foundation.

Daniel Newman: 

Another thing that's rate limiting, too, is workload placement. By the way, not a new thing with AI. It was a thing with cloud 1.0. But right, where do we run the workload? And by the way, maybe it's partially at the edge, partially in the cloud. But you're talking to the enterprise. How are they weighing that? Because there's cost, there's availability, there's scale, there's security, there's sovereignty. All these things are issues. How are they picking the best place to run the workload? It's a new spin on an old problem with AI.

Robert Daigle: 

It's not a new problem, but it is new with AI. And I think one of the things that we've seen with a lot of our customers is the question of where they're going to deploy that AI solution. You named a lot of the key points. I would say the biggest one is around data sovereignty. If you tell them, hey, you've got to put all of your AI in the cloud, that means I've got to move all of my data to that environment. And that's where the hybrid approach is really taking off, because it's not all going to land in one spot. And we see this play out, and I really come back to data gravity being the driving factor. Where does your data exist today? If it all exists in the cloud, it's probably easiest to put your AI solutions and your AI infrastructure in the cloud. If it's already being generated at the edge and you're doing real-time analytics, then you probably want to put the compute closer because of the economics behind it, or the reliability, or the latency. And then the other place is in the data center. A significant portion of enterprise data still sits in the data center, as you know, and so we see a lot of customers deploying AI in their data centers, and what we see as the limiting factor is how quickly they can get some of these POCs to scale and move from experimentation to large-scale inferencing.

Patrick Moorhead: 

So if you look across your most successful customer AI deployments, what were some of the infrastructure or architectural choices they made that really optimized for speed, reliability, and, I'll call it, time to first token?

Robert Daigle: Yeah, it's interesting. Time to first token is becoming more and more part of the language that even enterprise customers are using when they're talking about AI, which is really exciting to see, because that shows there's a maturity in their adoption. We used to hear this from hyperscalers that were training LLMs and running large-scale inference, and now we're starting to hear enterprises use that language. But time to first token is not just about when I put a prompt in and how quickly I get it back. It also goes back to when I can get my first AI solution up and running. How long is it going to take me to get that first one up and running? And so what we've been doing with customers is packaging these up into more turnkey offerings that are easy to implement, easy to get up and running, where we've solved a lot of the architectural challenges and the testing, so they don't have to worry about the infrastructure. We've got that figured out. We've got the hardware, the infrastructure, and the management plane figured out. Really, what our customers have to bring is their data and their subject matter expertise to the table, and we can help them with the rest.

Patrick Moorhead: 

Interesting.

Daniel Newman: 

So, we joke about pilots and getting things to scale, but we also aren't joking. It really is a challenge, and for what, two years, I feel like we've had some of these similar conversations here on the pod: all right, let's get it from pilot to deployed at scale. We're still here today, in 2026. Remember, the ChatGPT moment was 2022, and that's when we started talking about this getting into the enterprise. Four years. So everything's happening really fast. But at the same time, a lot of stuff still looks the same. A lot of stuff has not actually truly been deployed. A lot of the ways we interact with applications, still very similar. A lot of tools we use in our business, still the same tools. Maybe they have an agent layered into them at this point, but what do you see as separating the wheat from the chaff? What will the enterprises that succeed do, and what will the enterprises that get stuck do? How are you dividing the haves and have-nots in the enterprise AI future?

Robert Daigle:

I also think it's an evolution, because what we saw in 2023 was an explosion of experimentation. It was really the Wild West. And I think a lot of people wanted to go out and do something so they could tell their board, tell their executives, that we're doing something with AI. And it was good experimentation. I'm not saying it wasn't the right thing to do, but I think there's a maturity curve that has to come behind it. So a lot of people went out, they bought licenses to something like ChatGPT or Copilot, and they said, we checked the box on generative AI. But they didn't back it up with, what's the return on investment to our business, and how do we build that into a business case that we can continue to invest in? So if you ask them, what's the value it's brought? Well, it's been great, but we don't know what the actual measurable value is. And that's where we're getting started on some specific use cases, like the code development use case, or a call center support use case, and actually having a before and after and saying, this is the return it's driven. And then building that right foundation, a hybrid AI infrastructure, where we can get you started wherever your data resides, whether it's in the cloud, in the core data center, or out at the edge. And then also help you with the change management, because that's a huge piece of it that I think a lot of people underestimate. You know, going back to what you said earlier, Dan, some of the things that are old are new again, and change management is still a challenge, right? The people aspect, getting people to adopt these new tools and capabilities, is still part of the challenge. So it's, you know, old digital transformation all over again as part of the journey that we'll have to go on for AI.

Patrick Moorhead: 

Your favorite term.

Daniel Newman: Well, Robert, I want to thank you so much for joining us here. We are less than 20 minutes from the keynote. It's been a blast chatting with you. Let's do it again sometime soon. I look forward to it, and thank you so much for having me. Thanks. And thank you, everybody, so much for being part of this Lenovo Tech World pregame at the iconic Sphere. Thanks so much, everybody, for tuning in. It's been a great time having you. We hope to see you all soon.
