Intel’s Telco Commitment: AI in the Network and the Path to 6G
At MWC 2026, Intel’s Kevork Kechichian and Cristina Rodriguez join Patrick Moorhead and Daniel Newman to discuss Intel’s renewed telco commitment, the evolving role of CPU in AI-driven networks, and how operators can prepare for 6G without forcing a hardware reset.
At Mobile World Congress 2026 in Barcelona, Intel is demonstrating that it’s all-in on telco.
Six Five On The Road hosts Patrick Moorhead and Daniel Newman sit down with Kevork Kechichian, EVP & GM of Intel’s Data Center Group, and Cristina Rodriguez, VP & GM of Network and Edge at Intel, to discuss Intel’s recommitment to the telco sector, the evolving role of AI in the network, and what “6G ready” really means.
With inference and agentic workloads shifting AI conversations beyond GPUs, Intel is emphasizing the value of the CPU in delivering the right compute for the right workload across core, RAN, and edge.
Conversation highlights include:
🔹 Why integrating NEX back into DCG signals Intel’s long-term telco commitment
🔹 How inference and agentic AI are increasing CPU affinity in telco environments
🔹 Why AI in the RAN is not CPU vs GPU, but right compute for right workload
🔹 Real-world momentum with Xeon 6 across core, RAN, and edge collaborations
🔹 What “6G ready” means without forcing a hardware reset
🔹 How open platforms, power efficiency, and software evolution define the path forward
As operators look to monetize 5G while preparing for 6G, efficient scalability, power optimization, and open ecosystems are becoming foundational.
Watch the full conversation at sixfivemedia.com and subscribe to our YouTube channel for more insights from MWC 2026.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Kevork Kechichian:
We're showcasing what can be done with all Intel assets and Xeon being the cornerstone of it. There's lots of aspects of inference that don't need the GPUs or maybe 10% of it is GPU centric, the rest is all CPU based and we're demonstrating that over and over again.
Patrick Moorhead:
The Six Five is On The Road here in Barcelona. We're at Mobile World Congress 2026. Daniel, it's a good show. A lot of 6G, a lot of AI, and a lot of endpoints.
Daniel Newman:
Yeah, well, you know, it's always great to get the events going here. It's one of the big ones of the year. We know that AI is always going to be top of everyone's mind, but it's a little different here because, you know, there's a focus, there's the edge, there's automotive, there's, you know, operators and telcos. And this is still a really big part of how we deliver all these AI services to the edge. So you can see the energy, the place filled up really quickly again, and I expect the rest of the show to be a good one.
Patrick Moorhead:
Yeah, it's my 12th or 13th year, I lost track, but the core conversations remain the same, which is how do operators evolve their business models. I feel like we were having the same conversation 10 years ago about delivering more than the pipes. And the discussion here is all about 6G and the AI properties and how they can transform their business model. One of the key suppliers to this industry is Intel. And they take many different forms all the way from the edge, and many forms of the edge, all the way into the RAN. And it is my distinct pleasure to have on the Six Five, Cristina and Kevork from Intel. Great to see you. Thank you. Great to see you, too. Look, we had to come all the way from Austin. We all live in Austin, surprisingly, to Barcelona to see each other. Great to be here.
Kevork Kechichian:
That's how it works, right?
Cristina Rodriguez:
Great to be here!
Kevork Kechichian:
That's how it works. You have to travel to meet friends and colleagues.
Daniel Newman:
It's always great. Well, you know, when we're home, and it's probably rare for this entire contingent here, when we're home, we're home. You don't want to see work faces. Yeah, of course. Unless it's a random fire drill in a parking lot. Isn't there a story there?
Kevork Kechichian:
There's a story there.
Daniel Newman:
You all won't get it. Listen, Kevork, I want to start off with you. There's been a lot of change at Intel. I know you're pretty new in the role. You brought NEX back inside of the data center group. Talk a little bit about the change that's going on, why you made that decision, and how it's really showing your commitment to this space, to the operators. Yeah.
Kevork Kechichian:
Well, almost to the day, it's my six-month anniversary. Congratulations. Almost, did I say? Well, it started on the 8th of September, so a few more days, right, by end of week. One thing that attracted me to Intel, I mean, obviously, lots of things, and there's the Lip-Bu factor, right? But it's how many assets Intel has, right, plus the foundry. And I've been a product guy my whole career. And I knew how to put products together. And you look at the assets, and you say, my god, right? There's so many assets. So in the first couple of weeks I was in, I was looking at the NEX business. And I was questioning some of the decisions around divestiture. And obviously, there's a context to everything, right? Things don't happen in a vacuum. But when I looked at the assets, there was lots of stuff we could use within Xeon, at the board level, the rack level. So my recommendation was, how about we give it another chance? And that's how it started, a conversation with Lip-Bu. And of course, Cristina and the crew were always for staying within. And that's why we said, OK, let's keep NEX, and there's lots of synergy between Xeon, the core business, and NEX from a product perspective. And then eventually the transition also happened that AI became part of the business unit, almost like full circle. We hired a couple of good people, Eric from Qualcomm and Nick Dubay from Arm, previously HP. So we're reinforcing that message from edge all the way to the accelerators, and we can talk about that. But mostly the attraction for me was really the number of assets that we can put together. So that was the greatest selling point for me to join Intel.
Patrick Moorhead:
So Kevork, it seems like for the last three years, it's been 24 by 7 by 365 AI, been great for analysts, been great for tech companies. Most of the conversation, though, has been all about GPUs, GPU training, and even inference. And it does make sense, given the investments there. But the reality is, there was still a lot of work being done on CPUs that not a lot of people would talk about. But just recently, CPUs became cool again for data center AI. Can you talk about why this happened, what happened, and why it's even relevant here at Mobile World Congress? That's right.
Kevork Kechichian:
So we're seeing lots of shifts. Actually, I mean, the CPUs for AI were always there. We're just seeing a shift in the rhetoric. People are talking about it again for various reasons, right? The inference workloads, the agentic workloads are becoming more and more forefront. I mean, there's only a few companies that are doing frontier kind of model training, and that's where you need a heavy GPU. But most of the workloads we're seeing right now, they don't need GPUs. It's about data movement. It's about virtualization. It's about, we talked about confidential compute. So all of that is perfectly fine on CPUs. So what we're trying to advertise and emphasize is really the right processing engine for the right workload. And workload is the key factor here. We're trying to go to a solution kind of a space where we're showcasing what can be done with all Intel assets, and Xeon being the cornerstone of it. There's lots of aspects of inference that don't need the GPUs, or maybe 10% of it is GPU-centric, the rest is all CPU-based. And we're demonstrating that over and over again. That's why we are very excited about this new wave of thinking. We're also seeing other areas where the ratio of GPU to CPU deployment is becoming much higher than traditional. And people are acknowledging that, right? And, you know, at the end of the day, the GPU deployments are very power-hungry, very CapEx intensive. And we have a very wide base of Xeon deployed, and we're encouraging people and showcasing them how to take advantage of all of that.
Daniel Newman:
Makes sense. So, Cristina, welcome to the show.
Cristina Rodriguez:
Great to be here. This is fun.
Daniel Newman:
So we're here at Mobile World and Mobile World is very focused on operators. You know, when they're bringing AI to the edge and their constraints are different than the cloud data centers. How are they prioritizing and how are you helping them to be able to handle what their needs are to really deliver AI to their business, to their customers?
Cristina Rodriguez:
I'm glad you said that, Dan, because there are very specific requirements when it comes to the RAN. So you have to be careful what kind of tools you use when you want to deploy AI workloads in the network, and especially in the RAN, of course. We're going to do that. Of course, it's going to be a very important part of today's network and the future. If you look at what the operators are looking at, what is realistically top of mind, what can you do there? We call it, in general, AI for network and network for AI. And that's a very useful way to look at the problem. But a little bit more in detail, what do they want to do? There's two sides from my perspective, two aspects. One, how do you make the radio algorithms more efficient? How do you get more spectral efficiency? How do you do better link adaptation and channel estimation? The radio algorithms that we thought were super handcrafted and really good, can they be made a little bit better with AI? And that is happening. We have here, because we're at MWC 26, our booth. We have really cool stuff from Ericsson, from Samsung, doing exactly that with our solution. And the other side is, you use AI for making the infrastructure better. So now we're talking about how do you have better power management. How do you reduce the power consumption? How do you do predictive maintenance? How do you move into automation? So those aspects are very important, very meaningful for the operators. And then from there, how can you use those slicing capabilities to monetize and to offer more services?
Patrick Moorhead:
I like the way you put that AI for network and network for AI, kind of reflexive. And someday we're going to have AI making networking and networking making AI. Does that make sense?
Kevork Kechichian:
Sci-fi kind of territory. It's a bit scary, but yeah, why not? Especially with the agentic kind of… Bots becoming independent and talking to each other. You never know where it's going to lead us.
Daniel Newman:
We give the agents agency.
Kevork Kechichian:
Agents agency, yeah.
Patrick Moorhead:
That's good, yeah. Open claw for carriers. Here we go. Hey, we've been talking at a very good high level here strategically, but I want to talk about some specific things you're showing off here at the show. And Cristina, I'll start with you. You're showing collaborations across RAN, edge, and the core. Talk about some of the real-world examples and the true benefits this is bringing to the business.
Cristina Rodriguez:
Yeah, no, absolutely. And one of our propositions, right, to the industry and to the world is having the same platform that goes from core to RAN to edge. Now imagine for an operator and for our customers what it means, the lower complexity that brings when you can use the same platform. But on top of that, it's an open platform that millions of programmers know, where you have many software vendors writing applications and bringing innovation into that platform. It's super powerful. And you can see that, for example, when you talk about power management. This is important for operators. And you see in our booth, we have examples at the core network. We're doing that with the Ericssons and Nokias of the world, bringing down the power, and you see the demos here. The same principle applies in the RAN. We have capabilities in our Xeon processor that allow C-states, putting cores to sleep and then bringing them back to life very quickly, with very low latency. This is really meaningful. You're talking about benefits for the operator, power reduction, right there, as an example.
Patrick Moorhead:
That makes sense. Kevork, how about you?
Kevork Kechichian:
Yeah, I mean, just to build on top of that, there's lots of goodness within the Xeon architecture, right? Whether you're talking about the compression engines, encryption engines, confidential compute, trusted domains. All of that is kind of freeing up bandwidth from the CPU to do other tasks. But we're also seeing different use cases, right, with the high-density cores on Clearwater Forest, and with the fixed form factor of some of our customers. They have to fit a higher compute density within a given space, a given power budget, a given physical form factor, and we're seeing lots of improvement there. With Ericsson, for instance, those are their numbers with Clearwater Forest compared to the previous generation: we're seeing 30% on performance, 60% on power reduction. Those are all goodness that our customers are harvesting, right? Not compromising on quality and, like Cristina said, keeping the software base and the deployment consistent, right? So we're trying to showcase and accelerate that type of capability that, in some cases, unsurprisingly, people weren't aware of. I mean, some customers are, but we see a wide spectrum.
Daniel Newman:
So, Kevork, I'm going to hit you both on this one. We'll have a little fun here to end this. All right. 6G. 6G, yes. It's the thing. Everyone's talking about it. Probably really a full deployment in the next decade, right? But it's happening now. That's right. We had the same sort of wave with 5G. I remember doing 5G readiness assessments in 2015, 2016, and realized it took some time. I remember those days. Yes, yes. And here's the thing. With 5G, we're in 2026, in case anybody didn't know what year this is, and we're still trying to help the operators really monetize. They're still figuring it out. They've invested a lot in the infrastructure. Now we're saying, here's 6G. How do we help? How does the industry drive forward to 6G without forcing massive hardware overhauls, while enabling the operators to continue to be profitable and successful with this next wave of technology?
Kevork Kechichian:
I mean, when 5G started, the promise was low latency, high density. All of that started seeing fruition almost immediately, right? People in development knew it, right? And there were tons of applications there, especially on the low-latency aspect of it. And the deployment, like you said, is very expensive; it takes a decade for people to harvest that. Now, we're trying to make sure that we're delivering more capabilities within the 5G framework, with the deployed base. Now, 6G. I mean, 6G is born within AI, whichever paradigm you want to call it. And again, we're seeing lots of ways to help the operators run their systems, right? So I was talking about AI just to optimize the whole system, and Cristina mentioned that prior. But also, we're going to start looking at services that run on 6G from a consumer perspective, right? Just the things you can do there in a better and more efficient way. So it's going to be an interesting ride from our perspective. It's still an exploration for us, to partner with the operators and figure out how we can help on both sides of the spectrum: the AI part of their operation, but also the consumer side, whether it's subscription models or different things that can run on there.
Cristina Rodriguez:
Yeah, I totally agree with Kevork. With 6G, we have to help the operators to monetize the network. We all know that nobody was super happy about the monetization of 5G. So with 6G, everybody is thinking about that. And there's two aspects. Number one, the network has to be better optimized than before, so the cost comes down for the operators. So you have to be careful, as an operator, what kind of tools you use, what kind of deployment you're going to have, what kind of architecture you're going to have. It has to be an architecture that makes sense economically and financially, and that's where we bring our products from the cost point of view. We believe that having Xeon products underpinning that architecture will give them that advantage, as opposed to having to use more expensive, more power-consuming, or more complex solutions. So that's number one. That will give them the capability to use things like, maybe we go back to deploying radio slicing in a more end-to-end kind of solution, which allows them to have different SLAs and new services that they can monetize. And then there's AI, right? AI is going to be very important. We didn't have AI before. But with AI, you can do the radio slicing better. You can have more services depending on the consumers, the businesses. So that's another aspect. We'll have the tools there. And then finally, I'm going to say something that I expect to see: more on the edge side, both telco and enterprise edge. We have a couple of demos here, by the way, that you should check out on the edge. One with a Cisco Unified Edge platform, the other one with a Dell XR8000 platform. And this is bringing the network and the connectivity to the edge. You can have enterprise vertical functions with connectivity, with network functions, all done on-prem. So really, you guys should check it out.
Daniel Newman:
So neither of you said robots. I felt like you were going to have to say robots somewhere here.
Cristina Rodriguez:
Well yeah, no, of course …
Daniel Newman:
I'm just playing, but 6G really should be the enabler, right, to bring that type of capacity and persistent connectivity. You're going to need to be low power, so a lot of the designs are going to have to be very thoughtful about what kind of networking they put in these devices, vehicles. All the things that people are sort of starting to see emerge, 6G is going to really be the…
Cristina Rodriguez:
Yeah, and the ISAC, the Integrated Sensing and Communication, that's one of the main requirements of 6G that we believe we can enable with our solutions.
Daniel Newman:
Yeah, no, absolutely. Well, Cristina, Kevork, I want to thank you both so much. A lot of fun sitting down, chatting here at Mobile World Congress, and I look forward to checking in later. And by the way, we don't have to come to Barcelona to do it. Not that we don't want to, but we can do it right in town. Yeah, we can do that. Pat has a great space where we can film, right at his condo. All right, everybody, well, thank you so much for being part of this episode of The Six Five. We are on the road here at Mobile World Congress 2026 in Barcelona, not in Austin. We appreciate you checking out all our content here at the event, and of course, all of our Six Five content. Be part of our community, subscribe. But for this episode, we have to say goodbye. See you all later.