
From Cloud to Concrete: Making AI Work in the Physical World with NXP CEO - Six Five In The Booth


Rafael Sotomayor, CEO of NXP Semiconductors, joins Daniel Newman to discuss how AI is moving from cloud experimentation into real-world systems and what defines success as intelligence becomes embedded across industries.

When AI leaves the cloud, the rules change.

AI only becomes meaningful when it operates in the real world, under real constraints. In this conversation, Daniel Newman speaks with Rafael Sotomayor, President and CEO of NXP Semiconductors, about what it takes to move AI from “promise to production” as intelligence shifts into cars, factories, and connected systems.

Sotomayor outlines where AI adoption is gaining traction today, where progress continues to stall, and why physical-world constraints fundamentally reshape how AI systems must be designed, validated, and deployed. They also look at how software-defined architectures are beginning to reshape traditionally siloed industries, while still requiring deep domain expertise across automotive, industrial, and IoT environments. As AI becomes more distributed and less visible to end users, Sotomayor shares his perspective on what leadership looks like in this phase, and why long-term advantage will be defined by execution at scale rather than experimentation alone.

Key Takeaways Include:

🔹 AI Is Moving Into Selective Production: Some capabilities are shipping at scale, while others remain far more complex to operationalize than early narratives suggested.

🔹 Physical Constraints Redefine Success: Safety, latency, reliability, and predictability become first-order requirements outside the cloud.

🔹 Industry Convergence Is Uneven: Automotive, industrial, and IoT markets are aligning around software-defined approaches, but still demand specialized solutions.

🔹 Edge AI Prioritizes Control Over Peak Performance: Decisions about where AI should run are being driven by trust, risk, and accountability.

🔹 Leadership Is Proven Through Scale: Competitive advantage comes from integrating embedded intelligence across industries over time, not isolated deployments.

Learn more at NXP Semiconductors.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.


Transcript

Daniel Newman:
Hey, everyone, The Six Five is In the Booth here at CES 2026. Another year begins. It's very exciting times. This event is crazy. We are in the middle of a revolution here, and you can see it because companies are betting big. They're showing up in force. And I'm at one of those locations, in the booth at NXP. This is going to be a great conversation. I'm so excited to have Rafael Sotomayor, CEO of NXP, join me here on The Six Five for the first time. Rafael, welcome to the show. Welcome to 2026.

Rafael Sotomayor:

Thank you, Daniel. Thank you for having me.

Daniel Newman:

Fast and furious, right? Every time the year just ends and, I don't know about you, I went on a little vacation. I won't go into details. It was a little chaotic, actually. And then it's the first day back, and we're here. We're at CES. And this isn't a kind of get-yourself-warmed-up event. This is full go. I mean, how fast and furious is this one going to look for you?

Rafael Sotomayor:

Yeah, no, it's just how it feels. Listen, starting 2026 with CES is always great. Right? It's always fantastic. And I wouldn't want it to be any different, to be honest with you.

Daniel Newman:

Yeah, well, I mean, you get to spend so much time with so many customers. You guys have a really cool deal here. For everyone out there, if you're at CES, there's this giant convention center, but NXP has this massive complex that's built right out in the parking lot. It's constructed for a king. I mean, this place has got so many demos going on, so much space, so many rooms, two stories. I spent, you know, an hour, two hours just to get through it and tour it. So anyways, you know, every year, right, you come to CES, you know, it's going to be the year of robotics. It was the year of IoT. It's the year of artificial intelligence. You know, you're talking to a lot of customers. What are you seeing? What's shipping today? And what are the things that are still harder, you know, than people think, despite all the hype that goes on here at the event?

Rafael Sotomayor:

Yeah, no, I think you're right. I think every year it seems like the news on AI is just always there, right? And to be honest with you, I think that's what's great about CES. It kind of gives you the innovations that are shipping now or about to ship, but also, you know, what is coming. Sometimes they show ideas that still need time to cook, and sometimes you have a hard time parsing what's real and what's not real. But I think what's different, and I've felt this since last year, what's different is that now the AI deployments are really happening, especially at the edge, and this is where NXP is focused: AI at the edge, some people call it physical AI. We see deployments now. I mean, you see it, the most common one that everybody hears about is autonomous driving. That is an edge AI application. But not only autonomous driving, we see in-cabin sensing, there are a lot of AI features in-cabin in the cars. Some customers now are taking advantage of the fact that they went and invested heavily in software-defined vehicles and are now using the platform to actually deploy AI features to really augment core functions in the car, whether it's braking, whether it's suspension, whether it's motor control, we see the deployment there now. Industrial, we see vision systems being deployed, we finally see predictive maintenance. I think the things they have in common are that they need real-time performance, they need low latency, they need security, deterministic behavior. I think those are the things that have been successful in deploying AI at the edge. And I think that's what's happening. You asked a question about what's hard. I think the whole notion of thinking about deployment of AI at the edge compared to the cloud probably leads to misconceptions. I think that general AI deployment at the edge is harder than people think. Systems at the edge are constrained by power, they're constrained by cost. They're different, okay? They're narrow, they're use-case specific, and I think that is a little bit harder than people give it credit for.

Daniel Newman:

Well, if you think about it, right? I mean, first of all, I've sat with a number of your peers, even since I've been at this event, and physical AI is what they're most excited about in terms of this multi-trillion-dollar expansion of TAM. We all like to shuffle out robots, and we like to show cars that are going to drive coast-to-coast autonomously without a driver and all these things. Very neat. And by the way, you can go up and down the strip here in Vegas. I mean, there are fully autonomous Zoox vehicles running up and down the strip. It's very cool. So this is happening. But this stuff is hard. I mean, people thought we'd get the first L4, I think, a decade ago. We had promises of, hey, we're going to drive our first Tesla coast-to-coast. Didn't happen. And one of the things, Rafael, is that accuracy is so important, like safety, latency. You can use an LLM and ask it a question, and it gives you some hallucination, it tells you a bad answer or it gives you a bad recommendation for a restaurant, and, you know, the world doesn't end. Nothing happens. But a car makes a mistake and people can die. A machine in manufacturing that's using AI messes up and, you know, does something inaccurate instead of deterministic. It can cause damage. It can cause major financial and economic losses. So it sounds to me like the complexity here is that at the edge, it's not just about good. Good doesn't work. It has to be perfect.

Rafael Sotomayor:

Well, I think real life is very unpredictable, and I think that's what makes the deployment of AI at the edge unique compared to the cloud. In the cloud, you can tolerate latency. You can tolerate mistakes. But in real life, that's simply not possible, right? I think nobody will compromise their brand, right? Reliability, determinism, performance, safety, security, that takes priority over AI deployment. AI at the edge is not a feature. It's part of a very complex system that lives in a very complex environment.

Daniel Newman:

Yeah. And that's kind of what I think we're agreeing on here: what's really hard, if we want to sum it up for the audience out there, is making sure that it's going to be flawless, that it's going to be perfect or close to perfect in many cases. Whereas, like I said, with this consumer stuff we use, it's like, ah, it's okay if it's a little off, it's very cool to see it do this, right? You know, this agent can go out and, you know, try to buy you a new scarf. It's like, oh, that's really neat. It sent me the wrong scarf. Oh, well. But like I said, in a manufacturing plant, it could cost millions of dollars, or it could physically harm somebody if a machine goes AWOL because you're using AI and it's not correct. And one of the other things that's really going on, I think, is platforming. We're seeing all this move to software. I mean, as NXP, you used to kind of look at industrial, automotive, you'd look at manufacturing. All these things would be different silos in your business. As this stuff sort of platforms and becomes more software-defined, are you seeing that you're building more of this kind of intelligence platform that can then span across these different verticals? Are we seeing convergence here in your world? How's that progressing?

Rafael Sotomayor:

No, I think this concept you mentioned, convergence, is indeed happening, but it's not happening everywhere.

Daniel Newman:

Okay.

Rafael Sotomayor:

But it's happening at the level I think you alluded to, which is the system architecture level. What's happening today is that the software-defined systems you mentioned, whether it's auto or robotics, are driving the need for, and this is what the convergence is, they're driving the need for compute platforms, software frameworks, a particular framework for safety, a particular framework for security. So you can see that convergence at that software system level, the architecture level. Where you don't see convergence, obviously, is at the implementation. And the reason for that, and it may seem obvious, maybe not to the audience, is that the requirements are different. The requirements are different. The value proposition and the attributes of those products are different, and that drives different priorities. Let's look at auto. Safety, important. Meeting regulatory requirements across the world is mandatory. The ability to be very specific to certain mission profiles of the car. Those are really big, important requirements. In industrial, you have different requirements. Maybe you have longevity of 15 years. You have uptime requirements. Maybe in consumer and IoT, the requirements are different. They may be power or cost or fast cycle times. Those attributes that create the value proposition for the product drive different implementations, and I think that's where the divergence is. But clearly, there's a convergence in terms of the way you think about platforms, right? And that's probably the reason why this convergence on platforms is allowing certain customers to move, you know, from auto into robotics: with the same concept applied from a platform perspective, a slightly different implementation allows you to do some expansion. I think that's very powerful.

Daniel Newman:

Yeah, we're definitely seeing frontier large language models at the core, and then you're seeing models being developed for very specific uses and implementations. And these things are collaborating, and they're working across architectures, and then there's obviously the data center to the edge, all the way out to the vehicle, to the manufacturing plant, to the smart device. I mean, this is happening very quickly, and this intelligence has to flow between all of these different things. But there's also a bit of a trade-off. Like I said, I think your company is super interestingly positioned, because as intelligence flows out, everything needs to be connected. Everything needs to be smart. We kind of saw wave one of this with IoT. I think when you add AI to IoT, this really comes to life. The TAM gets really, really big. And there are also a lot of trade-offs. We've talked a little bit about safety and security, but in a lot of these use cases, you want this intelligence to be local. You don't necessarily want this intelligence to have data transporting back and forth as much from the cloud. So when you're talking to your customers and your partners, how are you navigating that trade-off between where intelligence stays local and secure versus where it benefits from that sort of flow of intelligence all the way back to these massive clouds?

Rafael Sotomayor:

I think that's a fascinating topic. I think our customers are grappling with these decisions with respect to the total system, right? I think what is obvious is that anything today on an edge device that is real-time, that requires that safety, that security, that deterministic behavior, the AI that allows those attributes to be there on the device goes in locally. Anything that is training the model, optimizing the model, anything that benefits from aggregation of data or multiple nodes, that can easily be done in the cloud, and you can see those kinds of workloads actually being done in the cloud. I think where there is an area that is gray, that is fuzzy, is the area where it's all about trust and control. Because you could have options where it could work both in the cloud and at the edge, but customers like to have control over data. They like to have control over the IP ownership. They like to be very much in control of what the performance of the device is. And they grapple with the options. Can I put them in the cloud? Can I trust the cloud? Or do I just go to the edge and put it at the edge? And I think that's the area that right now is debatable. I think what you will see more and more is a hybrid system, where real-time, mission-critical, AI-dependent features run at the edge, and then depend on some cloud capabilities for over-the-air updates and optimization.

Daniel Newman:

It's actually pretty explainable to everyone out there. It's like vehicles, right? Vehicles have to run local and make a lot of decisions very, very quickly. No V2C, no cloud.

Rafael Sotomayor:

That's right.

Daniel Newman:

But at the same time, how we get really intelligent and create great models for driving over time is that it is logging all that data as you're doing it. It is going back and it is creating the opportunity to train that model. But that could be taken to any industry. You can sort of take that. The real runtime is all zero latency. The intelligence has got to be local. And then, by the way, another thing that we kind of talk about but just brush around is the world's data set. You know, because right now we're only trained on, like, 5% of the world's data. So all this AI, all the money that's flowing around, is really using this kind of open, publicly available data. But where you're super interesting is, your company, that data that's being created at the edge, whether it's industrial, agricultural, or automotive, it is creating all this data that these OpenAIs and these open-source models have no access to. And this is where companies are going to build an edge, like, you know, to double down on... what do I call that? You know, a metaphor. It's early in the morning. You got the right word. No pun intended. Where you're going to build an edge, though, is that you're helping customers build intelligence that helps them localize and capture unique data. And I think in the AI transformation that's going on, companies are going to need that data to build an advantage and to stay relevant. Yeah.

Rafael Sotomayor:

Let's double-click on that one, because I think that's an area where the edge is very different: we are dealing with data that is not necessarily clean. You know, when you compare it to the data that is used in the cloud, it is incomplete, right? Like, real life is really unpredictable, data is incomplete. Also, data needs to be combined with the uncertainty and the complexity of real life. What I mean by that is, unlike a model that runs in a pristine environment in the cloud, a model at the edge has to withstand vibration, temperature changes, sensors that drift, and behavior from users that is completely unexpected. You don't even think about it. We as human beings behave very differently in different environments. So I think dealing with that level of, you know, that lack of clarity, I will say, in the data at the edge is going to be key to actually deploying AI at the edge in the way we need to.

Daniel Newman:

But if you think about a robot, right, just for fun, since we're at CES, like a robot, you know, being able to walk up and down a hill. And then there's that variable where someone dropped a brick in the middle of the hill. Now the robot has learned, it's been trained to walk down, it's learned how to handle the changes in telemetry. But it doesn't know about the brick. And until it's seen a dropped brick enough times to be able to recognize, that's a brick, there's physics here, and I need to step over it. So what I'm saying is, and by the way, this could be, like I said, a conveyor belt with a slight error in the location of a product on the belt that might mean you need to stop the belt to correct it. So something AI might detect, you have to train it over and over again. So what I'm saying is, those are the uncertainties you're talking about. It's just not the same as, hey, I need to ask a model that was trained, what's the best NFL team in history, do some type of search, and now create a list for me of the best players and all the cool things you can do with ChatGPT. It's just not like that. It's so complicated, but that's why I said the unique data that companies have has to be part of how they build an advantage. It's like, hey, we've done this same repetitive thing. This car has gone down this road and made this point-A-to-point-B trip 7 million times now. We've started to figure out what all the possible variations of this trip can look like. By the way, there are still more. But that data becomes very rich.

Rafael Sotomayor:

It becomes very useful. Essentially, that model, Daniel, becomes the key to a lot of the use case. And that model becomes the crown jewels of that particular device. And guess what happens now? Now you have a security issue. Meaning, you must protect the model, which means that there's another requirement that you have to worry about, which is, I need to make sure that nobody messes with this model, because they can tamper with the functionality of my device, which is my brand, my reputation. And so now you overlay an interesting requirement, which is, I need to have really, really good edge security to protect the model that is actually creating the value proposition for the product.

Daniel Newman:

So let's wrap up with this. You're one of these companies that, in this whole edge environment, you touch everything. I'm sure you'd love to have the name out there even more. NXP is a globally known brand, no question about it. But I would say sometimes I don't even think it gets enough credit for how many of these edge experiences NXP is probably in, whether it's automotive, whether it's manufacturing. How do you see positioning your leadership in AI for your company? And overall, what's going to define the leaders in AI as we go forward? Because there's been this era one, I think it was very centric to models and data centers. But in era two, it's going to be about the physical world. How do you make sure that NXP lands and that NXP is synonymous with physical AI?

Rafael Sotomayor:

I think NXP is very well positioned at the intersection of intelligence and the edge, right? I think that's what we're built for. And as we said, when AI moves from the cloud to the edge, there are attributes that are super important requirements, whether it's latency, security, functional safety. But all of that put aside, I think the companies who want to lead in AI at the edge need to take, like we do, a system-first approach. I think that's what is most important. AI is not an add-on functionality at the edge. AI is embedded, it's seamless, it's completely invisible, it's part of the system. Companies who actually do the hardware, the software, the safety, the security, and really understand the attributes of the system are the ones who are actually going to be successful bringing AI to the edge. And that's where we're focused.

Daniel Newman:

I will tell you this, I am a hundred percent aligned with you. You know, I call it kind of ubiquitous, it's experiential, and it's invisible. I like that word, because realistically, AI is just part of how we experience everything else. You know, like when you're talking to your device, your handset device, or you're in a manufacturing plant where AI is used to make sure that we're optimizing production. In either case, the expectation is you've made this investment and the outcome, the output, the result, the productivity gains are there, but you're not thinking about how that AI happened. And so it becomes pretty invisible.

Rafael Sotomayor:

And the one thing that happens, and maybe we need to be a little bit more explicit, is that when you're creating an AI framework that's going to be in the field for 10, 15 years, what's essentially happening is you're creating infrastructure. And when AI becomes infrastructure, the most important part of infrastructure is always resilience. And so another aspect of success for AI at the edge is going to be building it as infrastructure that is resilient.

Daniel Newman:

Make five nines seem like it should be 10 nines. Exactly. So many nines. Rafael Sotomayor, CEO of NXP, thank you so much for joining me here on The Six Five. Let's have you back again soon.

Rafael Sotomayor:

Thank you, Daniel.

Daniel Newman:

And thank you, everybody, for being part of The Six Five. We are in the booth here at CES 2026, Las Vegas, Nevada. What a great conversation. Check the show notes to learn more about what NXP announced here at CES and more about NXP in general. Great company, very exciting to follow the work they're doing. Subscribe and be part of our community here on The Six Five, but for this episode, it's time to say goodbye. See you all later.
