
Escape the AI Divide: Why Some Companies Are Surging Ahead

Rob Thomas, SVP and Chief Commercial Officer at IBM, joins Patrick Moorhead at IBM Think 2026 to examine the AI divide: what separates the companies driving measurable margin growth from those stalling in pilot mode. The conversation covers AI operating models, the on-premises gap and Sovereign Core, real-time data architecture via Confluent, and how IBM's WatsonX and Bob address the model orchestration economics most enterprises have not solved.

S&P 500 operating margins have climbed from 14% to 17% over the past five years and are projected to reach 19%. At IBM Think 2026, Rob Thomas argues that this isn’t a broad market trend. A relatively small group of companies has figured out how to truly operate in AI mode, while everyone else is watching the gap widen and deciding which side they want to be on.

Patrick Moorhead sits down with Rob Thomas, SVP and Chief Commercial Officer at IBM, to unpack what separates enterprise AI leaders from the rest: why an AI operating model matters more than any single tool purchase, what's still missing from hybrid and on-prem AI infrastructure, why real-time data architecture is becoming foundational, and how IBM's WatsonX Orchestrate and Bob are tackling the model orchestration challenges most enterprises still haven't solved.

Key Takeaways Include:

  • The AI divide is measurable and widening. A small group of companies is pulling away because they’ve embedded AI into how they gather intelligence, automate decisions, and govern outcomes. Thomas calls this the difference between the light bulb and the assembly line.
  • Buying a tool is not a strategy. Every competitor has access to the same models. Differentiation comes from the operating model: how an organization integrates AI into its core processes rather than layering it on top of them. 
  • On-premises AI is the single biggest gap in the enterprise stack today. Sovereign Core addresses the kill-switch problem that enterprises and governments have raised: the inability to guarantee continuity when relying on cloud-only infrastructure. Hybrid cloud is the architecture that finally enables AI wherever enterprises actually need it.
  • Real-time data is the next architectural shift. Most enterprise systems were built around data at rest. IBM’s acquisition of Confluent and its Kafka, Flink, and TableFlow capabilities reflects a larger bet: production-scale AI agents will require a real-time data layer to communicate, coordinate, and act autonomously, without constant human intervention.
  • The economics of AI depend on orchestration. IBM's Bob dynamically routes workloads across frontier, open-source, and specialized models based on cost and performance. Thomas's analogy is simple: you don't take a Ferrari to buy milk. Most enterprises are still doing exactly that.
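The routing idea in that last takeaway can be sketched as a simple cost/capability policy: pick the cheapest model that clears the task's quality bar, and escalate to a frontier model only when necessary. The model names, prices, and quality scores below are illustrative assumptions, not Bob's actual catalog or routing logic.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative prices, not real rates
    quality: float             # 0-1 capability score, an assumption

# Hypothetical catalog: one frontier model, one mid-size, one small open-source
CATALOG = [
    Model("frontier-xl", 0.060, 0.95),
    Model("open-medium", 0.008, 0.80),
    Model("open-small", 0.001, 0.60),
]

def route(required_quality: float) -> Model:
    """Pick the cheapest model whose quality meets the task's bar."""
    eligible = [m for m in CATALOG if m.quality >= required_quality]
    if not eligible:
        # Nothing clears the bar: fall back to the most capable model
        return max(CATALOG, key=lambda m: m.quality)
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# A routine summarization task takes the cheap model;
# a hard reasoning task escalates to the frontier model.
print(route(0.55).name)  # open-small
print(route(0.90).name)  # frontier-xl
```

In a real orchestrator the quality score would come from per-task evaluations rather than a static number, but the economic point is the same: most requests never need the most expensive model.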

The conversation ends on a clear point: the companies waiting for AI to be perfectly accurate before deploying it are the ones falling behind. The leaders are already in market, testing, iterating, and learning in real time. More importantly, they’re building the organizational mindset and operating discipline needed to work in AI mode. That’s the real divide.

Watch now and subscribe to Six Five Media for analyst-led coverage from IBM Think 2026.


Disclaimer: Six Five Media is a media and analyst firm. All statements, views, and opinions expressed in this program are those of the hosts and guests and do not represent the views of any companies discussed. This content is for informational purposes only and should not be construed as investment advice.

Transcript

ROB THOMAS:
You have to realize any one of your competitors can do the same thing. So your differentiation is gonna be how do I operate in an AI mode. I think that's gonna be the difference, because that's how you move from just the light bulb to something that will change your business.

PATRICK MOORHEAD: 

The Six Five is on the road here in Boston, Massachusetts. It's been a great show so far. It is not over yet. We're talking about the AI divide, new AI operating models, and all the things that IBM is bringing to the table to serve their clients and hopefully get them across that divide. We've had many conversations with Rob Thomas about the evolution of enterprise and government AI, and we're going to bring him in to get the very latest. Rob, great to see you. Pat, nice to see you. You know, I don't know what I prefer better, kind of being in the snow and winter of Davos, sitting on the porch, literally freezing every year, or sitting in here basking in the heat and the lights. I don't know. It might be colder in here with the air conditioning. That's true. It's hard to say. That's true. But they're all good venues. Nice to see you again. Yeah, you too. Let's talk about this AI divide, essentially the difference between those companies who are making real progress and those that are pretty much stalling. The big thing that Arvind even talked about, and you talked about, is that it's actually widening. Let's talk about what leaders are doing right, just to kick this off.

ROB THOMAS: 

Well, first of all, you can see it in the numbers. And this is what I was trying to draw attention to in what we covered yesterday. In the last five years, operating margins for the S&P 500 have gone from 14% to 17%, and they're heading towards 19%. I don't think we've ever seen that kind of move in a five-year period. The scary part is it's driven by a relatively small number of companies. So when I talk about the AI divide, my point is some companies have figured this out. And everybody else now has to decide, do I want to be on the right side of this? Or do I want to fall behind? Obviously, nobody wants to fall behind. But the question is, what will it take to get on the right side? The historical analogy I've given is: electricity is invented. One of the first applications is a light bulb. It goes into manufacturing facilities. It's safer than oil lamps; it doesn't catch on fire. But the light bulb didn't really change industry. When electricity was applied to the assembly line, then industry changed forever. So companies need to think about this. Everybody says, we're doing AI. I think in many cases that's a light bulb. It's email summaries. It's document preparation. People need to think about what the modern-day assembly line of agents will look like for AI. It can't just be a light bulb. And I'm afraid the companies that are falling behind are kind of enamored by the light bulb.

PATRICK MOORHEAD:

No, I did love that analogy. And sometimes, you know, I look at the S&P 500 and it's screaming, and we're all trying to figure out, well, what is the difference? What is the difference there? And it is interesting. I feel like there are some companies that are just waiting, for various reasons. But I hope there will be a call to action out of this. I'm curious, what is the inflection point between the haves and the have-nots, or the leaders and the laggards, in this transition?

ROB THOMAS: 

It gets to what I positioned this week, which is that companies need to have an operating model for AI. That is different than "we're acquiring a tool" or "we're putting in some technology." That is good, but you have to realize any one of your competitors can do the same thing. So your differentiation is going to be how do I operate in an AI mode, if you will, which is: how do you collect intelligence, how do you create actions out of that, how do you automate your operations, how do you build trust around what you're doing in AI. I think that's going to be the difference, because that's how you move from just the light bulb to something that will change your business. You know, one of the clients we had yesterday was Nationwide Building Society from the UK, and they're talking about how they're now using real-time data to inform all of their agents, which is driving a huge change in underwriting and their processes around retail banking. Like, to me, that is an example of moving well beyond a light bulb.

PATRICK MOORHEAD:

Yeah, I liked the presentation on stage, and also that you acknowledged that it's not Nationwide the insurance company. But it's really a success story of what they've been able to do with agents. And quite frankly, it wasn't a company that I had expected to be leaning in this hard. So, we've talked a lot about agents as of late. It's crazy to think that three years ago, we really weren't thinking about them. Maybe some of the research groups were, but they were really in the context of stringing together certain actions that were derived from LLMs. And now we're talking about having, you know, 1,000 to 10,000 to, some people say, 100,000 agents per person. And it really is crazy how quickly this came on. It's not just about having agents, though. It's about having a full stack that can support, protect, and put that into action. I'm curious, what are the pieces of the stack that might be missing, or that people really need to be thinking more about, to get to this inflection point in AI?

ROB THOMAS: 

What I'm encouraging people to think about is, I describe it as: there's a you before you see AI, and then there's a you afterwards. There's a pre-you, and there's a post-you. My biggest fear is most people haven't truly seen it yet. You don't see what this does by using an app on your phone. Like when I saw it, which actually I don't think was that long ago in the whole scheme of things, maybe 14 months ago, it was at that point we had put WatsonX Orchestrate in at IBM. We were hooking it up to our enterprise data so we could look at things like strategy documents, financials, and org design. And then you start asking questions like, all right, so how should I redesign the organization for what we want to do in the next three years? And the output you get is incredible. That's the moment I was blown away that there's truly something here. But it's only because we had an agentic layer and it was using our proprietary data. Most people haven't had that aha yet. They're just using a public model. And I think the faster we get everybody to their post-AI self, the more the imagination kicks in, and then I think people can go faster. But that's what we have to cross here.

PATRICK MOORHEAD: 

Yeah, it's interesting. Even though the size of my company is not your key target market, as a small company, I personally saw the light when I fed all of my company data in securely and started querying, and it would ask me, hey, what if I did this for you? Like, you can do this. And we became 50% more efficient in a lot of our research activities. And that's where I turned the corner. So you got to the aha moment. Exactly. Once I hit exactly what you said, which was integrating our company data, where it could see things across different elements of even our stack. And it was literally binary for me. I went from "we need to be careful, we need to really, you know, slow roll this" to "we're going to accelerate," and that's exactly what we did. What is missing? You know, I love layer cakes, right? And I think I saw one in Dinesh's presentation. I have to see it; it's the way my brain works, to say that there's reality or meat behind something. What is missing from the enterprise AI stack today to help people at scale? Meaning, is it data, is it governance, is it everything? Something different?

ROB THOMAS: 

I think the single biggest gap at the moment, which is what we're going after with Sovereign Core that you mentioned, is on-premises AI. There's not really a great answer for that. There are no foundation models that run on-premises or on private cloud today, although stay tuned on that. And because of that, a little bit back to our pre-post discussion, it changes the willingness of people to do some things that they may otherwise do. When we were envisioning Sovereign Core, we were talking to companies and governments around the world that said, we agree this is critical, we think it's fundamental to growth in our country, but we cannot be subject to a kill switch, meaning somebody can instantly turn it off. That's what we're delivering and solving with Sovereign Core. But I think we're just scratching the surface of AI wherever you want it. Some will be on-premises. Obviously, there will be a lot done in the cloud. There will be some done on the edge. But this kind of goes back to our whole strategy around hybrid: enabling hybrid AI requires hybrid cloud, and I think we're just getting started there.

PATRICK MOORHEAD: 

Yeah, Rob, I don't love that just because I'm sitting at an IBM event. We wrote about this no less than two years ago, on where AI would be, and that's everywhere, right? Orchestrated across different types of modalities. And you nailed it: you can't get AI on premises in a full stack. And I'm surprised it took this long. And quite frankly, if you go cloud, the cloud guys are offering, again, hybrid doesn't mean on-prem only. It's orchestrating across different environments, whether it's edge, with SaaS data, with public cloud, on-prem, sovereign, even colo. So I think we've finally arrived, and you, you know, have an offering that fills this gap. And I think people should be pretty excited about it, because people have choice. And I think the other thing we've learned, too, is what IBM said three years ago, which is it's going to be a multi-model world, whether it's proprietary, open, or expert models that do something really well and are really small. And you need an architecture that can arbitrate between those. And then we added agents, which means you have to have agents that talk with other agents. You need agent systems that talk with SaaS agents, that talk with hyperscaler agents. So yeah, if you can tell, I'm pretty excited about this.

ROB THOMAS:

On multi-model, this is, I would say, the real secret in plain sight with IBM Bob, which is we've got an optimization and orchestration engine in Bob. So if you're in there building software, Bob will decide which model to use, constantly optimizing price and performance. So a third, maybe 40% of the time, it will kick to a foundation model that's expensive, but you get something great right then. Sometimes it goes to Mistral, sometimes it goes to smaller open-source models. And so the economic equation is actually vastly different when you're doing model orchestration and optimization. I believe this will become the default for how every company uses AI models, because, as the General Manager of Bob said, you don't take a Ferrari to buy milk. That's overdoing it if you're just going to the grocery store. And I think many people are doing that today. It may not be necessary.

PATRICK MOORHEAD:

The mic drop at the end of the Bob demo was, oh, and you can choose any model you want. Yeah. So I like that a lot. So I want to auger in on data. You know, it's funny. Three years ago, I was sitting in the OpenAI announcement out in Redmond, and they were doing recipes, right? And, what should I do for my vacation? I tried to translate that into enterprises, and the first thing that I thought of was data. This is going to be really, really hard. Right now, today, we've had this data conversation for three years, and, oh, by the way, I think you may have mentioned this on stage: here we are, we always talk about data, whether it's big data, remember this conversation, as what's holding us up. But I'm curious, what's actually preventing operating in real time today, related to data? Is it a lack of streaming data applicability, and Confluent plugs the hole there?

ROB THOMAS:

I think this is more cultural than anything related to technology. Okay. If you think about the history of databases and data warehouses, everything has been about: move it to one place, so it can sit there at rest, and then you can do your analysis. So every company, every business, every process has been designed around data that is stale. It was better than the alternative, which is no access to data, but everything has been designed for that. The thesis for why we acquired Confluent was we think the world has changed dramatically here, which is everything's going to be about real-time data. Obviously, yes, it has to be stored at some point. The work that Jay and the team at Confluent have done around things like TableFlow and Flink, where you can do real-time processing around Kafka, that is how you're going to see data managed in the future for AI, because agents are doing work. Agents, like humans, don't really work that well together today, but if you're writing to a consistent layer underneath agents, which we think will be Kafka for real-time data, then your agents can actually communicate and interact without there constantly having to be a human in the loop. So I think this is the next fundamental shift in data architectures.

PATRICK MOORHEAD:

Even the thought that what comes out from an agent immediately needs to be processed, and then that is real-time data, along with any of the actions coming from the hundred sub-agents that are helping you accomplish that task. I liked the demo that you did to show that, and it's good to see. So, I want to end by simplifying things here, right? AI can be so complex for people, and there can be people sitting on the sidelines. As AI becomes really embedded into these core operations, what is going to separate the companies that turn this into real business success from, I'll call it, the laggards? And where do they start? It's hard to imagine people haven't started yet. They've done POCs, they've done the experiments, but they just can't scale.

ROB THOMAS: 

I think everybody's started; everybody's doing things. I think, to kind of come back to this word culture, you have to think about leveraging AI in probably a different way than you do anything else. The thing I always kind of chuckle about is I will talk to clients who say, yeah, we're using AI, but the problem is we only got the right answer 90% of the time, 95% of the time. Therefore, we can't do it. I'm like, come on. We have a lot of humans in our company, and they're wrong a lot. So maybe you shouldn't be holding the AI to a higher standard. Obviously, we need governance, so I'm not suggesting being reckless, but I'm saying let's be practical that it's not going to be perfect. But if you're willing to keep pushing the envelope, it will continue to get better over time. So I think there's a little bit of cultural inertia here, but it goes back to where we started. There is clearly an AI divide. If people accept that, then you have to say, how do we make sure we're on the right side? And you're going to have to lean in, probably more than you're comfortable with at first, to avoid falling on the wrong side.

PATRICK MOORHEAD: 

Yeah, I appreciate that. I call those sage words. And it was funny how we got to the point of, you know, where 90% wasn't good enough for certain tasks. I mean, sometimes you need deterministic, where it's 100%. And sometimes, I mean, as humans, you know, we've had this discussion with self-driving cars. And here we are having, I'll call it, self-driving, autonomous enterprises, of where it can go. It also reinforces the human element: humans are still very much valuable. And it's about getting folks to work at a higher order, taking out the monotony of, I like to call it, Kanban, moving this piece of paper from here to this place over there, and getting people to drive more revenue, come up with new ideas, be more creative, be more efficient. I think people will be more self-actualized as well. So I know that's not the question you answered, but I'm always, you know, interested in how this thing plays out. And quite frankly, some of the conversations going on are just, like, nuts to me, right? You know, my son, who's a software engineer, wasn't supposed to get a job coming out of college, right? And here we are, and he is busy, and everybody's as busy as ever.

ROB THOMAS: 

By the way, I think this is the biggest myth of them all. I think we're going to have massively more engineers in the next decade, because the engineers that we have, they are the best at utilizing these capabilities. So I think this is going to have the opposite effect of what people are predicting. Now, I do think the role of software developers probably evolves. Yes. You don't have to write every line of code, right? That's great news, so you can spend more time on what we should be building, whether it really solves the problem, and thinking about how that integrates into what we're doing. So I think the role evolves, but the notion that there's going to be fewer or none? That's insane.

PATRICK MOORHEAD: 

No, I appreciate that. Rob, always great to chat with you. I appreciate you taking time out of your busy schedule, up on stage, meeting with clients, and all the above. So I appreciate that. Thanks, Pat. Great to be with you. This is Patrick Moorhead with Rob Thomas at IBM Think 2026 here in Boston. We covered a lot, from the AI divide to new AI operating principles that have tangible meat and products underneath them. Can't wait to dive in. Catch all of our IBM Think content at Six Five Media, and more Insights to Strategy, of course. Hit that subscribe button. We'll talk to you later.
