IBM Says Quantum Computing Is Closer Than You Think: The Race for Quantum Advantage
IBM outlines the path to quantum advantage, highlighting real-world validation, hybrid computing models, and the urgency of post-quantum security.
Quantum is no longer theoretical. It’s entering the phase where it must prove real value.
Daniel Newman sits down with Jay Gambetta, Director of IBM Research, at IBM Research in Yorktown to unpack what “quantum advantage” actually means and why the timeline is tightening.
Quantum is shifting from controlled experiments to real-world validation, where systems are no longer compared to simulations, but to actual data. This transition marks the beginning of quantum becoming part of the scientific method and, eventually, enterprise workflows.
At the same time, the stakes are rising. From roadmap transparency to ecosystem development and post-quantum security, IBM is pushing to accelerate adoption while the window for preparation is already open.
Key Takeaways:
🔷 Quantum advantage is defined by real-world performance, not theoretical benchmarks
🔷 The shift from simulation to real data marks a turning point for quantum utility
🔷 Hybrid models combining quantum and classical systems will define early enterprise use
🔷 Post-quantum security is an immediate priority, not a future concern
🔷 Ecosystem development is critical to accelerating real-world quantum applications
The question of “advantage” is simple: when does quantum become cheaper, faster, or more accurate? It won’t happen overnight. It’ll be integrated step by step into real workflows.
Watch the full episode at sixfivemedia.com
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Jay Gambetta:
I think of advantage as fundamentally a scientific debate. When will a quantum computer or a quantum and classical computer working together do something that is cheaper, faster, or more accurate than a classical computer alone?
Daniel Newman:
The Six Five is on the road. We are here in Yorktown at IBM Research. Great day here, had a lot of time with the team, hearing about what's going on, innovation in quantum, in AI, and across the company. But most excitingly, I am here today having the opportunity to sit down with IBM's Director of Research, Jay Gambetta, joining me once again here on the show. Jay, it's good to be with you.
Jay Gambetta:
Yeah, same.
Daniel Newman:
It's great to be back here and what a backdrop by the way.
Jay Gambetta:
Yeah, you enjoy having a quantum computer behind us. What is that, for everyone that might wonder? That's our latest IBM Quantum System Two. It has three Heron processors inside it. The noise you're hearing is it actually running right now.
Daniel Newman:
Oh yeah, it is running right now. And if you don't hear that noise, that's because we probably used some sort of small language model from IBM to take that noise out, right? To take away the background. Yeah, that's a lot of fun. Well, Jay, it's good to be back with you. You know, we've sat with you before, but I think you were in a different role. And last time I saw you, you were about five days into this role as Director of Research. But let's just talk a little bit about that now. You know, you're getting your feet wet. You've been in the role for a little while. Talk about how that transition's going and some of the big things that you're focusing on now as Director of Research.
Jay Gambetta:
I think, one, it's probably the best job you can imagine, directing research for a company like IBM and really trying to lean into what's the future. So I'm having a lot of fun. The future of computing, to me, this is the greatest time for it right now. We're seeing all these amazing things with AI, and to come very soon, quantum computing. How do we actually make sure that we bring all this to reality and then bring it to clients and users? I get to champion this as Director of Research. So for me, it's: make sure quantum computing goes. We have to deliver. We have to deliver on our roadmap. We have to keep building bigger and better quantum computers, understand how AI matters for the enterprise. And then I'm adding algorithms and how we do algorithm research so that we can get new use cases. And really, to me, it's the most exciting time, because it's like going back to the '70s when computers first came. We are now changing the way computing gets done. And imagine if you were in the position to really make sure that happens.
Daniel Newman:
And I'm glad you pointed out AI, because your experience here, you were deeply entrenched in quantum. But now as Director of Research, you're focused on the whole gamut. And Arvind has been very clear, very AI-centric, hybrid cloud-centric. Of course, quantum is the future. Just as that remit widens, I'd love to hear from people like you that are thinking about AI at a high level. How are you thinking about AI through this role, both kind of as an internal voice, because you're championing inside, and also about how the company is going to continue to innovate forward outwardly?
Jay Gambetta:
AI itself is here to stay, and it's not going anywhere. I think we're just going to see more and more great things done. What I've actually enjoyed the most is, like you said, to take a step back, learn, and understand where AI is. And so we're asking questions now that I don't think we were ever asking before. How does quantum inject into AI? How do we change quantum by using AI? And then ultimately, how do we push AI even further and further forward? The team that you were talking to is talking about some of their small language models that we want to start to mix with different modular approaches as we bring them forward and introduce them with different types of tools. Another area that I'm excited by, as I said: imagine if we could speed up algorithm discovery in this new future of computing by using AI. I think everyone should be using these tools. They probably have to become fundamental to their job. And if you can start to actually show that we can accelerate the rate of discovery and really get these future systems or new GPUs into the hands of more people, I think you're going to see a lot more.
Daniel Newman:
And it's proliferating really quickly. I love that you said discovery, by the way. You know, we spent some time here at Research with Jay, and when he left the room, we had an interesting debate with your quantum team: are algorithms discovered, or are they invented? So, discovered.
Jay Gambetta:
I think that is the key: you will have researchers that are going to come up with their ideas, so they can say invent; I'll let them call that invent. But now if we put those ideas into these tools, we can discover how to use them much faster. So I know, everybody gets a piece. I know exactly what they're saying, but I think of it as both sides. You've got to acknowledge when a researcher is inventing something that has never existed before; that's meaningful to them. But what matters is, can we take that invention and implement it into use cases and discover how to use it?
Daniel Newman:
So let's talk about quantum advantage and quantum is real. We'll start with advantage. You said '26 is the year of quantum advantage. Just for everyone out there, what is quantum advantage?
Jay Gambetta:
So we first put a quantum computer on the cloud in 2016. And then we set the goal: can we make a quantum computer bigger than what we can simulate with a classical computer? We set that for 2023. We called that quantum utility, and we achieved it. Then we set 2026 for quantum advantage. And what we mean by quantum advantage is: can we do something on a quantum computer, or a quantum computer plus a classical computer, that's cheaper, faster, or runs more accurately than on a classical computer alone? And we're getting so close. In the scientific world, we've created this thing called the Advantage Tracker, where scientists are submitting their ideas, running them, comparing them with classical algorithms, and it's iterating in real time. There's one example, if you go to the Advantage Tracker, where an idea was submitted, within two weeks it was shown to not be advantage, a new idea was submitted that looked closer to advantage, and then within another month it was shown not to be advantage. So this real iteration is happening. But I think of advantage as fundamentally a scientific debate. When will a quantum computer, or a quantum and classical computer working together, do something that is cheaper, faster, or more accurate than a classical computer alone? But the other question you had there is what I think is more meaningful: when will a quantum computer, and I don't know whether it's invent or discover, be real for a scientist or eventually an enterprise to use in their workflows? And what we're actually seeing now is there are a few papers from scientists at national labs as well as universities where they're taking real data and comparing it to quantum computers. And if you look at these pictures, real measured data, some neutron scattering or an STM image of some molecule, and you compare it to the simulation on the quantum computer, you can't tell the difference.
So the quantum computer is now becoming a tool for them to investigate either new materials or to come up with exotic types of molecules. And eventually we want to see it get used for optimization or other enterprise problems. So I think there's this switch now from comparing quantum computers to classical simulators to comparing quantum computers to real data. And that's what I mean: we've seen quantum advantage as scientific demonstration, but now we're going to see it become real and become part of the scientific method.
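The comparison Gambetta describes, checking whether a quantum computer's output matches lab measurements, can be sketched with a standard distribution-distance metric. This is a minimal, hypothetical illustration; the outcome distributions below are made-up placeholder data, not from any real experiment or IBM system.

```python
# Hypothetical sketch of the "compare to real data" step: measure how far a
# quantum-computer simulation's outcome distribution is from measured data.

def total_variation_distance(p: dict[str, float], q: dict[str, float]) -> float:
    """Half the L1 distance between two outcome distributions (0 = identical)."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outcomes)

# Placeholder data: a lab measurement vs. a quantum-computer simulation
measured  = {"00": 0.48, "01": 0.03, "10": 0.02, "11": 0.47}
simulated = {"00": 0.50, "01": 0.02, "10": 0.03, "11": 0.45}

tvd = total_variation_distance(measured, simulated)
print(f"Total variation distance: {tvd:.3f}")  # a small value is "can't tell the difference"
```

A distance near zero is the quantitative version of "you can't tell the difference" between the simulated picture and the measured one.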
Daniel Newman:
Yeah, I'm glad you sort of jumped ahead into my next one. I was going to ask about in the enterprise, when does it become real? I think that ends up being, you know, so often the debate, when do I bring my quantum computer to work?
Jay Gambetta:
So I think that, obviously, that's going to be determined by the user and the enterprise. But I like to think, right now, it's probably going to become real for scientists first. But we have a roadmap to scale to what we call fault-tolerant quantum computers. The details of that don't matter, but it means building one of these quantum computers much, much bigger, allowing it to run more. I think by the end of this decade, we will have a fault-tolerant quantum computer that will scale what we can do in ways that will matter for the enterprise. And so we're setting our goal to achieve that by the end of the decade.
Daniel Newman:
That'll be like the ChatGPT moment?
Jay Gambetta:
I don't know if it'll be the ChatGPT moment, or a slow replacement of everything behind the scenes, such that you won't even know you're actually using a quantum computer. So to me, I think there's two different types, right? ChatGPT made everyone aware of AI, but what is more important is that it's changing everything behind the scenes. I think quantum is going to be basically changing all the algorithms behind the scenes, and most people won't even know that it's happened.
Daniel Newman:
Let's talk about architecture for a minute. There's a pretty significant debate in the space. There are quantum startups rising out of the woodwork. There's been an ongoing, you know, you've got neutral atom, you've got ion trapping, you've got superconducting, a bunch of them. You feel your path, superconducting, is the right path for IBM. Talk a little bit about that selection, talk a little bit about the ecosystem. What gives you the confidence when, right now, there seems to still be a debate?
Jay Gambetta:
So I'm going to first answer that question by saying I don't think the modality matters. In the end, we have to build a large quantum computer that works. And so when you think about trying to scale something to a really large scale, and to do it as quickly as possible, we should leverage a technology that we've shown can do amazing things: silicon. Building things on top of silicon is what we've done in the compute industry for ages.
Daniel Newman:
Yeah.
Jay Gambetta:
So why I really like our technology, and there are other technologies like it, is that it's built on top of silicon. So you can actually manufacture it, you can actually drive those scales of learning. What you're going to see as we go forward, and this is where I think most people get a little bit confused about where we're at: in 2016, we put five qubits on the cloud. Ten years later, we're at 100. And I'm saying in three years, we'll be at 10,000 qubits. And then in six years, we'll be at 100,000 qubits. That's a huge scale. And to achieve that type of scaling, you've got to be built on something like silicon. And so why I really like our modality, and there are others out there like it, even spin-type systems, but I think they've got other difficulties, is that it's built on a foundation that we can scale, and we know how to scale it at speed. And we're just getting to the point, like last year, all of our future chips are now built on 300-millimeter wafers, and we can basically put many different metal layers down, and we can start to scale out these quantum chips.
Daniel Newman:
Silicon's pretty amazing, yes, it is pretty amazing. It's funny, it just made me think about, do you remember, probably 2019, 2020, when the world got bored of semiconductors? Remember, people stopped talking about infrastructure and it was all about software?
Jay Gambetta:
It feels long ago, but yeah.
Daniel Newman:
Yeah, I just thought about that. I remember I was writing an op-ed and I had an editor tell me that nobody cares about chips anymore. Not interesting. And at the end of the year, I wrote an op-ed in 2020 that said, semiconductors will eat the world in the next decade.
Jay Gambetta:
I think you were the one that was right.
Daniel Newman:
I got it right. But, you know, first of all, I just needed to call that out. But second of all, seeing what silicon has done for AI, and now what it's doing for quantum, and basically how the entire world's economy is riding on the innovation that takes place here.
Jay Gambetta:
I think, yeah, I think hardware and how much the silicon has led to the technical advances is amazing.
Daniel Newman:
Hardware is cool again. I mean, hardware is cool. You guys do something that's sort of interesting in deep tech. Most companies that play in deep tech don't like to give their roadmap away, especially not beyond what's kind of in sight. You're doing two things. One is you're sharing with your competition what you're doing, and you're doing it pretty openly. And two, you're making hard proclamations, meaning that you're going to be held accountable. Talk a little bit about what drives that, because people don't expect it. You don't have to do that.
Jay Gambetta:
It's a great question. An easy answer would be we would sit in our lab, we would build it, and then we would basically tell the world when it's done. You talked about hardware being cool, and software, well, I still think software is fundamental. But if you want your algorithms and software to be built, one solution is you build the hardware, and when it's done you release it, and then you build the software. I'm too ambitious. That's going to take too long. So if I can release a roadmap, and I can show confidence that we're going to achieve it, and even if we miss by one or two years, which we haven't yet, but I don't want to rule out that it could happen, it allows us to build a whole community of partners that can do the algorithms and the software on top. And so my goal is: when this computing comes in, why can't we make it impact the enterprises and the world much faster than it took us to really embrace classical computing? And so for that reason, we're being very transparent. We show exactly what we're building so people can develop on it, and we get it into the hands of the users as fast as possible. There is some risk, but we try to be predictable. If you look back on the history of the roadmap, you would have seen that I never talked about fault-tolerant quantum computers until my last update. And that's because one of the hardest problems we had was: what is the architecture for a modular fault-tolerant quantum computer, so we could break it into pieces and put it all together? Once we felt confident about that, that's when we put it on the roadmap. So we are doing things outside the roadmap, even beyond the roadmap, but I call those more exploratory projects. And the ones on the roadmap are the ones where we have strong evidence that that is the path forward.
Daniel Newman:
So let me pull on the thread about the partners, because I do agree. I think you're seeing it a lot in the AI space with infrastructure, talking about one, two generations out. The six-month cycles are pretty intense there. You have a little more of a horizon, but a lot of it is that the builders on the platforms are what make these things go. The platforms are great, but you need the builders. So you have a rich, vibrant ecosystem. You mentioned academia, research, startups, enterprises. What are sort of the underpinnings of how you think about building the ecosystem for quantum? Because, by the way, this is very much your baby, you know, coming into this role, right?
Jay Gambetta:
Yes. So I think you hit it spot on. If you don't create an ecosystem, you can't get there. I think one of the things these AI tools will leave us with is that the rate and pace of new discovery, of new algorithms being implemented on hardware, is going to be completely different from the past. So why don't we, exactly like we were talking about new AI hardware coming out, why don't we put quantum there? And it's always been fundamental. So we have startups that are working on different types of algorithms, or their own proprietary methods for correcting some of the effects of noise. And what's getting more exciting, and we haven't even talked about it yet: I don't think the future is just quantum. I actually think if we could rename a quantum computer, we should call it a quantum accelerator or something like that. It's going to be QPUs, GPUs, and CPUs all working together. And so now you're going to have algorithms that run on both of them. So how do you do that workflow? How do you do that scheduling across these different types of architectures, where both cost money? Today, if you use GPUs and CPUs, GPUs basically cost a lot and CPUs are nearly free, so you just make sure you use full capacity of the GPU. But if you're bringing GPUs and QPUs together in this sort of architecture I like to call quantum-centric supercomputing, both cost a lot of money to run the algorithm, so you don't want either of them idle. So how do you do all this is a great question. And I think the best of this scheduling and this architecture for the future is yet to be done.
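The scheduling problem Gambetta poses, keeping expensive QPUs and GPUs busy at once, can be sketched as a simple greedy list scheduler. This is a toy illustration, not any real IBM scheduler or API; the task names, resource counts, and durations are all invented for the example.

```python
# Hypothetical sketch of QPU/GPU co-scheduling in a quantum-centric workflow:
# each task needs one resource type; assign it to the unit that frees up earliest
# so that no expensive unit sits idle longer than necessary.
import heapq

def schedule(tasks, resources):
    """Greedy list scheduler. tasks: (name, resource_type, duration);
    resources: {resource_type: unit_count}. Returns (name, unit, start, end)."""
    # One min-heap of (time_free, unit_name) per resource type
    pools = {rtype: [(0.0, f"{rtype}-{i}") for i in range(n)]
             for rtype, n in resources.items()}
    for pool in pools.values():
        heapq.heapify(pool)

    plan = []
    for name, rtype, duration in tasks:
        free_at, unit = heapq.heappop(pools[rtype])   # earliest-free unit
        plan.append((name, unit, free_at, free_at + duration))
        heapq.heappush(pools[rtype], (free_at + duration, unit))
    return plan

# A toy hybrid workflow: circuits run on one QPU, pre/post-processing on two GPUs
tasks = [
    ("preprocess", "gpu", 2.0),
    ("circuit-batch-1", "qpu", 5.0),
    ("circuit-batch-2", "qpu", 5.0),
    ("postprocess", "gpu", 3.0),
]
for step in schedule(tasks, {"qpu": 1, "gpu": 2}):
    print(step)
```

Even in this toy version, the GPU work overlaps the QPU work rather than waiting for it, which is the point of the architecture: neither costly resource idles while the other runs.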
Daniel Newman:
Well, you know, we believe in that, we told you. We're already building tools, and I'm looking forward to seeing some of yours. We may just have to show them off for you before you go. So, a little bit about post-quantum encryption, Q-Day. There seems to be some growing narrative about the end of RSA, that, you know, our Bitcoin is not safe. A lot of people have talked about the end of this decade, but what's your viewpoint there? What's the message out there to the enterprises, too?
Jay Gambetta:
We refer to it as either PQC, post-quantum cryptography, or quantum safe. The good news is, it's a solved technical problem: how to switch from encryption that quantum computers will break to encryption that quantum computers can't break. A solved technical problem. That doesn't mean it's an easy problem to implement. We've actually discovered algorithms that we don't think a quantum computer will ever break. It's related to this really nice mixture of noise and complexity that we have strong belief won't be broken by a quantum computer. So we have a solution. But now you ask the question of when you need to switch. My answer is you should be switched, or at least understanding that switch, already. If you want a number for when we'll have cryptographically relevant quantum computers, my estimate, and we've made it clear from the time we put our roadmap out, is that you need a quantum computer about the size of our Blue Jay system. But you're right, in the last couple of weeks there have been more and more papers on bringing down the overhead of breaking these algorithms. And some of these papers are suggesting the required systems could be much smaller than our Blue Jay, around 10,000 qubits, so closer to the Starling system might be the right order. I personally think it will be beyond those types of systems. But as we talked about, we have a roadmap where we predict the size of the system. The unknown variable for when it really is going to matter is how fast those overheads will come down, now that people are discovering better ways of running those algorithms. And that is coming down really, really fast, so I think the answer should be now. And for things like our mainframe, we've already had quantum safe for a long time. Actually, IBM has been part of the creation of these quantum-safe algorithms, and we've been embracing quantum safe since 2016.
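The "understand your switch" step Gambetta urges usually starts with a crypto inventory: which systems still depend on algorithms a large quantum computer could break. Here is a minimal, illustrative sketch; the inventory entries are invented, while the classification follows the standard view (Shor's algorithm breaks RSA and elliptic-curve schemes; NIST's ML-KEM and ML-DSA are the post-quantum replacements).

```python
# Hypothetical crypto-inventory audit: flag systems using quantum-vulnerable
# algorithms for migration. The system names below are made up for illustration.

# RSA, ECDSA, ECDH, and finite-field DH fall to Shor's algorithm at scale;
# NIST-standardized ML-KEM / ML-DSA and symmetric AES-256 are considered safe.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}
QUANTUM_SAFE = {"ML-KEM-768", "ML-DSA-65", "AES-256", "SHA-384"}

def audit(inventory: dict[str, str]) -> dict[str, list[str]]:
    """Split systems into migrate-now, already-safe, and needs-review buckets."""
    report = {"migrate": [], "safe": [], "unknown": []}
    for system, algorithm in inventory.items():
        if algorithm in QUANTUM_VULNERABLE:
            report["migrate"].append(system)
        elif algorithm in QUANTUM_SAFE:
            report["safe"].append(system)
        else:
            report["unknown"].append(system)
    return report

# Hypothetical enterprise inventory
inventory = {
    "vpn-gateway": "RSA-2048",
    "code-signing": "ECDSA-P256",
    "mainframe-tls": "ML-KEM-768",
    "data-at-rest": "AES-256",
}
print(audit(inventory))
```

A real migration assessment also has to weigh harvest-now-decrypt-later risk, which is why long-lived data gets priority even before cryptographically relevant quantum computers exist.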
Daniel Newman:
So nobody should take the approach of why do today what you could do tomorrow?
Jay Gambetta:
No, you should be doing it now. You should look and understand, because if some of it's going to be upgraded downstream, you might as well be educated on when that will happen and make sure you're capitalizing on the downstream work. But the things that can be upgraded already, you should be doing.
Daniel Newman:
Seems worthwhile for sure. So, wrapping up: quantum advantage for all, right? When you talk about quantum is real, everyone's version of quantum is real is their quantum advantage. What are you advising the enterprise leaders, what I deeply care about, industry, on getting meaningful traction, ROI? What's the path, what should they be thinking about, where should they start?
Jay Gambetta:
So they should start by understanding their risk with respect to quantum safe. I think that is immediate, as we discussed. I think the systems that we're building by the end of this decade are going to have meaningful ROI. And so I often say to the enterprises that understand computation, be it an enterprise that does a lot of simulation, maybe they solve a lot of CFD-type problems, a lot of diffusion simulations, or they do a lot of simulation of quantum materials because they want to build some type of new material, things like that: they should be looking now. And then as we go forward over the next few years, that'll expand more and more, going from the chemistry to the optimization to the partial differential equations, and I think that is going to be the rollout. So I think it's going to be first driven by verticals, but my advice to them is to start understanding. Often people think of AI and quantum as competing. Why can't AI just call across to quantum? My hope is that, as they do their AI innovation and start using these tools, quantum becomes an effective backend that just accelerates them through their computation.
Daniel Newman:
Well, they certainly are particularly good at solving different things. And if they work together, they can solve some really hard things really efficiently, yeah?
Jay Gambetta:
So I'll push a little bit on that. Where we're excited is with quantum algorithms for AI. If we're doing simple language or simple classification, like, is this object a bird or something, I don't think quantum's ever going to matter for that. But what we can prove is that if we have data that is very correlated, such that you would require near-infinite data to classify it on a classical computer, we can actually classify it on a quantum computer. So I actually think it's more natural to think AI will go across, and then we'll have different accelerators for understanding different parts of the problem.
Daniel Newman:
By the way, I think we're agreeing. Because what I was getting at is, when there's not a lot of data, quantum is particularly good at that. And when there's more data, AI can be. And the bottom line, though, is they're symbiotic. Yes. And IBM has the portfolio to do both. And your world is both. Yeah, I agree completely. Jay Gambetta, thanks so much for spending some time. Great to be with you. And thank you, everybody, for being part of this Six Five. We are on the road here in Yorktown at IBM Research. Great conversation. It was a great day here. Hope you all enjoyed it. Tune in, subscribe, be part of our community. I've got to go. See you all later.
