AI Is Writing the Code Now: Cisco’s Vision for the Agent Era - Six Five On The Road
Enterprises are under pressure to move beyond AI pilots and deliver real outcomes. This conversation explores the infrastructure, data, and operational shifts required to scale AI effectively across environments.
AI is moving from experimentation to enterprise-scale deployment, but the biggest constraint is no longer the model. It is the system.
At NVIDIA GTC, Daniel Newman speaks with Cisco’s Jeetu Patel, President and Chief Product Officer, about what it actually takes to operationalize AI across the enterprise.
This is not a conversation about potential. It is about what is already breaking, what is accelerating, and what leaders are underestimating.
As organizations move into production, the constraint is no longer the model; it is the system. Infrastructure must handle heterogeneous compute, data pipelines must withstand continuous agent-driven workloads, and trust has to be built into every layer. At the same time, AI is shifting from a tool to a workforce layer, where agents operate alongside humans and reshape how software is built, decisions are made, and value is delivered.
The advantage will not go to those experimenting faster, but to those building systems that can absorb rapid AI advancement and consistently deliver outcomes.
Key Takeaways
🔹 AI is transitioning from pilots to production, with 2026 marking a major inflection point in enterprise adoption
🔹 Infrastructure is evolving into a heterogeneous model, where GPUs, CPUs, and accelerators must work together
🔹 Trust and security are emerging as the primary blockers to enterprise-scale AI deployment
🔹 AI-generated code is already reshaping software development, with large portions of future products expected to be built by AI
🔹 Agents are redefining work itself, shifting AI from a tool to an operational collaborator inside organizations
Watch the full conversation at sixfivemedia.com and subscribe to our YouTube channel for more insights from NVIDIA GTC 2026.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Daniel Newman:
Hey everyone, the Six Five is On The Road here in beautiful San Jose, California. It is GTC 2026 week and the streets of San Jose are crowded. I have to say, every single year I come back here for GTC. Since the big 2022 ChatGPT inflection, this event has gotten bigger and bigger and bigger, and it's great to be here. It's great to have our team here. There's gonna be a lot of coverage coming in from the Six Five, but it's good for me to have the chance to start off with a person that has joined me many times here on the Six Five, one of my favorite people to sit down and talk to. I have Jeetu Patel, President and Chief Product Officer. That almost slipped from me, but he's the man at Cisco.
How you doing, Jeetu? Good to see you.
Jeetu Patel:
Good to see you.
Daniel Newman:
Gosh, you know, those are where I always fall flat. You know, they give these jobs to anyone these days. What a fun time to be a product guy.
Jeetu Patel:
It is a fun time.
Daniel Newman:
You and I were talking off air about this a little bit. I'm telling Jeetu, I'm like, hey, Jeetu, I'm vibe coding. I'm vibe coding stuff, and then I'm beating up my team. Because it's good stuff, but it's like I'm vibe coding a CRO strategist, like a chief revenue officer. And it's reviewing my business, and I'm piping and plumbing it up to talk to Salesforce and NetSuite. And it's coming back, and it's giving me this great info. And I'm like, why aren't you building anything yet? And I'm talking to everybody. But I mean, in your world, when you came from a place where you were so developer-dependent, so engineer-dependent, and you still are, and you still love that, but you've given them 10x, 100x, 1,000x scaling, how much fun are you having right now?
Jeetu Patel:
For a while there, it sounded like a few of us who were really going to try to push for this were crazy, because a lot of people were like, well, the moment you go further down in the stack, it doesn't work as well. If you want to do a website, that's great, but when you want to do memory management in the infrastructure, it's harder. But we have our first product now that is 100% written with AI. We were the first design partner of Codex, and AI Defense is 100% written with AI. Of course, you have to have code reviews and all that being done by humans. By the end of the year, our projection is we'll have half a dozen products written with AI 100%. And by the end of next year, '27, our thinking is that 60% to 70% of our entire portfolio of products will be 100% written by AI. So then the question is, well, what do the engineers do? I think when you think about engineering, only 30%, 35% of the time is actually spent writing code. The other time is identifying what code to write, what the architecture is, what kind of problems to solve. And I think that's going to be even more important.
Daniel Newman:
Well, it's really about moving us up the stack.
Jeetu Patel:
That's right. That's exactly right. It's a great way to do that.
Daniel Newman:
It gives us a hell of a lot more time to spend. It's like, just because my emails can now be drafted for me, or even blogs and social posts I used to create, now it can do them. You know, I did it this morning. I'm like, all right, I've been on TV 10 times talking about GTC. We did our Six Five weekly pod, talked about it. I've been tweeting about it. I'm like, write me a quick preamble, Twitter thread. I went on Opus 4.6, and it wrote this great thread for me. And it's like, is that cheating? No, that's not cheating. That is actually taking my voice, and then I, of course, train the memory to know how I think and how I talk. It's a companion. But what I'm saying is, instead of me spending an hour of my day doing that, it's a minute. Now, what do I take the other 59 minutes back and do? By the way, so much of this was at your AI Summit. We heard so much of this because you brought it. By the way, one thing, one question for you. Which model do you use the most? My favorite for narrative-driven is Opus 4.6. I use 5.4. I like Gemini for anything that's web research. So Google has that kind of advantage. And by the way, I love Perplexity, because Perplexity does the model distribution for me. So I've been playing a ton with Perplexity Computer. I don't know if you've played with this thing yet. It's super fun.
Jeetu Patel:
Yep, yep, yep …
Daniel Newman:
Because I like that it actually, because with each model, we've sort of started to see a little bit of commoditization in people's heads, but each does certain things well. And so as humans, it's not necessarily in our best interest to have to figure that out, or even to have the aptitude to, but when you have something that can actually do that for you.
Jeetu Patel:
So that's your use case for Perplexity, so you don't have to go to each of them individually, does that make sense?
Daniel Newman:
Well, it's fun, because when you watch it work, you can actually see it say, I'm using this skill, and I'm going to apply this model. And I'm watching what skill and which model it's applying, so I can see where it's finding the best model for each particular task. But it is a lot of fun. And by the way, we were getting a preview of that, because literally, as Summit was happening, your AI Summit, great event, by the way, huge event. Next year, you've got to check it out.
Jeetu Patel:
It's available online for people that want to watch it.
Daniel Newman:
Yeah, I mean, you had a walkthrough with some of the biggest names in tech already.
Jeetu Patel:
We tested everyone's stamina for sitting through fireside chats. We had like 12 hours of fireside chats and no one left. But Jensen was there. Sam Altman, Marc Andreessen in the middle. If you had me up there, it would have been a perfect day.
Daniel Newman:
You were there! See what I did there? I wasn't there. But we were previewing some things about Codex, some of the things you just shared. And by the way, it's really funny, because right around this Blackwell moment, sort of in that window between Davos and MWC, which, by the way, I saw you at all these things. We did.
Jeetu Patel:
We've actually been in that. We've been everywhere together.
Daniel Newman: We've been everywhere. I call it the Blackwell moment. So what happened was the Blackwell moment. So like a year ago, there was this weird debate, Jeetu. First of all, there was the AI bubble debate, which we can talk about. But then there was the AI-is-so-good-it's-replacing-every-industry debate that suddenly happened. And I think this is what's happened with this Blackwell inflection. Codex's next generation, Opus and Claude's next iteration, Perplexity Computer. These things are all trained on Blackwell. This is the first era where we're seeing both the training on Blackwell and then the inference on these next-generation chips. It's pretty incredible.
Jeetu Patel:
A big shift, I think, has happened. If you think about this being the second major phase, the first one was ChatGPT and that inflection point. What is the second ChatGPT moment? I think it's an open question, and the way I think about it is agents, primarily. And the way to think about this is that the enterprise is where it's actually really taken off in a meaningful way. The big mental model shift that we had to make at Cisco was that it wasn't just a tool, it was an augmentation, a set of agents that actually got entered into the workforce that you as an engineer were now managing. And so a scrum team in the future will no longer look like eight people, it'll look like two people and six agents. And you will have four scrum teams for every scrum team you have right now. And so then the question becomes, are you going to have a reduction of capacity? Not if you have no reduction of ideas. But if you do have a reduction of ideas, then you will have a reduction of capacity. That's the way it will occur.
Daniel Newman:
Oh, yeah. And it's happening really quick. But that is another big use case for agents. It's also a big inflection for infrastructure. Yes. Massive. Because, like, the pre-training era was GPUs. GPU, GPU, GPU. You know, then we kind of got to this inference era and agent era. Inference was like accelerator plus GPU, and now you're like GPU plus accelerator plus CPU is back. Like, the CPU is kind of like…
Jeetu Patel:
I was actually, you know, Lip-Bu and I were talking, we had gone to a game this weekend, and I asked him, how's the business going? He said, the CPU is on fire.
Daniel Newman:
We're going to see a renaissance of that. We're going to see the historic x86 architectures, the Arm architectures, you're going to see RISC-V pop up. And there's going to be a ton of investment into this, because the orchestration is better on CPU. What I'm saying is that we're finding, and I think this is what we're going to hear a lot about at GTC, is this heterogeneous era. Because you've got the heterogeneous compute era. You've got networking, which is going to be a huge opportunity for Cisco. We're going to have security, we're going to have memory. We've got this constraint there. We've got energy. By the way, you've talked about that kind of pervasively. You've talked about these various bottlenecks and what's kind of, you know, holding the AI up. And this is an infrastructure show, but Cisco's very focused on enterprises. You work very closely with the world's enterprises. What are you seeing, from that story you were telling a year ago, I remember you telling it consistently, to where we are today, in helping these customers move from these pilots and early iterations of AI in enterprises to going full scale and getting benefits?
Jeetu Patel:
I think 2026 is the year of, you know, going into production. The pilots were very interesting in '25, but there was a capabilities overhang, where there was far more technology available than was being absorbed by the enterprise. I think '26 is where you actually see an exponential curve that is going to look like a vertical line, and the absorption will happen at a much faster pace than we had anticipated. And the reason for that is these technologies, what I tell people on my team is, you have to assume that whatever deficiencies you see in these models or these agents right now, by the time you actually get yourself prepared to take advantage of them, they will be solved. And so you have to almost fast-forward three to six months and say, what is that model and that agent going to look like? What is it going to be able to do? And plan for that world rather than planning for the world of today.
Daniel Newman:
Yeah, I love that you say that, because that's one of the things where I think a lot of leaders are getting a little hung up. I think I was hung up three months ago, because they said AI can't do X, Y, or Z, or AI hallucinates. I remember reading that article. And then we literally went from one generation to the next on the training, the compute, and the architecture. And now, I mean, it's not perfect. But the quality of the code, the quality of the content it creates, and by the way, its ability to self-regulate, meaning that you can say, hey, fact-check this, double-check across these sources, and come back to me. And it can do a list of QA.
Jeetu Patel:
I think it's hard to debate the exponentiality of this. The thing that I think we have to make sure we switch our focus to is trust. Trust is going to be the biggest impediment here. We have to make sure that people trust these systems. And right now, the way that people are using some of these agents is very brute force. If you think about the fact that Mac minis have been sold out, that says, I don't trust this enough to put it in my main production system, so I'm going to go out and have a second system that's going to be for agents. And over time, you're going to go from hardware to a virtual private server, and then you're going to need to make sure that you have guardrails that are dynamic. Agent security is going to be a huge industry moving forward, and we will have to make sure, and it's not unlike the past, where security was something only the CISO cared about. Every user cares about security right now, because something going sideways with an agent would actually have a meaningful consequence for you as you're using it. So I think the education around security is actually going at a much faster pace than I would have thought, and that's an area where there's a tremendous amount of demand.
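The guardrail layer Patel describes sits between an agent and the systems it touches, approving or blocking each action before it executes. As a rough illustration only, not Cisco's AI Defense or any shipping product, here is a minimal static allow-list sketch in Python; real agent-security tooling enforces dynamic policy, and every name below is hypothetical:

```python
# Minimal sketch of a guardrail interposed between an agent and its tools.
# A static allow-list plus deny-patterns; dynamic guardrails would evaluate
# policy per request. All tool names and patterns here are illustrative.

BLOCKED_PATTERNS = ("DROP TABLE", "DELETE FROM", "rm -rf")
ALLOWED_TOOLS = {"search", "summarize", "read_file"}

def guarded_call(tool, payload, execute):
    """Run an agent's tool call only if it passes the guardrail checks."""
    if tool not in ALLOWED_TOOLS:
        return f"BLOCKED: tool '{tool}' not in allow-list"
    if any(p in payload for p in BLOCKED_PATTERNS):
        return "BLOCKED: payload matched a deny pattern"
    return execute(tool, payload)  # only reached when both checks pass

# A stubbed executor keeps the sketch self-contained:
print(guarded_call("shell", "ls", lambda t, p: "ok"))
print(guarded_call("search", "DROP TABLE users", lambda t, p: "ok"))
print(guarded_call("summarize", "q3 report", lambda t, p: "ok"))
```

The design point is the interception seam itself: the agent never calls a tool directly, so policy can be swapped or tightened without touching the agent.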
Daniel Newman:
You mean you don't want it deleting your prod database? Exactly. It's like I said, there's only two schools. There's the people that are so smart they know they need to keep it off their prod network, and then there's the people that are so dumb that they keep it off their prod network because they don't know how to actually connect it.
Jeetu Patel:
You have 150 million developers on GitHub right now. I think you will have, in the next two years, three to four billion developers in the world.
Daniel Newman:
It looks like everyone in the world is going to have an app soon and none of them will have any users.
Jeetu Patel:
The agents are the users.
Daniel Newman:
One of the things you've also been really prodding at for a long time is AI ROI. I think you got up at the Summit and made a comment about that. You had a slide about the ROI. Where's that at in your mind? What does it take for an enterprise to start realizing returns?
Jeetu Patel:
Making sure that you move from vanity projects and pilots to actual production. And coding was clearly the first one that was taken on. I think if you look at customer support, that's a huge one. I think legal is going to be big for contract reviews. You think about it, they're ready and they're going at it pretty fast. I think healthcare will be one where you will actually continue to see a fair amount of momentum pick up. Education is one where you're starting to see it baked into the fabric of how the education system is transforming. So I feel like you will start to have a fair amount of momentum. And the thing that was very interesting was the sequence with which these use cases came about. Because coding is the first one, it changes every other use case. Because AI is building AI. And as a result of that, there's no kind of delay mechanism. Like, as the first design partner for Codex, from the time that we tell the model provider that something's missing to the time that it actually gets fixed, it's a very compressed amount of time. And so I think you have to keep that in mind, that that shifts your entire approach to how you think about work. And, you know, Kevin Weil said something really interesting at the AI Summit. He said, whenever I go into a meeting, if I've not given the agent a task, I feel like I lost an hour. Because what the agent should be doing is working for me 24 by 7.
Daniel Newman:
Absolutely. You got that rolling?
Jeetu Patel:
I'm not at the point where every minute that I have, the agent's working for me. But we're getting there.
Daniel Newman:
I have a lot of tasks that are rolling. I have a lot of stuff like discovery tasks, like just constantly, pervasively.
Jeetu Patel:
I'm just not organized enough to remember to keep giving the agent a task. Well, but if you do it right…
Daniel Newman:
I'm doing a Substack experiment right now where I actually asked it to go research and build a Substack. Autonomously. And then I had it build a social plan, a content plan. And every time I go back, there's just new stuff there. Now, I'm still doing the human-in-the-loop review. I haven't gotten to the point yet where I'm willing to let it log in and post stuff.
Jeetu Patel:
Research is a great use case, where there's enough that you want to learn about, if you just give it all the tasks. My biggest bottleneck in research is the actual consumption rate, how fast I can go through and consume the information.
Daniel Newman:
Yeah, but it's also just self-learning. You train it to continue training itself. Meaning, now compare the output of this last research you wrote against the last thing I published. Now continuously train and learn to get better and make it sound more like me. And it's really good. By the way, your legal case is a great example. I mean, I've been looking at some contract reviews, and literally going through redlines is hard to look at. I literally say, go create a summary report for me. Here's the last version. Here's the one I got back. Now tell me which issues are the big ones. And by the way, it'll create these nice color-coded summaries. It just opens up. That's what I'm using. And it's just perfect. And it'll kind of be like, here's the two things to focus on. Because there's 90 markups in this thing, but these are the two things where you are still very far apart if you want to get something done.
Jeetu Patel:
I think work is fundamentally changing. And what I do think the tech community will have to do more of is, we will have to work on making sure that the reskilling happens for people on a constant basis. This is not something where you can just say, yeah, it's going to be fine, people are going to be fine, it's just like everything else. There are no priors to AI. It's too fast. And I do think there's going to be disruption that we're going to have to collectively make sure we actually think through. You can't just ignore that piece.
Daniel Newman:
Elon Musk came out and said something along the lines of, people won't have to work if they don't want to, because AI is going to make society so rich, which is one school of thought. And I think last week on CNBC, Bill McDermott came on and said something like the unemployment rate of new college grads will be 30% in the near future. And he's a super bull, and has been one of the guys that has kind of always resisted the idea that AI would be disruptive. And I do think there's going to be a lot of jobs created, just like every industrial revolution in the past. The problem is we've never seen tech do so much in such a compressed timetable. I think the timetable is what actually is disorienting. So we're going to end up with this, you know, I call it digestion. It's going to be an indigestion period. Things happen so fast, and now companies are trying to figure out, well, if I don't need people to do these things…
Jeetu Patel:
You can make long-term structurally bad decisions. Like, for example, not hiring early-in-career college grads, I think, is a strategic error, because at some point in time, you want to make sure that fresh blood comes in to challenge the assumptions you've had for many, many years. And I just don't think we'll do that. So one of the areas that I'm very excited about is continuing to hire people that are early-in-career.
Daniel Newman:
And they're also native to the tech. A lot of them, or all of them, like you said, collaborating with it. You know, I have a daughter in graduate school, a daughter in college, and they're using this stuff every day. To them it's native. Meaning, like, some of us, you're asking a developer that's been doing it for 30 years to go out and make a change. Now, if you're into computer science in college, hopefully, hopefully, they're actually teaching the students to code with this stuff.
Jeetu Patel:
Because if they're still teaching them the old way, it's going to be… I think the term teaching will also go away, because I think the self-learning is going to be so exponential. I mean, you just… We do have to read.
Daniel Newman:
I did. I'm a full stack engineer. There you go.
Jeetu Patel:
God help us all.
Daniel Newman:
Hey, you too. So give me a contrast over a year. You and I could probably go on a journey, have a little fun, document our conversations over a year. In the last 12 months, what would you consider now a best-in-class AI operation, and how does that contrast with even a year ago?
Jeetu Patel:
I think the big structural shift that's happened is coding has become real, which was the first demonstrable use case showing this is going to have a meaningful impact on every way that we actually do work. That was one huge change. The second big change that's happened is the shift of people not thinking about this as a tool but thinking about this as a coworker. And I think that is something people have not fully grokked yet. This is not just a tool. You actually have to securely delegate work. But once you've delegated the work, it's going to just do the task. It's going to do the job, and it's going to come back to you. And that, in my mind, has a profound set of implications for how we work, how we configure work, how we make sure that we have our organizations built out. We've moved from a waterfall development methodology to Agile, and from Agile now to spec-driven development, where your biggest skill is building a markdown file that actually has enough context and guardrails that you can provide to the agent, and then just making sure that the agent works on your behalf. That is a fundamental shift right now.
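In spec-driven development as described here, the engineer's main artifact is the spec document rather than the code. As a rough, hypothetical sketch of what assembling such a markdown spec might look like, with every field name and section purely illustrative and not any particular tool's format:

```python
# Illustrative sketch of spec-driven development: the engineer authors a
# structured spec (context + guardrails + acceptance criteria) rendered as
# markdown for an agent to work from. All names here are hypothetical.

def render_spec(title, context, guardrails, acceptance):
    """Render a markdown spec file an agent could consume."""
    lines = [f"# {title}", "", "## Context", context]
    lines += ["", "## Guardrails"]
    lines += [f"- {g}" for g in guardrails]          # hard constraints
    lines += ["", "## Acceptance criteria"]
    lines += [f"- [ ] {a}" for a in acceptance]      # checkable outcomes
    return "\n".join(lines)

spec = render_spec(
    title="Rate limiter for the public API",
    context="Requests per API key must be capped at 100 per minute.",
    guardrails=[
        "Do not modify the authentication module.",
        "All new code requires unit tests and a human code review.",
    ],
    acceptance=["429 returned above the cap", "Existing tests still pass"],
)
print(spec)
```

The point of the pattern is that the guardrails and acceptance criteria, not the code, become the reviewed artifact; the agent's output is then checked against them.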
Daniel Newman:
Do you metaprompt? Do you do metaprompting?
Jeetu Patel:
I do. Actually, I started metaprompting about a year ago. Someone told me about it. And it was fascinating to see how it actually worked. It's great, isn't it? It's the best thing. Image generation is where I started metaprompting, because it wasn't giving me the right images. And then I'm like, give me the prompt to go out and create an image for this.
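The metaprompting pattern described here, asking the model to write the prompt before asking for the artifact, is just a two-step call. A minimal sketch, where `ask_model` is a hypothetical stand-in for any chat-completion call rather than a real API:

```python
# Metaprompting sketch: step 1 asks the model for a better prompt,
# step 2 runs that improved prompt. `ask_model` is a hypothetical
# placeholder for whatever chat-completion function you use.

def build_metaprompt(goal):
    """Step 1: request a prompt, not the final output."""
    return (
        "You are an expert prompt engineer. Write a detailed prompt that "
        f"I can give to an image model to achieve this goal: {goal}. "
        "Include style, composition, and constraints. Return only the prompt."
    )

def metaprompt_workflow(goal, ask_model):
    improved_prompt = ask_model(build_metaprompt(goal))  # step 1: get a prompt
    return ask_model(improved_prompt)                    # step 2: use it

# Stubbed model call so the flow runs end to end without any API:
result = metaprompt_workflow(
    "a diagram of agents in a scrum team",
    ask_model=lambda p: f"<model output for: {p[:40]}>",
)
print(result)
```

The same two-step shape works for text or code, not just images: the model composes instructions in its own preferred phrasing, which tends to be more detailed than what a person writes by hand.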
Daniel Newman:
By the way, it was life-changing when I figured that out. I realized in that moment, one, the whole idea that prompt engineer was a job of the future was not a thing. But two, I just realized how much better an agent can function if you actually let the computer give it its proper instructions in the computer's best language. Yeah. So we're here at GTC. Maybe just a moment. By the way, I love that you mentioned Grok, because you made that Grok deal. And I'm expecting that to be a big moment here. So you fit that in nicely.
Jeetu Patel:
Sure.
Daniel Newman:
The premise of the name was actually the same. I was an early investor in that company.
Jeetu Patel:
We were too. Cisco was. Yeah. We both did okay. We did alright. And I love Jonathan Barnes. He's a good guy.
Daniel Newman:
He's been on the show many times. I know. Yeah, Jonathan's been on many times. Same place, at Davos. Let's finish up and talk about what Cisco's going to talk about here at GTC.
Jeetu Patel:
So there are three or four areas that we are going to be doing a lot with at GTC. One, we're making announcements on the AI networking side. We announced our G300 chip in Amsterdam. I think you were in Amsterdam, right? I was in Amsterdam. You were not in Amsterdam. Same country, we just weren't together. So we announced our G300 chip over there, and what we are doing is providing choice to customers. So we had announced support for the NVIDIA Spectrum-4 chipset. That will be one area that we'll have announcements in. The second one is around the secure AI factory. We'll make sure that Hypershield, our hybrid mesh firewall, now has BlueField support, the DPUs from NVIDIA. And we'll also make sure that, as NVIDIA provides agent-based technologies, AI Defense can go out and secure those. I don't want to reveal the name until they actually come out with it, but we'll make sure that we're going to be supporting that. The third area is around the ecosystem. We're going to make sure that we continue partnering with the ecosystem in different areas. And then the last one, which is interesting, is that token generation should not just be limited to data centers. It should actually be happening on the edge. We announced Unified Edge, and we'll be using the NVIDIA Spectre 4500 chip in our Unified Edge as well. So those are the different categories of announcements. There are four key announcements: one around networking, then security, edge, and ecosystem.
Daniel Newman:
And the big edge moment is coming. I mean, if you think about things like the Nemo Claw, the OpenClaw. By the way, though, these are all little edges. What's interesting on the branch side of the house is…
Jeetu Patel:
It's simple to go out and configure these things, but it's still monitored centrally. And so the way that we've built Unified Edge is you can actually have just one plug-and-play system at the edge, but it's fully monitored centrally. That was a very important design goal that we had as we were building our edge platforms.
Daniel Newman:
I expect to hear a lot more about that. Jeetu, have a great GTC week. It's always a big one here. Congratulations on all the success.
Jeetu Patel:
Thank you, thank you very much.
Daniel Newman:
Everybody, you heard it here. Subscribe, be part of the Six Five. We are on the road at GTC 2026 in San Jose. We appreciate you being part of our community. Stay in touch, stay tuned in. We'll see you all later, bye-bye.
