
Google Cloud Next 2026: The Signals That Matter for Enterprise AI


Patrick Moorhead and Daniel Newman recap Google Cloud Next 2026 live from Las Vegas, breaking down the week's most consequential signals for enterprise AI: the TPU-8 training and inference split, Google's full-stack co-design argument, the data layer's return to the center of the agentic conversation, the Wiz acquisition's integration into the agent security platform, and Google Distributed Cloud's maturation as a sovereign AI deployment path for regulated industries and governments.

Google Cloud Next 2026 did not arrive with a single headline announcement. It arrived with a coordinated shift across silicon, networking, data, security, and sovereign infrastructure, each layer reinforcing the same argument: enterprise AI is moving from proof of concept into production, and the companies that have not committed to a production-grade stack are already behind.

From Las Vegas at Google Cloud Next 2026, Patrick Moorhead and Daniel Newman break down the biggest takeaways from the week.

They cover Google's latest TPU-8 architecture, and why the company is separating hardware for training AI models from hardware used to run them. The conversation also looks at how Google is designing its systems end to end, and what that means for companies choosing between cloud providers. They also discuss Google's renewed focus on managing data, the impact of its Wiz acquisition on how security is handled, and how Google Distributed Cloud is evolving into a serious option for organizations with strict data and compliance requirements.

Key Takeaways:

  • Google's TPU-8 generation splits into two distinct architectures for the first time. TPU-8T is optimized for training scale and TPU-8I is built for inference economics, reflecting the reality that the training-to-inference ratio is shifting hard toward inference as agentic applications proliferate.
  • Google's full-stack co-design approach is turning into a competitive edge. The Prism benchmarking tool extends that argument by letting enterprises validate performance against their own workloads rather than relying on vendor-run benchmarks.
  • Google's data layer is back at the center of the conversation. Knowledge Catalog, Smart Storage, and the unified data layer enable agents to reason over enterprise data in place, without migration pipelines, a prerequisite for any production-grade agentic deployment.
  • The Wiz acquisition marks a shift in how Google addresses security at agent scale. Integrating Wiz directly into the agent platform moves security from a bolt-on to a structural component, covering identity, governance, and compliance for workflows that span clouds and cross borders.
  • Google Distributed Cloud has moved beyond marketing around data sovereignty. Real on-premises infrastructure from known hardware partners, running Gemini with confidential external key management, gives regulated organizations and governments a credible path to frontier AI without sacrificing data isolation or operational control.

The week's collective signal is that Google is closing the gap between infrastructure ambition and enterprise deployment reality. The competitive pressure on AWS and Microsoft to respond at the stack level, not just the feature level, is growing.

Watch now and subscribe to Six Five Media for analyst-led coverage from Google Cloud Next 2026.

Disclaimer: Six Five Media is a media and analyst firm. All statements, views, and opinions expressed in this program are those of the hosts and guests and do not represent the views of any companies discussed. This content is for informational purposes only and should not be construed as investment advice.

Transcript

DANIEL NEWMAN:
They are the gold standard at this point of companies that have built their own full stack from core to enterprise and application.


PATRICK MOORHEAD: 

The Six Five is on the road here at Google Cloud Next 2026 in my second home, Las Vegas, home of tech shows. Daniel, it's been a great show so far. Yeah, but not all tech shows are created equal. That's right. That's right. This one was definitely unique. A lot of announcements. But the key message, I think, was a single unified platform with all the pieces of the layer cake, all the way from infrastructure to building your own agents and pretty much everything in between. And I think I heard TK talk about that multiple times.

DANIEL NEWMAN: 

Yeah, there was a lot of talk about the full stack. And that's been something that's differentiated Google for some time. Remember all my rocket ships along the way. But I also think there was a lot of focus on, and I think Thomas Kurian said it when he went on stage in the big keynote, there was a big focus on POC to production to scale. This is the inflection point. This is the year that all of the side projects and the "could this possibly work?" experiments turn real. And to your point, the infrastructure, the cross networking, the tooling, the software and development environments themselves have made this something that can be built and deployed and utilized to deliver value in real time.

PATRICK MOORHEAD: 

So our experience started the night before, which was all about infrastructure, a very super-secret event. We really didn't know what we were going to get into. It was at the F1 track, and it ended up being a presentation of the new TPUs, basically the hypercomputer and a lot of the details around that. And as we saw, not just one TPU for compute, but two TPUs, one for training, one for inference, with Axion processors, also homegrown, buttressing all of them, and a new networking stack.

DANIEL NEWMAN: 

Yeah, it's interesting. The debate about heterogeneous computing has come to an end. I think everyone agrees about heterogeneous now. But there have been some questions around whether it's one big monolithic GPU or TPU or XPU, whatever you want to call it. So it was interesting to me to see them actually split the compute tiles and have one for training and one for inference. I mean, it's not actually all that surprising, but that has been kind of a different approach than what a lot of the industry has taken. But obviously they are optimizing for two major things. You have these massive training workloads that require a very specific type of compute. And they're kind of saying, look, a lot of the value is going to come in inference. You need that memory. You need that low latency. And it isn't the same. So it was interesting to kind of see that spin back up after it had been kind of out of the conversation for a little while.

PATRICK MOORHEAD: 

Yeah, it is funny. There were some people who did have special training and special inference silicon. It made sense. And then it came together. And now it looks to be bifurcating again. And who knows where the market goes; it might come together again.

DANIEL NEWMAN: 

You always talk about the accordion, cloud on-prem, cloud on-prem.

PATRICK MOORHEAD: 

It's the same. On the performance figures, you know, I get a lot of questions on social media, hey, how does this compare to NVIDIA? So first of all, Google made big announcements about NVL72. They've been working with NVIDIA for a decade on things. And invariably, it's the, well, how does it perform? And only Google knows at this point, but I'm very inspired by something that Google had cooked up, and I think they open sourced it, called Prism, where it's a way that enterprises can come in and run their own code and run their own workloads to see which Google path they want to take. And I think that's a mature way to look at things, and it makes it easier for smaller companies who don't have the resources to go out and do some of that.

DANIEL NEWMAN: 

You know what I'm going to say, right? We don't have enough compute. Yes. You know, all the people that keep saying, oh, is this the end of NVIDIA because of a new TPU or, you know, Google's challenge. It's like they have a deep integration partnership and the two companies are working very closely together. But of course Google, which by the way has been doing this for over 15 years, has also never wanted to become fully reliant on outside merchant silicon. They've been building for their application-specific workloads. They've been doing it. They probably are, you know, Pat, I think you would agree with me, they are the gold standard at this point of companies that have built their own full stack, you know, from core to enterprise and application. But this is not an either-or thing. I just think it's a silly debate. So I just want to put that out there. Like, it is boring.

PATRICK MOORHEAD: 

It gets headlines and tweets and rocket ships, but it's really not the big story.

DANIEL NEWMAN: 

No rocket ships, man. But not focusing on how they compete with other brands, just within itself, it's an impressive generation-to-generation improvement that they were able to make in a single year.

PATRICK MOORHEAD: 

It is. I was at the original Google I/O where they announced the TPU details for the first time. I was still in high school. I don't think so. So Daniel, a lot of amazing infrastructure stuff. Did anything else stand out for you? On infrastructure or in general? No, just in general. Other announcements.

DANIEL NEWMAN: 

Yeah, I mean, you know, all the stuff they're doing on Gemini Enterprise and Workspace, everything they're doing around agent development and the agent environment. You know, you and I, and everyone that's in our Six Five community, know that we are now pretty much full-stack developers. And, you know, we are building on tools.

PATRICK MOORHEAD: 

If we can do it, you can do it too.

DANIEL NEWMAN: 

But for most people, operating inside of a context-based environment where you can talk through your workflows, you can collaborate deeply with capable models, and you can build and deploy agents in real time is going to be the future of how people partner with agents to build highly scalable, usable productivity tools. Google demonstrated it on stage in their keynote. I mean, being able to bring someone on stage with, you know, average technical capabilities, who can explain semantically what they're trying to accomplish and then build and deploy, and by the way, connect to all the right APIs, connect into all the right tools, access all the right data. This is really what makes these agents work. And I think Google is pushing the envelope in terms of simplification and making agents real for the non-technical. And I think that actually just adds value as you bring technical teams back in.

PATRICK MOORHEAD: 

Yeah, it's funny. One of the ways to make it simpler is there are a lot of ways you can connect apps, apps to data, apps to apps, agents to agents. You can create apps via API, but also through MCP. I mean, I am smart enough to do MCP if Google now enables you to tap into all of their MCPs, which is pretty amazing. So theoretically, any app anybody builds can tap into a Google MCP and make it happen. That's a huge commitment, I think, out there. And their full-throated endorsement of MCP is, I think, a big deal and also a proof point for their openness. Because Google does get criticized that, okay, nice closed system you have here, but this is clearly an open element.

DANIEL NEWMAN: 

It's probably worth noting too that they had somewhat gotten away from talking their data book as they got really focused on infrastructure and AI. But as we know, the quality of a model, the quality of an agent, is only going to be as good as its data. So it's the unified data layer, it's Knowledge Catalog, it's Smart Storage. It has a lot of the inner workings of providing that really important data layer, enabling you to connect the back end to the front end of your systems. This is a historic joke, people. People have to listen back to the 300-plus episodes to understand the back end, front end thing. But I think Google is having a little bit of a renaissance around its data, seeing that come to the forefront, because in the end, any of these tools really are only going to be as good as the data they sit on top of. And Google has a really comprehensive data suite that sometimes I just think kind of gets swept under the rug. Like we're busy talking about the new chip, we're busy talking about the agent platforms, but we're not talking about the data layer, which is so important. And by the way, it's probably the most comprehensive data set in the world.

PATRICK MOORHEAD: 

Well, the irony is that that's exactly how Google got into driving serious volumes in the enterprise cloud business: they had this gigantic consumer business with planet-scale capabilities, Google index search, Maps, et cetera, and they used things like Spanner and BigQuery. Analytics and data was their first big business in there, and then everything else followed. The agentic data cloud that they announced is definitely a direct shot at Snowflake and Databricks, with some really interesting collapsing of the BI layer into the data layer. What happens with that?

DANIEL NEWMAN: 

Oh, no, sell it all.

PATRICK MOORHEAD: 

Exactly.

DANIEL NEWMAN: 

Here we go. It's been disintermediated. No one's ever going to use a data warehouse again. I'm joking, in case you can't tell. Pat, the market's really large. This isn't new, by the way. So people hear these things, and it becomes kind of like setting off the fire alarm. But it's like every one of these cloud providers has had a mix of the in-house tools that they offer through the cloud, and they partner. And I mean, I appreciate that Databricks and Snowflake are both here.

PATRICK MOORHEAD: 

No, listen, they are. Maybe I'm just trying to get clicks. But it is important to recognize there are other people that offer this. So, Daniel, we haven't talked about security yet. And Google made this massive acquisition of Wiz. And there were a lot of other vendors to buy from with scale, but Wiz brought an incredible amount of technology to it. And this is where Wiz officially shows up as a real product, in my estimation, right? Security built directly into the agent platform as opposed to what I would call a bolt-on. No single vendor owns this, and no single vendor will, but I do feel like this got Google to a much more competitive and integrated offering than they've ever had before.

DANIEL NEWMAN: 

Yeah. Google had Mandiant, and Wiz is the next big inflection: AI-native or agent-native security. We've all heard that the next Claude model will disintermediate the need for any security outside of Claude. And I'm joking, because I'm pretty sure, didn't Claude get hacked? It just happened. The fact is that security still has a unique need. And I actually think that market's going to get a lot bigger. And I think Google sees that. You know, these guys are not spending $30 billion on an acquisition for charity. I mean, they understand these complexities: security, and, I don't think we've really talked about it, governance. Like when you take these agent-scale projects and put them into production, you put them across borders, and you have sovereignty, you have compliance, you've got governance, you've got privacy, you've got security, you've got all kinds of different infrastructure challenges that exist with that. But the security, privacy, governance engine is going to be massive for large companies operating on a global scale. And so, you know, Google has to address that, and they make the offers available. And of course, they partner with the CrowdStrikes and Palo Altos and everyone else. But I think security is going to be a bigger topic.

PATRICK MOORHEAD: 

It will be. So Daniel, the last thing that I noticed, you know, we've been tracking this whole idea of sovereign cloud. And the demand for it has really increased. Two things. First of all, the fear with countries that they might get cut off from their services or not have the keys, but also AI, having your own AI cloud, becomes an element of national security. And for years, it was like, okay, this is great, you're just repackaging cloud stuff, and everybody in the cloud did this, calling it sovereign by putting certain restrictions around the data. But one thing that I really like about Google GDC is it is real on-prem infrastructure from some of the brands that you and I know well, with Google software and management and now Gemini, elements of Gemini, on that infrastructure, confidential external key management, and a lot of things that make it real. In fact, I was in Washington, D.C. at a Google federal event in and around the Dell timeframe, which may or may not be the infrastructure underneath GDC, where I was talking to the ex-CIO of the Trump administration 1.0, kind of talking through the need for this. But it's good to see the industry reacting to those sovereign needs, whether they be in the United States or over in the UAE or wherever.

DANIEL NEWMAN: 

Yeah. I mean, certainly the U.S. is more progressive. There are some states that are trying to clamp down, but on a global scale, you know, Europe, parts of the Middle East, parts of Asia, sovereignty is going to be very, very important. So that's also, by the way, probably being underestimated in most of the infrastructure counts of how much deployment of infrastructure in the region is going to be required to actually stand all this up. That's because all the analysts have been wrong. I have underestimated it at every turn. But having said that, the cloud providers that have really built an edge for themselves have all been really thoughtful about developing those on-prem offerings. We can call it what it is. I mean, it's basically cloud on-prem, right? Yes. Sovereign cloud is cloud on-prem. Say that fast five times. So those partnerships matter, like you mentioned, you know, Dell, but also just having that experience be ubiquitous, frictionless, meaning that you get all of the tooling and capabilities that you get with Google Cloud in a traditional public cloud sense, but now you get all the same benefits that you get when you would historically run your data center locally. And with AI scale, that becomes really important, because it isn't like when you had a 10-megawatt data center in the past. You know, the amount of power you're going to need to be able to put one of these on-prem, and the type of technology. I mean, think of all these liquid-cooled data centers all over the world. I don't know, Pat. I mean, now that we're developers, does every technology company become a large-scale data center implementer? Or what happens? Every non-tech company? I don't know. We're having fun thinking about it, though.

PATRICK MOORHEAD: 

No, we are. So Daniel, anything that we missed, we had talked about, that you'd like to add? Or do you want to take us home here, buddy?

DANIEL NEWMAN: 

You know, I would just overall say that the proliferation and scale that we're seeing is unprecedented. I mean, it's just an incredibly exciting time. And, you know, I continue to say that the intellectually curious individuals and companies will be the pacesetters of innovation into the future. And a lot of that was on display here this week. It's been a really great week. Google said something about search that really stuck with me this week, when they talked about how their search team's job is just to build the best search, and their advertising team's job is to sell it. And it really pushed my imagination in terms of what makes a great company great. And the fact is, if that's the same strategy they're taking with their cloud, build the absolute best cloud and then let the commercial side take care of itself, I think they're on a great trajectory. And the numbers have really proven it. We'll see the next iterations from AWS and from Microsoft and from everyone else in the future, but Google continues to do a really great job in the AI era of setting the pace.

PATRICK MOORHEAD: 

What I'm really glad about is that we're not having the conversation and the debate about downstream impacts of AI. Are all the jobs going away? No, I know, that is a conversation. But no longer are we having the conversation of whether AI truly is going to make a difference downstream with software. And it's funny, it only took a handful of models until everybody stopped talking about it. And you just can't have it both ways. You can't have AI so powerful that it destroys all these jobs, yet AI so meaningless that it can't pay for itself.

DANIEL NEWMAN: 

Yeah, we'll get there. How does that work? The economics are coming. We are going to figure that out. So I did see, yesterday, the first-ever AI layoff, where someone laid off five AIs to hire back two developers. I'm joking. I saw the tweet.

PATRICK MOORHEAD: 

Yeah, that was a good, that was a good meme.

DANIEL NEWMAN: 

But is that true? I doubt it. But the economic value is coming, and it's just such an exciting time. We want to thank everybody for being part of our Six Five On The Road wrap-up at Google Cloud Next 2026 here in Las Vegas. It's been a great week. Follow us. Check out all of our coverage here at the event. But we're going to say goodbye for now. See you all later.

