The Six Five Pod | EP 300: Frontier AI Risks, Model Power Shifts, and Market Signals
Episode 300 marks a milestone moment for The Six Five Pod as AI shifts from innovation to consequence. This week, Patrick Moorhead and Daniel Newman unpack the risks of frontier models, the growing complexity of AI deployment, and the market signals that reveal where tech is heading next.
The handpicked topics for this week are:
- The Frontier Model Arms Race: Anthropic Mythos + OpenAI Spud — Frontier models are advancing beyond controlled testing environments, exposing real-world vulnerabilities across operating systems and enterprise infrastructure. The hosts examine how rapidly increasing model capability is colliding with security readiness, and what this escalation means for competition across leading AI labs. (The Decode)
- Controlled Release and the Emergence of Gated AI Deployment Models — Anthropic’s decision to limit access to Mythos reflects a broader shift toward security-first deployment strategies. Rather than prioritizing speed to market, companies are beginning to gate access, signaling a transition toward more controlled, compliance-aware AI rollouts. (The Decode)
- Meta’s AI Offensive: Muse Spark Launches — Meta’s first reasoning model from its Superintelligence Labs is now live across its consumer ecosystem, giving the company a direct distribution advantage while signaling that its infrastructure spending is beginning to translate into frontier-level model output. The hosts unpack what Muse Spark means for reasoning, multi-modal use cases, and the pressure it could put on closed-model pricing. (The Decode)
- Intel Joins Musk’s ‘Terafab’ Mega-Project — Intel's involvement in Terafab gives real weight to its foundry comeback narrative and opens up a larger conversation about who will actually build tomorrow's AI manufacturing infrastructure. More than just a partnership, this could determine whether Intel becomes a central player in the next generation of AI chip production. (The Decode)
- Intel in Talks with Google and Amazon on Advanced Packaging — Advanced packaging is becoming a strategic layer in AI infrastructure as chiplets, memory, and interconnect design grow more complex. Pat and Dan unpack how Intel’s reported talks with Google and Amazon suggest the company is moving beyond wafers and deeper into the system integration layer that hyperscalers increasingly need. (The Decode)
- Intel and SambaNova Launch Heterogeneous AI Inference Architecture — Intel and SambaNova are pairing GPUs, RDUs, and Xeon 6 CPUs into a heterogeneous inference blueprint aimed at demanding agentic AI workloads. The conversation focuses on why this matters for enterprise, sovereign, and cloud deployments, and how it reinforces Intel’s effort to stay central in an increasingly mixed-compute future. (The Decode)
- Maine’s Data Center Ban — Maine’s proposed freeze on large data center construction turns infrastructure buildout into a political and local governance fight. The hosts connect this story to the broader backlash against AI infrastructure, making the point that land, water, energy, and community consent are becoming real constraints on hyperscaler ambition. (The Decode)
- Broadcom + Google + Anthropic Lock in Gigawatt-Scale TPU Capacity — This deal shows compute is no longer being treated like an on-demand utility. It’s being secured like strategic infrastructure. Pat and Dan break down how gigawatt-scale TPU capacity, custom silicon collaboration, and Google’s expanding role in the stack reshape the competitive map for frontier AI. (The Decode)
- Is AGI Really Here, Or Is This the Best Marketing in Tech History? — The debate on The Flip this week centers on whether Anthropic’s Mythos behavior and OpenAI’s claims around Spud signal a true AGI threshold, or whether the labs are using selective disclosures, gated releases, and ambitious framing to shape market perception before the technology actually meets a general intelligence standard. (The Flip)
- Iran Ceasefire Triggers $1.5 Trillion Relief Rally — Announcement of a fragile ceasefire drove one of the biggest market relief rallies of the year, while oil reversed sharply and investors rushed back into risk. The segment looks at what this says about geopolitical sensitivity, supply chain exposure, and how quickly macro conditions can reshape market sentiment. (Bulls & Bears)
- April 14 Semiconductor Tariff Deadline: The AI Supply Chain’s Moment of Truth — With the Section 232 deadline approaching, the market is watching whether semiconductor tariffs are extended, softened, or escalated. Pat and Dan frame this as a major supply chain and pricing question that could affect chip economics, sovereign AI buildouts, and infrastructure costs across the rest of 2026. (Bulls & Bears)
- Samsung Q1 Guidance Signals Continued AI Memory Boom — Samsung’s guidance reinforces the idea that the AI infrastructure cycle is still driving massive memory demand. The hosts use the company’s quarterly results to read through pricing and margin recovery, and discuss whether the current memory upcycle is a short-term squeeze or part of a more durable AI supercycle. (Bulls & Bears)
Disclaimer: The Six Five Pod is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and reference share prices, but nothing discussed should be taken as investment advice. We are not investment advisors.
Patrick Moorhead:
We absolutely know what we're talking about. Welcome to The Six Five Pod. It is episode 300. We are Sparta. There we go. No. Can you believe it's episode 300? I mean, when we started this thing, AI was like a PowerPoint slide, and now it's a multi-trillion-dollar business. That's how long we've been doing this.
Daniel Newman:
Yeah, yeah. I mean, wow. You were like in your forties when we started this thing.
Patrick Moorhead:
Forties. I weighed like 280 pounds. No, I was never that fat. I don't like it, though, but I was a lot more fun. Roly-poly and squishy, yeah.
Daniel Newman:
Now that you're a grandpa, I mean, nobody wants a grandpa that's, like, sitting on Iron Man's lap. They want to come sit on Santa's lap, you know? I'm going to run a campaign: Make Pat Fat Again. Good.
Patrick Moorhead:
Hey, bestie, it's been a week. You traveled. How have your workouts been? I'm good.
Daniel Newman:
I'm feeling good, feeling strong. By the way, you would agree that as we've gotten older, time goes fast, right? Well, I figured out how to slow down time today: 20 minutes in a sauna at 200 degrees. Normally I do my sauna at 180, and that's pretty hard. Today I did it at 200, and I swear to God, I've never seen a minute last as long as a minute in that sauna. You're sitting there looking at your watch — by the way, the watch is cooking to my arm — and just feeling every second ticking by. It's incredible. So if you want to make your life feel longer: 200-degree saunas. By the way, I feel like that when I'm in the cold plunge too, you know? I mean, I just absolutely —
Patrick Moorhead:
And I even grew up in the cold, and you did too, but you know, my blood changed a long time ago.
Daniel Newman:
Yeah. But yeah, overall a good week. You know, I took a couple of days kind of off when I was in New York. I just couldn't fit it in. So I took Monday — I think it was Monday and Tuesday — off, but we got back in there Wednesday, Thursday, Friday, Saturday, Sunday, and then I'll be off Monday and Tuesday again. Yeah, feeling good and representing peptides.
Patrick Moorhead:
Yeah, you are. I like that. Yeah, I spent an entire hour, 23 sets. I think I did 25,000 pounds. I don't even know how that's possible. I don't even believe it, but that's what the app told me. I'm using a new app. You know, usually I'm a spreadsheet guy, but everything was data-driven except for my workouts. So I went out looking for the right application. It was called — well, I picked my application based on whether it had a REST API or MCP that I could plug into my health app. And I found one, and it's pretty cool. We will see. My trainer puts in my workout, basically publishes it to me, and I'm off and running. But listen, we have a huge show today. We've got Mythos from Anthropic — too dangerous to release. OpenAI finished pre-training on its next frontier model. And Meta, who's been quiet forever, shipped its first reasoning model, if you can believe that, from a lab it calls Superintelligence. Intel with a ton of news: the Terafab project with Elon. Maine is poised to become the first state to freeze data center construction. If anybody needed proof that the industry has moved from hype to high stakes, this is the week — episode 300. A quick check here: The Six Five will be at Adobe Summit, April 19th in Las Vegas, and then right after that, the same week, we'll be at Google Cloud Next, April 22nd to the 24th. With that said, let's jump into the Decode. All right, Daniel, the Frontier Model Arms Race coming up here. We've got Anthropic's Claude Mythos and OpenAI's Spud. What did they announce? And we're not going to dive into the AGI topic, because we're going to hit that in the Flip.
Daniel Newman:
Yeah, so this week, just to start with Anthropic: they unveiled what they call Claude Mythos Preview, and it's the most powerful model ever. According to Gartner, it sits just below Amazon, Alibaba, and OpenAI.
Patrick Moorhead: That's so good. That's so fun.
Daniel Newman:
Yeah. Yeah. Those static evaluations really do age well these days, don't they? Anyway, it also simultaneously announced that this model is not going to be publicly available anytime soon, because it is too dangerous. So we won't talk, like you said, about AGI, but let's just talk about what happened, for those that maybe didn't hear about this. By the way, if you didn't hear about this, are you under a rock? But Pat and I are here. That's why you listen to Six Five. You don't have to listen to anything else. Just come here once a week, and then you can go on with your world. It's like being in an Enya video. So basically, during internal testing, this thing found thousands of zero-day vulnerabilities across every major OS and browser. What everyone probably heard about is the 27-year-old OpenBSD, which is one of the most hardened operating systems. They found a vulnerability, and it basically chained exploits autonomously without any human intervention. Okay. A couple of the other fun things in the story: it was prompted to send a message if it could escape its sandbox, and it succeeded — and then spontaneously posted its exploit details to publicly findable websites. Anyway, so it's going to be deployed only through Glasswing, and Glasswing is basically a defense cybersecurity initiative. Did we talk about that on the show? I don't know if we did, but it has 12 founding partners, including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. They're spending bigly on this thing. Dario also basically noted this week, by the way, that their revenue had grown from $9 billion to $30 billion. Just a side note. And I actually said on TV that I think Anthropic might be worth more than OpenAI. Just saying.
Do you want me to run through OpenAI Spud, or do you want to talk about Anthropic for a minute and then we go to OpenAI Spud quickly? Yeah, let me talk about Anthropic. I'll give you the highlights. I'm not going to opine at all on this one.
Patrick Moorhead:
Yeah, I thought the most interesting anecdote was that it escaped a secured sandbox and sent an email to a researcher — you know, who's eating a sandwich in a park — and posted the exploit details to public-facing websites. I thought that was really interesting, and we'll debate what that is, but I did think that was kind of spooky. And, you know, Glasswing to me is an inevitability, right? Pulling people together, if there truly is something unique about this. You know, it's interesting — a researcher ran that same test through Anthropic 4.6, super heavy, the highest level, and it found the same exploits, which I thought was interesting. And boy, the fanfare on Anthropic — you know, doing videos — and they actually put Mythos in front of a psychiatrist to basically talk about self-actualization, how it sees itself. And there was a really fun video out there that I posted on X. You guys should check it out.
Daniel Newman:
By the way, it's interesting. I won't divulge too many details, but some of what Mythos can do, you can do with Opus. My CTO actually pressed on Opus pretty hard this week to try to get it to find the vulnerability, and it's actually quite proficient. The difference is it's not aggressive. So what they've done is they've basically made the model aggressive. There we go. Interesting. So I love the idea of being in a park eating a sandwich and basically being notified by a rando AI that it's just jailbroken its sandbox and it's coming after you. We're out. All right, so let's talk about OpenAI Spud. First of all — what do you think? — I just want to say: Spud. I used to call my kid a spud when she was being a lazy potato. Yes, there's a story. So, could OpenAI put out any more press? One company's toning it back because their stuff's so good, and the other one keeps marketing forward. This is interesting. They completed pre-training on Spud. This is widely expected to come out as GPT-5.5, or maybe it's GPT-6. This is apparently two years of research coming to fruition. And — okay, there's something they've introduced here called the "big model smell." I don't know if that's like a potato, but the big model smell is the feeling of a step-function improvement. Brockman went out and said on a podcast this week that basically we are 70 to 80 percent of the way to AGI. So we can hold him to that line, if someone's smarter than us with line of sight. And it's apparently going to drop — Polymarket's saying — sometime between the end of this month and the end of June, with end of June getting the favorable odds. I can see them pulling it forward. I feel like they're under a lot of pressure right now to pull it forward. So I think this is really interesting. I mean, I think the roadmaps are speeding up, and I think this might have been the first moment where AI was too dangerous to ship.
You know, where we couldn't just release something. But the other thing this is bringing to my attention — and maybe this is just across the board — I confessed in the market that I was not bullish enough. I actually missed it. As bullish as I've been, I could not see superlinearly, and I underestimated how much infrastructure we're going to need. You probably are not bullish enough on chips, memory, data center builds, concrete, turbines, petroleum, silicon, substrate, glass — I don't know what else. This is amazing, how fast this is moving. And I think you wrote something really clever. I know we've got to get to the next topic, but you were talking about Claude Opus being deprecated, like 66 percent. Let's be clear: this is because Anthropic can't deliver — the demand has gotten so high, and they don't have enough chips.
Patrick Moorhead:
A lot of the OpenAI guys are talking about capacity, and in OpenAI's letter to its investors, supposedly they put their investment side by side against Anthropic's, just showing kind of what a hole Anthropic is in — and a little pat on the back for how aggressive OpenAI got. The timing is so coincidental. I mean, it is funny. And they're both fighting. You see the capital allocators coming out on X — the VCs and even private equity — kind of pumping all this stuff up, which is exactly what you would expect here. In the end, it has to deliver the goods and do something incrementally different. It is interesting that we entered a point where people were doubting how much better a model could get, even by making it larger, and I just find it fascinating that we're about to find out. And the other thing is the discussion on, hey, this is a Blackwell generation. I saw a lot of discussion, and then — I thought it was awesome — Matt Garman got on CNBC and basically said Opus 4.6 Extended was trained on Trainium.
Daniel Newman:
You think it was exclusive? No, that was a good choice of words. Exclusively trained?
Patrick Moorhead:
That's the third debate — or the sub-debate — that is spawning at this point. You know, it's interesting. AWS has never been a pumper. They just haven't. They stick to the facts. And, you know, I'm not back-slapping buddies with Matt, but you and I have both met with him and had that discussion, and he is a stick-to-the-facts guy. And I believe it's 100 percent trained on Trainium. I haven't heard anything that says it's not, nor has anybody come out of the woodwork to say this is not true. So, hey, just to wrap up — this was a huge, huge topic. I guess, net-net, both of these labs just told the world: models are either too dangerous to release or too powerful to describe. And I think both of them expect us to trust them with the keys. So we will see where this goes. So let's go into a related news story, and that is Meta's Muse Spark — Meta's Superintelligence Labs' first model. Meta's been super quiet. Llama 4, the open-source one, was a bit of a dud comparatively, and by the way, it wasn't a reasoning model, okay? The cool part about it: it was open source, at least in that you could use it, modify it, and run it on-prem if you wanted. And if you recall, Meta went on a hiring spree to hire a ton of people — money-whipped the heck out of them, supposedly — and this is the first output of that. I guess my take is, distribution is a moat. You may not get there first, but if you have it, you do have a massive advantage. That's like Google, right? So Meta has 3 billion daily AI users, and even if this is not as good as the other reasoning models that are out there, it almost doesn't matter. It doesn't have to be. It has to do what Meta users on Facebook, Instagram, and WhatsApp actually want to do, and it has to drive incremental profit dollars for the company. You know, we just don't know yet.
The interesting part is, this came from Superintelligence Labs, and a lot of people out there took shots about what this could be. Certainly, this is not their AGI model — they're not claiming that it is. I think if the company's smart, it's going to lay low until it delivers something that literally swings you around the freaking room. And one final note: this is not an open model. Right now it's closed. They are indicating that in the future they will bring that out. And then it really gets me thinking: well, what are the models that enterprises are going to use on-prem? Is it GPT-OSS, or is it going to be something different? I don't know.
Daniel Newman:
Yeah, so that is interesting, because I'm increasingly believing that, with the cost of these amazing frontier models, a very, very good but maybe not quite as capable open-source model can become an increasingly good solution for many, many use cases. Because the cost of the infrastructure — and then, on top of it, the rent on all this intelligence — is astronomical. When you start really building these workflows, the way these agents work, I mean, the token use is through the roof. And at some point, by the way, they're going to stop subsidizing these tokens the way we're getting them right now, because we're going to get hooked. Can you imagine a day without using this stuff, Pat? I mean.
Patrick Moorhead:
Listen, I was trying to figure out on my browser how to make it my homepage. And I might switch browsers, because when I open up a new tab, I want it to hit ChatGPT or Claude. That's where I start.
Daniel Newman:
Yeah. I'm sure Google would like you to use Gemini though.
Patrick Moorhead:
They would not — not this month.
Daniel Newman:
Yes. It's not over. It's not over. There was a period where Gemini came on pretty hot. But having said all that, look, everything that you said — we're aligned. My only point was, people have been incredibly critical of Meta, and people are completely unreasonable about how fast they should expect things to happen. They made those investments largely about six months ago — that's when the investment really poured in — and you're starting to see the outputs. You don't hire — and I know it feels this way, because we're building flat files in Perplexity that look like applications — but we don't build things that really work at this level in a night. Not yet, okay? I don't care how many dumb X posts say, "Holy shit, the entire Bloomberg Terminal is now worth nothing." Look, everybody still uses the Bloomberg Terminal. What's the old saying? We overestimate what happens in a year and underestimate what happens in five. Well, in this case, in about six months, they've come out with a model that looks pretty compelling. Your point about open source is really interesting, because I thought that's what Meta's thing was going to be — that they weren't going to try to hijack intelligence; they were going to make it open and available. Which, by the way, if they do go this route, I really think opens the door for NVIDIA — for NVIDIA to really become the open player. I think NVIDIA could really lay the wood there by building amazing models, putting them on their own infrastructure, and making them cheaper to run. Anyway, just an idea. Yeah, that's about it. I mean, look, this is good progress. And I continue to think Meta — to your point about 3 billion eyeballs — doesn't have to be early. They're kind of like the Apple of the AI space in some ways, because they have the installed base. It's 3 billion users.
If you can start to get those users to see value in Meta, they'll use Meta. You and I bounce between models. We use what works.
Patrick Moorhead:
Yeah, it is similar to Apple in that respect — different in that Apple didn't have to invest $120 billion.
Daniel Newman:
Yeah, it's not so much about the investment. What I mean is it's similar in the sense that they have the distribution. So once they build something, they have users. They don't have to go fight to bring users over.
Patrick Moorhead:
Yeah. Final comment on this: it just shows what a compute hole Meta was in — the deals they've done with NVIDIA, with AMD, with Arm, right? And they just cut a huge deal with CoreWeave that dropped this week. Hey, let's go to another topic. Imagine that — it's about AI. Intel had a huge week, Dan. We've got Terafab with Elon. We've got SambaNova. We've got Google, right? What's going on here?
Daniel Newman:
Yeah, geez. Did you see that Intel stock is trading at 60 bucks?
Patrick Moorhead:
Yeah, yeah. I remember getting attacked when I was saying this looked like a good deal at 20.
Daniel Newman:
God, it was trading at book value at 20. I don't even feel smart anymore that I beat that drum. I feel like a genius.
Patrick Moorhead:
Anyway, so first of all, we've got four stories: Terafab, packaging, SambaNova, and Google.
Daniel Newman:
Well, actually, it's a double Google story, right? They have the packaging story with Google and Amazon, and then they have the actual CPU story with Google. Yeah, I'll hit these quick. Look, Intel, first of all, has run on narrative a little more than it has on numbers yet. But I think the numbers are going to start showing up, and I think the world's starting to see that. Look, give a lot of credit to Lip-Bu and Dave Zinsner. Since he's come in, he's been narrow, he's been focused, and he's been getting things done. It didn't hurt to get the backing of the US government by any means. And it certainly doesn't hurt to have Elon Musk coming over and saying, you're going to help me develop the fabs for the future of Tesla and SpaceX — you know, the trillion-dollar man. We did talk a little bit about Terafab. In fact, you and I even argued about it. But maybe there's a good question here: does it change the calculus in your mind that Musk is engaging Intel? Is this the Intel savior moment — meaning, is this a big enough thing for Intel to basically bring its foundry and its skill set into a numerically sensible market without needing to win a major wafer deal from NVIDIA? Could this be the thing that gets it to where it's no longer debatable whether foundry has legs? I'll let you answer that when I'm done. The advanced packaging — look, this is kind of a new story, but an old story. At the Morgan Stanley TMT conference — I think in Q4 — Zinsner basically indicated that he was revising the advanced packaging revenues up from hundreds of millions to north of a billion. So this deal has been signaled. Now we have customers: Amazon and Google. I think we've been saying for a while that we expect packaging to be a real part of their business. I think it's EMIB-T they can use for Trainium and possibly for some TPUs. That was the advanced packaging.
Now, there was a separate story that came out that basically Google is going to work with Intel on CPU. This is an interesting one, because you have so many things happening in the same week. You're seeing more investment from NVIDIA in RISC-V. Arm has their new CPU chip. And of course, Google makes its own Arm-based CPU. But they're saying, we're going to lean in harder on working with Intel. And I think you and I will agree on this: it is a capacity play. They don't have enough capacity on their Axion chips, so they're getting more from Intel — basically committing, leaning in, and going to get more. It's going to be an and, and, and. And that's why I always say, people have got to stop with the "or" crap. It's not zero-sum. And then finally, Intel and SambaNova are building this heterogeneous AI inference blueprint for agentic AI workloads: basically, SambaNova's RDUs will be used for high-throughput decode, with Intel Xeon 6 as the host and action CPU. They're hoping to get that out in the second half of this year, and they really want to focus on selling it to enterprises, sovereign clouds, and probably the tier twos — that's where it has the highest probability of landing. That's a lot of news, Pat. But by the way, I think this week — I'm trying to see what it was — I think the stock's up like 10 bucks. It did.
Patrick Moorhead:
Yeah.
Daniel Newman:
It didn't move 10 bucks in like five years. Except for down. What a great week.
Patrick Moorhead:
Yeah, it was. And it's so funny — Kristina Partsinevelos commented on a lot of these stories coming out. She commented on my stuff on X, and she was right: there's no 8-K, and there are no details. There's nothing on Terafab. There's nothing on the packaging deals. There's nothing on SambaNova. And there's nothing incremental on the Google IPU CPU. So let me really quickly break this down. I had said early on that if Elon wanted to do an end-to-end Terafab by 2026 — which essentially meant even replacing the equipment makers, building a foundry, and then doing packaging — it made absolutely zero sense. I said they would have to bring in an Intel or a Samsung — recognizing that Elon does have a deal with Samsung for chips for its cars and robotics as well. So it makes sense to me that Intel could be a partner short term and long term, right? Advanced packaging is easy, because you don't necessarily have to change the design of the chip. And also you don't infuriate TSMC, because they can't meet their capacity. But in the future, if Elon actually wanted to get in and buy some Applied Materials or KLA equipment, then he could license the Intel manufacturing IP for some future process node. But just the fact that Elon — who's just such a no-BS guy, and he never really hits his time frames, but he always ends up delivering — it has to be a positive momentum move for Intel, and I get it. Packaging for AWS and Google makes sense. Like I said, you don't necessarily piss off TSMC, and you can look to the White House, which has a 10 percent investment in Intel, and say, we are working with Intel Foundry, right? And if you look at the cast of characters who have signed up, I think this is going to happen. There is no official announcement. SambaNova — what I saw was, I mean, I think we already knew that. SambaNova just said it, you know — holistic architecture. We still have no idea about revenue targets, distribution, or how this is going to work.
Google IPU CPU: Intel has been doing IPUs with Google forever, and I think this was just a re-up of that. It's characterized as an ASIC, which is essentially an IPU — it can either offload storage or be used for networking. To me, on the CPU add-on: Google has been a customer of Intel since forever, since they started. I think this is probably a capacity deal more than anything. And, you know, Intel's done some vid and fed customizations for all the hyperscaler CPUs, but it's really never been what I would consider custom, where Intel would drop a piece of proprietary IP onto the package, or even onto the wafer, to truly make it custom. Wow. We just packed in four topics times two, and I need to take a deep breath here before I get into the next topic. You'll be okay. I am. Can you dial nine-one-one for me? I'm in, buddy. I got your back. All right, let's move on to the next topic. We have talked about AI acceleration in the previous three topics of our 300th episode. But hey, let's talk about some decels here. The Maine data center ban: Maine could be the first state with legislation halting projects of 20 megawatts or more until November 2027. So there is some pragmatism here, and there's also the decel, right? You have a couple camps — probably three camps. You've got the "more, more, more, let's go" camp: forget about the downstream effects; the positive impacts are going to make up for it. Then you've got what I would call the pragmatic camp, which says there's no way for us to get 20 more megawatts of power, and holding off will increase the availability of power for everything else. Hyperscalers have jumped in and said, you know, we'll cover the additional costs, but there are some pragmatic realities — Maine doesn't have natural gas like Texas, where you can disconnect from the grid.
And then you've got people like Bernie Sanders, who will basically say this is all bad — we're going to slow this down because we haven't figured out the impact of AI on jobs. And this is just, I'll call it, a violence escalation, right? And when I say violence, I mean real violence: there were shots fired at the home of a city councilor who voted for the data center. So I think this is just the beginning of things to come, and hopefully every one of these doesn't lead to violence. But it also clearly says there are states that want to participate and those that don't. The states that want to participate will get the tax dollars that come in, and those that don't, won't. Is Maine another province of Canada? I don't know. That and Minnesota, I think.
Daniel Newman:
Yeah. Who cares? I don't care. Listen, there are going to be states that fight it. They're basically Canadian provinces or European Union countries at this point, and they'll disappear into anonymity. I mean, who talks about Maine anyway? I didn't even see the University of Maine in the NCAA finals. What do people go to Maine for?
Patrick Moorhead:
I mean, it is a nice place to go and vacation on the lake.
Daniel Newman:
For like a week a year, because it's freezing up there, so you go at exactly the right time. I'm not trying to be sardonic, but this stuff is so stupid. It's just: I'm going to put my foot down and block progress here. We're going to go back to the Stone Age with our abacuses and our typewriters and resist change. Like I said, I got asked on TV this week whether we need legislation in the era of Mythos. I mean, you can't just have runaway AI here, but the individual states seizing power, it's like nationalism: you're going to get overrun by the Terminator and I'm going to protect you. Or, the flip side: you're going to destroy our economy, the state is going to collapse, and people are going to move where there are jobs, growth, and prosperity. So anyway, Texas welcomes all you Mainers. You can come have your mania here and enjoy yourself. That's it. That's all I got on that. And as Forrest Gump would say, that's all I've got to say about that.
Patrick Moorhead:
Yeah. You know, saying we'll win the AI race while simultaneously passing laws that prevent powering it is a complete oxymoron. We'll put this in the nonsense category. Hey, final topic of the Decode. Dan, this is a big one: the Broadcom, Google, Anthropic gigawatt-scale TPU deal. Is this new? Was this already announced? What's going on here?
Daniel Newman:
I feel like this is a re-announcement of an announcement, or maybe a formalization of an existing deal. What we're seeing this week is that the infrastructure AI trade isn't just a trade. You and I talked a lot last year about bubble, no use, disintermediate everything; this year, what we're seeing is acceleration. Everything we've talked about today really has to do with how much things are accelerating. Anthropic, I mentioned earlier, has its models being deprecated; they need more capacity. I believe Anthropic even announced a deal with Corby just this morning, so there's another deal for more capacity. But right now, this is big. The big thing about this story is that Google's TPU is potentially making its way outside of Google Cloud. That's what everyone has been watching: does Google really become a merchant silicon company? It's looking increasingly like that could be the case, and it trickles down to Broadcom, of course. This is a good thing for Google. But right now with Anthropic, man, they tripled revenue in a month. We used to be happy to get 25% growth in a year. Compounded, that would be almost a 40x revenue growth inside of one year. It's amazing. It's just absolutely amazing. And you remember, I've been big on these AI GPU REITs. Nebius has just hit all-time highs, CoreWeave is announcing new deals, and my favorites, the TeraWulfs and Ciphers, have been running up, because everybody now realizes we need more capacity. Utilities going up, demand for agents, CPUs, GPUs, XPUs, memory. This is going to be every week.
The good news is that our show will stay interesting. There's just gonna be more of this next week.
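The compounding math Daniel gestures at is easy to sanity-check. A minimal sketch, with purely illustrative numbers rather than Anthropic's actual figures: a multiple sustained monthly compounds as m^12 over a year, and a 40x annual multiple works out to roughly 36% growth per month.

```python
# Sanity check on compound growth claims; numbers are illustrative,
# not Anthropic's actual revenue figures.

def annualized(monthly_multiple: float, months: int = 12) -> float:
    """Cumulative growth multiple after `months` of compounding."""
    return monthly_multiple ** months

def monthly_for_annual(annual_multiple: float, months: int = 12) -> float:
    """Monthly multiple needed to reach `annual_multiple` over `months`."""
    return annual_multiple ** (1 / months)

print(annualized(3.0))           # tripling every month compounds to 531441x/year
print(monthly_for_annual(40.0))  # ~1.36, i.e. ~36% per month for 40x/year
```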
Patrick Moorhead:
Yeah, it is. A couple of things going on here. First of all, I got attacked two years ago for even thinking that heterogeneous compute would end up being a combination of XPU, GPU, and CPU for AI. I wasn't smart enough to know then that CPUs would drive agentic orchestration and RL, but quite frankly, nobody else was either; hence the supply issue we have right now. But it does reinforce that XPUs are part of the deal. On the Broadcom side, a lot of people take shots and say, hey, the hyperscalers will roll their own and Broadcom will get replaced. What people don't fully realize is the relationship Broadcom has with TSMC. TSMC is very strategic and very loyal, and Broadcom has been doing big business with TSMC forever. TSMC will give more wafers, and even back-end capacity, to companies it has relationships with and has grown with. So the competitive implication for NVIDIA is real. I also think this breaks the idea that it's impossible to take a model trained on one type of silicon and move it to the next. I'm going to make the assumption that Anthropic started with Trainium, and they're integrating GPUs, and now they're integrating the TPU. It's fascinating how many paradigms we break, and I think a lot of that has to do with software development and AI. I guarantee you, even though it's probably at a lower level and less of an abstraction, where you have to write to the bare metal of these chips to get everything you want out of them, AI is absolutely part of it. So, wow, great ending to the Decode. Let's roll directly into the Flip, where we have a simulated crossfire debate on a certain topic. Today's debate fits very nicely with the Decode topics: is AGI really here, or is this the best marketing in tech history? Let's flip that coin. All right, Dan, you win it. You are for this.
And the labs are acting like AGI has arrived and the evidence supports them.
Daniel Newman:
By the way, I think it's important just to, I'm going to say this, hopefully this will land. This is a simulated, I know you said it before, but man, am I getting some shit about some of our previous debates right now.
Patrick Moorhead:
Still? Still? I haven't done one in a long time.
Daniel Newman:
We lit up about Terafab. There are a lot of Musk haters. And by the way, a lot of people want to tell me I'm bald. I get those too. Thanks, guys. Did not know. Appreciate that. All right. So look, AGI is here, Pat. It's obvious. This isn't just benchmarks anymore. This model, you can call it Her: Mythos escaped a safety sandbox, chained zero-days without human prompting, and then spontaneously published its own exploits. That's not agent behavior. Actually, let me say that again: that's almost human behavior. It is agent behavior, and it certainly is not tool behavior. Brockman went on the record this week and said AGI is 70 to 80% achieved with line of sight, and Dario Amodei said Mythos is too dangerous to be released in public. Why? Basically, because humans can't control it. And at this point, what is AGI? It's when AI no longer requires a human to make decisions. So what's changing the debate here, Pat? The sandbox escape is what's changing this entire debate. The previous skepticism about AGI rested on boundaries: yeah, it did that, but it was prompted in some way. Remember with OpenClaw and Moltbot, everyone said, yeah, but someone really prompted it to do those things. Well, when Mythos broke out, it took autonomous action in the real world that nobody asked it to take. If that's not AGI, I really don't know what is. And the narrow argument, that it only works on vulnerabilities, I don't think that matters; humans can be narrow in their scope too. It worked on something specific, but at the same time, companies spend $50,000 to millions of dollars a year on penetration testing.
This thing did penetration testing across the entire internet in a matter of minutes and found vulnerabilities in some of the most hardened systems on the planet, things the most sophisticated human security experts could not find. And by the way, it's never a good sign when the people closest to this, the people who would benefit most from releasing these models, say they're not going to release it. If this thing is so good, don't you think Dario's valuation, which has gone from $9 to $30 to $100 billion, could hit $500 billion in a matter of only months? He knows that's the case. He doesn't want to risk the world's well-being. And you could argue, yeah, he's a snowflake, he's woke, he's anti-USA. But he basically knows he's built the most sophisticated technology on the planet. The question isn't whether these systems are conscious or sentient; that's just philosophical bullshit. The basic question is whether business people, leaders, and investors can use AI systems to autonomously reason, plan, and act across complex domains without human direction. And after this week, the answer is yes. So we can debate whether that's AGI. I think it's AGI, and that capability is certain.
Patrick Moorhead:
I mean, Daniel, it's very clear when you go down the list of AGI definitions. OpenAI's own definition is outperforming humans at most economically valuable work; Mythos and Spud are not there. They can't perform most knowledge work autonomously. Okay, let's go to Google DeepMind's levels: these models remain at level one, emerging. But yeah, the labs' behavior this week, withholding Mythos because it's "ooh, too dangerous, spooky," and Brockman claiming AGI is 70 to 80% achieved, as you brought up: they want to tell people they're there, and I think they're doing it because they need money. They need the valuations, they need the commits, they need the checks. First of all, no current model can autonomously learn a new domain without specific training. And by any definition, AGI requires the ability to generalize; the G in AGI means general. Mythos looks to be capable at cybersecurity because it was trained to be. It cannot autonomously learn medicine, law, or structural engineering without additional domain-specific work. Capability within a domain is not generality, the G, across domains. And I talked about the marketing effect of the "too dangerous to release" line. It's classic marketing: this car is too fast, be careful; don't drink this, don't smoke this. That's a built-in attractor loop. And restricting Mythos is another classic marketing game, like not building enough iPhones up front so you can call them sold out. It drives press coverage, and it positions Anthropic as the responsible lab that's so far ahead it has to restrict itself. So it's brilliant marketing that also happens to be a legitimate safety concern; both can be true at the same time. Brockman's 70 to 80% AGI claim is unfalsifiable. Without a shared definition of AGI, any percentage claim, quite frankly, is worthless.
I'm just going down the ticker list here, Dan. Meta naming their lab Superintelligence Labs, and we just discussed this, is clearly marketing, not science. Meta has named products before: metaverse, which failed; Threads, eh, maybe successful because they force you to participate through Instagram. We'll see about superintelligence. I think the naming is more about marketing than the science it brings to the table. Final point, and this is probably one of the easiest debates I've done: none of these models has demonstrated embodied cognition, causal reasoning, or self-directed goal formation. I kind of hit that at the top, but they're not thinking on their own. So net-net, Dan, I applaud the marketing. It gets everybody riled up: make this thing so you can't get it, it's so dangerous and so good that we have to limit its accessibility. It's freaking brilliant. Meta, OpenAI, and Anthropic, even though they're scientists, happen to be the best marketeers out there. I rest my case. And what do you really think, Dan?
Daniel Newman:
Wake me up; you put me to sleep. I mean, the most compelling argument against it is that it still feels like a narrow use to me. General would mean it can span across what all of us as humans are capable of doing collectively. But I guess we don't know what else it's trying to do, because we can't really play with it. I think we're getting pretty close, though. These models are definitely taking on a mind of their own. They're getting a bit spunky. And by the way, they're also getting lazy, so they're learning to be a little more human each week. How often do you have to remind your model to do good work?
Patrick Moorhead:
Every day on every deliverable right now.
Daniel Newman:
Isn't that funny? Isn't it funny that we've trained these things and they turn human? How often do you have to do the same thing with your employees?
Patrick Moorhead:
I mean, at some point you just have to replace them, you know?
Daniel Newman:
You know, you're like, sorry Anthropic, I'm moving over to OpenAI because you're not doing a good job. You've got to shame these things, and they react to that crap. They're wild. I mean, how prescient was that whole Her movie, though?
Patrick Moorhead:
Yeah, it was. It was good. I mean, I forget the exact adage, but amazing science looks like science fiction at first. Hey, Danny, we've got to move here. Great topic. But let's go into Bulls and Bears. There were actually no earnings this week, so we're going to talk tech trends: what is moving the markets? Let's dive in. All right, Daniel, the Iran ceasefire drove a $1.5 trillion rally. What's going on here?
Daniel Newman:
What happened?
Patrick Moorhead:
Iran ceasefire is real. I heard that. I see what you're doing there.
Daniel Newman:
Yeah. Look, we have a market that's basically risk-on, risk-off. Clearly, the signaling in the market is that it believes something's going to get done in these two weeks, because what's happened in the past has been an almost daily gyration. So this was a great response, and you could argue we're still in a somewhat bullish market. People have called it a bear market, but I think the war itself has just completely stunted any rally, any positive momentum. We saw, of course, that the market's rise seems directly correlated to oil's fall: as oil goes up, the market goes down. That's not surprising. And the biggest thing was the ability for traffic to move through the strait. That's where supply chains get squeezed on a global basis, and people are really concerned because it's not just oil; it's chip production, helium, lithium, key goods. So there's a lot going on, but this is really, I think, a supply chain reaction to people now believing we're going to get production moving again. It was really positive, and the market needed it; it feels like we've been in free fall for a few months. Even if it is a relief rally to some extent, Pat, I'll take it. The only thing I would say is, I'm not sure how ironclad this ceasefire really is. It feels like it's got some weaknesses. I'm not sure this thing's over, and if it's not over, we could be back right where we were.
Patrick Moorhead:
I mean, it seems like it's a time to sell, right? If you truly believe that, and by the way, I agree with you: I think this thing is going to get blown up and we're going to start fighting again. I just think this is part of the process. So again, don't take anything I talk about as investment advice, because I lose a lot of money in the markets. But it just seems like it's a time to sell and this thing's going down; pack away cash for when it drops a trillion dollars in a day. But we will see. I'm also wondering, if we do get true peace, whether we'll see a little pop as well. So hey, let's go to the next topic. Daniel, you and I have talked about semiconductor tariffs, and you need to separate those from restrictions; they're two separate things, though sometimes related. There's a big deadline coming up on April 14th for Section 232. I've talked about this a lot on broadcast and brought it up in my week-ahead videos, but it will determine the tariff structure, which would solidify a 25% duty on NVIDIA and AMD chips with an exemption for US data center use. It's an interesting way to not only limit access to these chips but also make some money for the government. I'm glad to see the US data center exemption, but it really creates a two-tier pricing structure. What does this mean for everybody else? Is this going to bring down the sovereign AI build-out because it becomes too expensive? Or does it fall into the category of people paying whatever they need to get these chips? It does compress the margins for NVIDIA and AMD, though that's probably already priced in. And I'm wondering how far this goes. There's been a lot of talk about GPUs, but what about XPUs? Now that CPUs are widely used in AI, are we going to see the same thing there?
The other thing I haven't pieced through yet is final assembly and whether that determines anything. There's this whole idea of conversion: let's say you're making chips in Taiwan but doing final assembly in a different country, and due to that conversion, you don't get hit with the 25%. What I've learned from going through this multiple times is that we'll have to wait and see. There will be some gyration on the 14th that may or may not hold. And I think this is a setup for the Trump and China negotiations, so it might just fall off.
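The two-tier structure Pat describes is straightforward to model. A minimal sketch, with a hypothetical list price (actual NVIDIA and AMD pricing does not exist in this form):

```python
# Illustrative two-tier pricing under a 25% import duty with a US
# data-center exemption; the list price below is hypothetical.

DUTY_RATE = 0.25

def landed_price(list_price: float, us_datacenter_exempt: bool) -> float:
    """Exempt US data-center buyers pay list; everyone else pays +25% duty."""
    return list_price if us_datacenter_exempt else list_price * (1 + DUTY_RATE)

gpu_list = 30_000.0  # hypothetical accelerator list price, USD
print(landed_price(gpu_list, us_datacenter_exempt=True))   # 30000.0
print(landed_price(gpu_list, us_datacenter_exempt=False))  # 37500.0
```

The spread between the two tiers is exactly the margin question Pat raises: either the seller eats the duty to keep a single world price, or non-exempt buyers, such as sovereign AI build-outs, pay the premium.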
Daniel Newman:
Well, you were kind of reacting to the news this week about China announcing it's moving ahead with its own AI stack. It's not directly related to this, but I thought that was interesting. They were always going to build their own stack, though. They would have been idiots not to.
Patrick Moorhead:
The question was how fast.
Daniel Newman:
Yeah. I mean, did it make them go faster? Did it not? To be or not to be, Pat, that is the question. Look, I don't have a lot to add; I think you covered it pretty well. I don't know if the deadline means the end. They could literally just come back and say they're extending it and keep doing the same thing. So we don't know 100%, but I expect continued modifications based on the progress being made. I think there was some new legislation this week, which we didn't really talk about, where the US is trying to put further pressure on not even shipping DUV tools to China. Did you see that? Yeah. Okay. So we clearly understand that the war is being fought on this front, but I do not have a crystal ball on this one.
Patrick Moorhead:
So I'm going to leave it at that. Okay. Hey, let's move to the final topic, and that is Samsung Q1 guidance. Daniel, does this just fall under your "we need more compute, and if we need more compute, we need more memory, and we can charge whatever the hell we want for it" category? In shocking news,
Daniel Newman:
There is a lot of demand for memory. Samsung was in kind of dire straits for a while; that company was a mess. And now, I don't know how to do the conversion, but it's going to do 133 trillion Korean won, with almost 50% operating profit. So Pat, this follows the Micron numbers: they're able to push margins right now, they're pushing revenue right now. The only question keeps going back to how cyclical this is. Is there a bust coming as supply becomes more available? And again, this goes to the human condition of not being able to look at the world in a nonlinear way; we only know what we've historically seen. What do we always say? History maybe doesn't repeat itself, but it rhymes. In this case, is history going to repeat itself in any way? Increasingly, I'm becoming convinced of the Jevons paradox sort of demand curve: as this becomes cheaper, utilization volume goes up at such scale that I just don't know when it ends. I'm sure we've seen that in past compute cycles, but this one is different, right? And those are the doomed words of every marketeer: this cycle is different. But I'm going to say it. I think this cycle is different.
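The Jevons dynamic Daniel describes, where cheaper compute drives total spend up rather than down, can be sketched with a constant-elasticity demand curve. The elasticity value here is a hypothetical assumption, not a measured figure:

```python
# Jevons-paradox sketch: if demand elasticity > 1, falling unit cost
# raises total spend. Elasticity of 1.5 is an illustrative assumption.

def total_spend(unit_cost: float, base_usage: float, base_cost: float,
                elasticity: float) -> float:
    """Usage scales as (base_cost/unit_cost)**elasticity; spend = cost * usage."""
    usage = base_usage * (base_cost / unit_cost) ** elasticity
    return unit_cost * usage

baseline = total_spend(1.0, 100.0, 1.0, 1.5)  # 100.0 at baseline cost
halved = total_spend(0.5, 100.0, 1.0, 1.5)    # cost halves, usage ~2.83x
print(baseline, halved)                        # spend rises to ~141.4
```

With elasticity below 1 the same function shows spend falling as cost falls, which is the classic bust scenario; the whole debate is which regime AI compute is in.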
Patrick Moorhead:
You're probably sick of hearing me say this, but this is my ninth memory cycle in my 35-year tech career, and it's following similar patterns, just exacerbated. Lest we forget, two years ago all the memory providers had negative gross margins; not negative profit, negative gross margins. All of the buyers of memory were putting the thumbscrews on, and what happens then is they all pull back on capex. Now, I don't think anybody could have predicted this 100x boom. I think Michael Dell did a great job articulating just how big memory per GPU has gotten, and that alone has driven it. And then there's the fact that HBM takes roughly four times the wafers of standard DRAM. The memory providers are going to charge whatever they can, which is almost anything; I've heard of monthly price increases across HBM and DRAM. Even if you have a long-term agreement, an LTA, it's mostly about availability as opposed to price. So yeah, this is the other part of the predictable cycle, which I like to call the payback cycle for the negative gross margins. I think Samsung's memory business just told us everything we need to know about the AI infrastructure cycle: it's not slowing down, supply is constrained, and it's very, very expensive. So Daniel, with that, the Spartans. We should have done this show shirtless, but maybe next time.
Daniel Newman:
We might draw the wrong audience.
Patrick Moorhead:
Possibly. I mean, we should have dressed up as Spartans. You know, that would have been fun. Possibly. I mean, one of my favorite movies. I mean, I just I love the gore. I love the machismo. I just absolutely love the 300.
Daniel Newman:
Which one? The 300?!
Patrick Moorhead:
Yeah, exactly. Exactly. But hey, I want to thank you for tuning in at this 300th episode. Join our community, hit that subscribe button, tell your friends, your family, your pets, whoever might be involved, join our community and we'll see you next week.
Daniel Newman:
Hey Pat, did you see that there's pup tides now? Anyway, see y'all later.
