The Six Five Pod | EP 299: OpenAI’s $122B Raise, Google’s TurboQuant Shock, and NVIDIA’s Infrastructure Endgame
OpenAI locks in the largest private funding round in history, Google disrupts memory economics with a major efficiency breakthrough, and NVIDIA continues to consolidate control over AI infrastructure. This week, Patrick Moorhead and Daniel Newman unpack the clear shift from model competition to full-stack execution.
Handpicked Topics:
🔹 OpenAI closes a record $122B funding round: AI is being reframed as global infrastructure, but the raise puts new scrutiny on capital efficiency, burn rate, and long-term sustainability (The Decode)
🔹 Google’s TurboQuant breakthrough: A major memory compression advance shakes chip markets and reignites the debate: Does efficiency reduce chip demand, or accelerate total AI deployment? (The Decode)
🔹 Microsoft Copilot’s multi-model orchestration: The hosts break down how Microsoft’s recent announcement shifts enterprise priority from model selection to real-time simultaneous multi-model coordination (The Decode)
🔹 NVIDIA & Marvell Partnership: NVIDIA continues extending its reach beyond compute into interconnect and the broader data center stack (The Decode)
🔹 IBM and Arm push heterogeneous compute forward: The two companies signal a move toward dual-architecture enterprise environments, accelerating the shift toward heterogeneous compute as the default AI infrastructure model (The Decode)
🔹 Is Multi-Model AI the End of Vendor Lock-In? Or a New Kind of Complexity Trap? Multi-model AI promises flexibility and less dependence on a single vendor, but it may just shift lock-in up the stack. Pat & Dan tackle both sides of the question around whether orchestration platforms become the new gatekeepers (The Flip)
🔹 The Magnificent 7 posts its worst quarter since 2022: Pat & Dan reflect on this and raise broader questions about AI capex pressure, geopolitical risk, and valuation resets across big tech (Bulls and Bears)
🔹 Intel Fab 34 buyback: A signal of confidence in Intel’s manufacturing roadmap and capital position (Bulls & Bears)
🔹 Salesforce authorizes $25B buyback: A major vote of confidence in enterprise software during a broader selloff (Bulls & Bears)
🔹 Cybersecurity insider buying: Palo Alto Networks’ CEO purchase reinforces security as one of the most resilient spending categories in tech (Bulls & Bears)
Disclaimer: The Six Five Pod is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and reference share prices, but nothing discussed should be taken as investment advice. We are not investment advisors.
Daniel Newman:
One away, one away from the big three double O. We look like we could be part of the 300. Remember the 300?
Patrick Moorhead:
Are we getting that? Oh, I got you. Yeah, this is your one for the day. And so early. It's 9am central.
Daniel Newman:
Part of the 300. Yeah, but next week we come as warriors. Spartans.
Patrick Moorhead:
I know we do need to do something special. We were threatening to show up, like, shirtless, I think.
Daniel Newman:
Yeah. Well, I mean the 300, right? I mean, they were Spartan warriors.
Patrick Moorhead:
I gotcha.
Daniel Newman:
So you got to have something different, you know. Clearly, a Six Five thing should be worth a few hundred million at this point.
We have a two and a half times revenue, maybe three times revenue run rate. So does that make us worth $600 million?
Patrick Moorhead:
I don't know. The rumor is TBPN got 10x, but we'll talk about that later today.
Daniel Newman:
We will, we will. But all is well. I mean, for all you out there, we are working on a day, on a holy day. Some people don't work. It's a holy day for many, whether you're celebrating Good Friday, Easter, or whether you're celebrating Passover right now. Or you're just celebrating that it's Friday. In your case, Monday or Saturday. I don't know when these things come out. But when it's out, when you're listening, we hope you're having a good one and having a good holiday, Pat. It is a day that my calendar suggests that there are some people not working. I only have like 12 meetings. It's about half of normal. And you know, they say about this era, right? Before AI, I put in, you know, on average, 12-hour workdays, and thanks to AI, now I'm putting in 18-hour workdays.
Patrick Moorhead:
It's a, it's very, the meme is real. I'm doing the same thing. I mean, anywhere I can have a screen, I'm vibe coding.
Daniel Newman:
So I'm having a problem with the CLI. I just hit my rate limit. You know, I have Perplexity Max and I'm still running out of tokens, and it's only 9:11 in the morning. I just started an hour ago.
Patrick Moorhead:
No, but that's crazy. I do 100,000 a batch, which is about $1,000 per hit. I mean, how can you get rate limited? You're probably rate limited because the API it's connected to is rate limited, whatever model you're asking it to use or it's using on its own.
Daniel Newman:
Yeah. It just says you've hit your rate limit, resets at 1 p.m., America/Chicago. I don't know. I'll argue with her later. We've got another busy week in the books: global politics, we are still at war, it's been busy in the macro. We had a jobs report come out today, even though the markets are closed. It's been a busy funding week. Certain massive AI labs got funded. It's been a busy news week overall. You and I are here. We're going to try to sift through the news. We're going to try to break down the FUD. By the way, our Elon fab argument went pretty viral. I'm noticing a lot of X action on it. A lot of people don't realize it's a simulated argument, so I'm getting a lot of hate mail this week. I don't know what we can do to make them realize it's not a real, real argument, but probably a lot of people tended to agree with you that this ain't happening. We'll see, though; it's never a good idea to underestimate Elon and what he's capable of. But, buddy, we got a big docket. Anything you want to hit on before the week? Are you home next week? You're actually getting a whole other week off the road.
Patrick Moorhead:
I'm actually home next week. Yeah, I'm actually going to stay in the gym pretty much all day, maybe move my desk up there, make it a standing desk. You know, I could do squats and meetings at the same time. But no, we got a lot of travel coming up after that, right? We've got Adobe Summit. We're both going to be there. I think we're going to fly in, probably for a day, I think. And then, same place in Vegas, Google Cloud Next. Six Five, we'll be in both of these places.
Daniel Newman:
Yeah, we're everywhere. We are everywhere. So you hear that, Dario? We are everywhere. You need a podcast network? What do you got? All right, well, listen, show looks great. We got a great docket for everybody. We'll waste no more time. Let's go ahead and get into our top section, the decode.
All right. People said it couldn't be done. Nobody wants OpenAI. Nobody wants its stock, except apparently there was $122 billion in this oversubscribed round, at least technically, because it was supposed to be 100. And the valuation was supposed to be 850. They got a spare 2 billion. I mean, what's a round of 2 billion?
Daniel Newman:
It's just a small round. But what's going on? OpenAI got their money.
Patrick Moorhead:
Yeah. You know, it's funny. This kind of reminds me of my investment strategy, which is always to buy high and sell low. And where people are doing something, it looks antithetical to something that I believe in. I mean, I barely use Chachi Boutini anymore. It's more accurate than Claude, but it's, like, glacially slow. And if I put a fact check at the end of every one of my Opus runs, where I need a 100% fact check, it's actually faster. And this was very top heavy for the largest companies who did this. Well, first of all, it was the largest private funding round in history. And by the way, do you ever hear of anything that was undersubscribed? A slight editorial note there. Amazon led with $50 billion, but $35 billion is contingent on IPO or AGI. NVIDIA and SoftBank threw in $30 billion, and Andreessen Horowitz, and even D.E. Shaw, where they pulled in some of their investors, probably family offices. It's hard to figure out what this company is doing, right? Is it getting out of consumer and focusing on commercial? And then you see OpenAI brought out an iOS, sorry, an app inside CarPlay, right? And then you saw they acquired TBPN, a talk show. What is this company actually doing? It looks like Anthropic is on an absolute tear right now, so it's hard to see where this thing actually ends up. Out of the other side of my mouth, it seems like every service has a whipsaw, you know, one or two months in the sun, right? Gemini had it. ChatGPT seemed like it had a run for about a year straight that nobody could touch. And then, you know, we still got X. I know people keep telling me xAI is so amazing. It is fast, it is good, but quite frankly, it doesn't have the tools to do what I need to do. So maybe it's an API service that's out there. So yeah, I mean, shuts down Sora, buys TBPN, brings out a CarPlay app. I don't know what this company actually wants to be great at.
Daniel Newman:
I'm happy they're vibing their way through. I got a headline on CNBC about that today. I said the TBPN deal is like a good-vibes deal, because you raise $122 billion and you buy something like that. I still believe in proprietary data. You and I still clearly believe analysis matters. Who says things matters a lot. But I got asked by them, what's their M&A strategy? And that one, I really can't figure out what the box is there. What's the M&A strategy there? All I would say is every massive company that became worth a trillion dollars or more eventually buys some type of media outlet. Whether it was Bezos buying the Washington Post, whether it was Microsoft, they bought LinkedIn, Google obviously acquired YouTube. It became about distribution. Meta and X are the modern media. So I don't know, maybe there's something there about being able to filter through the real time and create something. That's why, and you and I have talked a lot about this, as content becomes more and more democratized at AI scale, who says things is going to start to matter even more than what gets said. So I think that's the trust layer. It's trust and attention. But having said that, outside of that particular acquisition, you said it really well, Pat. OpenAI needs to figure out a way to be sticky again. That's my big concern for OpenAI. I get a trillion-ish dollars of valuation, but these investors are coming in at $850-something billion, and I can't get to that number to understand that value. Because it's very easy now to mindlessly interchange between different models.
They can get the job done. And you and I are power users, but we've sort of learned to discern, and we like a Perplexity because, in the end, we just want something that tells us which model is going to do the best job on the particular task that we're trying to get done. And each model has different pros and cons. They just do. And so I just don't know what's sticky there. Like I said, Microsoft has its distribution. Google has its data. Amazon has marketplaces. You can go down these companies; NVIDIA is the picks and shovels. All these magnificent trillion-dollar companies have something. Meta has 3 billion daily eyeballs on the platform. What is going to be sticky for OpenAI? I don't think this solves their problems, for sure. But it is interesting. So anyways, overall, the bubble goes on. If it's a bubble, this just keeps it moving. So sorry, bubble bears, you're gonna have to wait a little longer, because what did they get, like two and a half years of runway? Right now, it's like two and a half years of runway. So OpenAI can keep doing OpenAI things, and they're going to have the money to keep spending. And just like I can't get to how SpaceX could be worth $2 trillion, I really can't get to how OpenAI is worth a trillion, but nevertheless, here we are. So let's talk about some more nonsense: Google's TurboQuant breakthrough. This kind of came out, by the way... there was an unveil, but there was a paper on this like six months before it came out. Basically, Google has a compression algorithm reducing AI model memory requirements by like 6x. It delivers 8x faster inference on older GPUs, because this has been in testing for a while. It was being tested on H100s. With zero accuracy loss, zero retraining required. Without getting too technical, I see this as another DeepSeek moment. People want to be like, oh, memory is dead now.
100 percent guarantee you, there is no slowing of demand for memory. Whether you hear the story about Apple buying up every bit of DRAM they can get their hands on to basically shut out other device makers, or what's going on with HBM and Jensen and Google and everyone buying up every bit of HBM they can get their hands on. All this means, in my opinion, is it's a Jevons paradox thing. It's just going to open up more use and scale. It's great that we're figuring it out. It's great that we're creating more efficient inference. But as you and I keep blowing through our token limits, and as we continue to run into compute constraints every time we try to use this stuff, this is part of the solution for it. So it's great they're doing this work. It's great they're breaking through. It's idiotic that you're seeing a 14% plunge in Micron shares in two days, wiping out $25 billion in market cap, when not a single fundamental thing has changed about the business. Margins are high, spot prices are high, demand is high. And by the way, some big analysts fell for it. I think it was Citi and a few others that downgraded them on this, which, once again, can only be one of those things that make me say, hmm.
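Editor's note: the details of the compression scheme the hosts discuss aren't covered in the episode, but the basic mechanics of shrinking model memory with quantization can be sketched in a few lines. The snippet below is a generic, minimal illustration (not Google's algorithm): per-row symmetric int8 quantization of a float32 weight matrix, storing one scale per row, which cuts memory roughly 4x at a small reconstruction error.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Per-row symmetric int8 quantization: store int8 codes plus one
    float32 scale per row instead of the full float32 weights."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0  # avoid divide-by-zero on all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)

fp32_bytes = w.nbytes
int8_bytes = q.nbytes + scale.nbytes
print(f"compression: {fp32_bytes / int8_bytes:.2f}x")  # close to 4x
err = np.abs(w - dequantize(q, scale)).max()
print(f"max abs error: {err:.4f}")
```

The 6x figure claimed for TurboQuant would imply sub-8-bit codes or compressed scales; the point of the sketch is only that fewer bits per weight directly translate into less memory per model, which is why the debate immediately becomes a Jevons paradox question.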
Patrick Moorhead:
Yeah, it was funny. It even made NVIDIA go down 6.6%, which was kind of crazy. And what I like is this is how the industry reacts, and everybody should have seen it coming. I had talked for years about how AI was going to evolve from GPU-only to heterogeneous compute. And forget the fact that Google was doing it all day long with TPU, and you had Inferentia. Nobody believed it. So it was an all-GPU world, and if you didn't get on board, you were a complete moron. And then here we have NVIDIA, who came in with CPX. They came in with Grok. And then they came in with CPUs. And therefore, this happened. So we should have expected this. The market will react. We're going to see some interesting architectures, I think, that put less reliance on the memory. You even have NVIDIA kicking in stuff for KV cache; you're going to see a lot of compression stuff. And if I can shill one of the companies that I advise, which is ZeroPoint, they do memory compression on the GPU and on the XPU, about 10x. And it's not, you know, some industry-standard thing baked into logic. It's very, very proprietary. So every time we get an out of bounds, the market, sorry, the technologists, react to go in and find a solution. So I think it's super, super exciting. And it's funny, when I parse through the big drop-off in valuations on the memory stocks, some of it has to do with, like, Micron not doubling down on CapEx, right? So it's almost the lack of belief. If you look at the terminal value of the company, if they don't see some bigger CapEx commitments, then the terminal value goes down. And I get that.
Daniel Newman:
Yeah, absolutely. Well, this is going to be a constant push and pull. But we're going to talk a little bit today, by the way, about something else that's a pull, as I mentioned: the whole pulling from multiple models at the same time. Oh, by the way, remind me to tell you something really funny about the source links on this one. Have you looked at them? Anyway, let's talk about the Microsoft pivot. Copilot has been in a bit of a, you know, holding pattern, let's just say. Revenue-wise, it's one of the best performing AI enterprise platforms out there, but there are question marks around, you know, Cowork and other things. But Microsoft is basically saying, we're not all in anymore on just one model, we're going all in on multi-model. What do you think? By the way, this sets up the flip, everybody. So, Pat and I, you can listen for cues of what's going to happen in the future.
Patrick Moorhead:
Yeah, we're just going to hit the news on this. So Microsoft revamped its researcher agent, which allows it to run two models at the same time. You've got ChatGPT and you've got Claude. Features like Critique and Model Counsel. So Model Counsel is very similar to Perplexity's, where you put in a query and it pulls back information from both. And Critique is more picking a primary model and then having another model check its work, and you compare and find the one that you like the best. Both of these call out where the output varies. So it's funny, I personally do this, and it's not any fancy feature. I find ChatGPT hallucinates a lot less than Claude, and I'll sometimes have ChatGPT fix it. Funny, other times, I'll just have the Claude researcher go in and fact check its own work. So percentage-wise, and Daniel, you hit that coming in: they've got 15 million paid Copilot seats, about 3% of the 450 million M365 base. They're up 160% year on year, but still a very small fraction of the available base.
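Editor's note: the two patterns Patrick describes, fanning one query out to several models side by side versus having a second model review the first's draft, can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's implementation; the `model_a`/`model_b` functions are stubs standing in for real vendor API calls.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

# Stub "models" standing in for real API calls (hypothetical; a real
# pipeline would call each vendor's SDK here).
def model_a(prompt: str) -> str:
    return f"[A] answer to: {prompt}"

def model_b(prompt: str) -> str:
    return f"[B] answer to: {prompt}"

def council(prompt: str, models: list[Callable[[str], str]]) -> dict[str, str]:
    """Council-style mode: send one query to every model in parallel and
    return all answers side by side so differences are visible."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m.__name__: pool.submit(m, prompt) for m in models}
        return {name: f.result() for name, f in futures.items()}

def critique(prompt: str, primary, reviewer) -> str:
    """Critique-style mode: one model drafts, a second checks the draft."""
    draft = primary(prompt)
    return reviewer(f"Check this answer for errors: {draft!r}")

answers = council("What changed in the Q3 memory market?", [model_a, model_b])
print(answers)
```

Note the cost asymmetry the hosts debate later: council mode doubles token spend on every query, while critique mode only adds a second pass when a draft needs checking.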
Daniel Newman:
Yeah, like you said, we're gonna talk a little bit later on about whether this is a great strategy or not. We'll tease both sides, but rather than hitting this one too hard right now, let's keep moving and talk a little bit about another $2 billion check. You get $2 billion. You get $2 billion. Jensen says everybody that we like that can help us get to the mission of a trillion dollars of sales per year, and then probably more in the future, gets $2 billion. And this week, Marvell gets $2 billion. Because what is Jensen continuing to do? He's continuing to signal the future constraints. He's investing in things that are going to enable, in this case, connecting NVIDIA's AI factory, its NVLink Fusion, and all of its product with Marvell. And so Marvell is going to create and provide custom XPU, NVLink Fusion-compatible scale-up networking in situations where NVIDIA will be delivering Vera, ConnectX, BlueField, NVLink, and Spectrum-X. I sat down a week ago with Chris Koopmans, the COO, and he continues to reiterate a couple of things at Marvell. One is that the custom AI chip business is important, but the connectivity business is really the money shot. And so whether it's the recent Coherent investment that we saw, or it's the Marvell investment, there is a meaningful focus on adding that connectivity layer. So, you know, I always like to say 4D chess; chess, not checkers. Jensen's basically buying up the next pieces of the supply chain. He's buying loyalty, he's buying commitment, he's buying capacity. Remember, all of these companies that he's investing in, guess what they all have: capacity. So he's getting them to commit their capacity to him in advance. It's very smart. But this is putting himself on the front foot. What's next? Is Lisa going to go cut a deal with Hock? Lisa and Hock going to go cut a deal? What do you think?
Patrick Moorhead:
Is there a move here? They're already cutting deals with each other. Look at the silicon in the big rack that's coming up. So, you look back at the Celestial AI photonic fabric acquisition, and this is really the crown jewel here. I mean, the company paid $3.2 billion in 2025, and some people asked, was that too much? And I think they're looking really freaking brilliant. Marvell right now is the leader in AI photonics. There are a lot of folks who are honing in, a lot of people who are selling alternatives, but this is where they're really leading. You know, Matt Kimball on my team had a good line; he called it a universal control layer play, which I think is beautiful. It connects and manages heterogeneous enterprise AI environments. So it makes sense. It was a quote in Network World. It is interesting, the $2 billion for everybody type of thing. They didn't give $2 billion to everybody, but there were a lot of $2 billion checks out there, which is pretty funny. By the way, they didn't just give; they gave a lot more money to Intel as well, and they ended up buying part of the company. So the one thing that's not getting a lot of play, because a lot of people aren't interested in this, is the AI-RAN angle, and NVIDIA honing in on what they want to do with the 6G infrastructure cycle. It's interesting; working with Marvell, and like Aerial RAN for 6G, I think, extends NVIDIA's reach beyond the data center and into the telecom stack. The Nokia investment, let's not forget about that. So I guess if I were to sum this up: NVIDIA is going to keep selling GPUs, but it's also going to start collecting rent on the interconnect fabric of every AI data center around the planet.
Daniel Newman:
Collecting. Landlord. Scumlord. That's what my dad used to call them. Unless it was us. We were good landlords. So, yeah, I mean, look, someone that hadn't been in the news for, oh, a few days was Arm. So they decided to be back in the news with IBM. That's an interesting one. IBM and Arm partnered to build dual-architecture AI infrastructures. What's going on there?
Patrick Moorhead:
Yeah, so interesting stuff. First of all, backdrop: IBM has a very robust Z mainframe and LinuxONE server business. You know, don't forget, everybody, the mainframe died 10 years ago. No, it didn't. Use of it just keeps going up. And IBM has layered in open-source software, Red Hat stacks, and agentic AI, and put AI functionality not only on the chip but also on expansion cards with Spyre. So stuff like fraud detection, that may or may not have had to be ETLed off to an NVIDIA GPU server, gets done right there. So it's a lot quicker. And every time you move the data, you run a security risk. And oh, by the way, banks get fined if they don't hit certain transaction speeds. So that's kind of the backdrop here. And most of that software, or all of the software today, has essentially been proprietary software that talks directly to the IBM chips inside of the mainframes and inside of LinuxONE. Now, x86 as a percentage is declining and Arm is growing. There aren't a lot of details on this collaboration yet, but it's a dual-architecture AI infrastructure. The net-net is that in the future, you'll be able to run Arm applications, Arm middleware, and Arm development tools, the same tools that, let's say, all the hyperscalers are using, and now you can deploy and use those same tools for Z and LinuxONE servers.
Daniel Newman:
Yeah, there wasn't a lot more on this, but I think the trend line, whether it was at GTC, which followed up the Arm everywhere event, and now this news, is that the move toward heterogeneous compute continues to catch fire. Arm is continuing to lean in, you could say right now, and it's picking a moment. And this moment seems to be working well for it. IBM, like you said, is so critical in so much of this kind of transactional compute. People don't really know, by the way, how much of the world's transactions run through IBM. But there's a reason that these companies continue to thrive. Yeah, not much else to really cover on that one, Pat. We have a couple other high-level topics, but I think I'm going to skip ahead, Pat, because I think what we really need to do is debate, because we're just agreeing too much in this section. And nobody wants to hear you and I, you know, align. People want to hear one of us make absurd arguments, and then they want to not realize it's a fake argument, and then they want to get really angry about the position. So this week, we're going to debate something about multi-model and what's going on there. But for those of you unfamiliar, let me just read you in. This is the flip. We take a topic, we predict, we provide our insights, and we debate it, but it is simulated. And then we flip a coin, and we don't know which side of the argument we're gonna get. So this just shows maybe how on our toes we are. You know, Pat is like really on his toes, because sometimes, you know, I actually rig these flips so I get the side I want. And I just want to see Pat's work. But today, we're going to talk about multi-model. I'll give you more on the topic on the other side, but let's go ahead and do the flip. I have to go first this week. I like when you go first.
So the question today is going to be: is multi-model, let me double-check this one, is multi-model AI the end of vendor lock-in, or a new kind of complexity trap? All right, let me do my best here. What do you think? Come on, Claude, give me my answer. I'm just kidding. I'm just consulting a few notes. All right, look, I think the answer is yes. You heard me provide a little breadcrumb to this when we talked about OpenAI. I provided more breadcrumbs when we teed up this topic before. Pat, nobody wants to have to know which model they use. Unless there's a commoditization and a market dominance the way Google has dominated search, and there are gonna be this many models, people want the simplicity. They wanna be able to talk semantically, contextually into a chatbot agent text window, and they want an answer to come back that is, very simply, the very best answer. In the beginning, yes, it was novel. ChatGPT, it was novel. Gemini, it was novel. Anthropic. That's not novel anymore. People are understanding both the generative AI value as well as the agentic workflows. And anybody that's taken even one quick tour on OpenClaw or Perplexity Computer knows that in the end, the computer knows better which model does best. Microsoft didn't have that optionality early on, and so it built its entire Copilot ecosystem around OpenAI. And while OpenAI has proven to be a great return at an $850 billion valuation when they put money in at like 10, what OpenAI did not do was enable Copilot to be the winning solution that Claude Cowork proved to be early on, and that Computer or Computer-like multi-model offerings are going to be in the future. Pat, you and I can break down each and every time what is wrong with each certain model and why we don't want to be locked into a single model, because it always, inevitably, does some part of the work that we want to do wrong. But what does Microsoft have going for it?
It has the combination of distribution, and now it has multi-model. And it can basically, in a short period of time, take advantage of that distribution to win back customers that would prefer not to end up having to use a single model. So in the past, we were fighting about infrastructure. In the past, we were fighting about which LLM to use. In the future, we're going serverless. Multi-model is like the era of serverless for people using AI. With critique mode, basically, one model is checking another. A single model is going to work only partially; multi-model works for everything. This is what Microsoft needs to do. But it's not just Microsoft. This is what the world wants. This is what the future looks like. And nobody is going to be locking themselves into a single model, trying to run their enterprise, trying to run their agents, trying to run their business. Sorry, Charlie, but the billion or trillion dollar IPO of OpenAI is going to be a problem.
Patrick Moorhead:
All right, Daniel. I mean, first of all, I think Microsoft is brilliant for doing this, and so is Perplexity. But can we stand back a second? We've got Claude timing out. We have an energy crisis. We can't build data centers fast enough. We can't build chips fast enough. So hey, I have a solution: let's throw two agents or two models at the same problem. That will fix everything. Let's not focus on improving the model and getting the right answer out in a decent way. Let's double the amount of work it takes to get it done. Makes a lot of sense, doesn't it? You thought you were afraid of your tokens, Daniel. We just doubled it. We just doubled the budget. There we go. And then let's talk about orchestration complexity. Running two models means managing two APIs, billing relationships, compliance requirements, data residency rules, and SLAs. That could be an issue. And it's hard enough already. Most enterprises can't manage one model yet. I mean, how are they going to manage two, right? The compliance surface area doubles. Cross-checking also creates latency and cost, and I already talked about cost. And quite frankly, the consensus answer might be worse than the single best model. Who knows? Who says it's better? It seems like a human would do a better job at determining what the right answer is. So anyways, I do believe in multiple models, putting the right model on the right agentic task. But what I do know is that in this high-cost, energy-starved environment, having two agents, two different models, apply their logic to something doesn't seem to make a whole lot of sense.
Daniel Newman:
You know, that's bullshit. The best part of your argument was that all we're going to do is burn more compute, which is actually really smart. Because that was a way you could not totally argue against the fact that multi-model is going to work better, but you actually were able to make an argument that we're just too resource constrained to actually go this fast. Yeah, that's actually really true, though. Because now you are going to double token usage; double, triple, quadruple it, if you're using an agent and then a second agent to basically check the work of your agents. We're going to increase token usage exponentially. But this is also why, when people talk about TurboQuant and breakthroughs, it's like, this is all horse caca. Because to actually make AI work better, you're going to apply more AI to the problem. You're going to use more agents, more models. I've seen you coding; you're telling these things to fact check, double fact check. When you say double fact check, you just put two more agent workloads on: the first agent fact checks, and then a second agent fact checks the agent that you just fact checked with. So yeah, if you're one-shotting, you can be efficient, but you're sure as hell not gonna get good results. And by the way, once you build these things, you have these things happening autonomously. So you're not actually asking it to do this; you build the prompts. So every single time you launch the prompt, it actually does the second, third, fourth, and you'll say, go out to this model, go out to this model, go out to this model. And by the way, check it, check it, check it. So you actually did one thing, and what would have been a one-shot ran seven or eight agents.
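Editor's note: the token-multiplication effect Daniel describes, one prompt automatically fanning out into a chain of work-plus-fact-check passes, can be made concrete with a tiny sketch. The `run_agent` stub below is hypothetical and simply logs each invocation; in a real pipeline each call would be an LLM request, so the log length is a proxy for token spend.

```python
# Hypothetical agent stub: each call "costs" one model invocation and
# appends its task to a log. A real pipeline would call an LLM API here.
def run_agent(task: str, log: list[str]) -> str:
    log.append(task)
    return f"result({task})"

def one_shot_with_checks(task: str, n_checks: int) -> list[str]:
    """One user prompt fans out into a chain: the first agent does the
    work, and each later agent fact-checks the previous agent's output."""
    log: list[str] = []
    out = run_agent(task, log)
    for i in range(n_checks):
        out = run_agent(f"fact-check #{i + 1}: {out}", log)
    return log

calls = one_shot_with_checks("summarize the earnings report", n_checks=3)
print(len(calls))  # prints 4: one work pass plus three check passes
```

The user issued one prompt, but the prebuilt chain made four model calls; scale the check count and add per-check model routing and you get the "seven or eight agents" per one-shot that Daniel describes.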
Patrick Moorhead:
Oh, 100%, I'm doing multi-model, whether it's inside of Claude or using ChatGPT, which is just so glacially slow at this point I can barely even use it. I use ChatGPT like I used to use Gemini, for all the quick, easy stuff, and just let it do auto. But yeah, I'll do an adversarial fact check at the end of every important thing I have to do. And that thing spins for like seven minutes on a 5,000-word piece. And then I have to tell it to go back in and take all the mistakes it found and reintegrate them into the initial output. So yeah, it was definitely a simulated argument, but it was a lot more fun than taking the pro side on this.
Daniel Newman:
No, the pro side is too easy. It's too easy. Although I would imagine the people that aren't really using this stuff... I mean, that's the interesting thing. I still think we're in the twilight zone in some ways. We're in a bubble of information where everyone around us is like, you know, this guy built a $5 billion company with two subagents in his spare time while he was working out. And we see this and we're like, God, we're behind. But then you talk to a lot of people, and they're still using ChatGPT like a search engine, if they're using it at all. I think I read that something like 15% of the workforce says they haven't used AI yet. I don't even think that's possible anymore, because if you've even done a Google search at this point, you're using AI, right? The AI Overviews in Google. Or if you've done a search on Insta, which, by the way, knows way more about what I'm interested in seeing than it should, given that I've never told it anything. Anyway. All right. Well, today is Friday, and it is Good Friday and the market is closed, but you're listening to this whenever you've gotten around to it. Thanks for tuning in. But Pat, there were four open-market days, and on Monday the market will once again be open. And you and I like to do at least a quick rundown of what's going on in markets. So why don't we jump over to Bulls and Bears?
Patrick Moorhead:
All right, Mag7 posts the worst Q1 since 2022.
Daniel Newman:
You weren't going to let me do my job, right? Sorry, I was distracted. You're much better at this. There was a bash command that came up on my CLI, and then I ran out of tokens, upgraded my account, and got it started again. Everybody wants to follow your and my vibe coding. They want to know what we're up to over here. Pat, the Mag 7 posted its worst Q1 since 2022. We thought this was going to be a great year. The end of last year was great. Everything was up. AI was humming. The world was at peace. What is going on? Is AI dead? Is that why? Is everyone giving up? Are we calling it a day? Are we packing it in?
Patrick Moorhead:
The MAGS ETF is down 16% from its 52-week highs. Microsoft off 35%, its worst quarterly decline since the 2008 financial crisis. Meta off 27%, Tesla 24, Amazon 18, Google 16, NVIDIA 16, Apple 14. I thought Apple was in its own little cocoon here. I mean, listen, a lot of things are happening at the same time. There's disenchantment with AI, and it's funny, I can barely even pin down what's actually causing it, because the downstream pull seems ridiculous. I do think there's a decent argument on the economics. For every dollar Anthropic makes in revenue, I think they said they're losing 50 cents; I'll need to go double-click on that. You've got OpenAI where, you know, maybe we're getting discounted tokens and we don't even know it. That $200 sub should actually cost $500 or $600 if these companies want to make a profit. I saw a meme out there that said OpenAI is going to make up their losses with higher volumes, which I thought was super clever. You've got that, and then you've got the SaaSpocalypse narrative, right? Is it a self-fulfilling prophecy? It does seem a little overblown. I've said on record, and I think you have too, that there are going to be SaaS companies that get very quickly to value with AI, and those who don't, and those who don't will be dead, and they'll be vibe-coded instead. I mean, heck, I don't have a CRM system, but I built the front end of one connecting it directly to NetSuite, which has been super interesting. I saw the first dashboard yesterday; I was pretty pumped. I also think Iran has something to do with it, because of the energy supply, even though most of that oil is barely used for data centers. And I think peak data center energy draw is about 5% right now, going to 10%. And it's funny, people are rotating out of the Mag 7 and it's not going to cash; they're apparently plugging into defense, energy, and industrials.
So I don't know. Maybe during a war that makes sense. Energy makes sense, but it's also tied to data centers.
Daniel Newman:
So anyway, the market's going wackadoodle. I don't fully get it. I mean, I get the oil thing: one of the largest passages for oil in the world, not necessarily to the U.S., by the way, being largely shut down, and the expected impact of that. And then there's the downstream stuff you saw about materials that are needed in Taiwan for the fabs to keep making chips, and if this stays shut down long enough... Again, I'm reading this stuff, I'm not a materials expert, and I don't know how much supply they have, but there are reports they're going to run out. I also read something like Taiwan might run out of water. I don't know if you saw that, but they have a drought right now, and they literally turned off the showers at the gym so they can keep making semiconductors for our data centers. But I think a lot of this is the war. I think a lot of this is just the excuse. I said something yesterday: I think maybe the President just likes April to be the hold-your-nose month for investors. Last year we had Liberation Day; this year we had Iran. He got on that press conference, I don't know if you watched it the other night, and I was like, what the hell was that? Everyone thought it was the de-escalation address, and he's like, I will bomb the shit out of them again. And after a couple of good days, the markets just blew up, and then it kind of went flat. I mean, I think the world largely expects this conflict to come to an end, probably by the end of April. And by the way, if you want to crank the markets back up right into the midterms, he could just flip the switch with whatever tweet or post he needs to do, roar the markets back, and get everybody excited for the second half.
I can't pretend to be inside the planning, but all I'll say is: if you held your nose and bought last April, you did really well. So don't be surprised if this drawdown is an opportunity. But again, Pat and I are not providing financial advice, so don't do anything just because we said it.
Patrick Moorhead:
I mean, there's reverse Cramer; there should be reverse Moorhead. You should really try to work on that. Seriously, I know I'm the worst investor alive. I lost a quarter of a million dollars in a SPAC that never even got off the ground. I'm the biggest chump out there in the world.
Daniel Newman:
At the time, though, I remember I was like, how do I get in on that? So that was a timing thing; you just caught the tail end. A year earlier, that would have been like a 5x return. It's just wild. But let's talk about Intel. Intel shows, I'd call it, a little confidence, a little swagger. It goes back to Ireland and repurchases its ownership from Apollo, which, by the way, has had some stress on it, highly exposed to software, highly exposed to private credit, an area that's been under a bit of duress. Intel basically offered them $14.2 billion to repurchase the 49% stake in Fab 34, its high-volume manufacturing facility in Leixlip, I don't know how to say that, in Ireland. And this was kind of a win-win as I see it. Apollo made about $3 billion in less than two years. But Intel knows the future is in owning all of its capacity, and this is where a lot of its Intel 4 and Intel 3 silicon, Core Ultra and Xeon 6, is being built. So I think this is smart. I think it's a good move by Lip-Bu. He's continuing to reverse some of his predecessor's decisions, good or bad. That isn't an indictment; he just goes the other way: doing less, being more focused, doubling down on certain things. But Pat, I think it shows a good vote of confidence, and the market agreed, giving the stock a 9% boost on the deal.
Patrick Moorhead:
Yeah, I mean, the Fab 34 buyback, I think, changed the narrative from survival to offense. The board is confident, the ELT is confident about the company's future. And by the way, as you know, Daniel, JVs can be complex; the governance across that constrained a lot of the operational decisions. Quite frankly, the reason they didn't go gung-ho while they're in a CPU shortage on server could have something to do with Fab 34. And the 9% single-day surge, like you said, means the market is re-rating Intel as a company that can reclaim its strategic assets. And Apollo earned a 27% return in roughly two years, which, even though they're distressed now, really validates the distressed-investing thesis in semiconductors. So yeah, I think this is just another feather in the cap of the turnaround strategy. Net-net, Intel sold its factory to survive and bought it back to prove it was alive. And that 9% pop says the market, I think, believes in the resurrection.
Daniel Newman:
Absolutely. Speaking of resurrection: software goes boom. But the CEOs of software companies aren't buying that narrative; they're buying their stock back, because they believe the market has it wrong, that they're being undervalued. And no one's doing this in a bigger way than Marc Benioff, who I think just added another $25 billion to his earlier $25 billion. What's going on there?
Patrick Moorhead:
Yeah, so this $25 billion is part of a $50 billion total buyback authorization. The first phase was 103 million shares. I'll quote Marc Benioff: quote, we are aggressively repurchasing shares because we are so confident in the future of Salesforce, end quote. And it's not just the company buying back, right? Board members like Laura Alber from Williams-Sonoma and David Kirk, a former NVIDIA chief scientist, are purchasing shares as well. I mean, listen, it's a double-edged conviction bet. It sends the right message, for sure. But it's interesting, Daniel, I haven't seen a lot of software buybacks result in institutional investors saying, yeah, we're in, aside from the reduction in dilution, right? Because anytime a company buys back its own shares, the share count floating out there goes down and the price per share goes up. So I'm not seeing a lot of that right now. And I do think... yeah. No, no, please, go ahead.
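The dilution mechanics Pat mentions, fewer shares outstanding means higher earnings per share even on flat profit, work out like this. The numbers below are purely illustrative, not Salesforce's actual financials:

```python
def buyback_eps(net_income: float, shares: float,
                buyback_dollars: float, price: float) -> tuple[float, float]:
    """EPS before and after a buyback executed at a given share price."""
    eps_before = net_income / shares
    retired = buyback_dollars / price            # shares removed from the float
    eps_after = net_income / (shares - retired)  # same profit, smaller denominator
    return eps_before, eps_after

# Illustrative: $6B net income, 1B shares, a $25B buyback at $250/share
before, after = buyback_eps(6e9, 1e9, 25e9, 250.0)
# 100M shares retired: EPS rises from $6.00 to about $6.67 on unchanged profit
```

That arithmetic is the "floor" a buyback provides; whether the market then re-rates the stock, as Pat notes, is a separate question.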
Daniel Newman:
No, I was just going to say, I think you're right. I think the buyback is like a lever; it's a signal. I don't think it always results in what people expect. It doesn't necessarily instantly turn into a higher share price, because people know the joke about it. I mean, no one's done more buybacks than Apple. Apple has literally just continued; I think it's something like $600 billion they've spent. So while the other Mag companies have put about $600 billion into AI CapEx, Apple spent about that much buying back its stock instead of doing CapEx. And by the way, as you saw when the market was pulling back, it pulls back less on Apple than anyone else. So there is some upside to that. But I do think that even bigger than what a corporate buyback signals is when the CEO themselves buys stock. That's a bigger signal to me: when the executive, in the open market, feeling their stock is undervalued, is willing to put their own money back up and reinvest in the business. That tends to move markets more than balance sheet tactics. Still, it's a better sign than doing nothing. But the question always is: when you put $50 billion to work buying your own stock back, would it be better to put that $50 billion into building, innovating, developing, acquiring? That's right, Daniel, something else you should be doing. That's what I always criticized Apple for. What could you have done with that $600 billion instead of buying your own stock?
Patrick Moorhead:
Yeah, it is interesting, the freeze that was put on the M&A market under the Biden administration, too. It really put a freeze on all of that. Because you can invest it in R&D, you can make an acquisition, you can do a JV, heck, but now the rage is just doing a strategic alignment with a company, hoovering up 90% of their employees, and moving on from there. So, interesting stuff.
Daniel Newman:
Yeah, and by the way, until they do something about that, why the heck would anybody buy a company? I hear you. No idea. On the way out, I will say, we're not going to dive into this deeply, but the CEO of Palo Alto did something similar. I think he bought $10 million of Palo Alto stock in the open market with his own money. Like I said, things like that always make me think the CEO knows the stock is undervalued. And I think cybersecurity is one of the dumbest sell-offs out there, because these LLMs are literally not solving cyber, they're creating massive cyber risk. So I think that's one of the areas that's been significantly mispriced, and the market will catch a clue on that at some point in the near future. In fact, we did it: we actually finished our show within the window we booked, despite the technical issues we had this morning. I hope you have a good weekend. Like I said, whatever you're celebrating, enjoy it. Ride horses, spend money on horses, think about horses. My kids are coming home, so I'm going to see my kiddos, and I'm excited about that. And I'm back on the road Monday. But all of you, whether it's Monday, Tuesday, Wednesday, Saturday, whenever you're listening to this, thank you so much for being part of this community. Share this great content, because we've been ripping, rocking, and rolling for a long time here, and there's so much more good stuff to come. Thanks for tuning in. See you all back here, same spot, next week.