Six Five Connected with Diana Blass: Why Everyone's Talking About CoreWeave
Explore the reasons behind CoreWeave's explosive growth and its impact on the AI cloud infrastructure landscape with insights from industry leaders and customers.
🤯 From crypto mining startup to AI powerhouse in under a decade!
Six Five Connected host Diana Blass explores the layered journey of CoreWeave, the AI cloud infrastructure company that’s now making waves.
Hear direct insights from CoreWeave’s top tech minds – Chetan Kapoor, Chief Product Officer; Peter Salanki, CTO; Chen Goldberg, SVP, Engineering; and Max Hjelm, SVP, Revenue – on their pivots, recent IPO buzz, and strategic partnerships with NVIDIA and OpenAI!
To truly understand the forces shaping AI infrastructure, this episode brings in invaluable perspectives from Brian Belgodere, Sr. Technical Staff Member, AI & ML, at IBM, Mark Whitney, VP, Engineering, at Rescale, and Nick Patience, VP & Practice Lead, AI, from The Futurum Group, showcasing how CoreWeave's purpose-built infrastructure is accelerating real-world AI applications and disrupting the cloud landscape.
Key takeaways include:
🔹Purpose-Built Power Accelerates AI: CoreWeave's infrastructure, specifically designed for AI with optimized bare metal servers and NVIDIA GPUs, delivers significant speed and efficiency advantages over traditional cloud solutions, enabling faster and more scalable AI application development as validated by IBM and Rescale.
🔹From Startup to AI Infrastructure Frontrunner: CoreWeave's rapid evolution from cryptocurrency mining to a key player in AI cloud infrastructure, marked by strategic partnerships and an IPO, highlights their successful pivot and strategic positioning within the rapidly expanding AI market.
🔹Real-World Results Validate Specialized Infrastructure: Customer testimonials from IBM and Rescale demonstrate the tangible benefits of CoreWeave's purpose-built cloud, showcasing accelerated AI model training and significantly faster simulation times for critical applications.
🔹A Specialist Disrupting the Traditional Cloud Model: By focusing exclusively on the unique demands of AI, CoreWeave is emerging as a disruptive force in the cloud landscape, offering specialized solutions and challenging the traditional one-size-fits-all approach of hyperscalers.
Learn more at CoreWeave.
Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five Connected is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Diana Blass: It wasn't a chatbot that hit Wall Street first.
News Reporter: It's the largest tech IPO we've seen in the US in several years.
Diana Blass: Instead, it was CoreWeave, the company providing the infrastructure powering them all.
Max Hjelm: AI becomes really exciting when you don't even know you're using it.
Diana Blass: In this episode, we go beyond the ticker to meet the company that's the talk of AI.
Jensen Huang: One of our great partners, CoreWeave, is in the process of going public, and we're super proud of them.
Nick Patience: Classic case of a specialist in a maturing overall cloud market.
Diana Blass: Discover what's made CoreWeave one of the fastest-growing cloud providers in the AI revolution.
Brian Belgodere: They would give us the best opportunity to get the GB200s in the hands of our researchers sooner than anyone else.
Diana Blass: The simulations it's helping to scale.
Mark Whitney: It’s to get thousands of times faster results.
Diana Blass: And the cloud war, it's disrupting.
Chetan Kapoor: This is very different from some of the traditional hyperscalers.
Diana Blass: All coming up on this episode of Connected. To understand CoreWeave's rise, we headed to Nvidia's GTC, one of the biggest stages in AI. Here we saw a world of robots, virtual assistants, and automated cars, all running off AI models that require massive compute engines: infrastructure where these models are trained, fine-tuned, and deployed. But doing that at scale with speed and performance isn't easy.
Nick Patience: You know, the amount of throughput, putting networking together with the GPUs and the storage is absolutely crucial for that raw high performance.
Diana Blass: Traditionally, enterprises leaned on hyperscalers for just about everything: storage, compute, networking. But their infrastructure assumes that each machine is doing something different. AI is a whole new ballgame.
Peter Salanki: The AI use cases are very different because all of a sudden you have a whole lot of computers that are all working together on the same thing. So you're super sensitive to any individual failure. And your usage pattern, how you access storage, even what your power demand looks like in a data center, are all now uniform. So the traditional architectures suffer in terms of performance because they assume that all the computers in a data center are not doing the same thing. They assume some oversubscription.
Diana Blass: Enter CoreWeave, a company purpose-built for AI with key differentiators, among them bare metal servers optimized for Nvidia GPUs. That means direct access to the hardware, with no virtualization overhead slowing things down.
Nick Patience: As you get closer to the actual hardware, any layer of software abstraction ultimately has a performance hit. And so if you can talk more directly to the hardware underneath, then you know you're going to get the optimal performance out of that hardware.
Diana Blass: But speed is only part of the story. CoreWeave wraps its raw power in a streamlined developer experience, offering managed Kubernetes, advanced workload orchestration, and flexible pricing that scales with demand. It's all part of a platform designed to make AI deployment easier, and customers say it's working.
Brian Belgodere: One of the big selling points was that they build to Nvidia spec. So I know exactly what I am getting.
Diana Blass: IBM turned to CoreWeave to power the training of its open source granite models.
Brian Belgodere: When it came to building out the Blackwell generation, that added a much larger energy footprint and water-cooling requirements. We sized what it would take for us to do it ourselves, and we had potential data centers selected, but that was going to be a 10-year lease. Depreciation is six years on this hardware, and next year my researchers are going to want the GB300s, then Vera Rubin. We're constantly going to be chasing whatever the latest generation is. So it made sense to partner with a company like CoreWeave.
Diana Blass: CoreWeave's story started in 2017 as an Ethereum mining operation. But the founders quickly recognized that their GPU resources had greater potential. By 2019 they had rebranded and begun offering cloud-based GPU services. So when ChatGPT hit in 2022, CoreWeave was ready. A strategic partnership with Nvidia gave them access to in-demand GPUs. As of the end of 2024, they operated 32 data centers across the US and Europe, housing over 250,000 GPUs and serving clients that include Microsoft, IBM, and OpenAI.
Chetan Kapoor: So it's a hyper competitive environment with respect to having the latest and greatest foundational models available. Because that basically underpins the capability of the AI service that builds on top of these models. It also drives economics, right? Newer models are faster, cheaper, more capable and whichever company is able to kind of move the ball faster and at a greater distance has the opportunity to start building up market share.
Diana Blass: Training models fast is one thing; keeping them running reliably is another.
Chen Goldberg: Let's say in the old world you had a web app running on a single node in a Kubernetes cluster. Awesome. Maybe there is an error with that node. What you will usually do is move it to a different node. And maybe it's already over-provisioned and running in multiple regions and so forth. In the world of AI, because of the size of the data and because of the scale, usually there will be multiple nodes involved in that interruption. So let's say I'm running a job and one node is not working. That means that maybe I have 999 nodes that are now waiting for that node to be resolved in order to continue the job. So just the cost of those cycles is really, really high.
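Goldberg's point about idle cycles can be made concrete with a back-of-the-envelope sketch. All numbers below are illustrative assumptions, not CoreWeave or IBM figures:

```python
# Toy illustration (made-up numbers): the idle-node cost of a single
# failure in a synchronous, multi-node AI training job.

def wasted_gpu_hours(total_nodes: int, gpus_per_node: int,
                     recovery_hours: float) -> float:
    """GPU-hours burned by the healthy nodes that sit idle while one
    failed node is detected, replaced, and the job resumes."""
    idle_nodes = total_nodes - 1  # every other node waits on the failure
    return idle_nodes * gpus_per_node * recovery_hours

# A 1,000-node cluster with 8 GPUs per node, stalled for 30 minutes:
print(wasted_gpu_hours(1000, 8, 0.5))  # 3996.0 GPU-hours lost to one fault
```

This is why a single node failure in a tightly coupled training job is so much more expensive than in a traditional web workload, where the other 999 machines would simply keep serving traffic.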
Diana Blass: In high stakes sectors like biotech, that can have major consequences. Just imagine you're a pharmaceutical company using sensitive data to develop a cure for cancer. A system failure could risk errors, increase costs, or even break compliance.
Mark Whitney: My name is Mark Whitney. I'm the vice president of engineering at Rescale. Rescale is a high performance computing platform in the cloud built for scientists and engineers to simulate the real world on computers. The observability piece is very important for these really intense, long running, high performance computing workloads because if something goes wrong in the middle, we need to be able to pinpoint where that is and be able to resume the simulation because we still want to be able to get that result for the customer.
Chetan Kapoor: A lot of cloud service providers have very limited visibility or insight that they provide to their customers around what's going on in their fleet. But on the CoreWeave side, we have a really, really rich collection of observability tools. This is the dashboard for the actual job. You can see it's running at full power. On the right side there's a graph that shows how much compute is actually being used to train that model. They're also able to get into specifics around GPU temperature, core utilization, how much memory is being used, and things like that.
Diana Blass: That kind of efficiency, according to Rescale, has enabled simulations to run 20, sometimes even 40 times faster.
Mark Whitney: So I think it's good to look at an example. One is Boom Supersonic. They are building a new supersonic jet, and they use the Rescale platform to do a lot of different simulations of their plane designs. For example, one thing they're doing is aerodynamic simulation, so they use our platform to simulate the aerodynamics over the wings. And furthermore, they use GPUs on our platform to accelerate that simulation. They've seen up to 20 to 40 times faster simulation times with Nvidia GPUs on our platform.
Diana Blass: It all speaks to the power infrastructure can unlock when it's purpose-built for the unique demands of AI.
Max Hjelm: Really, how we help to deliver that is by solving the infrastructure challenges for our customers, so they can spend more time building these products.
Diana Blass: In under a decade, it's gone from a GPU startup to a standout at Nvidia's GTC.
Peter Salanki: Three years ago we were like, oh Google, someone noticed me.
Diana Blass: Now an AI hyperscaler competing with the likes of AWS and counting Meta as a customer.
Chen Goldberg: Let's go.
Diana Blass: Will their purpose built cloud help to get us to the next stage in AI?
Jensen Huang: AI that understands the physical world.
Chetan Kapoor: Customers are going to continue to need the latest and greatest infrastructure. Not just GPU nodes, but the entire cluster, as quickly as possible and at as large a scale as possible.
Diana Blass: We'll be watching, because this show is just getting started. I'm Diana Blass, and now you're connected to CoreWeave.