The Rise of Customized AI Factories
How are customized AI factories leading an evolution, transforming performance and efficiency in the AI era? 🤖
Find out during this conversation from the Six Five Summit: AI Unleashed! We're thrilled to present one of our Semiconductor Speakers, Chris Koopmans, Chief Operating Officer at Marvell. He joins host Daniel Newman for a pivotal discussion on Marvell's role and innovations in this critical space.
Key takeaways include:
🔹The Rise of Customized AI Factories: Explore the crucial concept and rapidly growing importance of customized AI factories, understanding how they differ and why they are essential for future AI demands.
🔹Marvell's Innovation at the Core of AI: Delve into Marvell's pivotal role and groundbreaking innovations within the AI space, highlighting their unique contributions to the development of these specialized computing environments.
🔹Future Trends & Predictions for AI Deployment: Gain forward-looking insights into the most significant trends and strategic predictions for how AI will continue to evolve and be deployed across the industry.
🔹Optimizing Performance & Efficiency through Customization: Understand how the shift towards customized AI factories, driven by companies like Marvell, is unlocking unprecedented levels of performance and efficiency for complex AI workloads.
Learn more at Marvell.
Watch the video at Six Five Media, and be sure to subscribe to our YouTube channel so you never miss an episode.
Or listen to the audio here:
Daniel Newman: Hello, everyone. Welcome back to the Six Five Summit 2025. We are here in the semiconductor track and we've got a great spotlight session. Excited to have you join me here. I have Chris Koopmans back for another conversation with Marvell. How are you doing today, Chris?
Chris Koopmans: I'm doing great. How are you?
Daniel Newman: Oh, it's been a rush the last couple of years. This show, Chris, is AI Unleashed and you've been a keynote for us a few different times. I enjoy every year sitting down with you. I'm just thinking about the last couple of years, how the conversation has progressed. Marvell has been moving along in the space, but where I'd love to start this conversation is just a little bit of your general observations on how the AI market is moving. You're in it in so many different ways.
Chris Koopmans: True.
Daniel Newman: So, why don't we start there?
Chris Koopmans: AI Unleashed is a perfect title because it is unlike anything I've ever seen in my career. Every time we think, "Wow, this thing is huge, can it really grow that fast from this point into the future?" And then, if you fast forward any time period, one quarter, one year, we look at it... It was faster. And then, you ask yourself the question again, "Can it continue to grow this fast?" And it's evolving really, really quickly. The levels of data center CapEx spend are just unbelievable, and so is the amount of interest and desire for our solutions, both high-speed connectivity and building out custom AI infrastructure platforms. The opportunity set just continues to explode. So, it's truly an exciting time to be in the infrastructure space.
Daniel Newman: Yeah, and your business did really make a significant pivot and you were one of the companies that came out early and really were able to be declarative in sort of how AI was impacting your business.
Chris Koopmans: Right.
Daniel Newman: But I can't really let this conversation go too long without talking about the XPU space.
Chris Koopmans: Sure.
Daniel Newman: There is, Chris, so much excitement about the custom silicon. We've seen hundreds of billions of TAM added in the AI chip space. We think the next frontier is going to be custom for the hyperscalers. But beyond that, talk a little bit about AI factories, a term that gets used a lot. How do you see this kind of emerging and being built in the future?
Chris Koopmans: Sure.
Daniel Newman: How does Marvell see that?
Chris Koopmans: Yeah. So, first of all, the first waves of build out in AI infrastructure really were with the big hyperscalers, the ones that already had huge data center footprints and were able to very quickly mobilize and build out AI infrastructure within their footprints, and, of course, secure and build out new data centers and new infrastructure. That starts with the top four in the US, but also around the world. Going forward, though, we're seeing what I'll call emerging hyperscalers. Who do we call a hyperscaler in 2028? I actually think there are going to be new names on that list, because you see all these new kinds of companies: the model owners and builders, the application owners and builders, and then the rise of sovereign AI. You're starting to see just an explosion in all of these new areas as well. And ultimately, they're all looking for this AI infrastructure. And you mentioned the XPU side of things. At Marvell, we have, I'll just call it, two halves of the AI business. There's the interconnect: no matter whether you have a GPU or an XPU, any of those things, even CPUs, they all need to be connected with really high speed. This business has grown so fast over the last several years because it's attached to every single one of them. Every system that's being built and delivered by every data center operator in every AI space in the world is using Marvell interconnect technology, the high-speed DSPs. And so, the way that that has grown has been truly remarkable.
While at the same time, we've built out this custom business that you talked about, where we're seeing an explosion of desire to build their own AI infrastructure, optimized for their own sets of applications. So, think about what I just mentioned: the hyperscalers are building out infrastructure for their own applications and workloads in the cloud, the model owners can build optimized infrastructure for their models, and the application owners are building out AI infrastructure optimized for their applications. And so, the custom opportunity continues to get bigger every time we look at it. In fact, in 2021, we sat in this room and Matt Murphy, our CEO, announced that Marvell was entering into what we call the cloud optimized silicon business, or the custom silicon business. Most people looked at us like, "Why would they build custom silicon?" Well, nobody's asking that question anymore. Everybody's building custom silicon, and it goes beyond just the XPU.
Daniel Newman: Yeah, and that's a good leading question. And by the way, you were early, you were right. Got to give you credit where credit is due. And I don't think everyone saw it right away.
Chris Koopmans: Right.
Daniel Newman: But I do think, when you look at how this is emerging, right? There's kind of these two schools: there's kind of the merchant silicon school and the custom school. And by the way, the way you talked about hyperscalers, I think that is really interesting, and I'm glad you brought sovereign into this, because I even think of things like service providers becoming hyperscalers.
Chris Koopmans: Right.
Daniel Newman: I look at how in different regions are you going to address sovereign cloud in Portugal. It's probably going to be some partnership with a cloud and a telco company and trying to... That's got that regional expertise, so much going on Chris. But the platform isn't just XPU, it's not just compute.
Chris Koopmans: Right.
Daniel Newman: What are the... Talk a little bit about what the platforms...
Chris Koopmans: Sure.
Daniel Newman: ... look like beyond just fighting for those custom AI chips.
Chris Koopmans: Sure. So, if you think about this space, one XPU or GPU is not good enough for anything. You can't build the application to fit into one XPU or GPU. So, you have to string together thousands, tens of thousands of them. And so, that means we fundamentally have a connectivity problem. How do you actually build this at data center scale? And so, ultimately, what that really means is there's a platform around them. If you start with the basics of an XPU, the best thing you can do is pack as many transistors as you possibly can into one reticle-sized die. That's not enough. So, then you go with advanced packaging, and Marvell's made numerous announcements on this subject over the past few months. Then, you package multiple of these die together with high bandwidth memory into a single package.
Then, you want to put as many of those on a board as you can, put as many of those in a rack as you can, put as many racks in a data center as you can, and connect all of them with as much bandwidth and connectivity as you can to really build out a logical XPU, if you will, out of multiple smaller components. And so, ultimately, the way that that is done tends to be tied very closely to the architecture. So, if you built a custom XPU, you probably have a custom platform. You're not just using a standard off-the-shelf platform. Now, that is a change. When this first started, you probably did just put it into a standard x86 infrastructure with a top-of-rack switch that was running Ethernet and all of these other pieces. But that infrastructure moves at a much slower pace. And so, what you're seeing now is an investment in building new types of custom scale-up fabrics, and the rest of the platform going custom, to be able to get as much of this compute power available as possible, optimized and designed for your application.
Daniel Newman: So, it's interesting, though, because we pin it at something like 10% right now, XPU versus kind of the big GPU spend, but it is converging. It's closing in...
Chris Koopmans: Right.
Daniel Newman: ... because of the volume buying, the diversification. But when you talk about the overall platforms, one of the big rate limiters, Chris, is going to be the network. So, you're very focused on interconnect as well. AI doesn't function all that well if you can't move the data quickly enough. So, you hear a lot about scale-up, you hear a lot about scale-out.
Chris Koopmans: Right.
Daniel Newman: Marvell has a pretty significant role to play there as well.
Chris Koopmans: Right.
Daniel Newman: I mean, how fast are you seeing it move? Is the industry keeping up in this space of connecting everything, as we spend our time talking about the doubling and tripling, the exponential growth, of compute power?
Chris Koopmans: On every one of these vectors, I continue to be amazed. If you start in package, packaging today has achieved things that nobody ever thought of before. Being able to package that number of die, and the size of these chips, it's hard to even call them chips. These things are like six inches by six inches. These are huge packages. And so, that's one technology vector. The next technology vector is, of course, how do you actually connect them together with high-speed copper interconnect, and there's tons of innovation there. Eventually, you're going to run out of space there, and it's going to need to go optical. So, we've made announcements around our co-packaged optics solutions to be able to build an end-to-end XPU fabric, all optically connected straight into the package. That's a new innovation that's coming as well. And, of course, once you build out that rack and you have the rack-scale platform, now you need to be able to go across the data center and build as many as you can there. And Marvell demonstrated the world's first 448 gig PAM4 optical connectivity at OFC a couple of months ago.
A few years ago, there was debate in the industry whether you could ever do 448. In the technology industry, we sometimes think we'll never be able to go beyond this, that there's some barrier coming. And I'm always amazed at the ingenuity of engineers to break through that barrier. Another one is the reach of PAM4. These data centers are now getting so big that they're starting to hit the end of that reach. And the ZR space for between data centers, for data center interconnect, is too high power, while PAM4 is too short a distance. So, Marvell announced the first coherent-lite solution that actually combines parts of both technologies and allows you to connect huge campus-wide data centers. So, if you look at every one of these vectors, the race is on, the engineers are innovating, and the technology is moving at a pace unlike anything we've ever seen.
Daniel Newman: And it feels, in some ways, at least as I observe it, that Marvell is innovating on many, many vectors. But there's so much concentration right now on one. And so, I think it's kind of important for everyone out there to understand what you're doing in terms of connecting in the rack, connecting between the racks, connecting between the physical buildings, and then, of course, sites that are far apart. There's a lot of technology in there, and there's a lot of TAM.
Chris Koopmans: Right.
Daniel Newman: There's a lot of TAM that's accessible to you. One other thing that I think is super important, because I ask, what are the rate limiters? It's the compute, it's the network, and it's the power.
Chris Koopmans: Yup.
Daniel Newman: Copper has its place, but over distance it becomes very power inefficient. And, by the way, GPUs can be very power inefficient. For certain use cases, they can be great. We're seeing exponential scales in some ways. But there is also a reason that many companies want to design their own.
Chris Koopmans: Right.
Daniel Newman: They have a very specific workload in mind or several workloads. They can build their own software. They can do these things. How is Marvell thinking about addressing that? Because that seems like one of the biggest opportunities...
Chris Koopmans: That's right.
Daniel Newman: ... is sort of addressing the power problem. And while China might be happy to build coal-fired plants every week, or whatever they're building, in the US, we're trying to build nuclear, but that's like a decade out, Chris.
Chris Koopmans: Nuclear power, yeah.
Daniel Newman: It's not going to happen that quickly.
Chris Koopmans: Yeah, ultimately, I mean, that's what we wake up and think about every single day. When you're building silicon, you tend to think about power, performance, and area, and area is cost. So, it's: how much is it going to cost? What's the performance of it? And how much power? And to the point you made earlier in terms of capital allocation, cost is almost not the discussion anymore. How much performance at what power is really all of the discussion you can have right now. And ultimately, everything we're doing is focused on that. I mean, you mentioned custom earlier. That's exactly why custom. By the way, why are we doing AI work on GPUs instead of CPUs in the first place? Because they're more specialized to do this type of work.
So, if you can make a more specialized XPU or AI processor for a more specialized workload, it's going to be more power efficient, right? And same thing with the platform: if you're addressing any possible application, you're going to build one type of platform that can deal with anything. But if you're saying, no, I'm really going to focus on this model or this application, you can build not only an optimized XPU but an optimized platform that connects those together and optimizes the connections between XPUs, the connections to memory, and the ratios of all of those for the way your model is split and spread logically across that logical XPU. And so, all of that has power and space in mind. I mean, cost is obviously an important one, but even more important is, "Hey, this is the space and power envelope that I have. What's the most optimal way to deploy infrastructure to attack my problem?"
Daniel Newman: Yeah, I agree with you. I actually think, at least at this point, if the bubble bears, as I like to call them, have anything to actually have a bubble about, it's that we have to turn all this infrastructure investment into consumption.
Chris Koopmans: Right.
Daniel Newman: And we're seeing it, in these agentic workflows where you have trillions of concurrent 24/7 agents. And we are, by the way, also seeing a pivot that I think is very favorable to you, which is inference surpassing training.
Chris Koopmans: Right.
Daniel Newman: And you guys can build a custom chip for training; you can build a custom chip for inference. But the volume in inference is where the money is made.
Chris Koopmans: Right.
Daniel Newman: Training was basically the R&D of this industry, the work that scaled the models, prepared the data, all the stuff that had to be done. So, how does this work going forward?
Chris Koopmans: Sure. Yeah, you mentioned the tipping point and money to be made, and turning all this infrastructure investment into dollars. And I find it interesting. I started my career in mobile, when the world had 2G cellular networks, was just auctioning 3G spectrum, and everybody was thinking about going wireless. And ultimately, there were billions and billions of dollars being invested in telecommunications networks and cellular communications on top of the telecommunications networks around the world. And that question was asked constantly: is anybody ever going to use 3G, 4G, 5G to do anything? Nobody could even imagine it. Back then, we weren't even doing SMS text messages. And there was a lot of questioning: does this make any sense?
The reality is it took more than a decade of just pouring money into the infrastructure build out before all the blossoms bloomed, right? Before you actually had this massive mobile economy. It is the economy now, right? It is built on this giant infrastructure that's been built. And back then, nobody even knew we were going to have iPhone and Android. Nobody knew we were going to have the app store. Nobody knew we were going to build all of these apps in the app store that are making billions of dollars.
Daniel Newman: Certainly, they didn't know about ChatGPT.
Chris Koopmans: They didn't know all the stuff that was going to come on the back of it. And I feel like where we are with AI is similar: we're pouring billions into the infrastructure to develop the platform, which will eventually lead to this massive economic explosion. But these things usually take decades. We're only like two years into this thing.
Daniel Newman: Yeah.
Chris Koopmans: I think that we don't know yet. What are they going to be, the app stores and the Androids and the iOSes and all of the other parts of the platform that are going to lead to all this economic value for the world? We don't even know it all yet, but it's definitely happening.
Daniel Newman: Yeah, there's a lot of questions. And to be very clear, I'm not one of the bubble bear guys. I just meant that, overall, that's probably been one of the questions: how fast we're building infrastructure versus how quickly it's being consumed. And to your point, the fact is we actually have this many use cases. You've got CEOs, the Elon Musks and Sam Altmans, saying, "My GPUs are melting." Literally, they have a use case that's consuming so much compute right now that they cannot get enough access. So, we actually have some of these use cases.
Chris Koopmans: Right.
Daniel Newman: It's in the wild right now, well ahead of the era of Steve Ballmer, when he said, "Nobody's going to type on a touch screen."
Chris Koopmans: Right, right, right.
Daniel Newman: There have been some fascinating... By the way, really smart people have made some fascinatingly wrong predictions in the past.
Chris Koopmans: Of course.
Daniel Newman: I think we're going to get there. I think these are really exciting times. Chris, before I let you go, what is your sort of vision? You kind of started alluding to it, but how fast does this accelerate? And how big do you see this custom opportunity?
Chris Koopmans: So, what we believe is, first of all, take your numbers for how big you think the accelerated infrastructure market, the accelerated infrastructure TAM, will be over the next three, four years, right? The compute side alone should be close to $400 billion, somewhere in the sort of $300 to $400 billion range by 2028. And what we've said is that we think a quarter of the market, you mentioned 10% now, we think it'll become a quarter of the market, whether that's in 2027, 2028, or 2029. But we believe that it's on track. We said that a year ago. We said that again today. We think that that is on track...
Daniel Newman: Around $100 billion.
Chris Koopmans: ... to go custom. And exactly, ultimately, somewhere in that sort of $80 to $100 billion worth of custom opportunity.
Daniel Newman: Yeah, it's a big number. It's a big opportunity. And like I said, I think one of the biggest things right now is that we've kind of built this world where people limit their purview. Everyone's obsessed with the compute, but this stuff does not work...
Chris Koopmans: Right.
Daniel Newman: ... if you don't get the thermals right, you don't get the connectivity right, you don't get, of course, the compute right. You've got to have the data layer correct.
Chris Koopmans: Right.
Daniel Newman: Then, of course, we've got to have the applications and the use cases.
Chris Koopmans: It's the whole platform, and the infrastructure is actually, in some ways, almost more important, really. And we're seeing that in terms of our design...
Daniel Newman: We can't do any of it without this base layer that you are helping to build. And so, I really appreciate that. And, of course, the era of the humanoids and the fully autonomous vehicles, and we'll each have our own agent. And then, you and I can play golf... go on a... and finally take a vacation. So, Chris, thanks so much for joining me here at the Six Five Summit. It's great having you each year, talking about what's going on. It's moving very quickly.
Chris Koopmans: Yeah, thanks for having me, Dan. Always a great conversation.
Daniel Newman: And thank you everybody for being part of this Six Five Summit 2025. We are AI Unleashed. The spotlight session with Chris Koopmans, COO of Marvell in the semiconductor track. Stick with us, check out sixfivemedia.com/summit to get all the sessions. More insights coming after this.
Disclaimer: The Six Five Summit is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Speaker
As Chief Operating Officer at Marvell, Chris Koopmans leads the company’s global business operations, ensuring the successful execution of its strategic goals while shaping its long-term growth strategy. His leadership spans critical functions, including operations, marketing, information technology, and facilities. The Multimarket Business Group, which includes several of Marvell’s core businesses, also reports to Chris.
Prior to becoming COO, Chris served as Executive Vice President of Marketing and Business Operations, where he led corporate transformation initiatives, M&A integrations, and global corporate marketing. He previously headed Marvell’s Networking Business Group and Global Sales and Marketing, playing a key role in expanding the company’s market presence and driving growth.
Chris earned a B.S. in Computer Engineering with Highest Honors from the University of Illinois. As a National Science Foundation Graduate Research Fellow, he pursued advanced studies in Electrical and Computer Engineering at the same institution. To date, he holds more than 10 issued patents.


