Accelerating Energy-Efficient Computing

How can we accelerate energy-efficient computing to power the next wave of AI? ⚡

Find out in this critical discussion as part of The Six Five Summit: AI Unleashed 2025! Tristan Holtam, Group Vice President, Corporate Strategy and Development at Applied Materials, joins host Daniel Newman for an illuminating conversation on semiconductors for AI workloads. He shares how Applied Materials, a giant in semiconductor manufacturing equipment, holds a foundational role in shaping how chips are made for the AI era.

Key takeaways from their discussion:

🔹Applied Materials' Foundational Role in the AI Stack: Discover how Applied Materials lays the groundwork for AI technology, starting from the core processes that define chip manufacturing.

🔹The Energy Efficiency Imperative for AI: Explore the industry's biggest challenge in large-scale AI deployment – the urgent need to boost data center hardware energy efficiency by an astonishing 10,000X over the next 15 years.

🔹Manufacturing as an Innovation Engine: Understand the critical role of manufacturing in driving breakthroughs for energy-efficient computing, highlighting disruptive innovations across all facets of chipmaking.

🔹Beyond Moore's Law: Delve into the evolution of Moore's Law from 2D to 3D, emphasizing the rising importance of materials engineering amid chip design complexity. Tristan shares Applied Materials' strategy for accelerating new chip technologies through co-innovation.

Learn more at Applied Materials.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Daniel Newman: Hi, everyone. Welcome to the Six Five Summit: AI Unleashed. I'm joined today by Tristan Holtam, Group Vice President, Corporate Strategy and Development at Applied Materials. This is going to be an excellent Semiconductor Spotlight. We're going to be talking about accelerating energy-efficient computing. Tristan, welcome to the show. Great to have you on The Six Five and at the summit.

Tristan Holtam: Thanks so much, Daniel. It's great to talk to you.

Daniel Newman: So let's do a quick backdrop on this. You know, Applied Materials, like, you know, everyone that knows the Six Five knows Patrick and I are semiconductor geeks, nerds, chip guys. Anyone that reads our stuff, watches us on the tube, on the news, knows we love talking about this. But Applied Materials is a massive company. So just give us that quick backdrop for everybody out there in our audience, and those that may or may not know Applied Materials.

Tristan Holtam: Sure. So I think many of the other speakers at this summit work for companies who are leading the way either in the development of AI hardware or software that ultimately is going to be used by companies or consumers. At Applied Materials, we're really working at the other end of the technology stack. And Applied Materials is actually quite a descriptive name for the company. We're a material science or materials engineering company at our core. And those large pieces of capital equipment that go into the fabs that we're well known for, they're really just the life support system for that material science. One of the examples that we often give is if you look at that application processor chip in your phone, that chip has 19 billion transistors, about 70 miles of wiring. It's all contained in a sliver of silicon as small as your fingernail, as thin as a piece of paper. And today that chip is made from 25 different materials that are combined together in 100 different ways. And the smallest feature of that chip is about two atoms tall. So creating, depositing, shaping those materials, that's really what Applied Materials does better than anyone else in the world. 

Daniel Newman: Tristan, really appreciate you giving a little bit of the background. Now let's talk about AI for just a second here. I mean, yes, people know Applied Materials, and now they know it even better. They know you're one of the top suppliers of equipment for basically every chip fab in the world. But one of the areas that they may not be as familiar with is how you enable AI. Talk a little bit about where Applied plays in the overall AI technology stack.

Tristan Holtam: In terms of our role in AI, we look at AI as a technology stack, with the applications, the models and the software at the top of the stack, so that's companies like OpenAI, Google, Microsoft and Meta, the data centers themselves in the middle, and then the chips and the chipmaking technology at the base. At Applied Materials, we're really the foundation of that stack. Our focus is on those foundational technologies that define how chips are made, and we work up from there and partner with the different layers of hardware and software companies in the ecosystem.

Daniel Newman: With such a foundational role in the tech ecosystem, Applied has a unique perspective on where AI is headed and what might be some of the big challenges facing the industry. What would you say is the number one thing? And I have my answer, by the way, but I'm not going to tell you what it is. What do you think is the number one thing that's going to really limit this large-scale deployment of AI?

Tristan Holtam: Yeah, so I would say if we were going to use one word, that word would be energy. But to explain why we believe that's the case, maybe I'll back up a little bit and explain where we think we are in the adoption curve of AI. And I think the first thing that we can say is that AI has almost infinite potential applications and we're really just at the beginning of this large-scale deployment. And that deployment is going to take multiple decades. And as AI gets better and better and more cost effective, you can think of these economic crossover points for all of the different types of applications. The second thing I'd say is we see the adoption of AI becoming a necessity rather than discretionary. So if you step back and you look at some of the big macro trends in many of the developed countries, there are significant changes in demographics taking place. So people are having fewer children, they're living longer. And that means the ratio of the working population to the non-working population is getting smaller and smaller. And the way to offset what is effectively a shrinking workforce is through productivity growth, increasing the average output per worker. But productivity growth over the last decade has been very slow in those economies. I think in Japan, for example, it's less than 1%. So I think this will probably sound like a very Silicon Valley thing to say, but the only way to drive future productivity growth at the magnitude that's needed is through technology, specifically AI and automation. So with that as some background, given those factors, why don't we see AI being deployed much faster? And I think you'll hear throughout this summit that there are numerous reasons, cultural, ethical, technical. But in our view, the biggest one is energy. And I don't think it's any secret that AI consumes a lot of power. Today, to train a generative AI model like ChatGPT, those companies are typically using a 10 megawatt class data center. So a data center like that is consuming the same amount of energy as 5,000 homes. Within the next year or so, we expect those data centers to have 100 times more compute and use 10 times more power. So at that point you're going to be at a 100 megawatt class, and then a few years further out we can expect another hundred times increase in compute, another 10x in power. So at that point you're at a 1 gigawatt class data center. And in fact, Meta recently announced that they're building a 2 gigawatt data center in Louisiana. And to put that into perspective, the generating capacity of the Hoover Dam is only 2.1 gigawatts. So we see this as the grand challenge for the industry: to significantly improve the performance and energy efficiency of those data centers. And the metric that we think about is flops per watt, and it's really flops per watt per dollar. And an emerging consensus in the industry is that we're going to need a 10,000 times improvement in that computing performance per watt over the next 15 years or so.
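As a quick back-of-the-envelope check on the numbers Tristan walks through here (the per-year rate below is derived from the quoted figures, not something stated in the conversation):

```latex
% Each data-center generation described: ~100x more compute on ~10x more power,
% i.e. roughly a 10x gain in flops per watt per generation.
\frac{100\times \text{compute}}{10\times \text{power}} \approx 10\times \text{ flops per watt}

% A 10,000x improvement over ~15 years implies a compound annual improvement of
10{,}000^{1/15} \approx 1.85
% i.e. computing performance per watt needs to improve by roughly 85% every year.
```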

Daniel Newman: Tristan, I really like how you set that up, too, because I've spent a lot of time thinking about the productivity gains, thinking a lot about what it means, how quickly we move in this direction, how we get more productivity, and also those sort of rate limiters. Because I think even at this event you'll probably hear a lot of people talking about how exponential it's been, how mind blowing it's been to see, in the two or two and a half years since that first instantiation of ChatGPT, the model iteration and innovation, the exponential growth in capabilities and reasoning, and the development of the chips, the ones that you help support, from various fabless designers and even hyperscalers building their own. And of course you play a role in how this all comes to pass, and then you come back to this, that it is energy. And I've spent a lot of time talking about this too. And of course there are different ways to get that energy. There's more energy through, you know, kind of dirtier mechanisms like coal-fired plants, and then there's nuclear, which we've spent a lot of time talking about. And you know, people like Elon Musk are kind of swearing by the sun being the answer, that we can get all of the power we need from solar. But then even when you put into perspective something like the Hoover Dam and one single data center for one company, we go, holy smokes, we are way, way over our skis right now in terms of how fast we're building data centers versus how much power we have. So that's going to be a real challenge. But the other side of this is that some of this efficiency can come from the design, some of it comes from the materials, and some of it comes from the way we interconnect and network systems to optimize. And that's been a big advantage here in the US, and part of why we have AI leadership: we don't necessarily get more total compute, we can just do it more efficiently. And that's a big part of what's going on. So let's dive a little deeper into that, because hopefully you're all hearing my opinion here as well as Tristan's. Clearly a big topic for hyperscalers and chip designers. But where does a company like yours play a role in delivering efficiency?

Tristan Holtam: Yeah, absolutely. So we would agree with everything that you've said, and I think you'll hear many leaders in the industry right now saying that AI is not an evolution in computing, but it's really a revolution in computing. And that new era of computing requires new software, it requires new data center architectures, it requires new chip architectures. And then from where we sit, from a manufacturing perspective, it requires new ways to make those chips. So at Applied Materials, we're really focusing on five key areas. And the first one of those is how do we accelerate the leading edge logic roadmap? So really, how do we create those better transistors, those better interconnects, the better wiring to advance leading edge logic? The second area is how do we enable the next generation of compute memory, so DRAM? And DRAM is really undergoing some very big device architecture inflections at the moment, and they will play out over the next five or 10 years. The third big theme for us is advanced packaging, both for die stacking, so for example where you have high performance DRAM chips and you're stacking them on top of each other to make high bandwidth memory, and then also heterogeneous integration, where you're putting multiple chips together using very, very sophisticated packaging techniques so they act as one integrated system. The fourth area we're focused on is next generation power devices. And those power devices are going to use compound semiconductors, so gallium nitride or silicon carbide, and the potential there is that they can provide significant energy savings in the data center's power systems. And then the fifth, which is a little bit further out, is silicon photonics. So silicon photonics is all about using light to move data from chip to chip, rather than traditional copper wiring. And what we would say is in each of those areas, there are very significant innovations underway. There's a wealth of possibilities on the roadmap. So we're very optimistic about what can be done at that foundational chip level.

Daniel Newman: And those are so many things. I'll skip the editorial as I bounce into the next question, because I've got a bunch, but I was thinking about everything you just said. With materials, with light, these things are happening, they're happening very quickly, and they're essential. So companies like yours doing what you do is going to be essential, because I think we've shown that we can keep pushing the limits on compute. And there have been sort of these pronouncements about the death of Moore's Law. I think different companies argue this in different ways, but just in AI data center architecture, we're seeing a year-to-year cadence from multiple companies now that tends to include orders of magnitude more computing at greater efficiencies. But we're also seeing software and use cases developed that are going to use all that computing and more, concurrently. So it's going to put a lot of pressure on the system. So we hear about the death of Moore's Law, yet we now see trillion-transistor GPU systems in a single package. How do you see the chipmaking roadmap evolving in the coming years, from your lens?

Tristan Holtam: Well, I've found that it's very unpopular to say that Moore's Law is dead. And that's really because it depends on how you define Moore's Law. But I do think it's safe to say that the benefits of traditional 2D scaling, which is all about packing more transistors or bit cells into the same area on the chip, are slowing down or have stopped for many types of chips. When that 2D scaling Moore's Law was working well, you would shrink the feature sizes on the chip and then you would make all kinds of adjustments to the materials to make them work at those smaller dimensions. And the outcome of that would be a simultaneous improvement in performance, power and area-cost of the chip, which is often called PPAC in the industry. So today, even though 2D scaling still has an important role to play, much more of the PPAC is coming from different types of innovation. And that might be the architecture of the chips themselves, it might be three-dimensional devices and structures in the chip, so things like new transistors and interconnects, it might be new materials like compound semiconductors, or it might be things like advanced packaging. So if we just took the transistor as an example, transistors have moved from being a planar, two-dimensional device 20 years ago to FinFET and now to gate-all-around. And that first generation of gate-all-around transistors gives us about a 25 to 30% improvement in energy efficiency over FinFET. But those devices are a lot more complex to make, and they actually require atomic-level precision in the material science. So in the channel of those transistors there are actually five different materials, and each layer of material is only one to two nanometers thick. And then end to end, the process flow to make the gate-all-around transistors actually has 10 to 15% more process steps, so more complexity compared to that prior generation FinFET. But I think a simple way to think about it is that both the devices and the packages are getting much more three dimensional. And as that happens, materials innovation is playing a much bigger role.
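As a rough illustration of why the conversation keeps pointing beyond the transistor (the generation cadence below is an assumption made for the arithmetic, not a figure from this discussion), even healthy per-generation transistor gains compound to only a small fraction of the 10,000X data center target cited earlier:

```latex
% Assumption: ~1.3x energy-efficiency gain per transistor generation
% (the 25-30% FinFET to gate-all-around figure), one generation roughly
% every 2.5 years over a 15-year horizon, i.e. about 6 generations.
1.3^{6} \approx 4.8
% versus the ~10,000x flops-per-watt target discussed above, which is why
% gains must also come from architecture, packaging, memory, power devices
% and photonics.
```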

Daniel Newman: I love that you pointed that out. I think one of the key takeaways for everyone out there is that we are still getting really significant scaling, but the way we scale is different. When you thought about Moore's Law, it was very centric, like you said, to a certain idea in two dimensions. A lot of what we're hearing now is about packages, you know, 2.5D, 3D, how we're basically putting more and more into the system to create more computing power, but doing it differently. I mean, you talked about light earlier. Things like co-packaged optics are really changing the future of design and how systems are able to communicate across networks. These are just a few of the examples. But where does innovation come from? Innovation used to come from one place, and now it's kind of coming from many. Some of it is materials, by the way, which is what you're very focused on. So let's kind of bring this all home, because I think what I was trying to sum up, maybe not as eloquently as you, is that the traditional version of Moore's Law may not in fact exist. And you may be very unpopular for even suggesting that, but I think again that comes down to the definition. It sounds like there are a lot of reasons to be optimistic. The industry as a whole is driving incredible innovation, which includes this AI era, which is almost a brand-new computing architecture from the ground up. So chips are getting more complex. This means more R and D. This means more collaboration in the semiconductor, silicon and systems ecosystem. This means risk, because companies like yours have to make big bets and big investments to get to new products, new materials, new designs. So talk a little bit about that. How do you make sure we keep pushing the envelope, but make it worthwhile for companies like Applied Materials to take this risk to support these next generations of designs? And what do you see in terms of the role of R and D in all of this? So your role, R and D's role, and how do we keep the innovation going, Tristan?

Tristan Holtam: Yeah, absolutely. So as we talked about earlier, the technologies that are being used in these early phases of AI deployment, they're going to change quite significantly in the coming years. And maybe another way to frame that, another way to say that, is much of the technology that we need to deploy AI at scale either hasn't been invented yet, or, if it does exist in an R and D lab somewhere, hasn't been fully commercialized. And then of course, in many cases there are multiple competing technologies and it's not entirely clear which of those is going to win out. So I think all of that means that we're really in a race to bring these new technologies to market. And whether you're a company or at a national level, those companies or countries that can get there first, they're going to have a significant competitive advantage. So at Applied, we're spending a lot of our time thinking about what we can do to speed up, how we can accelerate those technology roadmaps, essentially, for our customers. And we believe there's a number of things that can be done to change the way that the ecosystem is working together. So if you look at how this works traditionally, if you look at how a new breakthrough in material science makes its journey all the way through to a commercial chip, it's actually quite a serial process. And there are several, you could call them clunky, handoffs along the way. That process usually starts in a skunkworks-type environment, so that might be a university lab. Then you have to develop a commercially viable process technology and the equipment set to support that process. That's really the core of what companies like Applied Materials do. Then you need to integrate those new technologies into the device itself. So that entire process can take 10 to 15 years. So one of the things that we can do is we can start to make that journey more parallel and less serial. So as we start to think about speed, and I think really we should say velocity, because direction is important, one of the things that we can do is work on collaboration, tighter collaboration. And we think there's an opportunity to work differently, not just with the immediate upstream and downstream partners, but across the stack. So, for example, if the companies at the top of the stack understand the art of the possible in the material science roadmap, they can actually help steer and optimize that roadmap and also those investments. And then finally, I think this is true for everyone in the industry, we all want to maximize the impact of the R and D dollars that we're spending. So for a company like Applied, we have to invest in our R and D platform, which is our labs and the equipment sets, and then we also invest in R and D programs. And it's really only those R and D programs that generate innovations and products and revenue. The platform itself is important, but it's really a necessary evil, and it's a very expensive necessary evil. So one of the other things that we can do is share platforms with others. And that means that across the ecosystem, we can start to make our assets work harder and free up more investment so we can spend more on the R and D itself.

Daniel Newman: And Tristan, what do you think the industry can do? You talked about that collaboration. What could the industry do to accelerate the pipeline of R and D to commercialization? Because 10 to 15 years, as you suggested, that may not be fast enough.

Tristan Holtam: Yes. So, Daniel, we can definitely talk about the theory of the industry working together more effectively and efficiently, but actually, Applied is making some real investments to make this a reality. Our vision is what our CEO calls high-velocity co-innovation. So in other words, getting more speed through better collaboration. And to do that, we've been building our EPIC platform. EPIC stands for Equipment and Process Innovation and Commercialization. EPIC is really designed to be an open platform that brings together our customers, our R and D partners, our suppliers, and other companies that bring complementary capabilities and technologies. And using EPIC, we can run either bilateral or multiparty R and D. And really the goal here is to give us faster cycles of learning and then, ultimately, higher mutual success rates. So the centerpiece of this platform is the EPIC Center. This is a new facility that we're building here in Silicon Valley. It's a $5 billion investment for Applied Materials and it's going to provide us with three acres of state-of-the-art cleanroom space that we'll share with our partners. And interestingly, the building itself has several innovations in both design and construction to support the way that we want to work in the future in R and D. We're about two-thirds of the way through that project, and the center is on track to come online in the spring of 2026. And we're just very excited about the work that we're going to be able to do with our partners and our customers at EPIC. And Daniel, as you can probably tell, we're very excited in general about the future. We truly believe technologies like AI and automation have this huge potential to have a positive impact on the world. And today we're really just scratching the surface of what's possible.

Daniel Newman: Yeah. And a thing I'd like to point out, and this is definitely my opinion, certainly not proclaiming it to be yours, Tristan, or that of Applied, is that there's a reason I'm such an advocate of research, the R part of R and D. It's a very long slog, and the companies that do it take on a ton of risk. And this is also why I tend to be pretty bullish on anything that's IP protection. Now, there's a certain amount where we want to diffuse and not use it to take advantage in the market or create great asymmetry or make things not accessible. But at the same time, we have to give companies like yours the ability to breathe and feel comfortable that their investments will be protected and they'll have a chance to profit on the risk they take. Because when you take such a big risk, you should. It's the same thing we see in pharma and other areas: risks and opportunities should be in parallel. You talk about Silicon Valley, and that's what keeps this engine of innovation, this ecosystem, going. And that's what's made this such an exciting time for AI: there are people and enterprises and entities taking big risks, doing really important work. Research is a big part of it. Great, Tristan, to have a chance to talk to you here at the Six Five Summit. Hopefully everybody out there learned a little bit more about Applied Materials, about what's going on in materials science and how it really is feeding this exciting time of AI, developing more energy-efficient silicon and chip designs and then, of course, powering the future across our industry. Tristan, let's do it again soon. I have a feeling there's going to be more to talk about.

Tristan Holtam: Thanks so much, Daniel. It's been a pleasure and thanks so much for inviting us to participate.

Daniel Newman: Absolutely. And thank you, everybody, for joining us in this Semiconductor Spotlight session here at The Six Five Summit. Stay connected with us on social and explore all of the great conversations at SixFiveMedia.com/summit. More content and coverage from the summit coming up next.

Disclaimer: The Six Five Summit is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Speaker

Tristan Holtam
Group Vice President, Corporate Strategy and Development
Applied Materials

Tristan Holtam is group vice president and chief of staff to the President and CEO of Applied Materials, Inc., Gary Dickerson, where he plays an important role in developing and communicating Applied’s vision and strategy. As head of Corporate Strategy and Development, he works across the business groups and functions to manage the company’s strategy development process and operating rhythm. His responsibilities span key functional groups including corporate strategy and marketing, communications and public affairs, government relations, and facilities. In addition, he oversees teams that are incubating new strategic capabilities and growth opportunities for the company.

Mr. Holtam started at Applied as a New College Graduate in 1997, working in product development and management roles in the Semiconductor and Energy and Environmental Solutions groups before joining the corporate staff.

Mr. Holtam earned his bachelor’s degree in mechanical engineering from the University of Surrey, UK, an MBA from Henley Business School and has been awarded nine patents.
