The AI-Native Era: Building the Future on Arm

How is Arm working to redefine computing in an effort to power an entirely AI-native future?

Discover the answer at the Six Five Summit: AI Unleashed! We are privileged to feature Rene Haas, CEO at Arm, as our Day One Opener! He joins hosts Patrick Moorhead and Daniel Newman for a pivotal discussion on Arm's transformative role in the AI computing revolution.

Key takeaways include:

🔹The Transformative Power of AI-Native Compute: Explore the profound impact of AI-native compute across the technology landscape, fundamentally changing how systems are designed and operate.

🔹Arm's Blueprint for Performance & Efficiency: Rene unpacks Arm's efforts to optimize performance, drive efficiency, and enable crucial customization specifically for demanding AI workloads.

🔹Innovations in Arm's Architecture & Subsystems: Understand the key features and innovations within Arm's advanced compute subsystems and the groundbreaking Armv9 architecture, built for the demands of AI.

🔹Real-World Silicon Innovation & Future Prospects: See the tangible impact of Arm's technology in advancing real-world silicon innovation and gain forward-looking insights into the future prospects of AI-native computing across diverse industries.

Learn more at Arm.com.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Patrick Moorhead: The Six Five Summit is back in its sixth year. We are on the first day with our opening day one speaker here. Daniel, I'm looking forward to the summit, and we are talking about everybody's favorite topic, and that is AI.

Daniel Newman: Yeah, we will cover a little bit of AI here, and it is open for day one, Pat. And with this event, each and every year, I just continue to be blown away at the amazing contributors and the people that we have as part of it. We hope everyone out there is tuned in. Yes, you can catch these on-demand, but we would love if you'd hang out with us, because we are going to be talking about AI, and we do have some of the foremost experts, CEOs, business leaders, people that you want to hear from and people that we're lucky enough, Pat, to get to spend time with here at the summit.

Patrick Moorhead: Yeah. And aside from AI, our second favorite topic, maybe it's our first favorite topic, are semiconductors and silicon. Daniel, I always like to say that, okay, I get the software that is eating the world, but it doesn't run on air. And if I look at the valuations of companies, a lot of them are hardcore in infrastructure right now and one of the big innovators in this space is ARM, and I am very happy to introduce Rene Haas, a friend of The Six Five. We're going to talk about AI-native compute in an AI-native era. Rene, welcome to the show.

Rene Haas: Thank you, Pat. Daniel, good to see you.

Patrick Moorhead: Yeah, I mean it's great you have your own podcast now and I just can't tell you how much I appreciate you coming on another podcast to do this. Wonderful.

Rene Haas: My life is podcasting, but yes, super happy to be joining you guys.

Daniel Newman: Yeah, that room looks like it was built for you to podcast in.

Rene Haas: It is a podcast-dedicated room, but we have made good use of it.

Daniel Newman: Well, one of these days what you'll need to do, Rene, is you'll need to have the Six and the Five, you can pick which one, come on. We're going to come onto your pod. We are going to wax poetic on all things AI and chip-making and everything else that's going on. But look, we really appreciate you joining us. I know our audience is excited for this keynote session. So let's start off talking a little bit about AI, because that's what this event is about, we're unleashing AI. We know, Rene, that AI is driving transformations from data center to device. Start with just giving us the ARM perspective on how it's enabling AI everywhere, from hyperscale data centers to the edge, and what role AI plays in your growth strategy.

Rene Haas: Yeah, so the way to think about ARM is we are the most ubiquitous compute platform ever invented. 70% of the world's population uses ARM. We're in the largest of data centers, but we're also in the smallest devices like earbuds, and we run all of the, quote, "legacy software": the application software, whether it's an operating system, the apps, the hypervisors. All that compute stuff runs on ARM, and it's run on ARM for years and decades. As we think about AI going forward, what we see happening is that increasingly the compute obviously still needs to take place on these devices. You're not replacing the operating system, you're not replacing the apps, you're not replacing the hypervisors, but what you are doing is adding AI payloads, AI acceleration, AI workloads on top of all that, whether it's training in the cloud or inference at the edge. That's all happening. So for ARM, it's a very unique, I would call it "and," opportunity for the general purpose compute that we've been known for, whether it's plugged into the wall or not plugged in, and actually not plugged in is really the sweet spot for ARM in terms of power efficiency. What we see is that AI is now going to run in conjunction with those standard compute workloads, in some use cases more than others, and the question is how it gets optimized. So it's a gigantic opportunity for us, and what we're seeing already at the data center is that NVIDIA Grace Blackwell, which is all ARM, has really accelerated a transition away from legacy x86 to ARM being the processor of choice in the data center. But for us, we think the opportunity for AI is much broader, because AI will find its way into literally any of those devices I mentioned, and it will run on ARM. So what we're spending a lot of time on is how we optimize that hardware experience and, back to the software, make it seamless for developers to run their software on the ARM platform.

Patrick Moorhead: Yeah, it makes sense. And Rene, when you said, I think you said 70%, I'm thinking that's probably 100, but I'm sure there's a good reason; you can have a microwave oven with ARM IP inside of it. But I understand it's a really big number. It's pervasive. ARM is pervasive.

Rene Haas: Yeah.

Patrick Moorhead: I've really marveled at the way that you have added more value for your customers. I think 10 years ago I was thinking to myself, ARM is such a valuable company to its customers, and to society. It seems like it would be bigger. It seems like you could do more for those customers. And under your leadership you brought out what are called compute subsystems, AKA CSS, which a lot of people have described as a game changer. I mean, essentially you're taking on a lot of the system responsibility that would normally be up to other people. Can you talk about, first of all, how this plays into your platform-first company strategy? You and I have personally talked about it, and I've seen in the press releases the companies that have adopted it, but how is it looking in the future?

Rene Haas: Yeah, so let's maybe start with CSS a little bit at the lower level and then maybe talk about it at the platform level. So, ARM's been around 35 years, and when we started, the concept of delivering IP to a company in an RTL form and using common EDA tools and third-party manufacturing was quite foreign, and kudos and credit to the ARM founders to really start something that at that time was incredibly novel and ambitious. And then what we saw happen over the decades is the industry got really good at it and tools got better, fab processes got better, and you could essentially take a piece of RTL, whether you're in 14 nanometer or 10 nanometer geometry, and, quote unquote, throw it over the wall to a customer, and they'd be able to take that RTL and deliver a very efficient design. But as we've gone from seven nanometer or five nanometer to three nanometer, a couple of things have happened. Number one, people are putting a lot more compute down on these chips. We've gone away from two-core, four-core designs to mobile phones that have 12 to 18 cores. Auto chips have 32 cores, server chips have 128 cores. Putting all those cores together, that's real work. Then secondly, when you start getting into geometries such as seven nanometer, five nanometer, three nanometer, squeezing every ounce of performance out of that technology is also a lot of work. So we looked at that and said, it's probably the time for us to think about delivering something that is going to create a big advantage for our customers in terms of not only developing chips faster, but guaranteeing that you're going to get the highest level of performance, because candidly, at the end of the day, ARM will be in the best position to know exactly how an ARM processor is going to behave on a given library on a given process. So we kicked this off a few years ago, not long after I took over as CEO, and three years later we're starting to see the first products hitting the marketplace in mobile, we've had a few out in servers already, and the proof points are there. The fastest processors on the planet now are being delivered via CSS, and what that does for the customers, obviously you've got the fastest product on the planet, but equally you've shaved a lot of time to market off of the chip development. So that's just huge.

Now, it's also important for us from a platform standpoint because at the end of the day, any CPU ISA is only as good as the software and the developer community.

By delivering these platforms and getting products out faster, we can engage developers earlier, we can take advantage of the features faster, and we can essentially accelerate time to market. A great example of this is everything going on with AI. When you think about Gemini, when Gemini Nano was introduced and it got put on last generation's mobile phones, none of those mobile phones knew that they had to run Gemini at the time that they needed to run it. So the software links were not there, the hooks to the hardware were not there. And that just gives us a big, big mandate to say, at the pace at which these workloads are moving, we have to be involved in the system design, because the software relies on it. So it's been something that, as I said, we started three years ago; it's now everywhere for our business. We've had a lot of success with servers, we've had a lot of success now with mobile, and it's very timely that I'm on this podcast or ViewCast, whatever we're calling it, because we just announced our automotive CSS, and all of that is hugely beneficial.

Daniel Newman: So Rene, you're also in a fairly significant transition to platform as a company, and probably one of the leading indicators is just how you've reconfigured your naming strategy, right? It was very product-focused for a long time and now you're very platform-focused. You have data center platforms, personal computing platforms, smartphone platforms, vehicle platforms, and kind of your small device edge-embedded sensor platforms. I wish I could memorize them all, but we'll start with Neoverse, that's probably one of the very, very well-known ones, and the very popular Zena for automotive, but this has been a fairly big pivot, and of course the market's making a lot of assumptions about how you evolve. I think this is a really great opportunity here for our audience to talk about what is taking place. Maybe a little bit about why you evolved from product to platform, which I think for some people is probably very obvious, for others maybe not so much. And then how does this evolution fit with the strategy of the additional value that you and Pat were just speaking of, and of course the future, which many people continue to speculate about, that you'll become more and more critical in terms of full end-to-end involvement in chip design and innovation?

Rene Haas: Yeah, thanks for asking this question because I think it's an important one relative to ARM's strategy. When you think about what the essence of a compute ISA is at the CPU level, in and of itself, it is a platform, because it attracts such a large community of developers. Again, whether it's around operating systems, applications, hypervisors, AI payloads, by its definition, it is a platform, and I don't think we made it easy for our ecosystem partners to understand how everything stitched together. I don't think we did a very good job in the past of clearly communicating exactly how all the elements fit together, because we were actually providing a lot of those solutions. Now, CSS accelerated that for us because it gave us, again, a methodology to be much more prescriptive in terms of how to put together these systems. So, the naming, to some extent, for us, was just a natural next step. We are very vertically market oriented, because the solution you need for something that is very low power that runs on battery for a few weeks is not the same as something that's running at hundreds of megawatts inside a data center. So, the solutions are different, the software platforms are similar, so we want to be very, very specific about that. But I think it's all a continuation of really, really making it clear to the ecosystem and developers that ARM is the compute platform for AI, and I can pretty confidently say there's nobody on the planet who can provide that solution. If you're running from milliwatts to megawatts, that's unique to ARM, and that's really the space where we intend to play.

Patrick Moorhead: So Rene, there's been a lot of, I don't know, industry debate on what are the things that drive AI, "Oh, it must be the GPU, it must be the accelerator," and then we find out that most AI is actually run on a CPU. Even on broadcast news shows, I get the question, "Hey, how does ARM intersect AI?" And we have this conversation.

But when it comes to helping to solve for widespread adoption, because I think that's where we're at right now, how are you helping developers and customers overcome some of these challenges, especially on the software side, where you can make an incredibly huge difference?

Rene Haas: Yeah, so, there's a lot of mystery associated with AI from a software standpoint at times, and there are companies that will describe it as a brand new way of processing, a brand new way of computing, but to some extent it is yet another way of computing, and it's very reliant on a few things. It's reliant on compute, obviously, but also memory bandwidth. But again, because of AI, that workload is not exclusive to the domain of the cloud; people are going to have to figure out ways to solve it inside conventional compute solutions. So back to the ARM role in this, the edge devices, the earbuds, the cameras, the automobile, the cell phone, they still have to run all that legacy software. They still have to run it, and they still want to do it in a power-efficient way. They want to run it in a cost-efficient footprint, in a thermal footprint that matters. So then they look and say, okay, I've got a CPU that's there. Are there things that can be added to the CPU, whether it's around implementations in the micro-architecture that accelerate AI from the instruction standpoint, and/or are there things in the GPU or an accelerator? So what role do we play? Kind of twofold. One is putting the hooks in the hardware that allow for acceleration, but probably equally important is making it easy for software developers through libraries, and this is what we're doing with our KleidiAI libraries, to be able to abstract away what's inside the hardware. So the developer doesn't really need to know exactly what that NPU is that's inside it. If it's standard, it's much easier, and that's what we intend to do. So, that's probably the best way to think about it, Pat: the AI problem is not a unique software problem from our viewpoint. It's a different problem. It's going to have to run in addition to what's already there, and then you're going to want to do it in as power-efficient a way as you can, particularly with inference.

Training is its own problem. People are not going to do training on small devices, obviously, but training is the teacher, and the world knows that we have way more students in the world than teachers. Inference is the student, and inference is what's going to run everywhere for everything going forward now.

Patrick Moorhead: So the TLDR is ARM has the ability to help you run AI not only on the CPU but also on the NPU, to help reduce friction on the device itself.

Rene Haas: Yeah, and because it sort of starts with the CPU, you start thinking about, well, how do I solve this creatively in terms of memory bandwidth, custom implementations, different types of solutions in terms of interfacing on and off the chip, or the chiplet; that all runs through us. So without tipping my hand in terms of our future products, I'm not announcing new products today, but we are in a very unique position to help architect this in terms of where it all goes, and for people who work for ARM and in our space, it's incredibly exciting for our engineers, because engineers love to solve hard problems.

AI is a very hard problem because people want to be able to figure out how to do it inside of existing interfaces and existing workloads. So, it's a place that we're very focused on, but I think we're in a very unique spot to address it.

Daniel Newman: I won't put Rene on the spot, Pat, but I love to editorialize. I've maybe been on the record a few times suggesting that I think ARM has had an incredible role in AI through its CPU, small cores, things it's enabling with, like you said, students versus teachers. I would not be surprised to see you become more directly involved in the near future, just watching how you evolve. Again, not asking you to comment, but I think everybody out there is probably keeping an eye on this company. So speaking of keeping an eye on this company, let's do the future moment, the flash ahead to 2030.

As analysts, Pat and I love to predict at least five years out so that people never hold us accountable when we're wrong, but in all seriousness, the further out we go, the less likely people are to remember, and you only have to be right once to really permanently become enshrined as a superstar analyst, right?

But Rene, as you look five years ahead to 2030, how are you thinking about success, what does that mean for ARM, and how are you defining it? What goals and metrics are you setting to be where you want the company to be at the turn of the decade?

Rene Haas: It's an interesting question. I was meeting with a group of analysts yesterday, and they were asking me about what the smartphone is going to look like in 2030, how we view the growth of that market, et cetera, et cetera. And I said, "Look, there's no way to predict what a smartphone looks like in 2030, whether it is still at the growth rate that it's at today, or it's something that looks completely different." I'm older than you guys, but we're all of a similar vintage. People will talk about the cellphone, or the smartphone, as if it can never be replaced, but this is a device that was just invented in 2008, so it's not actually been around that long. The way I think about ARM in 2030 is less about the physical device, because it may be a type of ambient computing device that we don't know what it looks like today, or a very, very different interface into a physical device where agents are running, et cetera. So I can't really think about the physical too much in terms of what it is, but what I can focus on is that there are going to be two domains: plugged in and not plugged in, running off batteries or running off of wall power. It is unquestionably going to be running an increasing AI workload. That AI workload may be the dominant workload relative to traditional compute, because you may have things obfuscated away such as apps and operating systems, where agents now are talking to other agents and everything runs kind of over the top. So what I have to be sure of for ARM is that that all runs through us in some way, shape, or form, and that's why we're very, very focused on the developer community. We're very, very focused on the wide breadth of devices that we can enable, and we want to be sure that we are providing that solution, that value. I think we're in a great position to do that because, again, legacy software hangs around for a long, long, long time. Fortunately or unfortunately, there's a lot of stuff that you need to carry and continue to run. So I think that actually positions us pretty well for this.

But to your question directly, what does success look like in five years? Pat's number is greater than 70%; I have my number. We're still running the vast majority of software on ARM, whether it's AI or non-AI or something even different, and providing the best solution for the customers.

Daniel Newman: That is the right answer for the CEO of ARM, 100%. And we definitely do appreciate, Rene, that you've always been so open, at least within the constraints of the role that you have, in giving us your views and the vision. It's been really pretty remarkable to watch the journey of the company in even just the past few years, and of course we didn't even really get to it, but projects like Stargate, watching all the things you're doing with your partners within SoftBank and the ecosystem that's being built, and how this is all going to play out. And of course, there's probably not a single person watching this right now that isn't being touched in some way by ARM, on a device that they're using, in a car that they're driving, in some sort of appliance that they're using. So it's been a pretty remarkable and incredible journey.

Rene Haas: Thank you.

Daniel Newman: So, congratulations. We look forward to having you back in 2030, where we're going to hold you accountable to everything you just said. For everyone out there, Rene definitely is out and about a bit across the social platforms, so make sure you check him out and follow his comments and his podcast. He's got one, he's got that great room, he's got his own podcast. And if you're not watching The Six Five, that's probably a good one to watch as well. So, thanks so much for helping us open up day one. Hey, everyone out there, subscribe for the rest of the event and stick with us. We've got so much more great content; we'll send it back.

Disclaimer: The Six Five Summit is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Speaker

Rene Haas
Chief Executive Officer
Arm

Rene was appointed Chief Executive Officer and to the Arm Board in February 2022. Prior to being appointed CEO, Rene was President of Arm’s IP Product Groups (IPG) from January 2017. Under his leadership, Rene transformed IPG to focus on key solutions for vertical markets with a more diversified product portfolio and increased investment in the Arm software ecosystem. In addition to his role as CEO, Rene also sits on the boards of Arm China, AstraZeneca, and SoftBank Group.

Rene joined Arm in October 2013 as Vice President of Strategic Alliances and two years later was appointed to the Executive Committee and named Arm’s Chief Commercial Officer in charge of global sales and marketing.

Before joining Arm, Rene held several applications management, applications engineering and product engineering roles, including seven years at NVIDIA as vice president and general manager of its computing products business. Prior to NVIDIA, Rene held executive roles at Scintera Networks and Tensilica.

Rene is based in Silicon Valley, U.S., and also spends significant time in the major technology centers of the UK, Europe, and APAC.

Rene earned his Bachelor of Science in Electrical and Electronics Engineering from Clarkson University and is a graduate of the Stanford University Graduate School of Business Executive Education Program.
