
The Rise of Companion Silicon: Rethinking AI Architecture from Edge to Cloud



As AI systems scale from data centers to the edge, the architectural conversation is shifting. It’s no longer just about CPUs and GPUs. Companion silicon is becoming foundational to how intelligent systems are built, controlled, and secured.

Hosts Patrick Moorhead and Daniel Newman are joined by Lattice CEO, Ford Tamer, to explore why FPGA attach rates are rising and how small and mid-range, low-power devices are driving real system-level value across AI infrastructure.

This discussion builds on Lattice’s recent momentum, reinforcing the company’s positioning as the “everywhere companion chip” from the edge to the cloud. 

Key Takeaways:

🔹 Rising FPGA Attach Rates in AI Systems: As AI deployments expand, more systems are incorporating companion silicon to handle control, security, connectivity, and real-time management alongside CPUs, GPUs, and accelerators.

🔹 System-Level Value in Small and Mid-Range FPGAs: While industry attention often centers on massive accelerators, much of the architectural value is being created in low-power, deterministic devices that enable flexibility and orchestration across communications, compute, industrial, and automotive markets.

🔹 The “Everywhere Companion Chip” Strategy: Lattice’s FPGAs are designed to operate from the edge to the cloud, supporting modern AI infrastructure by managing critical system functions that go beyond raw compute.

🔹 Platform and Software-Driven Deployment: Beyond silicon, Lattice emphasizes a platform and software-led approach that helps customers deploy faster, scale designs, and support long-lived systems in regulated and industrial environments.

🔹 Physical AI and the Next Phase of Infrastructure: As AI moves deeper into robotics, industrial automation, and communications, companion silicon is positioned to play a growing role in enabling real-time control and secure system management.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel so you never miss an episode.


Disclaimer: The Main Scoop is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Ford Tamer:
The focus on increasing the pace of innovation continues to rise. The NVIDIAs of the world and the AMDs of the world are driving to one year between GPU introductions. And so there are more and more functions that make a lot more sense, that are much more cost effective, in an FPGA.

Patrick Moorhead: 

The Six Five podcast is back. Daniel, it's great to see you. We are talking about two of our favorite things, semiconductors and AI. Can you believe it, how much that has dominated our conversations here?

Daniel Newman:

I can. Actually, Pat, at this point, I am fully inundated and expecting nothing else anymore. It's been a heck of a journey. And it's been, what, three-plus years since that massive AI inflection. We've seen major changes in the economies. We've had events all over the world. But Pat, one thing that's been steady and sure for the last nearly half a decade now is that AI is front and center.

Patrick Moorhead: 

That's right. And part of that discussion has been the heterogeneity of the compute and also where the compute is happening. If you went to one side of the room years ago: oh my gosh, AI can only be done on the highest performance GPUs. And then we got more maturity on what can do it, where it can be done, and also what the support chips are that actually help AI infrastructure. One type of chip we have talked about a lot here is the FPGA. And we have done a lot of recordings and a lot of interviews with Lattice executives covering their events. It is my pleasure to introduce Ford Tamer, the CEO of Lattice. Ford, great to see you.

Ford Tamer: 

Thank you, Patrick. Thank you, Daniel. Great to see you.

Patrick Moorhead: 

Yeah, it's been great. I've been really enjoying watching. I did your stock comparison since you had joined the company and you're at about 2X the rate of the SOX index. So congratulations on that. I know it's more than stock price, but it is fun to watch. It's a lot more fun to watch an ascent than a descent.

Daniel Newman: 

Thank you. Yeah, it's been a great ride. And it's great to have you on for, I think, the first time, right? We've had a few of your peers along the way, and we've wanted to talk to you, so it's great that we're getting you on the show. Pat set this up pretty well. Silicon, semiconductors, have been front and center, Ford, in this last multi-year conversation, and AI has really driven that. We hear about compute shortages. We're hearing about memory shortages. And of course, all of this has driven a lot of creativity in terms of how architectures are being developed, and how technology partnerships with companies like Lattice, using FPGAs, can help enhance what's being built. I saw a ton of progress in your data center numbers just in your most recent earnings, which look very optimistic, as I think the AI wave is definitely creating a new tailwind for you. So one of the ways you've been describing yourselves at Lattice has been the everywhere companion chip. Let's just talk about that for a minute. What, in practical terms, is driving the FPGA to play such a big role alongside CPUs, GPUs, and various AI accelerators?

Ford Tamer: 

Oh, it's great. So first, Patrick and Daniel, thank you for having me on the show. Patrick, the condition for me to show up was that the stock price would be doing better. We're here, baby. We are here. But we went through a transformation, from where the company was more focused on industrial and automotive to now being a two-legged stool, with data center being one leg and industrial and automotive still being a strong second leg for us. Over the past year we refocused the company on AI, both data center AI and physical AI. And the way I describe this is: the most valuable player in our industry is the GPU. So we all know the NVIDIAs and the AMDs of the world, but we also support other MVPs, such as the AI accelerators from all the cloud data centers, US and otherwise, the CPUs, the baseboard management controller, the switch and the NIC, MPUs and MCUs and other sensors in a data center. And if you go to physical AI, you've got, again, sensor and analog types of components. The MVPs are very powerful, and they're very key to winning a game. But they cannot win a game on their own. They need a team. And we are that team. So fundamentally, Lattice provides all the support functions needed on that team: from the time you boot the system, to the time you do the power sequencing, you have to securely boot it, you've got the control, I/O expansion, bridging, security, all kinds of functions that these support chips are providing. And we support this across multiple functions and across multiple suppliers. So whether you're using a CPU from Intel x86 or AMD x86 or Arm, we support all of them, providing, as you said, the Switzerland approach in a heterogeneous environment across all of this variety of vendors. We do it across multiple markets, all the way from comms and compute, to industrial and automotive, to aerospace and defense, to medical, to consumer.
We do it across all applications: security, power and cooling, rack management, quantum computing with our latest post-quantum security, sensor aggregation and bridging and fusion and pre-processing, and all kinds of new, interesting industrial applications like humanoids and robotaxis, these new medical arms that are getting smarter, autonomous drones that are getting much smarter, and AR, VR, consumer types of applications. So these companion chips, the fact we're calling ourselves a companion chip does not make us look weak. At the beginning, when we came up with this concept of the companion chip, the team was very worried that, oh my God, companion chip, that's a weak type of term. I said, no, no, wait a minute. Companion chips are powerful, because actually we are providing that Switzerland. We are providing this cross-supplier, heterogeneous support. We are providing this across multiple functions and applications and markets. And by the way, we future-proof your application, because guess what? When you're implementing something like post-quantum cryptography, these algorithms are still changing very fast. And if you want to change your application in the field in the future, guess what? FPGAs are perfect to do that.

Patrick Moorhead: 

Yeah, so Ford, I want to dig in on something you had said in your last earnings call, where you raised guidance. You gave some information about it when you answered the prior question, but you talked about rising FPGA attach rates, and I think these were attach rates to AI servers. I remember two years ago going into a public hyperscaler. They showed us, and I could look at the racks, and I could see the main board, and I saw multiple of your chips on there. So we all understand growing with the market as the market grows. But specifically, why is the attach rate on AI systems going up?

Ford Tamer: 

Yeah, thank you, Patrick. To first put some context for listeners on what you just said: we guided for Q1 revenue to grow over 37% year on year, with a corresponding 69% growth in our EPS. So you can see we're growing very fast on the top line, but we also committed to grow the bottom line faster than revenue. What's driving this, as you suggested, is increasing along quite a few axes here, and let me define them. Number one, the CapEx keeps increasing. From the time we were there in December, to January, to now, the CapEx from the four major US clouds, the neoclouds, the second tier, the model folks, the Chinese data centers, and the sovereign folks have all increased tremendously. Number two, as you suggested, we're finding new applications. Our attach rate continues to grow because of these new applications, these new functions we continue to support. The third one is that our average selling price has continued to increase with the complexity of these systems. And so what we're finding on the attach rate is the following. System design cycles continue to get shorter. The complexity of the systems continues to rise. The cost of putting some of these functions, like boot and power sequencing and security, on advanced two-nanometer, three-nanometer nodes continues to rise. The focus on increasing the pace of innovation continues to rise; the NVIDIAs of the world and the AMDs of the world are driving to one year between GPU introductions. And recently, Elon Musk's xAI just jumped in and said nine months in between. So the race on the speed of innovation continues to increase. And so there are more and more functions that make a lot more sense, that are much more cost effective, in an FPGA. To give you an example on security: security is a perfect example of how we're finding an increased attach rate.
We started with root of trust, then went to attestation, and now we went to PQC, post-quantum cryptography, to guard against bad actors copying your data now and decrypting it later when they get access to quantum computers. Post-quantum cryptography requires a total change of crypto algorithms from what used to be traditionally simpler protocols. To a quantum computer, the old protocols are like paper to a gun; that is how weak they are. So now you've got these new, what's called lattice, no pun intended, lattice-based protocols. And we support those and secure against the post-quantum threat. So we keep finding these new use cases that are really lifting our attach rates. And I hope I answered the question.
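The lattice-based cryptography Ford mentions builds on the Learning With Errors (LWE) problem, the hardness assumption behind standards like ML-KEM. As a purely illustrative sketch, not a real scheme, here is a toy single-bit LWE encryption in Python; the parameters are deliberately tiny and insecure, chosen only so the structure (random matrix, small noise, round-to-recover) is visible:

```python
import random

# Toy single-bit LWE encryption (illustrative only, NOT secure parameters).
# Real schemes such as ML-KEM use the same structure with module lattices,
# dimensions in the hundreds, and a modulus of 3329.
N, M, Q = 8, 40, 257  # secret dimension, number of samples, modulus

def keygen():
    s = [random.randrange(Q) for _ in range(N)]           # secret vector
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    e = [random.choice((-1, 0, 1)) for _ in range(M)]     # small noise
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % Q
         for row, ei in zip(A, e)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    subset = [i for i in range(M) if random.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % Q for j in range(N)]
    v = (sum(b[i] for i in subset) + bit * (Q // 2)) % Q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(ui * si for ui, si in zip(u, s))) % Q
    # Accumulated noise is bounded by M < Q/4, so rounding recovers the bit.
    return 1 if Q // 4 < d < 3 * Q // 4 else 0

s, pk = keygen()
assert decrypt(s, encrypt(pk, 0)) == 0
assert decrypt(s, encrypt(pk, 1)) == 1
```

Because the noise keeps the scheme correct only while it stays below a bound, and the algorithms themselves are still evolving, reprogrammable hardware is a natural home for them, which is the future-proofing argument made above.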

Patrick Moorhead: 

No, you did.

Daniel Newman: 

Thank you. You know, it's interesting, though, Ford. One of the big problems I'm hearing that you're solving is utilizing the chip surface for the use case it is most intended for. Meaning, taking that two-nanometer, sub-two-nanometer silicon and applying it to some of the things you could use an FPGA for is, to your point, kind of like pointing a bullet at a piece of paper. I know you used it in a different analogy, but it stands the same: if you can use as much of that silicon as possible for the high-throughput workloads related to AI, then you're not burning it on all these alternative functions. What I was trying to set up is, it becomes really the future of how architecture is designed, right? The right chip for the right use case, and you put it together. And then, of course, there's capacity in your space. There's more wafers, more availability to do what Lattice is doing. So this is kind of important, that people realize why this trend is going to continue, because as we continue generation to generation, whether it's a year, nine months, six months, you're going to need to figure out how to use all the capacity.

Ford Tamer: 

Absolutely. And Daniel, you make a very good point. Let me just add to this. Think about it: you would like to make these expensive assets, like a GPU, as efficient as possible. So, for example, we have been building on a partnership with NVIDIA, where they've got this thing called Holoscan, and we're one of the partners supporting them on this. In physical AI, for Thor and Orin in robotics and these medical devices, we can take many vision streams, and these vision streams could be image sensors, could be LIDAR, could be radar, could be infrared cameras, could be ultraviolet, and you bring all these image streams into our FPGA. And we do some pre-processing in our FPGA before passing the metadata to the NVIDIA Thor or Orin, making those processors more efficient. So again, it's one example of how we are putting this companion chip near our partners to make them more efficient and more productive, and make the solution better for our customers.
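The pre-processing pattern Ford describes, reducing several raw sensor streams to compact metadata before handing them to the main processor, can be sketched in a few lines. This is an illustrative toy with a hypothetical frame format and threshold; a real Holoscan pipeline does this work in FPGA fabric or on the GPU, not in Python:

```python
# Illustrative sketch: fuse several sensor streams, reduce each frame to
# compact metadata, and hand only the metadata downstream. The frame
# format (2D grayscale lists) and threshold are made-up for the sketch.
from typing import Dict, List, Optional

def frame_to_metadata(frame: List[List[int]], threshold: int = 200) -> Optional[Dict]:
    """Reduce a grayscale frame to a bounding box of bright pixels."""
    hits = [(r, c) for r, row in enumerate(frame)
            for c, v in enumerate(row) if v >= threshold]
    if not hits:
        return None  # nothing detected: send no metadata at all
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return {"top": min(rows), "left": min(cols),
            "bottom": max(rows), "right": max(cols), "pixels": len(hits)}

def fuse(streams: Dict[str, List[List[int]]]) -> Dict[str, Dict]:
    """One frame per sensor in; one small metadata record per sensor out."""
    return {name: md for name, frame in streams.items()
            if (md := frame_to_metadata(frame)) is not None}

camera = [[0, 0, 0], [0, 255, 250], [0, 0, 0]]
lidar  = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]  # nothing detected
meta = fuse({"camera": camera, "lidar": lidar})
# meta carries only the camera's bounding box, a few bytes instead of a frame
```

The downstream processor then operates on a handful of bounding boxes rather than full-rate pixel streams, which is exactly the efficiency argument being made.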

Daniel Newman: 

Yeah, I like that you pointed that out. Now, where I was heading is this: you're very focused on small, mid-range, low-power FPGAs. The system-level value, which Pat and I have been covering for a long time, has been all the things you're talking about doing. But what we're basically seeing now is that these megatrends, physical AI, quantum, and of course data center AI, are incredible accelerants to what you're already doing. It seems like there's just a ton of upside and a ton of additional value you should be able to extract here.

Ford Tamer: 

Let me give you another example. We discussed the use of our FPGAs in data center AI with things like security. We also have rack management, power and cooling, a whole bunch of new use cases in the data center. Let's move on maybe to humanoids, which is an interesting use case. And there we've actually got partnerships we're putting in place with companies like NXP for microprocessors and ADI for sensing, power management, and physical-layer devices. And with our FPGAs, we obviously also do the vision, where we can fuse and pre-process a bunch of vision streams. But we also do the motor control. And let me spend a few minutes maybe on the various attributes of why an FPGA would be better in such applications. Number one, latency. We're very low latency compared to any device out there. We're talking about nanosecond latency, which allows these motors to operate much faster; if you've got a robot that's moving and it has to stop, it can stop much faster because of the low latency in an FPGA. The second one is accuracy. FPGAs provide very good position and precision in applications like screwdriving, welding, and bin picking, where we can have accuracy of less than 10 microseconds for motor control, compared to 100 microseconds or more for alternative solutions. The third one is determinism. We provide the same answer every time, and that's much easier to do in a hardware-based solution like an FPGA than in software running on a microcontroller. Parallel processing provides you higher performance: we can do, for example, over 10 motor controls in our FPGA and run these motors at 100,000 RPM. And connectivity: you've got to be able to synchronize all these image sensors and all these different motors together very precisely, with things like EtherCAT and PROFINET, and we provide a whole bunch of connectivity to be able to synchronize all of this for you.
And future-proofing. We've talked about PPA, power, performance, and area: we provide low power, great performance, a cost-effective solution, and longevity for 25, 30 years. And we do this with the three S's from Lattice: solution, support, and supply. So that gives you a bit of a view of another application in the physical AI world, if you wish.
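The determinism Ford highlights comes from doing control arithmetic in fixed-point hardware on a fixed cycle budget. As a rough sketch with made-up gains, scaling, and limits, here is the kind of integer PI update an FPGA motor-control block might evaluate every cycle; identical inputs always produce identical outputs, with no scheduler or garbage collector in the path:

```python
# Illustrative fixed-point PI controller step, the kind of update an FPGA
# motor-control block evaluates once per (nanosecond-scale) cycle.
# Gains, scaling, and saturation limits are made-up values for the sketch.
SCALE = 1 << 8                          # Q8 fixed-point scaling
KP, KI = 3 * SCALE // 2, SCALE // 16    # proportional / integral gains
OUT_MAX = 1000 * SCALE                  # actuator saturation limit

def pi_step(setpoint: int, measured: int, integral: int) -> tuple[int, int]:
    """One control update; integer math only, so it is cycle-deterministic."""
    error = setpoint - measured
    integral += error                    # accumulate error for the I term
    out = KP * error + KI * integral
    out = max(-OUT_MAX, min(OUT_MAX, out))  # clamp, as saturating hardware would
    return out, integral

# Same inputs, same output, every single time: the deterministic property
# that is hard to guarantee in software on a loaded microcontroller.
assert pi_step(100, 40, 0) == pi_step(100, 40, 0)
```

In fabric, one such update per motor can run in parallel in separate logic blocks, which is how ten-plus motors get serviced simultaneously without any of them waiting on the others.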

Patrick Moorhead: 

Yeah, so I think you've made a great case on the value of the flexibility that you provide and talked a little bit about kind of time to shipment. Can you just very simply talk about why this is a better time to market opportunity as well?

Ford Tamer: 

No, it's a great point. Look, on time to market, we've been investing very heavily in tools, in IP, in solutions, in enablement. That's been a big focus area for us. We've doubled the size of our software team, and by the time we end this year, we'll have tripled it. We have a pretty wide IP library that we continue to enhance. We provide solutions that are very targeted to these use cases: for example, vision, security, embedded processing, industrial automation, and communication for AI and autonomy. And we provide all the reference designs, application notes, and support to allow our customers to go to market faster. Finally, we are very focused on making sure our developers and our partner community are successful as well. We don't do this on our own. You've been at our last developer conference; you've seen the hundreds of partners in our partner program. We're very proud of them, and it's very critical to our success to make sure we support them and they're part of the journey as well.

Daniel Newman: 

So as we tie this all together, Ford, everything you're saying sets you and Lattice Semiconductor up for a very, very bright future. I don't know if you've noticed this, but I think I've tracked that over the last couple of years, every single estimate for CapEx that has come out has been wrong. They've all been wrong by underestimating the amount, and they've all been revised up. We just had a whole wave of guidance; I think it was 650 billion just from four companies. And of course, those aren't the only companies spending. Do you see being able to keep up? Is this just a really incredible short-term cycle, or do you see this sustained? How does this whole companion silicon story evolve for you as that CapEx boom continues, and how far out and how long do you see this kind of boom going?

Ford Tamer: 

Look, I've been in this market for 20 years, and we've all seen it. These analysts are always wrong, and so far it's been on the conservative side. First on the switch at Broadcom, then optical at Inphi, and now here with the companion chip FPGA. At Inphi, we took communications revenue from 30 million to, nine years later, a 900 million run rate when we sold to Marvell. People could not imagine these applications. We delivered a range of optical connectivity that the analysts could not imagine. And here it's the same. People are not able to imagine the applications that this AI is unlocking. I had dinner with an architect friend over the weekend, and he was talking about how his job got so much easier: some of these renderings that used to cost him tens of thousands of dollars and months of creation are now a $20-a-month subscription to ChatGPT. So our world is changing big time. Some of these innovations and new applications, people have a hard time imagining them. We don't see this stopping anytime soon. We see the demand for the foreseeable future. Are there going to be bumps along the way? Probably. But is it a bright future? Absolutely.

Daniel Newman: 

I really appreciate it, Ford. These are really exciting times. I think Pat and I can both feel pretty good that we've been on the side of overestimating, and yet we were still underestimating. I made a comment a while back; I said '26 could be the year we see a trillion dollars in semis, and I remember people thought it was crazy. And now the CEO of Applied Materials, one of your peers in the industry, came out and said that that's what they're seeing. So this has got to be a great time to be running a company in this space. It's a lot of fun, I'm sure, for you, and it's a lot of fun for Pat and I to be here. We still remember when people said hardware wasn't important and the media didn't want to talk about semis. So, a long journey, and great progress from Lattice Semi. It was great to see your recent earnings this quarter, and we look forward to continuing to follow the progress. Hopefully we can have you back again here on The Six Five soon.

Ford Tamer: 

That'd be great. Thank you for the time.

Daniel Newman: 

Thanks, Ford. And thank you, everybody, for being part of The Six Five. It was great to sit down with Ford Tamer, CEO at Lattice Semiconductor. Subscribe and be part of all of our content here on The Six Five. We appreciate you being part of this community, but we've got to go for now. We'll see you all later.
