Rene Haas
This doesn't necessarily need to be ChatGPT-5 running six months of training to figure out the next level of sophistication. It could just be that you want to run a small amount of inference that helps the AI model run wherever it is. So we are seeing AI workloads, as I said, running absolutely everywhere. So what does that mean for Arm?
So our core business is around CPUs, but we also do GPUs, and we also do NPUs, neural processing engines. And what we are seeing is the need to add more and more compute capability to accelerate these AI workloads. We're seeing that as kind of table stakes: either put a neural engine inside the GPU that can run acceleration, or make the CPU more capable with extensions that can accelerate your AI.
We are seeing that everywhere. And I think that I wouldn't even say that's going to accelerate. That now is going to be the default. So what you're going to have is from the tiniest of devices at the edge to the most sophisticated data centers, an AI workload is going to be running on top of everything else that you had to do, right?
So if you look at a mobile phone or a PC, it has to run graphics, it has to run a game, it has to run the operating system, it has to run the apps. And oh, by the way, it now needs to run some level of Copilot, or it needs to run an agent. It's good for us, because what that means is I need more and more compute capability inside a system that's already kind of constrained on cost.
It's kind of constrained on size, it's kind of constrained on area. But it's great for us because it gives us a bunch of hard problems to go off and solve. That's clearly what we're seeing. So I would say AI is everywhere.
And I think there are two reasons for that. One is that the models and their capabilities are advancing very fast. And the capability of the model is driving how you manage the balance between what runs locally and what runs in the cloud, things around latency and security. It's moving at an incredible pace.
Take OpenAI. I was in a discussion with the OpenAI guys last week. They're doing the 12 days of Christmas, 12 days of shipments, and they're doing something every day. Meanwhile, it takes two or three years to develop a chip, right? So think about when the chips that are in that new iPhone were conceived.