Rene Haas
There is competition.
You know, Demis talked about with Google, they do their own chip called TPUs.
Obviously, NVIDIA is the leader with general purpose.
But right now, we're in this interesting world where people are looking at, is it a general purpose chip?
Is it a custom chip?
Et cetera, et cetera.
It's a fascinating time to be in this industry for sure.
Maybe a little bit of both.
I mean, today, the role we play is that we are increasingly the microprocessor that connects to these accelerators, whether it's something done by Cerebras, something done by NVIDIA, or something done by Google; they're all connected.
Could we do something ourselves custom?
It's possible.
Could we also supply the intellectual property to somebody building a custom chip?
We're doing that today.
So to some extent, we're in a very unique place: not only can we provide the solution, whether it's standard or custom, but as AI moves from gigawatt data centers to running in these headsets, in a wearable, or in something that needs to be energy efficient, you still need to run the compute workload, and now you also need to run the AI workload.
And that is a place that I think only ARM is uniquely positioned to address.
I'm not going to say that today, but could we do that?
I hinted in the last conference call that we're looking at going a little bit further than we do today.
Yes, and I also think you have a third bucket, where training distills down to simpler training chips and you don't need to run a trillion-parameter model.
You can have a giant model that now trains and teaches smaller models.
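The "giant model teaches smaller models" idea mentioned here is knowledge distillation. A minimal sketch of the soft-target objective follows; the function names, logit values, and temperature are illustrative assumptions, not anything discussed in the conversation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the temperature-softened teacher and student
    # distributions: the student is trained to match the teacher's "soft targets".
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A large "teacher" model is confident; a small "student" starts out uniform.
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([1.0, 1.0, 1.0])

loss = distillation_loss(teacher, student)  # positive while distributions differ
```

Minimizing this loss (usually mixed with the ordinary hard-label loss) lets a much smaller model approximate the big model's behavior, which is what makes cheap, energy-efficient inference chips viable.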