Bill Gurley
I think maybe some hyperscalers had different workloads.
they weren't quite sure how quickly they could digest it.
Everyone has now concluded that they dramatically underbuilt.
One of my favorite applications is just good old-fashioned data processing.
To that point, one of the biggest key debates and controversies in the world right now is this question of GPUs versus ASICs.
There are Google's TPUs and Amazon's Trainium, and it seems like everyone from ARM to OpenAI to Anthropic is rumored to be building one.
Last year, you said, "We're building systems, not chips," and that you're driving performance through every single part of that stack.
You also said that many of these projects may never get to production scale.
But given the seeming- Most of them.
Most of them.
Given the seeming success of Google's TPUs, how are you thinking about this evolving landscape today?
Even for the customers who perhaps are successful with ASICs, isn't there an optimal balance in their compute fleet?
I think investors are very much binary creatures.
They just want a yes or no, black and white answer.
But even if you get the ASIC to work, isn't there an optimal balance? Because you think, "I'm buying the NVIDIA platform."
CPX is going to come out for pre-fill for video generation.
Maybe a decode platform, you know.
A video transcoder.
Exactly.
So there will be many different chips or parts to add to the NVIDIA ecosystem.