Mandeep Singh
And so from that perspective, H200, just on the training side, could be a pretty sizable $25 to $30 billion option next year.
Yeah, and that's where continuity is the main point, because what NVIDIA gives you is that backward compatibility, even if you move to the newer version of their architecture that NVIDIA will be releasing, Rubin.
And so, yes, the Chinese market will be delayed from that standpoint.
But what you want to see is them being able to use, whenever the Blackwell version is available to the Chinese market, that hardware, because then it becomes a cluster that they could use for inferencing down the line.
Right now, they use the same chips for training, but over time, they could use them for inferencing.
And what we have heard from the neocloud providers here is that NVIDIA chips have a very long useful life.
And everyone wants to use them for as long as possible.
And the Chinese market has no shortage of power, unlike the market here.
So from that perspective, it does make sense for them to use those clusters for as long as they can.
Yeah, and look, every executive has called out tokens per watt as a key metric they are focused on.
And that's a measure of intelligence, right?
Per unit of power, how many tokens can you generate?
So from that perspective, the Chinese market is no different.
They want to maximize the tokens generated per watt for their model companies, even though they have more watts available.
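The tokens-per-watt metric described above is just generated tokens divided by power draw. A minimal sketch, with purely illustrative numbers (not real chip specs):

```python
def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Tokens generated per second, per watt of power draw."""
    return tokens_per_second / power_watts

# Two hypothetical accelerators serving the same model
# (throughput and power figures are made up for illustration):
older_chip = tokens_per_watt(tokens_per_second=1_000, power_watts=700)
newer_chip = tokens_per_watt(tokens_per_second=2_500, power_watts=1_000)

print(f"older: {older_chip:.2f} tok/s/W")  # ~1.43
print(f"newer: {newer_chip:.2f} tok/s/W")  # 2.50
```

Under these assumed numbers, the newer chip generates roughly 75% more tokens per watt, which is why even power-rich markets still care about the metric: the same power budget yields more output.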
And look, coming back to your prior question, right now the estimates are that we will probably be adding up to 100 gigawatts of capacity over the next five years.