Eiso Kant
Podcast Appearances
For a Google or an Amazon with their own silicon, the world's demand isn't a demand for chips; it's a demand for AI.
The way I look at this, there may be new entrants, and AMD may catch up, but I think really those three, and one day Microsoft with its own silicon, are going to be the driving force in this industry.
Why? Because I'm training on H200s. The compute I brought online at the end of August, these 10,000 H200s, means that the longer the next generation of chips is delayed, the more it helps me competitively. But there's also a lot of marketing around the next generation of chips. And again, we have to separate training and inference.
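Since the quotes keep separating training from inference, a back-of-envelope sketch of why the two workloads differ may help. It uses the standard dense-transformer approximations (~6·N·D FLOPs to train, ~2·N FLOPs per generated token); the parameter and token counts below are purely hypothetical, not figures from the interview:

```python
# Rough cost model (standard approximations, not interview figures):
#   training  ~ 6 * N * D FLOPs   (N = parameters, D = training tokens)
#   inference ~ 2 * N FLOPs per generated token

N = 70e9              # hypothetical model size: 70B parameters
D = 2e12              # hypothetical training corpus: 2T tokens
tokens_served = 1e12  # hypothetical lifetime inference volume

train_flops = 6 * N * D              # paid once, up front
infer_flops = 2 * N * tokens_served  # recurring, grows with usage

print(f"training:  {train_flops:.2e} FLOPs (one-off)")
print(f"inference: {infer_flops:.2e} FLOPs (scales with tokens served)")
```

The asymmetry is the point: training cost is fixed per model, while inference cost keeps growing with demand, which is why a hardware generation can matter very differently for the two.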
Pretty much what we've seen consistently with every two-year generation from NVIDIA is about a 2x increase in training performance: training is about 2x every two years. On inference, though, there's a lot of hope riding on Blackwell, because it looks like Blackwell might unlock a much, much larger gain for inference.
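Taken at face value, the quoted cadence is easy to compound; a quick check, assuming the 2x-per-two-years rate holds exactly (real gains vary by workload):

```python
# If training performance doubles every two years, the annual rate is
# 2 ** (1/2) ~= 1.41x, and gaps compound quickly across generations.
for years in (1, 2, 4):
    print(f"after {years} year(s): {2 ** (years / 2):.2f}x baseline")
# after 1 year(s): 1.41x baseline
# after 2 year(s): 2.00x baseline
# after 4 year(s): 4.00x baseline
```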
The way we think about this, and I think the way to think about it, is that when these chips come out two times more efficient, the operations we're doing on them are still the same. It's matrix multiplications and additions and such; it's math that we're doing on these chips. The Blackwell generation, from a training perspective, doesn't unlock anything new for us.
It just means we can do more with a certain set of chips. My H200s become less valuable in the world, but it does not necessarily mean I have to upgrade to the next generation.
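A minimal sketch of the point being made: one training step really is just matrix multiplications and additions, so a 2x-more-efficient chip runs the same math in half the time rather than changing what the math is. The layer sizes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 512))   # batch of inputs (arbitrary sizes)
W = rng.standard_normal((512, 256))  # weight matrix
y = rng.standard_normal((64, 256))   # regression targets

pred = X @ W                 # forward pass: a matrix multiplication
err = pred - y               # elementwise additions (a subtraction)
grad = X.T @ err / len(X)    # squared-error gradient (up to a constant): another matmul
W -= 0.01 * grad             # SGD update: multiply and add

print(f"mean squared error: {float((err ** 2).mean()):.3f}")
```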
GPT-5, and what it won't deliver, isn't a question we're going to look back on a decade from now.