Eiso Kant
And what that means is that the 10,000 GPUs that we've now brought online this summer, that came from this capital, allow us to make incredible advancements in model capabilities because of our ability to take reinforcement learning from code execution feedback and generate extremely large amounts of data, and then train very large models with it.
It is enough for this moment in time, but over time, it won't be enough.
It's a very good question. There are real physical, real world constraints behind this. We've seen crazy numbers thrown out in our industry of, you know, compute cluster sizes and things like that. But the world actually still needs time to catch up with the real ability to do so. Today, interconnecting more than 32,000 GPUs is extremely challenging.
We're starting to be able to possibly interconnect 100,000. But right now, a million GPU cluster, or a 10 million GPU cluster, for training models faces both real algorithmic challenges we have to overcome and actual physical limitations that still exist in the world. So we're not living in a world right now where unlimited money can buy you unlimited advantages.
It's why we get to exist with 10,000 GPUs.
I think, again, it depends on how much cash and how much compute. About a year and a half ago, when we started as a company, there was such a true imbalance between supply and demand in the world that even as a frontier AI company just starting out, everyone wants you to win. NVIDIA is incentivized to hyperscale. Everyone is incentivized, actually, to make early stage companies succeed with compute.
It's a lot easier to get compute when you're an early stage AI company than when you're an enterprise, because suppliers understand this is where the future is heading. But even then, there was a real mismatch between demand and supply, and we had to do an incredible amount of work understanding the market, building relationships, and having plan A through Z to get there.