Mandeep Singh
What Anthropic has shown is you can focus on a particular area and really improve the model to the point that you can gain a lead.
So that's where I think OpenAI will do well, as will Gemini, Grok, and all these frontier models.
Nuclear is an interesting choice because a lot of the other hyperscalers have gone for more natural gas turbines.
But we know there is a big backlog with someone like GE Vernova for their natural gas turbines.
So from that perspective, nuclear is an interesting choice, you know, as an alternate.
They seem to be confident about their own model, which has so far trailed the likes of OpenAI, Anthropic, and Gemini in terms of capabilities.
But it sounds like they want to make sure they have the capacity to deploy AI, and that's where nuclear is an interesting choice.
Yeah, look, I mean, it's hard to pinpoint exactly what portion of that 50 billion NVIDIA can capture through H200s, but there is no doubt that the frontier LLM companies, from DeepSeek to Alibaba's Qwen and Kimi, have trained models that have kept up in terms of functionality with the frontier models here, whether it's Gemini or OpenAI.
And so from that perspective, you have to ask yourself, how have these companies trained their models, and is it all based on their in-house or Huawei chips?
And the answer is hard to discern sitting here, but
to my mind, they would welcome any opportunity to get a big NVIDIA cluster because at the end of the day, when it comes to training, NVIDIA has proven to be the one chip company that is the most useful for building the big training clusters.
Yes, we have the TPU news and all that, but universally, everyone wants to train their models on NVIDIA.