Mandeep Singh
When you think about what exists today in terms of AI data centers, these are 50 to 100 megawatt facilities. We are already talking about power requirements going 10x, to 1 gigawatt. So how will a 70-to-80-year-old grid supply electricity that is 10 times more than what these data centers already consume?
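For scale, here is a minimal back-of-the-envelope sketch of that jump. It uses only the figures quoted above (a 50 to 100 MW baseline and a 1 gigawatt target); everything else is illustrative.

```python
# Back-of-the-envelope check of the scale-up described in the quote above.
# The 50 MW / 100 MW baseline and the 1 GW target come from the speaker;
# nothing else here is sourced.

MW_PER_GW = 1_000

baseline_mw = (50, 100)            # today's typical AI data center, per the quote
target_mw = 1 * MW_PER_GW          # the 1 gigawatt figure mentioned above

for base in baseline_mw:
    multiple = target_mw / base
    print(f"{base} MW -> {target_mw} MW is a {multiple:.0f}x jump in power draw")

# Output:
# 50 MW -> 1000 MW is a 20x jump in power draw
# 100 MW -> 1000 MW is a 10x jump in power draw
```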
These hyperscalers spent upwards of $400 billion in capex last year.
And a lot of those investments are going to come online in the next 12 months.
So you have to ask yourself, how are they going to source the power?
And that's where I think some of that inflation, when it comes to sourcing that power, is going to show up.
The trend is more towards specialization.
What Anthropic has shown is you can focus on a particular area and really improve the model to the point that you can gain a lead.
So that's where I think OpenAI will do well, as will Gemini, Grok, and all these frontier models.
At CES, Amazon launched Alexa Plus.
So, you know, there's a lot going on with agentic commerce and voice, and Apple obviously has to step up in terms of whichever LLM they want to use.
And to my mind, you know, Google is the most obvious choice.
They seem to be confident about their own model, which has so far trailed the likes of OpenAI, Anthropic, and Gemini in terms of capabilities.
But it sounds like they want to make sure they have the capacity to deploy AI, and that's where nuclear is an interesting choice.
Yeah, look, it's hard to pinpoint exactly what portion of that $50 billion NVIDIA can capture through H200s. But there is no doubt that the frontier LLM companies, from DeepSeek to Alibaba's Qwen and Kimi, have trained their models and kept up in functionality with the frontier models here, whether it's Gemini or OpenAI.
And so from that perspective, you have to ask yourself, how have these companies trained their models, and is it all based on their in-house or Huawei chips?
And the answer is hard to discern sitting here, but to my mind, they would welcome any opportunity to get a big NVIDIA cluster, because at the end of the day, when it comes to training, NVIDIA has proven to be the one chip company that is the most useful for building big training clusters.
Yes, we have the TPU news and all that, but everyone still wants to train their models on NVIDIA.