Zaid
For one, we gotta go back to the hyperscalers.
They're planning to spend $700 billion on CapEx, a lot of that going towards building AI infrastructure.
And the thing is, these companies are starting to see a return on their AI investment.
Nvidia says that the use of generative AI is already delivering measurable results for clients in areas like search optimization, ad generation, recommendation systems, and developer productivity.
Meta is probably the best example of this.
In their recent earnings call, they said that AI is helping them improve ad targeting, which is leading to more revenue.
In fact, Meta saw an acceleration of revenue in the recent quarter.
Google also said something similar in their earnings call.
So I know these CapEx numbers are huge, but as long as these companies are seeing an ROI on their CapEx spend, which it looks like they are now, they're not gonna slow down spending anytime soon.
And I think investors are gonna be okay with it.
Again, as long as revenue is accelerating.
The reality is the more money these companies spend, a lot of that's going to trickle to Nvidia because they still have the best AI chips.
In fact, Nvidia has outlined a $500 billion cumulative revenue opportunity across its Blackwell and upcoming Rubin chip platforms.
Based on recent order trends, Nvidia says that it's already tracking ahead of the demand assumptions it provided just a year ago.
So demand has been stronger than they projected themselves.
Now, remember what I said earlier that Nvidia is starting to face competition, especially when it comes to inference chips.
Well, the Wall Street Journal just reported that Nvidia is working on a chip specifically designed for inference computing.
For this new inference chip, they've incorporated technology from a startup called Groq that Nvidia essentially acquired last year for $20 billion.
Now, this chip hasn't even come out yet, and OpenAI has already agreed to be one of its largest customers.
So, Nvidia saw the inference threat from AMD and others, and they're moving to address it head-on.