Christina Ruffini
with Caroline Hyde in New York and Ed Ludlow in San Francisco.
It's insatiable demand from Meta.
Mark Zuckerberg last month announced Meta Compute, with ambitions to secure hundreds of gigawatts to fuel its data centers and ultimately reach that superintelligence goal, where AI can outpace human intelligence.
And this is just the latest in a frenzy of deals.
Again, we're anticipating $135 billion in CapEx this year, but now we know the spending won't stop.
It's a really good question.
Not only is it similar to what Meta is doing with NVIDIA, but Meta also has its own internal pipeline of custom chips that it's building for AI purposes.
What we heard yesterday is that they see different applications, different workloads here being supported by all three of those verticals.
And so they're trying to diversify as they pursue this massive scale in terms of compute.
Ian, take us back to Nvidia, because we've got their earnings coming up tomorrow.
So we asked yesterday where they thought they would deploy these chips, and they couldn't specify which data centers would get them.
We know that some of their biggest projects are targeting five gigawatts, right?
But they need to get certain regulatory approvals, and they need energy companies to be able to deliver there on the ground.
So as for the merging of the energy and the compute here, time will tell as we home in on where those chips will actually go.
Yeah, his strategy is just that.
He's used the language of front-loading capacity.
They're going to try to get as much as they can, gobble it up while there's still availability.
And this, he says, could be applied not just to AI purposes but to that core social media business, which still drives more than 98% of their revenue, right?
So they see applications across the board, and they're not worried about getting too much in the interim.
Ian, why does AMD have to keep giving its shares away?