Jon Parrella
So, I don't know that I quite agree with Jigar on 30 gigawatts.
I think it's going to be a little bit more than that.
We're working on quite a few of those projects in Texas.
And I don't know that I would put all the blame necessarily on OpenAI.
I used to work for Lancium, which provided the base land and power for the Stargate One project.
I did a lot of work on that project before we knew it was going to be Stargate One.
And in going back and looking at it, they've had a lot of issues, because when a lot of these companies applied for their grid interconnect, they applied as if they were Bitcoin mines or traditional data centers, which are very flat, stable loads.
Right.
If you actually start to look at the load profile of an AI data center, it's one of the most volatile loads you've ever seen.
Load swings 12 to 14 times a minute, each 30 to 80 percent of the size of the data center.
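To give a rough sense of scale, here is a minimal back-of-the-envelope sketch using the figures from the discussion. The facility size (1 GW) and the midpoint swing count are assumptions for illustration only; the point is just how much ramping per minute those numbers imply.

```python
import random

# Illustration only: swing frequency (12-14/min) and magnitude (30-80% of
# capacity) come from the discussion; the 1 GW facility size is assumed.
CAPACITY_MW = 1000          # hypothetical 1 GW data center
SWINGS_PER_MINUTE = 13      # midpoint of the quoted 12-14 range

random.seed(0)
swings_mw = [random.uniform(0.30, 0.80) * CAPACITY_MW
             for _ in range(SWINGS_PER_MINUTE)]

# Total megawatts of ramping the gen sets or grid must absorb each minute
ramp_mw_per_minute = sum(swings_mw)
print(f"~{ramp_mw_per_minute:.0f} MW of load swing per minute "
      f"on a {CAPACITY_MW} MW site")
```

Even at the low end (13 swings of 30 percent each), that is several gigawatts' worth of cumulative ramping per minute on a single site, which is why equipment sized for flat loads fails.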
And that's wreaked havoc on the data centers that are moving quickly, because it's tearing up gen sets.
They're literally breaking the crankshaft of the gen sets.
They're burning through batteries.
And it's devastating.
A lot of these data centers are all about time to power and moving quickly, and they got out over the tips of their skis because they didn't architect the infrastructure correctly for it to succeed.
And so I know that even the Stargate project has had a lot of problems where the utility turned them back off, and they've had to scramble to try and find equipment to be able to try and solve for these problems and get them stable loads.
I don't know that I would necessarily blame them for everything.
It was definitely ambitious on their part to be able to say that they were going to scale that big that fast.
I think in a lot of ways they had to do that in order to get capital to be able to start scaling.