BG2Pod with Brad Gerstner and Bill Gurley

NVIDIA: OpenAI, Future of Compute, and the American Dream | BG2 w/ Bill Gurley and Brad Gerstner

26 Sep 2025

Transcription

Full Episode

0.031 - 10.902 Jensen Huang

I think that OpenAI is likely going to be the next multi-trillion dollar hyperscale company.

22.272 - 39.774 Brad Gerstner

Jensen, great to be back, of course, with my partner, Clark Tang. You know, I can't believe it's been- Welcome to NVIDIA. Oh, and nice glasses. Those actually look really good on you. The problem is now everybody's going to want you to wear them all the time. They're going to say, where are the red glasses? I can vouch for that. So it's been over a year since we did the last pod. Yeah.

39.794 - 46.102 Brad Gerstner

Over 40% of your revenue today is inference. But inference is about ready because of chain of reasoning. Yeah. Right?

46.422 - 58.659 Jensen Huang

It's about ready- It's about to go up by a billion times. Right, by a million X, by a billion X. That's right, that's right. That's the part that most people haven't completely internalized. This is that industry we were talking about, but this is the industrial revolution.

59.521 - 82.361 Brad Gerstner

Honestly, it's felt like you and I have had a continuation of the pod every day since then. In AI time, it's been about 100 years. I was re-watching the pod recently, and many of the things we talked about stood out. The one that was probably most profound for me was you pounding the table that, you know, remember at the time, there was kind of a slump in terms of pre-training?

83.203 - 103.139 Brad Gerstner

And people were like, oh my God. The end of pre-training. Right, the end of pre-training. We're overbuilding. This is about a year and a half ago. And you said, inference isn't going to 100x, 1,000x. It's going to 1 billion x. Mm-hmm. Which brings us to where we are today. You announced this huge deal. We ought to start there.

103.159 - 115.537 Jensen Huang

I underestimated. Let me just go on record. I underestimated. We now have three scaling laws. We have pre-training scaling law. We have post-training scaling law. Post-training is basically like AI practicing.

115.798 - 116.018 Brad Gerstner

Yes.

116.499 - 137.273 Jensen Huang

Practicing a skill until it gets it right. And so it tries a whole bunch of different ways. And in order to do that, you've got to do inference. So now training and inference are now integrated in reinforcement learning. Really complicated. And so that's called post-training. And then the third is inference. The old way of doing inference was one shot.
