Dwarkesh Patel


Dwarkesh Podcast
An audio version of my blog post, Thoughts on AI progress (Dec 2025)

And this is very often fair.

It's easy to underestimate the progress that AI has made over the last decade.

But some amount of goalpost shifting is actually justified.

If you showed me Gemini 3 in 2020, I would have been certain that it could automate half of knowledge work.

And so we keep solving what we thought were the sufficient bottlenecks to AGI.

We have models that have general understanding, they have few-shot learning, they have reasoning, and yet we still don't have AGI.

So what is a rational response to observing this?

I think it's totally reasonable to look at this and say, oh, actually, there's much more to intelligence and labor than I previously realized.

And while we're really close, and in many ways have surpassed what I would previously have defined as AGI, the fact that model companies are not making the trillions of dollars in revenue that AGI would imply clearly reveals that my previous definition of AGI was too narrow.

And I expect this to keep happening into the future.

I expect that by 2030, the labs will have made significant progress on my hobby horse of continual learning, and the models will be earning hundreds of billions of dollars in revenue a year.

But they won't have automated all knowledge work.

And I'll be like, look, we made a lot of progress, but we haven't hit AGI yet.

We also need these other capabilities.

We need X, Y and Z capabilities in these models.

Models keep getting more impressive at the rate that the short-timelines people predict, but more useful at the rate that the long-timelines people predict.

It's worth asking, what are we scaling?

With pre-training, we had this extremely clean and general trend of improvement in loss across multiple orders of magnitude in compute.

Albeit, this was on a power law, which is as weak as exponential growth is strong.
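The power-law point can be made concrete with a toy sketch. The constants below are made up for illustration and not taken from any real scaling-law fit; the shape, not the numbers, is what matters:

```python
# A minimal illustrative sketch (hypothetical constants, not a real fit):
# a pre-training scaling law is a power law in compute,
#     L(C) = a * C**(-b)
# so each 10x of compute buys only a fixed *multiplicative* loss reduction,
# while the compute required keeps growing exponentially step over step.

def loss(C: float, a: float = 10.0, b: float = 0.05) -> float:
    """Hypothetical power-law loss curve as a function of training compute C."""
    return a * C ** (-b)

for C in (1e20, 1e21, 1e22):
    print(f"compute={C:.0e}  loss={loss(C):.3f}")

# Every 10x of compute multiplies the loss by 10**(-b), roughly 0.89 here:
# a diminishing return, which is what makes a power law "weak" relative to
# the exponential growth in spending needed to ride it.
```

With these made-up constants, a factor of 100 in compute only lowers the loss from 1.000 to about 0.794, which is the asymmetry the passage is pointing at.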

But people are trying to launder the prestige that pre-training scaling has, which is almost as predictable as a physical law of the universe, to justify bullish predictions about reinforcement learning from verifiable rewards, for which we have no well-fit publicly known trend.