Dwarkesh Patel

Speaker
14445 total appearances

Appearances Over Time

Podcast Appearances

Dwarkesh Podcast
How cosplaying Ancient Rome led to the scientific revolution

And there are these amazing people who by day are inquisitors and by night are going home to write their own scientific treatises as they do these experiments.

It's not what we expect.

But history is never what we expect.

Thank you.

Dwarkesh Podcast
Elon Musk - "In 36 months, the cheapest place to put AI will be space"

You're renewed for the next season.

Yeah.

Dwarkesh Podcast
An audio version of my blog post, Thoughts on AI progress (Dec 2025)

I'm confused why some people have super short timelines, yet at the same time are bullish on scaling up reinforcement learning atop LLMs.

If we're actually close to a human-like learner, then this whole approach of training on verifiable outcomes is doomed.

Now, currently the labs are trying to bake in a bunch of skills into these models through mid-training.

There's an entire supply chain of companies that are building RL environments, which teach the model how to navigate a web browser or use Excel to build financial models.

Now, either these models will soon learn on the job in a self-directed way, which will make all this pre-baking pointless, or they won't, which means that AGI is not imminent.

Humans don't have to go through a special training phase where they rehearse every single piece of software they might ever need to use on the job.

Barron Milledge made an interesting point about this in a recent blog post he wrote.

He writes: "When we see frontier models improving at various benchmarks, we should think not just about the increased scale and the clever ML research ideas, but the billions of dollars that are paid to PhDs, MDs, and other experts to write questions and provide example answers and reasoning targeting these precise capabilities."

You can see this tension most vividly in robotics.

In some fundamental sense, robotics is an algorithms problem, not a hardware or data problem.

With very little training, a human can learn how to teleoperate current hardware to do useful work.

So if we actually had a human-like learner, robotics would be, in large part, a solved problem.

But the fact that we don't have such a learner makes it necessary to go out into a thousand different homes and practice picking up dishes or folding laundry a million times.

Now, one common argument I've heard from the people who think we're going to have a takeoff within the next five years is that we have to do all this kludgy RL in service of building a superhuman AI researcher.