Dwarkesh Patel
And there are these amazing people who by day are inquisitors and by night are going home to write their own scientific treatises as they do these experiments.
It's not what we expect.
But history is never what we expect.
Thank you.
You're renewed for the next season.
Yeah.
I'm confused why some people have super short timelines, yet at the same time are bullish on scaling up reinforcement learning atop LLMs.
If we're actually close to a human-like learner, then this whole approach of training on verifiable outcomes is doomed.
Now, currently the labs are trying to bake a bunch of skills into these models through mid-training.
There's an entire supply chain of companies that are building RL environments, which teach the model how to navigate a web browser or use Excel to build financial models.
Now, either these models will soon learn on the job in a self-directed way, which will make all this pre-baking pointless, or they won't, which means that AGI is not imminent.
Humans don't have to go through a special training phase where they rehearse every single piece of software they might ever need to use on the job.
Barron Milledge made an interesting point about this in a recent blog post he wrote.
He writes, quote, When we see frontier models improving at various benchmarks, we should think not just about the increased scale and the clever ML research ideas, but the billions of dollars that are paid to PhDs, MDs, and other experts to write questions and provide example answers and reasoning targeting these precise capabilities.
You can see this tension most vividly in robotics.
In some fundamental sense, robotics is an algorithms problem, not a hardware or data problem.
With very little training, a human can learn how to teleoperate current hardware to do useful work.
So if we actually had a human-like learner, robotics would be, in large part, a solved problem.
But the fact that we don't have such a learner makes it necessary to go into a thousand different homes and practice picking up dishes or folding laundry a million times.
Now, one common argument I've heard from the people who think we're going to have a takeoff within the next five years is that we have to do all this kludgy RL in service of building a superhuman AI researcher.