Dwarkesh Patel
I'm confused why some people have super short timelines, yet at the same time are bullish on scaling up reinforcement learning atop LLMs.
If we're actually close to a human-like learner, then this whole approach of training on verifiable outcomes is doomed.
Now, currently the labs are trying to bake a bunch of skills into these models through mid-training.
There's an entire supply chain of companies that are building RL environments, which teach the model how to navigate a web browser or use Excel to build financial models.
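To make the idea concrete, here is a minimal sketch, in Python, of what one of these verifiable-reward environments might look like. The class, the toy spreadsheet task, and the command format are all hypothetical illustrations, not any vendor's actual API; the key property is just that the reward is checkable by code.

```python
# Hypothetical sketch of a verifiable-reward RL environment for tool use.
# All names and the toy task are illustrative, not any real company's API.

from dataclasses import dataclass, field


@dataclass
class SpreadsheetEnv:
    """Toy task: the model must set cell B1 to the sum of column A."""
    cells: dict = field(default_factory=lambda: {"A1": 3, "A2": 4, "B1": None})

    def step(self, action: str) -> tuple[str, float, bool]:
        """Apply a model-issued command; return (observation, reward, done)."""
        if action.startswith("SET "):        # e.g. "SET B1 7"
            _, cell, value = action.split()
            self.cells[cell] = int(value)
        # The "verifiable" part: correctness is checked programmatically,
        # with no human grader in the loop.
        correct = self.cells["B1"] == self.cells["A1"] + self.cells["A2"]
        reward = 1.0 if correct else 0.0
        return str(self.cells), reward, correct


env = SpreadsheetEnv()
obs, reward, done = env.step("SET B1 7")
print(obs, reward, done)  # {'A1': 3, 'A2': 4, 'B1': 7} 1.0 True
```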
Now, either these models will soon learn on the job in a self-directed way, which will make all this pre-baking pointless, or they won't, which means that AGI is not imminent.
Humans don't have to go through a special training phase where they rehearse every single piece of software they might ever need to use on the job.
Barron Milledge made an interesting point about this in a recent blog post.
He writes: "When we see frontier models improving at various benchmarks, we should think not just about the increased scale and the clever ML research ideas, but the billions of dollars that are paid to PhDs, MDs, and other experts to write questions and provide example answers and reasoning targeting these precise capabilities."
You can see this tension most vividly in robotics.
In some fundamental sense, robotics is an algorithms problem, not a hardware or data problem.
With very little training, a human can learn how to teleoperate current hardware to do useful work.
So if we actually had a human-like learner, robotics would be, in large part, a solved problem.
But the fact that we don't have such a learner makes it necessary to go out into a thousand different homes and practice picking up dishes or folding laundry a million times.
Now, one common argument I've heard from the people who think we're going to have a takeoff within the next five years is that we have to do all this kludgy RL in service of building a superhuman AI researcher.
And then a million copies of this automated Ilya can go figure out how to solve robust and efficient learning from experience.
This just gives me the vibes of that old joke, we're losing money on every sale, but we'll make it up in volume.
Somehow this automated researcher is going to figure out the algorithm for AGI, a problem that humans have been banging their heads against for the better part of a century, while not having the basic learning capabilities that children have.
I find this super implausible.
Besides, even if that's what you believe, it doesn't describe how the labs are actually approaching reinforcement learning from verifiable rewards.
You don't need to pre-bake a consultant's skill at crafting PowerPoint slides into the model in order to automate Ilya.