Daniel Kokotajlo
Okay, now that we've brought up the intelligence explosion, let's just discuss that because I'm kind of skeptical.
It doesn't really seem to me that a notable bottleneck, let alone the main bottleneck, to AI progress is the number of researchers and engineers doing this kind of research.
It seems more like compute or some other thing is a bottleneck.
And the piece of evidence is that when I talk to my AI researcher friends at the labs, they say there are maybe 20 to 30 people on the core pre-training team that's discovering all these algorithmic breakthroughs.
If the headcount here were so valuable, you would think that, for example, Google DeepMind would take all their smartest people, not just from DeepMind but from all of Google, and put them on pre-training or RL or whatever the big bottleneck was.
You'd think OpenAI would hire every single Harvard math PhD: in six months, you're all going to be trained up on how to do AI research.
I mean, I know they're increasing headcount, but they don't seem to treat this as the kind of bottleneck it would have to be for millions of them in parallel to rapidly speed up AI research.
And there's this quote that was commonly said when Napoleon was fighting: one Napoleon is worth 40,000 soldiers. But 10 Napoleons is not 400,000 soldiers, right?
So why think that these million AI researchers are netting you something that looks like an intelligence explosion?
Is there some intuition pump from history where there's been some output, and because of some really weird constraints, production of it has been rapidly scaled up along one input, but not all the inputs that have been historically relevant, and you still get breakneck progress?
So maybe the reason that this sounds less plausible to me than the 25x number implies is that when I think about concretely what that would look like, where you have these AIs and we know that there's a gap in data efficiency between human brains and these AIs.
And so somehow there are a lot of them thinking, and they think really hard, and they figure out how to design a new architecture that is like the human brain or has the advantages of the human brain.
And I guess they can still do experiments, but not that many.