Sholto Douglas
Our copies are generating the synthetic data, which we're trained on, and it's this really effective genetics, cultural, co-evolutionary loop.
Yeah.
I want to go back.
I'm just remembering something you mentioned a little while ago: given how empirical ML is, it really is an evolutionary process that's resulting in better performance, and not necessarily an individual coming up with a breakthrough in a top-down way.
That has interesting implications.
First, people are concerned about capabilities increasing because more people are going into the field.
I've somewhat been skeptical of that way of thinking, but from this perspective of just more input, it really does hold. It feels more like, oh, I actually buy that more people going to ICML means faster progress towards GPT-5.
Another implication of this is...
this idea that AGI is just gonna come tomorrow, that somebody's just gonna discover a new algorithm and we'll have AGI.
That seems less plausible.
Like it will just be a matter of more and more ML researchers finding these marginal things that all add up together to make models better, right?
Like, yeah, that feels like the correct story to me, yeah.
Especially while we're still hardware constrained.
Right.
Do you buy this narrow window framing of the intelligence explosion? Each generation, you know, GPT-3 to GPT-4, is two OOMs, orders of magnitude, more compute, or at least more effective compute, in the sense that if you didn't have any algorithmic progress, it would have to be
two orders of magnitude bigger in raw form to be as good.
Do you buy the framing that given that you have to be two orders of magnitude bigger at every generation, if you don't get AGI by GPT-7 that can help you catapult an intelligence explosion, like you're kind of just fucked as far as like much smarter intelligences go and you're kind of stuck with GPT-7 level models for a long time?
'Cause at that point you're consuming significant fractions of the economy to make that model.
And we just don't have the wherewithal to like make GPT-8.
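The scaling argument above can be sketched as a back-of-envelope calculation. Every number here is a hypothetical placeholder chosen only to illustrate the shape of the argument, not a real figure from the conversation: a base training cost, 100x (two orders of magnitude) more effective compute per generation, and an assumed 10x per-generation offset from hardware and algorithmic efficiency gains.

```python
# Illustrative back-of-envelope for the "narrow window" argument.
# All constants below are hypothetical placeholders, not real figures.

BASE_COST = 1e8          # assume a GPT-4-class training run costs ~$100M (illustrative)
COMPUTE_PER_GEN = 100    # two orders of magnitude more effective compute per generation
EFFICIENCY_PER_GEN = 10  # assume hardware + algorithmic gains cut cost 10x per generation

def training_cost(generations_after_gpt4: int) -> float:
    """Naive training cost N generations after the base model, under the
    assumptions above: cost grows by (compute growth / efficiency gain) per step."""
    net_growth = COMPUTE_PER_GEN / EFFICIENCY_PER_GEN  # net 10x cost growth per gen
    return BASE_COST * net_growth ** generations_after_gpt4

for gen, name in enumerate(["GPT-4", "GPT-5", "GPT-6", "GPT-7", "GPT-8"]):
    print(f"{name}: ~${training_cost(gen):,.0f}")
```

Even with a generous 10x efficiency offset, costs compound by 10x per generation, so a few generations out the hypothetical run costs reach the trillion-dollar scale, which is the "significant fractions of the economy" point: the window closes not because progress stops, but because each further generation becomes economically infeasible.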