Gwern Branwen
So I think for many creatures, it just doesn't pay to be intelligent because that's not actually adaptive.
There are better ways to solve the problem than a general purpose intelligence.
So in any kind of niche that's static, or where intelligence would be super expensive, or where you don't have much time because you're a short-lived organism, it's going to be really hard to evolve a general-purpose learning mechanism when you could instead evolve one that's just tailor-made to the specific problem that you encounter.
So I think if I had to give an intellectual history of that for me, I think it would probably start in the mid-2000s when I was reading Moravec and Ray Kurzweil.
At the time, they were making this kind of fundamental connectionist argument that if you had enough computing power, that could result in discovering the neural network architecture that matches the human brain.
And until that happens, until that amount of computing power is available, AI just seemed basically futile.
Right.
And to me, I found this argument very implausible, because it's very much a kind of "build it and they will come" view of progress, which I just didn't think was correct.
It just seemed ludicrous to suggest that, you know, just because you had some really big supercomputer out there that matches the human brain, that would somehow summon the correct algorithm out of nonexistence.
Algorithms are really complex.
They're hard.
They require deep insight, or at least I thought they did.
And it seemed like really difficult mathematics.
You can't just buy a bunch of computers and then expect to get this advanced AI out of it.
It just seemed like totally magical thinking.
So I knew the argument, but I was super skeptical and I didn't pay too much attention.
But then Shane Legg and some others were very big on this in the years following.
And as part of my interest in transhumanism and LessWrong and AI risk, I was paying close attention to Legg's blog posts in particular, where he was extrapolating out the trend with updated numbers from Kurzweil and Moravec.
And he's giving these very precise predictions: that as Moore's Law keeps going, we're going to get the first generalist system around 2019; that by 2025, we would have kind of humanish agents with generalist capabilities; and that by 2030, he said, we should have AGI.