Eliezer Yudkowsky
It's weaker than gradient descent because gradient descent also uses information about the derivative.
There's inclusive genetic fitness, which is the implicit loss function of evolution.
It cannot change.
The loss function doesn't change; the environment changes, and therefore what gets optimized for in the organism changes.
Take GPT-3: you can imagine different versions of GPT-3 that are all trying to predict the next word, but they're being run on different data sets of text.
And that's like natural selection: it's always inclusive genetic fitness, but different environmental problems.
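A minimal sketch of the contrast being drawn here (not from the conversation; the toy loss, targets, and step sizes are all invented for illustration). Both optimizers minimize the same fixed loss; gradient descent consults the derivative, while the selection-style loop only compares loss values across blind mutations, and the "environment" the loss is evaluated on can change underneath both of them:

```python
import random

# Toy, purely illustrative setup: a fixed loss function (analogous to
# inclusive genetic fitness staying fixed) evaluated against whatever
# "environment" (data) the optimizer currently faces.
def loss(theta, environment):
    return (theta - environment["target"]) ** 2

def gradient_step(theta, environment, lr=0.1):
    # Gradient descent uses derivative information: d/dtheta (theta - t)^2.
    grad = 2 * (theta - environment["target"])
    return theta - lr * grad

def selection_step(theta, environment, sigma=0.5, pool=8):
    # Selection-style step: propose blind mutations, keep whichever has the
    # lowest loss. No derivative is ever consulted.
    candidates = [theta] + [theta + random.gauss(0, sigma) for _ in range(pool)]
    return min(candidates, key=lambda t: loss(t, environment))

theta_gd, theta_sel = 0.0, 0.0
for step in range(200):
    # The loss function never changes; only the environment it is evaluated
    # on does (same next-word objective, different text, in the analogy).
    environment = {"target": 3.0 if step < 100 else -2.0}
    theta_gd = gradient_step(theta_gd, environment)
    theta_sel = selection_step(theta_sel, environment)

print(f"gradient descent: {theta_gd:.3f}, selection: {theta_sel:.3f}")
```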
Smarter than natural selection.
Smarter.
Stupider than the upper bound.
I mean, if you put enough matter, energy, and compute into one place, it will collapse into a black hole.
There's only so much computation you can do before you run out of negentropy and the universe dies.
So there's an upper bound, but it's very, very, very far up above here.
Like a supernova is only finitely hot.
It's not infinitely hot, but it's really, really, really, really hot.
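The conversation doesn't give a formula, but one standard way to make "only so much computation before you run out of negentropy" precise is Landauer's bound on the minimum energy dissipated per erased bit:

$$ E_{\text{per bit}} \;\ge\; k_B T \ln 2 , $$

so a finite free-energy budget $F$ at temperature $T$ allows at most on the order of $F / (k_B T \ln 2)$ irreversible bit operations.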
The lesson of evolutionary biology.
If you just guess what an optimization process does based on what you hope the results will be, it usually will not do that.
This is what the early biologists thought.
They were like, no, no, I'm not just imagining stuff that would be pretty.
It's useful for organisms to restrain their own reproduction because then they don't overrun the prey populations and they actually have more kids in the long run.
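A toy simulation (my illustration, not the speaker's; all numbers are made up) of why that hoped-for restraint doesn't fall out of the optimization: in a shared, finite environment, an allele that restrains its own reproduction is simply outreproduced by one that doesn't, whatever the long-run consequences for the group:

```python
import random

# Toy illustration with invented parameters: a "restrained" allele (fewer
# offspring, for the supposed good of the group) competes with a "greedy"
# allele inside one population with a fixed carrying capacity. Selection
# acts on individual reproduction, not on the group-level outcome.
def simulate(generations=50, restrained_offspring=2, greedy_offspring=3,
             carrying_capacity=1000):
    population = ["restrained"] * 500 + ["greedy"] * 500
    for _ in range(generations):
        offspring = []
        for individual in population:
            n = restrained_offspring if individual == "restrained" else greedy_offspring
            offspring.extend([individual] * n)
        # Random culling back to carrying capacity (resources are finite).
        random.shuffle(offspring)
        population = offspring[:carrying_capacity]
    return population.count("restrained") / len(population)

print(f"restrained allele frequency after 50 generations: {simulate():.3f}")
```

Under these assumptions the restrained allele's frequency collapses toward zero within a few dozen generations, which is exactly the gap between what the early biologists hoped the optimization would do and what it actually selects for.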