Eliezer Yudkowsky
Now I'm going to duplicate that feature onto everything immediately.
It has to run for hundreds of generations for a new mutation to rise to fixation.
It runs for a long time and eventually manages to optimize things.
It's weaker than gradient descent because gradient descent also uses information about the derivative.
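The contrast can be sketched numerically: evolution-style search only samples the loss and keeps improvements, while gradient descent also uses the derivative, so every step moves directly downhill. A minimal toy sketch (the quadratic loss, step sizes, and iteration counts are arbitrary assumptions for illustration, not anything from the conversation):

```python
import random

def loss(x):
    # Toy quadratic "fitness landscape" with its optimum at x = 3.
    return (x - 3.0) ** 2

def mutate_and_select(steps=50, sigma=0.5, seed=0):
    # Evolution-style search: propose a random mutation, keep it
    # only if it lowers the loss. Uses no derivative information.
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, sigma)
        if loss(candidate) < loss(x):
            x = candidate
    return x

def gradient_descent(steps=50, lr=0.1):
    # Gradient descent: also uses the derivative d/dx (x-3)^2 = 2(x-3),
    # so every step moves directly toward the optimum.
    x = 0.0
    for _ in range(steps):
        x -= lr * 2.0 * (x - 3.0)
    return x
```

With the same step budget, the derivative-following update closes in on the optimum geometrically, while the mutate-and-select loop drifts toward it, which is the sense in which the blind search is the weaker optimizer.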
Inclusive genetic fitness is the implicit loss function of evolution.
It cannot change.
The loss function doesn't change; the environment changes, and therefore what gets optimized for in the organism changes.
Take GPT-3: you can imagine different versions of GPT-3 that are all trying to predict the next word, but are being run on different datasets of text.
And that's like natural selection: the loss function is always inclusive genetic fitness, but with different environmental problems.
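The analogy can be made concrete: hold the loss function fixed (here squared error against the data, standing in for next-word prediction) and vary only the dataset; the same optimizer then produces different optimized parameters. A toy sketch (the datasets, learning rate, and step count are invented for illustration):

```python
def loss(m, data):
    # Fixed loss function: squared error of a single parameter m
    # against every observation in the "environment".
    return sum((m - d) ** 2 for d in data)

def optimize(data, steps=200, lr=0.05):
    # The same gradient-descent optimizer, run against whatever
    # dataset it is handed. d(loss)/dm = 2 * sum(m - d).
    m = 0.0
    for _ in range(steps):
        grad = 2.0 * sum(m - d for d in data)
        m -= lr * grad / len(data)
    return m

environment_a = [1.0, 2.0, 3.0]
environment_b = [10.0, 11.0, 12.0]

# Identical loss, identical optimizer -- different environments
# yield different optimized parameters (the data means, 2.0 and 11.0,
# since the mean minimizes squared error).
m_a = optimize(environment_a)
m_b = optimize(environment_b)
```

Nothing about the loss changed between the two runs; only the environment did, and that alone is what drove the optimized result apart.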
Smarter than natural selection.
Smarter.
Stupider than the upper bound.
I mean, if you put enough matter, energy, and compute into one place, it will collapse into a black hole.
There's only so much computation you can do before you run out of negentropy and the universe dies.
So there's an upper bound, but it's very, very, very far up above here.
Like a supernova is only finitely hot.
It's not infinitely hot, but it's really, really, really, really hot.
The lesson of evolutionary biology: if you just guess what an optimization process does based on what you hope the results will be, it usually will not do that.