Stephen McAleese
In particular, it contains none of the information that an organism learns during its lifetime.
This means that evolution's ability to select for specific motives and behaviors in an organism is coarse-grained: it is restricted to what it can influence through genetic causation.
Similarly, the post "Evolution is a bad analogy for AGI" suggests that our intuitions about AI goals should be rooted in how humans learn values throughout their lives rather than in how species evolve.
"I think the balance of dissimilarities points to human learning → human values being the closer reference class for AI learning → AI values. As a result, I think the vast majority of our intuitions regarding the likely outcomes of inner goals versus outer optimization should come from looking at the human learning → human values analogy, not the evolution → human values analogy."
In the post "Against evolution as an analogy for how humans will create AGI", the author argues that AGI development is unlikely to mirror evolution's bilevel optimization process, in which an outer search process selects an inner learning process.
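To make this bilevel structure concrete, here is a minimal, hypothetical Python sketch (the toy task and the "genome" fields `lr` and `steps` are invented for illustration): an outer evolution-like search scores candidate learning configurations by how well they perform after an inner lifetime of learning, and, as noted above, it never inherits the weights learned during that lifetime.

```python
import random

# Toy task: learn to output y = 2*x + 1 from samples.
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

def inner_learning(lr, steps):
    """Inner loop: one 'lifetime' of learning. Fits w, b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(DATA)
        err = (w * x + b) - y
        # Gradient step on squared error.
        w -= lr * err * x
        b -= lr * err
    # Fitness = negative mean squared error after learning.
    mse = sum((w * x + b - y) ** 2 for x, y in DATA) / len(DATA)
    return -mse

def outer_search(generations=20, pop_size=10):
    """Outer loop: evolution-like search over learning-algorithm settings.

    The outer process selects only the genome (lr, steps); the weights
    w, b learned during a lifetime are discarded, never inherited.
    """
    population = [{"lr": random.uniform(0.001, 0.1),
                   "steps": random.randint(10, 200)}
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda g: inner_learning(g["lr"], g["steps"]),
                        reverse=True)
        survivors = scored[: pop_size // 2]
        # Mutate survivors to refill the population.
        children = [{"lr": max(1e-4, g["lr"] * random.uniform(0.5, 1.5)),
                     "steps": max(1, int(g["steps"] * random.uniform(0.5, 1.5)))}
                    for g in survivors]
        population = survivors + children
    return max(population, key=lambda g: inner_learning(g["lr"], g["steps"]))

if __name__ == "__main__":
    print("Best learning-algorithm genome:", outer_search())
```

The numbered steps that follow describe this same two-level process in prose.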
Here's what AI training might look like if it involved a bilevel optimization process like evolution:
1. An outer optimization process, like evolution, finds an effective learning algorithm or AI architecture.
2.