Dwarkesh
We do think that the AIs will be closer to the eusocial insects, in the sense that they all have the same goals, especially if these aren't indexical goals.
They're goals like have the research program succeed.
So training is going to be changing the weights, not of each individual AI. I mean, it happens before they're individuated, so it's changing the weights of the AI class overall to make it more amenable to cooperation.
And then, yes, you do have the cultural evolution.
Like you said, this takes hundreds of thousands of individuals.
We do expect there will be these hundreds of thousands of individuals.
It takes decades and decades.
Again, we expect this research multiplier such that decades of progress happen within this one year, 2027 or 2028.
So I think between the two of these, it is possible.
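The "decades of progress within one year" claim can be made concrete with a bit of arithmetic. The sketch below is purely illustrative, with assumed numbers that are not from the scenario itself: it just shows how a uniform research speedup converts calendar time into effective research-years.

```python
# Illustrative sketch of the "research multiplier" idea: if AI speeds up
# algorithmic research by a factor R, one calendar year yields R effective
# research-years. The multiplier value here is an assumption, not a figure
# taken from the scenario.

def effective_research_years(calendar_years: float, multiplier: float) -> float:
    """Effective research-years produced under a uniform speedup."""
    return calendar_years * multiplier

# "Decades of progress within this one year" would require a multiplier
# on the order of 20-30x under this simple model.
print(effective_research_years(1.0, 25.0))  # -> 25.0
```

Under this toy model, the claim about 2027 or 2028 amounts to asserting that the multiplier reaches a few dozen by then.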
Also, they do have the advantage of all the cultural technology that humans have evolved so far.
This may not be perfectly suited to them.
It's more suited to humans.
But imagine that you have to make a business out of you and your hundred closest friends who you agree with on everything.
Maybe they're literally your identical twin.
They have never betrayed you ever and never will.
Like, I think this is just not that hard a problem.
So when I think of the ways that they train AIs, I think in our scenario at this point, there are two primary ways that they're doing it.