Dwarkesh
What do you mean, communicate with the AI?
Yeah.
Well, I guess it's not even that, but I do think that is an important part of it.
The other big thing is that I can't think of another discipline in human engineering and research where the end artifact was made safer mostly through just thinking in advance about how to make it safe. Why are airplane crashes per mile so much lower today than they were decades ago?
Why is it so much harder to find a bug in Linux than it would have been decades ago?
And I think it's mostly because these systems were deployed to the world.
You noticed failures.
Those failures were corrected and the systems became more robust.
Now, I'm not sure why AGI and superhuman intelligence would be any different. Especially given (and I hope we're going to get to this) that the harms of superintelligence are not just about having some malevolent paper clipper out there. This is a really powerful thing, and we don't even know how to conceptualize how people will interact with it or what people will do with it. Having gradual access to it seems like a better way to spread out its impact and to help people prepare for it.
Okay, I see.
So you're suggesting that the thing you're pointing at with superintelligence is not some finished mind which knows how to do every single job in the economy. Because the way, say, the original OpenAI charter defines AGI is something like: it can do every single thing a human can do. You're proposing instead a mind which can learn to do every single job.
Yes.
And that is superintelligence. But once you have the learning algorithm, it gets deployed into the world the same way a human laborer might join an organization.
And it seems like one of these two things might happen.
Maybe neither of these happens.
One, this super-efficient learning algorithm