Daniel Kokotajlo
The thing I want to point out is that your conclusion about where the world ends up as a result of changing many of these parameters is almost like a hash function.
You change it slightly and you just get a very different world on the other end.
And it's important to acknowledge that, because you sort of want to know how robust the whole end conclusion is to any part of this story changing.
And it also informs what you do if you believe that things could go one way or another: you don't want to make big, radical moves that only make sense under one specific story and are really counterproductive in other stories.
And I think nationalization might be one of them.
And in general, I think...
Classical liberalism has just been a helpful way to navigate the world when we're in this kind of epistemic hell, where one thing changing... you know... yeah.
Anyways, maybe one of you can actually flesh out that thought. Or better, react to it if you disagree.
Hear, hear.
I agree.
So, so far, these systems, as they become smarter, seem to be more reliable agents who are more likely to do the thing I expect them to do.
I think in your scenario you have two different stories, one with a slowdown where we act more aggressively, but I'll let you characterize it. In the other half of the scenario, why does the story end with humanity getting disempowered and the AI just having its own crazy values and taking over?
Yeah, so...
It seems like this community is very interested in solving this problem at a technical level: making sure AIs don't lie to us, or that they lie to us only in exactly the scenarios where we would want them to. Whereas, as you were saying, humans have these exact same problems.