Daniel Kokotajlo
Yeah.
Is this good or bad?
That the president and the companies are like... I think it's bad.
So suppose the big bottleneck to the good future here is not this Eliezer-type galaxy-brained, high-volatility approach, you know, the "there's a 1% chance this works, but we've got to come up with this crazy scheme in order to make alignment work" kind of thing.
But rather, as you were saying, Daniel, more like, hey, do the obvious thing of making sure you can read how the AI is thinking.
Make sure you're monitoring the AIs.
Make sure they're not forming some sort of hive mind where you can't really understand how the millions of them are coordinating with each other.
To the extent that, and I don't want to get too far ahead here, but to the extent that it is a matter of prioritizing it and closing all the obvious loopholes, it does make sense to leave it in the hands of people who have at least said that this is a thing worth doing, who have been thinking about it for a while.
And I worry about...
One of the questions I was planning on asking you is this: one of my friends made this interesting point that during COVID, our community, LessWrong or whatever, were among the first people to be saying this is a big deal, this is coming.
But there were also the people who were saying we've got to do the lockdowns now, they've got to be stringent, and so forth.
At least some of them were.
And in retrospect, I think even according to their own views about what should have happened, they would say: actually, we were right about COVID, but we were wrong about lockdowns.
In fact, lockdowns were net negative, or something like that.
I wonder what the equivalent will be for the AI safety community: they saw AI coming, they saw AGI coming sooner than most, they saw ASI coming.
What will they, in retrospect, regret?
My answer, just based on this initial discussion, seems to be nationalization.
Not only because it sort of deprioritizes the people who want to think about safety, and instead prioritizes the national security state, which probably cares more about winning against China than about making sure the chain of thought is interpretable.
And so you're just reducing the leverage of the people who care more about safety.