Dwarkesh
Are you sure you're taking this into account?
And first of all, 99% of the time he says, yes, we have a supplement on it.
But even when he doesn't say that, he's like, yeah, that's one reason it could go slower than that, but here are 10 reasons it could go faster.
I think that it's a question of, like...
what is your default option or what are you comparing it to?
I think that naively people think like, well, every particular thing is potentially wrong.
So let's just have a default path where nothing ever happens.
And I think that has been the most consistently wrong prediction of all.
I think in order to have nothing ever happen, you actually need a lot to happen.
You need AI progress, which has been going at this constant rate for so long, to suddenly stop.
Why does it stop?
Well, we don't know.
Whatever claim you're making about that is one where you would expect a lot of out-of-model error; it's where somebody is making a pretty definite claim that you would want to challenge.
So I don't think there's a neutral position where you can just say, well, given that out-of-model error is really high and we don't know anything, let's just choose that.
I think we are trying to take... I know this sounds crazy because if you read our document, all sorts of bizarre things happen.
It's probably the weirdest...
couple of years that have ever been. But we're trying to take, almost in some sense, a conservative position: the trends don't change, nobody does an insane thing, nothing happens that we have no evidence to think will happen. And the way the AI intelligence explosion dynamics work is just so weird that in order to have nothing happen, you need to have a lot of crazy things happen.
Everything we've talked about has happened before.