Ryan Kidd
But also, you know, Nathan Young recently compiled all these different forecasting platforms.
So Metaculus, Manifold, another Metaculus question for weak AGI with a somewhat weaker Turing test criterion, and all these other ones, I don't know, some question about whether OpenAI will achieve it.
And he came out with an average of 2030.
Now, I don't know, I still like the Metaculus 2033, but I wouldn't bet against 2030 in terms of the nearness of AGI.
As for superintelligence, it's complicated, right?
Could be six months or less.
Could be a very hard takeoff after this AGI thing.
If it's a very software-only singularity scenario, where you don't need a big hardware scale-up and you aren't limited by compute, it's just recursive self-improvement, algorithmic improvement, AIs improving the algorithms that train AIs, and that's a fast feedback loop, right?
Or you might need a lot more experimentation, right?
You might need massive hardware scale-ups.
You might need just staggeringly more compute than exists in the world, in which case that could take you a decade to get your singularity.
I currently think that 2033 is a decent central estimate in terms of the median for what we're preparing for.
But obviously, a 20% chance by 2028.
I think that's the Metaculus prediction.
That's a lot, right?
So we should definitely be considering scenarios that are sooner, right?
In particular, I think the sooner AGI happens, the more dangerous it might be, right?
The less time we have to do critical technical research to prepare, the less time we have to implement policy solutions,