Ajeya Cotra
The impressive thing here is that correcting the estimates of two parameters, compute growth and algorithmic progress, produces a forecast which would have seemed valuable and prescient six years later.
Even correcting one parameter, algorithmic progress, would have gotten it very close.
In that sense, the history of bio-anchors is a white pill for forecasting, and an antidote to the epistemic nihilism of the positions above.
But its bottom line was still wrong.
Even if you do almost everything correctly, invent new terms that become load-bearing pillars of the field, defeat your critics' main objections, and demonstrate a remarkably clear model of exactly how to think about a difficult subject, misestimating one parameter can ruin the whole project.
This is why you do a sensitivity analysis, and Cotra did this at least in spirit: she talked about which parameters were most important and gave people widgets they could use to play around with the model.
But it didn't work as well as she might have hoped, giving only around a 10% chance of timelines as short as the current median.
Several later commenters and analysts had good takes here, especially Marius Hobbhahn of Apollo Research.
Along with correctly guessing that algorithmic progress would go much faster than bio-anchors predicted, albeit with the benefit of two more years of data, he wrote that, quote, the uncertainty from the model is probably too low.
That is, the model is overconfident because core variables like compute price halving time and algorithmic efficiency are modeled as static singular values, rather than distributions that change over time, end quote.
Plausibly, if these had been modeled as distributions, a more formal sensitivity analysis could have been run on them, which would have identified them as crucial terms.
Nostalgebraist unofficially noticed this, but a formal analysis could have officially noticed and quantified it, and assigned more uncertainty to the possibility of very early AGI.
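To make the point concrete, here is a minimal sketch of the kind of Monte Carlo sensitivity analysis being described. This is a hypothetical toy model, not Cotra's actual bio-anchors model: the parameter names, ranges, and the assumed 30-doubling compute gap are all invented for illustration. The point is only that sampling parameters from distributions, rather than fixing them as point values, produces a spread of outcomes that puts visible probability mass on short timelines.

```python
import random

random.seed(0)

def years_to_threshold(halving_time, algo_doubling_time,
                       compute_gap_doublings=30.0):
    """Toy model: effective compute doubles from both cheaper hardware
    (compute price halving) and better algorithms; the combined doubling
    rate is the sum of the two rates. All numbers are illustrative."""
    rate = 1.0 / halving_time + 1.0 / algo_doubling_time  # doublings/year
    return compute_gap_doublings / rate

# Static point estimate: single fixed values for each parameter.
point = years_to_threshold(2.5, 2.0)

# Distributional version: sample each parameter from a plausible range
# (uniform distributions chosen purely for simplicity).
samples = sorted(
    years_to_threshold(random.uniform(1.5, 4.0), random.uniform(1.0, 3.5))
    for _ in range(10_000)
)
p10, p50, p90 = (samples[int(len(samples) * q)] for q in (0.1, 0.5, 0.9))

print(f"point estimate: {point:.1f} years")
print(f"10th/50th/90th percentiles: {p10:.1f} / {p50:.1f} / {p90:.1f} years")
```

A single point estimate hides exactly the thing the 10th percentile reveals: the fraction of parameter space where the threshold arrives much earlier than the median suggests.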
So what's the takeaway?
Trust forecasts more?
Trust them less?
Do better forecasting?
Don't bother?
These questions have no right answer, but one conclusion does seem pretty firm.
Most of the bad-faith critics, having identified that Ajeya's model was imperfect and could fail, defaulted to the safe uncertainty fallacy.
Since we can never be sure a model is exactly right, things are uncertain, which means we can continue to believe everything is fine and normal, timelines are long, and we don't have to worry.