Ajeya Cotra
Length of software tasks AI agents are able to complete autonomously with a 50% success rate.
The chart runs from 2019 to 2027, with points showing the results from various models.
Lines of best fit have been drawn, including a second one that may be a better fit for the years from 2024 onward.
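A "line of best fit" on a chart like this is a straight line in log space, and its slope translates directly into a doubling time. The sketch below fits such a trend to made-up (year, task length) points loosely in the spirit of the chart described above; the numbers are illustrative, not the actual data.

```python
import numpy as np

# Hypothetical (year, autonomous-task length in minutes) points,
# loosely in the spirit of the chart described above -- NOT real data.
years = np.array([2019.5, 2020.5, 2022.0, 2023.0, 2024.0, 2025.0])
minutes = np.array([0.1, 0.5, 2.0, 8.0, 30.0, 120.0])

# The trend line is linear in log2(task length):
#   log2(minutes) ~ slope * year + intercept
slope, intercept = np.polyfit(years, np.log2(minutes), 1)

# A slope in log2-per-year units corresponds to a doubling time in years.
doubling_time_years = 1.0 / slope
print(f"doubling time: {doubling_time_years:.2f} years")
```

A "kink" in such a chart, like the 2024 one discussed below, shows up as the fitted slope changing (the doubling time shortening) when you fit the recent points separately.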
Scott writes: "The 2010 kink was before Cotra's forecast and priced in.
The 2024 kink is interesting and relevant, but I don't think it rises to the level of paradigm-busting insight that I interpreted Yudkowsky as predicting.
Since it was on a parameter Cotra wasn't measuring, and probably too small to show up on the orders-of-magnitude scale we're talking about, it's probably not a major cause of the model's inaccuracy."
Other things have been even more predictable.
Here's the Epoch Capabilities Index, showing points for different models' results over time from 2023 to the present day.
And those points seem to be fitting around a straight line, more or less.
Scott writes: "So Cotra's bet on progress being smooth and measurable has mostly paid off so far."
But Yudkowsky further explained that his timelines were shorter than Bio Anchors' because people would be working hard to discover new paradigms, and if the current paradigm would only pay off in the 2050s, then probably they would discover a new one before then.
You could think of this as a disjunction.
Timelines will be shorter than Cotra thinks, either because deep learning pays off quickly, or because a new paradigm gets invented in the interim.
It turned out to be the first one.
So although Yudkowsky's new paradigm is yet to materialize, his disjunctive reasoning in favor of shorter-than-2050 timelines was basically on the mark.
Nostalgebraist argued that Cotra's whole model was a wrapper for an assumption that Moore's Law will continue indefinitely.
If it does, obviously you get enough compute for AI at some point, even if it requires some absurd process like simulating all 500 million years of multicellular evolution.
I never entirely understood this objection, because although Bio Anchors does depend on a story where Moore's Law doesn't break before we get the relevant amount of compute, this is only one of many background assumptions, like the assumption that a meteor doesn't hit Earth before we get the relevant amount of compute.
Given those assumptions, it does a useful, not-just-assumption-repeating job of calculating when transformative AI will happen.
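The core of that calculation can be sketched in a few lines: given a compute requirement and an effective doubling time for affordable compute, you get a crossover year. All numbers below are made up for illustration; they are not Cotra's actual estimates.

```python
import math

# Toy Bio-Anchors-style arithmetic with HYPOTHETICAL numbers:
# when does affordable compute reach a training-compute requirement?
required_flop = 1e30          # hypothetical compute needed for transformative AI
affordable_flop_2020 = 1e24   # hypothetical compute affordable in 2020
doubling_time_years = 2.5     # hypothetical effective doubling time

# Factor of growth needed, and the years of doubling that implies.
factor_needed = required_flop / affordable_flop_2020
years_needed = doubling_time_years * math.log2(factor_needed)
print(f"crossover around {2020 + years_needed:.0f}")
```

The interesting work in the real model is in estimating the inputs (the biological anchors for the compute requirement, and the combined growth of hardware price-performance, spending, and algorithmic progress); the arithmetic on top of them is this simple.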
As Cotra implicitly predicted, we seem on track to get AGI before Moore's Law breaks down, and so Moore's Law didn't end up mattering very much.