Toby Ord
While "no idea when it will happen" is underselling the information contained in this distribution, it is a much better summary than "four years", which would be understood by almost everyone as something like "between 3 and 5 years".
While academics might hope people interpret a named year as the median time, most people interpret it as the moment they are allowed to start complaining the predicted event hasn't happened yet.
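As a minimal sketch of why a single number undersells the spread: assume, purely for illustration, a lognormal distribution of arrival times with a median of four years and a fairly wide (placeholder) shape parameter. Even with a median of exactly four years, only around a quarter of the probability lands between three and five years, and a meaningful chunk lands beyond ten.

```python
# Illustrative only: an assumed lognormal arrival-time distribution with a
# median of 4 years. The shape parameter (0.8) is a placeholder, not anyone's
# published forecast.
from scipy.stats import lognorm

median_years = 4.0
sigma = 0.8  # assumed spread; larger values mean a heavier tail

dist = lognorm(s=sigma, scale=median_years)  # for a lognormal, scale = median

p_3_to_5 = dist.cdf(5) - dist.cdf(3)
p_beyond_10 = 1 - dist.cdf(10)

print(f"Median: {dist.median():.1f} years")
print(f"P(3 to 5 years): {p_3_to_5:.0%}")       # roughly 25% under these assumptions
print(f"P(more than 10 years): {p_beyond_10:.0%}")  # roughly 13% under these assumptions
```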
Indeed, these distributions are so hard to sum up with a single number that I think a substantial amount of disagreement on timelines stems from people describing different parts of the same elephant.
For example, both AI boosters and those concerned with existential risk talk a lot about short timelines because we could see the world transformed in just a few years' time.
It isn't that they think we will see that, but that it is big if true, and has a decent chance of being true.
In contrast, more conservative voices tend to focus on later years, saying it is more likely that it will take 10 to 20 years than just a few, focusing on straight probability without weighting by importance or leverage.
Both of these can be true at the same time.
Both are true on my own distribution.
A particular danger in communicating timelines with a single number is that it raises the chance that this named year will come and go without incident, and the people who mentioned it, or the wider community they are part of, will be written off as having a false or discredited view.
I think we're going to see some of this come 2027, due to the vast number of people who heard about that scenario, combined with the fact that so many media outlets reported it as a sharp prediction rather than as it was intended: an important illustrative scenario.
As well as being bad for communication, compressing one's uncertainty into a single number would be very bad for one's own planning.
For example, Kokotajlo's distribution implies a 28% chance transformative AI will happen during the current presidential term, a 35% chance it will happen in the next term, a 13% chance it will happen in the term after that, with 24% left over spread among ever more distant terms.
[Image: the distribution of transformative AI arrival broken down by presidential term]
These are very different scenarios, and it would clearly be a mistake to act as if the second one were correct just because it is the most likely.
That would eliminate the possibility of hedging against transformative AI coming soon and of taking advantage of worlds where it comes late.
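A minimal sketch, under assumed numbers, of how an annual timeline distribution buckets into presidential terms: the year-by-year probabilities below are illustrative placeholders chosen only so that the term totals come out near the figures quoted above; they are not Kokotajlo's actual annual forecast.

```python
# Bucket an annual probability distribution over transformative-AI arrival
# into 4-year US presidential terms. All numbers are illustrative placeholders.
from collections import defaultdict

annual_probability = {
    2026: 0.08, 2027: 0.10, 2028: 0.10,              # remainder of current term
    2029: 0.10, 2030: 0.10, 2031: 0.08, 2032: 0.07,  # next term
    2033: 0.04, 2034: 0.04, 2035: 0.03, 2036: 0.02,  # term after that
}
probability_later = 1.0 - sum(annual_probability.values())  # mass on all later years

def term_starting(year: int, first_term_start: int = 2025) -> int:
    """Return the start year of the 4-year presidential term containing `year`."""
    return first_term_start + 4 * ((year - first_term_start) // 4)

by_term = defaultdict(float)
for year, p in annual_probability.items():
    by_term[term_starting(year)] += p

for start in sorted(by_term):
    print(f"Term {start}-{start + 3}: {by_term[start]:.0%}")
print(f"Later terms: {probability_later:.0%}")
```

Under these placeholder numbers, no single term holds a majority of the probability, which is exactly why planning only for the most likely term forgoes both hedging against early arrival and exploiting late arrival.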
Implications
Rather than attempting to adjudicate which timelines are correct, I think we should take the frame of how to act, or plan, under deeply uncertain timelines.
That is, we should be treating this as an exercise in rational decision-making under uncertainty, in a situation where the stakes are high and the uncertainty is vast.
Let's unpack some of the implications of this frame.