Toby Ord
We'll start with two mistakes that are all too common in the policy world.
First, uncertainty about AI timelines isn't an excuse to just believe whichever timeline you want, so long as it is within the credible range.
Sadly, I think many government ministers are likely to take this approach if an expert explains this broad uncertainty to them.
While they would be right that the evidence isn't sufficient to disprove their preferred timeline, it would be irresponsible not to allow for the other credible possibilities.
That would be like a mayor who hears there is a 20% chance the volcano next to their town will erupt next year, yet feels free to carry on as if it won't, since the experts also consider no eruption credible.
Uncertainty isn't a licence to assume whichever plausible outcome you prefer will occur; rather, rationality requires you to give weight to every plausible outcome.
Second, we can't just wait until the uncertainty is resolved.
Sometimes that works, but here we know the uncertainty is very unlikely to be resolved until the events are upon us.
At that stage it will be too late to enact all but the most knee-jerk responses.
So feeling that the cloud of uncertainty gives you permission to delay acting is tantamount to committing to choose one of the bluntest and least effective options available.
Instead, we are going to need to act under uncertainty, taking into account the full range of credible possibilities.
How can we do that?
Hedging
A natural and important idea is that of hedging against transformative AI coming soon, while we are least prepared. We could do that by shifting our portfolio of activities (or your individual contribution to humanity's portfolio) to focus somewhat more on short timelines than the raw probabilities alone would warrant.
This makes a lot of sense.
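To see why hedging can justify weighting short timelines more heavily than their raw probability, here is a toy expected-value sketch. All the numbers are illustrative assumptions of mine, not figures from the talk: a 20% chance of an early arrival, and preparation assumed to be worth three times as much in that world because there is less time left to act.

```python
# Toy sketch of hedging under uncertainty (all numbers are illustrative).
p_early, p_late = 0.2, 0.8  # assumed probabilities of early vs. late arrival

# Assumption: work done now is worth more if transformative AI comes early,
# since there is less time to prepare (3x vs. 1x marginal value).
value_if_early, value_if_late = 3.0, 1.0

# Probability-weighted marginal value of effort aimed at each scenario.
weight_early = p_early * value_if_early  # 0.6
weight_late = p_late * value_if_late     # 0.8

# Share of effort this toy model allocates to short timelines.
share_early = weight_early / (weight_early + weight_late)
print(round(share_early, 2))  # 0.43, well above the raw 20% probability
```

Under these assumed numbers, the model puts roughly 43% of effort on short timelines even though they carry only a 20% probability, which is the sense in which hedging goes beyond the raw probabilities.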
I strongly recommend governments, civil society, and academics do more to hedge against transformative AI coming early.
Though when it comes to the communities of professionals already working on helping the AI transition go well, I think they are already hedging strongly against early transformative AI.
Indeed, there is even a risk that they're going beyond mere hedging and are actively betting on it coming early.
I'm not sure, as it is hard to see the full portfolio of work.