Toby Ord

Speaker
269 total appearances

Podcast Appearances

LessWrong (Curated & Popular)
"Broad Timelines" by Toby_Ord

We'll start with two mistakes that are all too common in the policy world.

First, uncertainty about AI timelines isn't an excuse to just believe whichever timeline you want, so long as it is within the credible range.

Sadly, I think many government ministers are likely to take this approach if an expert explains this broad uncertainty to them.

While they would be right that the evidence isn't sufficient to disprove their preferred timeline, it would be irresponsible of them to not allow for other credible possibilities.

That would be like a mayor who hears there is a 20% chance that the volcano next to their town will erupt next year, yet feels they can continue to act as if it won't, since the experts also consider non-eruption credible.
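
The volcano analogy can be made concrete with a toy expected-cost comparison. Only the 20% eruption probability comes from the text; every cost figure below is a made-up placeholder for illustration.

```python
# Toy expected-cost comparison for the volcano analogy.
# Only the 20% eruption probability is from the text; all costs are
# hypothetical, in arbitrary units.

p_eruption = 0.20

cost_preparation = 10        # monitoring, drills, contingency plans
damage_if_unprepared = 1000  # losses if the eruption catches the town off guard
damage_if_prepared = 200     # losses are far smaller with preparation in place

# Acting as if the volcano won't erupt:
expected_cost_ignore = p_eruption * damage_if_unprepared      # 200.0

# Paying the (certain) preparation cost up front:
expected_cost_prepare = cost_preparation + p_eruption * damage_if_prepared  # 50.0

print(expected_cost_ignore, expected_cost_prepare)
```

Under these illustrative numbers, preparing dominates even though non-eruption is the most likely single outcome, which is the point of the analogy: a merely credible outcome does not license ignoring the others.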

Uncertainty isn't an excuse to assume that a plausible outcome of your choosing will occur; rather, rationality requires you to respect every plausible outcome.

Second, we can't just wait until the uncertainty is resolved.

Sometimes that works, but here we know the uncertainty is very unlikely to be resolved until the events are upon us.

At that stage it will be too late to enact all but the most knee-jerk responses.

So feeling that the cloud of uncertainty gives you permission to delay acting is tantamount to committing to choose one of the bluntest and least effective options available.

Instead, we are going to need to act under uncertainty, taking into account the full range of credible possibilities.

How can we do that?

Hedging

A natural and important idea is that of hedging against transformative AI coming soon.

Since that is when we are least prepared, we could do that by shifting our portfolio of activities (or our individual contributions to humanity's portfolio) to focus somewhat more on short timelines than the raw probabilities would warrant.
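
One way to sketch this portfolio shift: weight each timeline scenario not just by its probability but also by how much marginal preparation matters there. All probabilities and urgency factors below are hypothetical placeholders, not figures from the text.

```python
# Sketch of hedging toward short timelines. All numbers are hypothetical.

# Illustrative raw probabilities over timeline scenarios.
raw_probabilities = {"short": 0.2, "medium": 0.5, "long": 0.3}

# Marginal value of preparation is highest where we are least prepared,
# so multiply each probability by a made-up urgency factor.
urgency = {"short": 3.0, "medium": 1.5, "long": 1.0}

weights = {k: raw_probabilities[k] * urgency[k] for k in raw_probabilities}
total = sum(weights.values())
allocation = {k: w / total for k, w in weights.items()}

# The short-timeline share of effort now exceeds its raw probability
# (about 0.36 versus 0.2 under these illustrative numbers).
print(allocation)
```

The design choice here is just expected-value weighting: hedging means the effort allocated to a scenario tracks probability times stakes, not probability alone.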

This makes a lot of sense.

I strongly recommend governments, civil society, and academics do more to hedge against transformative AI coming early.

Though when it comes to the communities of professionals working on helping the AI transition go well, I think they are already hedging strongly against early transformative AI.

Indeed, there is even a risk that they're going beyond mere hedging and are actively betting on it coming early.

I'm not sure, as it is hard to know the full portfolio of work.