
Doom Debates

P(Doom) Estimates Shouldn't Inform Policy??

05 Aug 2024

Description

Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy". While some non-doomers embraced the arguments, I see it as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology.

I break down Sayash's recent episode of Machine Learning Street Talk point by point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.

00:00 Introduction
03:40 Bayesian Reasoning
04:33 Inductive vs. Deductive Probability
05:49 Frequentism vs. Bayesianism
16:14 Asteroid Impact and AI Risk Comparison
28:06 Quantification Bias
31:50 The Extinction Prediction Tournament
36:14 Pascal's Wager and AI Risk
40:50 Scaling Laws and AI Progress
45:12 Final Thoughts

My source material is Sayash's episode of Machine Learning Street Talk: https://www.youtube.com/watch?v=BGvQmHd4QPE

I also recommend reading Scott Alexander's related post: https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist

Sayash's blog post, the subject of the interview, is "AI existential risk probabilities are too unreliable to inform policy": https://www.aisnakeoil.com/p/ai-existential-risk-probabilities

Follow Sayash: https://x.com/sayashk

Get full access to Doom Debates at lironshapira.substack.com/subscribe
