
Astral Codex Ten Podcast

MR Tries The Safe Uncertainty Fallacy

04 Apr 2023

Description

https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy

The Safe Uncertainty Fallacy goes:

The situation is completely uncertain. We can't predict anything about it. We have literally no idea how it could go. Therefore, it'll be fine.

You're not missing anything. It's not supposed to make sense; that's why it's a fallacy. For years, people used the Safe Uncertainty Fallacy on AI timelines. Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be "soon".
