
BlueDot Narrated

Why Might Misaligned, Advanced AI Cause Catastrophe?

13 May 2023

Description

You may have seen arguments (such as these) for why people might create and deploy advanced AI that is both power-seeking and misaligned with human interests. This may leave you thinking, “OK, but would such AI systems really pose catastrophic threats?” This document compiles arguments for the claim that misaligned, power-seeking, advanced AI would pose catastrophic risks.

We’ll see arguments for the following claims, which are mostly separate/independent reasons for concern:

- Humanity’s past holds concerning analogies
- AI systems have some major inherent advantages over humans
- AIs could come to out-number and out-resource humans
- People will face competitive incentives to delegate power to AI systems (giving AI systems a relatively powerful starting point)
- Advanced AI would accelerate AI research, leading to a major technological advantage (which, if developed outside of human control, could be used against humans)

Source: https://aisafetyfundamentals.com/governance-blog/why-might-misaligned-advanced-ai-cause-catastrophe-compilation

Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.


