
EA Forum Podcast (All audio)

“New 80,000 Hours problem profile on the risks of power-seeking AI” by Zershaaneh Qureshi, 80000_Hours

29 Oct 2025

Description

Hi everyone, Zershaaneh here! Earlier this year, 80,000 Hours published an article explaining the risks of power-seeking AI. This post includes some context, the summary from the article, and the table of contents with links to each section. (I thought this would be easier to navigate than if we just reposted the full article here!)

Context

This is meant to be a detailed, introductory resource for understanding how advanced AI could disempower humanity and what people can do to stop it. It replaces our 2022 article on the existential risks from AI, which highlighted misaligned power-seeking as our main concern but didn't focus entirely on it. The original article made a mostly theoretical case for power-seeking risk, but the new one draws together recent empirical evidence which suggests AIs might develop goals we wouldn't like, undermine humanity to achieve them, and avoid detection along the way.

What [...]

Outline:
(00:35) Context
(01:11) What else is new?
(01:31) Why are we posting this here?
(02:15) Summary (from the article)
(03:09) Our overall view
(03:13) Recommended -- highest priority

First published: October 28th, 2025

Source: https://forum.effectivealtruism.org/posts/QrP3DAvyS4gTawBHc/new-80-000-hours-problem-profile-on-the-risks-of-power

Narrated by TYPE III AUDIO.

