
EA Forum Podcast (All audio)

[Linkpost] “How AI may become deceitful, sycophantic... and lazy” by titotal

09 Oct 2025

Description

This is a link post.

Disclaimers: I am a computational physicist, not a machine learning expert; set your expectations of accuracy accordingly. All the text in this post is 100% human-written, without AI assistance.

Introduction: The threat of human destruction by AI is generally regarded by longtermists as the most important cause facing humanity. The essay collection "Essays on Longtermism" includes two essays arguing in favour of this hypothesis; this post is primarily a response to the latter essay, by Richard Ngo and Adam Bales. Ngo and Bales argue that we should be concerned about an AI that would "disempower humanity and perhaps cause our extinction". Rather than focussing on the whole chain of reasoning, they focus on one step of the argument: that some AI systems will learn to become scheming deceivers that go on to try to destroy humanity. In the essay, Ngo and Bales make a fairly [...]

Outline:
(04:18) Part 1: Why AI may be lazy, deceitful and sycophantic
(28:29) Part 2: The implications of lazy, deceitful, sycophantic AI
(37:58) Summary and conclusion

The original text contained 1 footnote, which was omitted from this narration.

First published: October 7th, 2025
Source: https://forum.effectivealtruism.org/posts/shcMvRatuzZxZ3fui/how-ai-may-become-deceitful-sycophantic-and-lazy
Linkpost URL: https://titotal.substack.com/p/how-ai-may-become-deceitful-sycophantic

Narrated by TYPE III AUDIO.

