Astral Codex Ten Podcast

Practically-A-Book Review: Yudkowsky Contra Ngo On Agents

19 Jan 2022

Description

https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky

I. The story thus far: AI safety, which started as the hobbyhorse of a few weird transhumanists in the early 2000s, has grown into a medium-sized respectable field. OpenAI, the people responsible for GPT-3 and other marvels, have a safety team. So do DeepMind, the people responsible for AlphaGo, AlphaFold, and AlphaWorldConquest (last one as yet unreleased). So do Stanford, Cambridge, UC Berkeley, etc, etc. Thanks to donations from people like Elon Musk and Dustin Moskovitz, everyone involved is contentedly flush with cash. They all report making slow but encouraging progress. Eliezer Yudkowsky, one of the original weird transhumanists, is having none of this. He says the problem is harder than everyone else thinks. Their clever solutions will fail. He's been flitting around for the past few years, Cassandra-like, insisting that their plans will explode and they are doomed.
