
AXRP - the AI X-risk Research Podcast

13 - First Principles of AGI Safety with Richard Ngo

31 Mar 2022

Description

How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.

Topics we discuss, and timestamps:
- 00:00:40 - The nature of intelligence and AGI
  - 00:01:18 - The nature of intelligence
  - 00:06:09 - AGI: what and how
  - 00:13:30 - Single vs collective AI minds
- 00:18:57 - AGI in practice
  - 00:18:57 - Impact
  - 00:20:49 - Timing
  - 00:25:38 - Creation
  - 00:28:45 - Risks and benefits
- 00:35:54 - Making AGI safe
  - 00:35:54 - Robustness of the agency abstraction
  - 00:43:15 - Pivotal acts
- 00:50:05 - AGI safety concepts
  - 00:50:05 - Alignment
  - 00:56:14 - Transparency
  - 00:59:25 - Cooperation
- 01:01:40 - Optima and selection processes
- 01:13:33 - The AI alignment research community
  - 01:13:33 - Updates from the Yudkowsky conversation
  - 01:17:18 - Corrections to the community
  - 01:23:57 - Why others don't join
- 01:26:38 - Richard Ngo as a researcher
- 01:28:26 - The world approaching AGI
- 01:30:41 - Following Richard's work

The transcript: axrp.net/episode/2022/03/31/episode-13-first-principles-agi-safety-richard-ngo.html

Richard on the Alignment Forum: alignmentforum.org/users/ricraz
Richard on Twitter: twitter.com/RichardMCNgo
The AGI Safety Fundamentals course: eacambridge.org/agi-safety-fundamentals

Materials that we mention:
- AGI Safety from First Principles: alignmentforum.org/s/mzgtmmTKKn5MuCzFJ
- Conversations with Eliezer Yudkowsky: alignmentforum.org/s/n945eovrA3oDueqtq
- The Bitter Lesson: incompleteideas.net/IncIdeas/BitterLesson.html
- Metaphors We Live By: en.wikipedia.org/wiki/Metaphors_We_Live_By
- The Enigma of Reason: hup.harvard.edu/catalog.php?isbn=9780674237827
- Draft report on AI timelines, by Ajeya Cotra: alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
- More is Different for AI: bounded-regret.ghost.io/more-is-different-for-ai/
- The Windfall Clause: fhi.ox.ac.uk/windfallclause
- Cooperative Inverse Reinforcement Learning: arxiv.org/abs/1606.03137
- Imitative Generalisation: alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1
- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
- Draft report on existential risk from power-seeking AI, by Joseph Carlsmith: alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai
- The Most Important Century: cold-takes.com/most-important-century


