AXRP - the AI X-risk Research Podcast

2 - Learning Human Biases with Rohin Shah

11 Dec 2020

Description

One approach to creating useful AI systems is to watch humans doing a task, infer what they're trying to do, and then try to do that well. The simplest way to infer what humans are trying to do is to assume there's one goal that they share and that they're achieving it optimally. The problem is that humans aren't actually optimal at achieving the goals they pursue. We could instead code in the exact way in which humans behave suboptimally, except that we don't know that either. In this episode, I talk with Rohin Shah about his paper on learning the ways in which humans are suboptimal at the same time as learning what goals they pursue: why it's hard, how he tried to do it, how well he did, and why it matters.

Link to the paper, "On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference": arxiv.org/abs/1906.09624

Link to the transcript: axrp.net/episode/2020/12/11/episode-2-learning-human-biases-rohin-shah.html

The Alignment Newsletter: rohinshah.com/alignment-newsletter

Rohin's contributions to the AI Alignment Forum: alignmentforum.org/users/rohinmshah

Rohin's website: rohinshah.com
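To make the setup concrete, here is a minimal toy sketch of the baseline the episode contrasts against: inferring a reward from observed behaviour while assuming a fixed model of human suboptimality, in this case Boltzmann (softmax) rationality with a known rationality coefficient. This is an illustration only, not the paper's method; the action set, rewards, and coefficient are all made up.

```python
# Toy sketch: reward inference that ASSUMES a bias model (Boltzmann rationality
# with a known coefficient) -- the baseline the paper tries to move past by
# learning the bias model instead. Everything here is illustrative.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hidden "true" rewards for three actions; the simulated human picks actions
# with probability softmax(beta * reward), i.e. noisily rather than optimally.
true_rewards = np.array([1.0, 0.2, -0.5])
true_beta = 2.0  # how close to optimal the demonstrator is


def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()


# Simulate 500 demonstrations from the biased (noisy) human.
demos = rng.choice(len(true_rewards), size=500, p=softmax(true_beta * true_rewards))


def neg_log_likelihood(rewards, beta=true_beta):
    # Likelihood of the demonstrations under the ASSUMED rationality coefficient.
    p = softmax(beta * np.asarray(rewards))
    return -np.sum(np.log(p[demos]))


est = minimize(neg_log_likelihood, x0=np.zeros(3), method="Nelder-Mead").x
# Rewards are only identified up to an additive constant, so centre them.
print("estimated rewards:", np.round(est - est.mean(), 2))
print("true rewards     :", np.round(true_rewards - true_rewards.mean(), 2))

# Note: fitting beta and the rewards jointly from this data alone is degenerate,
# since the likelihood depends only on the product beta * reward; learning the
# bias rather than assuming it needs extra structure, which is what the paper
# investigates.
```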


Transcription

This episode hasn't been transcribed yet


Comments

There are no comments yet.
