One approach to creating useful AI systems is to watch humans doing a task, infer what they're trying to do, and then try to do that well. The simplest way to infer what the humans are trying to do is to assume there's one goal that they share, and that they're optimally achieving it. This has the problem that humans aren't actually optimal at achieving the goals they pursue. We could instead code in the exact way in which humans behave suboptimally, except that we don't know that either. In this episode, I talk with Rohin Shah about his paper on learning the ways in which humans are suboptimal at the same time as learning what goals they pursue: why it's hard, how he tried to do it, how well he did, and why it matters.

Link to the paper, "On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference": arxiv.org/abs/1906.09624
Link to the transcript: axrp.net/episode/2020/12/11/episode-2-learning-human-biases-rohin-shah.html
The Alignment Newsletter: rohinshah.com/alignment-newsletter
Rohin's contributions to the AI Alignment Forum: alignmentforum.org/users/rohinmshah
Rohin's website: rohinshah.com
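To make the "assume a fixed model of suboptimality" baseline concrete, here is a toy sketch of reward inference under an assumed Boltzmann-rational demonstrator (a standard relaxation of "optimally achieving the goal"). This is illustrative only and not the method of the paper, which is about learning the bias model rather than fixing it in advance; the hypothesis names, the two-action setup, and the `beta` value are all invented for the example.

```python
import math

# Toy setup: a one-step decision with two actions, "A" and "B",
# and two hypotheses about the human's reward (both invented here).
reward_hypotheses = {
    "prefers_A": {"A": 1.0, "B": 0.0},
    "prefers_B": {"A": 0.0, "B": 1.0},
}

def boltzmann_policy(rewards, beta=2.0):
    """Assumed suboptimality model: P(a) proportional to exp(beta * R(a)).
    As beta grows, this recovers the 'perfectly optimal human' assumption."""
    z = sum(math.exp(beta * r) for r in rewards.values())
    return {a: math.exp(beta * r) / z for a, r in rewards.items()}

def posterior_over_rewards(demonstrations, beta=2.0):
    """Bayesian inference over reward hypotheses, given observed actions
    and the fixed (assumed, not learned) Boltzmann bias model."""
    log_post = {h: 0.0 for h in reward_hypotheses}  # uniform prior
    for action in demonstrations:
        for h, rewards in reward_hypotheses.items():
            log_post[h] += math.log(boltzmann_policy(rewards, beta)[action])
    z = sum(math.exp(lp) for lp in log_post.values())
    return {h: math.exp(lp) / z for h, lp in log_post.items()}

# A demonstrator who mostly, but not always, picks A: the occasional "B"
# is absorbed by the noise model rather than flipping the inferred goal.
demos = ["A", "A", "B", "A"]
posterior = posterior_over_rewards(demos)
```

The fragility the episode discusses lives in the hard-coded `beta` and the Boltzmann form itself: if the human's real deviations from optimality look nothing like softmax noise, the inferred reward can be systematically wrong, which is what motivates learning the bias model from data instead.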