Consistently Candid

#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more

02 Mar 2025

Description

A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistlestop tour of the most important developments we've seen over the last year! We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more!

Follow Nathan on Twitter
Listen to The Cognitive Revolution
My Twitter & Substack


