The team breaks down Anthropic’s new research paper, Tracing the Thoughts of a Language Model, which offers rare insight into how large language models process information. Using a replacement model and attribution graphs, Anthropic tries to understand how Claude actually “thinks.” The show unpacks key findings, philosophical questions, and the implications for future AI design.

Key Points Discussed
- Anthropic studied its smallest model, Haiku, using a tool called a replacement model to understand internal decision-making paths.
- Attribution graphs show how specific features activate as the model forms an answer, with many features drawing on multilingual patterns.
- The research shows Claude plans ahead more than expected: in poetry generation, it preselects rhyming words and builds toward them, rather than solving the rhyme at the end of the line.
- The paper challenges the assumption that LLMs are purely token-to-token predictors; instead, the models show signs of planning, contextual reasoning, and even a form of strategy.
- Language-agnostic pathways were a surprise: Claude drew on words from various languages (including Chinese and Japanese) while forming responses to English queries.
- This multilingual feature behavior raised questions about whether human brains might also use internal translation or conceptual bridges unconsciously.
- The team likens the research to the invention of a microscope for AI cognition, revealing previously invisible structures in model thinking.
- They discussed how growing an AI might be more like cultivating a tree or garden than programming a machine: inputs, pruning, and training shape each model uniquely.
- Beth and Jyunmi highlighted the gap between proprietary research and open sharing, emphasizing the need for more transparent AI science.
- The show closed by comparing this level of research to studying human cognition, and by asking how AI could be used to better understand our own thinking.

Hashtags
#Anthropic #Claude3Haiku #AIresearch #AttributionGraphs #MultilingualAI #LLMthinking #LLMinterpretability #AIplanning #AIphilosophy #BlackBoxAI

Timestamps & Topics
00:00:00 🧠 Intro to Anthropic’s paper on model thinking
00:03:12 📊 Overview of attribution graphs and methodology
00:06:06 🌐 Multilingual pathways in Claude’s thought process
00:08:31 🧠 What is Claude “thinking” when answering?
00:12:30 🔁 Comparing Claude’s process to human cognition
00:18:11 🌍 Language as a flexible layer, not a barrier
00:25:45 📝 How Claude writes poetry by planning rhymes
00:28:23 🔬 Microscopic insights from AI interpretability
00:29:59 🤔 Emergent behaviors in intelligence models
00:33:22 🔒 Calls for more research transparency and sharing
00:35:35 🎶 Set-up and payoff in AI-generated rhyming
00:39:29 🌱 Growing vs programming AI as a development model
00:44:26 🍎 Analogies from agriculture and bonsai pruning
00:45:52 🌀 Cyclical learning between humans and AI
00:47:08 🎯 Constitutional AI and baked-in intention
00:53:10 📚 Recap of the paper’s key discoveries
00:55:07 🗣️ AI recognizing rhyme and sound without hearing
00:56:17 🔗 Invitation to join the DAS community Slack
00:57:26 📅 Preview of the week’s upcoming episodes

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh