
AI talks AI

EP38: Grokked Transformers are Implicit Reasoners - A Mechanistic Journey to the Edge of Generalization by Boshi Wang, Xiang Yue, Yu Su and Huan Sun

19 Nov 2024

Description

Disclaimer: This podcast is completely AI generated by NotebookLM 🤖

Summary

In this episode we discuss a research paper that investigates whether transformer-based language models can learn to reason implicitly over knowledge, a skill that even the most advanced models struggle with. The authors focus on two types of reasoning: composition (combining facts) and comparison (comparing entities' attributes). Their experiments show that transformers can learn implicit reasoning, but only after extended training well past the point of overfitting, a phenomenon known as grokking. The study then probes the model's internal mechanisms during training to understand how and why grokking happens. The authors find that transformers develop distinct circuits for composition and comparison, which explains the differences in their ability to generalise to unseen data. Finally, the paper demonstrates the power of parametric memory for complex reasoning tasks, showing that a fully grokked transformer outperforms state-of-the-art LLMs that rely on non-parametric memory.
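The two reasoning types the description mentions can be illustrated with a small synthetic-data sketch. This is a hypothetical format with made-up helper names and example facts, not the authors' actual experimental setup: composition chains two stored facts into a two-hop query, while comparison asks which of two entities has the larger value for a shared attribute.

```python
def make_composition(facts):
    """From atomic facts (head, relation, tail), derive two-hop queries:
    if (a, r1, b) and (b, r2, c) hold, then query (a, r1, r2) -> c."""
    out = []
    for (a, r1, b) in facts:
        for (b2, r2, c) in facts:
            if b2 == b:  # the tail of the first fact is the head of the second
                out.append(((a, r1, r2), c))
    return out

def make_comparison(attr_values):
    """From (entity, attribute, value) triples, derive comparison queries:
    which of two entities has the larger value for the same attribute?"""
    out = []
    items = list(attr_values)
    for i, (e1, a1, v1) in enumerate(items):
        for (e2, a2, v2) in items[i + 1:]:
            if a1 == a2 and v1 != v2:  # only comparable within one attribute
                out.append(((e1, e2, a1), e1 if v1 > v2 else e2))
    return out

# Toy facts (illustrative only).
facts = [("alice", "mother", "carol"), ("carol", "employer", "acme")]
ages = [("alice", "age", 52), ("bob", "age", 47)]
print(make_composition(facts))  # [(('alice', 'mother', 'employer'), 'acme')]
print(make_comparison(ages))    # [(('alice', 'bob', 'age'), 'alice')]
```

In the paper's setting, a model trained only on the atomic facts and a subset of such derived queries must answer held-out queries from its parameters alone; grokking is the late jump in that held-out accuracy long after training accuracy saturates.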


Transcription

This episode hasn't been transcribed yet

