
AI Odyssey

The Illusion of Thinking: When More Reasoning Doesn’t Mean Better Reasoning

09 Jun 2025

Description

In this episode, we explore "The Illusion of Thinking", a thought-provoking study from Apple researchers that dives into the true capabilities and surprising limits of Large Reasoning Models (LRMs). Despite being designed to "think harder," these advanced AI models often fall short as problem complexity increases, failing to generalize their reasoning and even reducing effort just when it's needed most.

Using controlled puzzle environments, the authors reveal a curious three-phase behavior: standard language models outperform LRMs on simple tasks, LRMs shine on moderately complex ones, and both collapse entirely under high complexity. Even when given explicit algorithms to follow, LRMs struggle to execute the logical steps consistently.

This paper challenges our assumptions about AI reasoning and suggests we're still far from building models that truly think. Generated using Google's NotebookLM.

🎧 Listen in and learn why scaling up "thinking" might not be the answer we thought it was.
🔗 Read the full paper: https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
📚 Authors: Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, Mehrdad Farajtabar (Apple)
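
As a rough sketch of what one of these controlled puzzle environments might look like, here is a minimal Tower of Hanoi harness in Python (Tower of Hanoi is among the puzzles the paper uses), where the disk count n serves as the single complexity knob. The function names and structure below are illustrative, not the authors' actual evaluation code.

# Minimal sketch of a "controlled puzzle environment": complexity is
# controlled by one knob (the number of disks), and a model's proposed
# move sequence can be checked mechanically. Illustrative only.

from typing import List, Tuple

Move = Tuple[int, int]  # (from_peg, to_peg)

def optimal_moves(n: int) -> int:
    """Optimal Tower of Hanoi solution length is 2^n - 1, so the
    disk count n gives fine-grained control over problem complexity."""
    return 2 ** n - 1

def is_valid_solution(n: int, moves: List[Move]) -> bool:
    """Replay a proposed move sequence and check that it solves the
    puzzle without ever placing a larger disk on a smaller one."""
    pegs = [list(range(n, 0, -1)), [], []]  # disk n at the bottom of peg 0
    for src, dst in moves:
        if not pegs[src]:
            return False  # moving from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # larger disk placed on a smaller one
        pegs[dst].append(disk)
    return pegs[2] == list(range(n, 0, -1))  # all disks on the last peg

def solve(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> List[Move]:
    """Reference recursive solver, used here as a ground-truth check."""
    if n == 0:
        return []
    return solve(n - 1, src, dst, aux) + [(src, dst)] + solve(n - 1, aux, src, dst)

# Sweep complexity upward; scoring model outputs across such a sweep is
# where an accuracy-vs-n curve (and the paper's three regimes) would appear.
for n in range(1, 8):
    moves = solve(n)
    assert len(moves) == optimal_moves(n) and is_valid_solution(n, moves)
    print(f"n={n}: optimal solution length {len(moves)}")

Because the verifier is mechanical, a harness like this can score arbitrarily many model attempts at each complexity level, which is what makes the three-phase accuracy pattern the episode describes measurable in the first place.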
