
The Daily AI Show

Are Reasoning LLMs Changing The Game? (Ep. 506)

14 Jul 2025

Description

Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The team explores whether today’s AI models are just simulating thought or actually beginning to “think.” They break down advances in reasoning models, reinforcement learning, and world modeling, debating whether AI’s step-by-step problem-solving can fairly be called thinking. The discussion dives into philosophy, practical use cases, and why the definition of “thinking” itself might need rethinking.

Key Points Discussed

Early chain-of-thought prompting looked like reasoning but was just simulated checklists, exposing AI’s explainability problem.
Modern LLMs now demonstrate intrinsic deliberation, spending compute to weigh alternatives before responding.
Reinforcement learning trains models to value structured thinking, not just the right answer, helping them plan steps and self-correct.
Deduction, induction, abduction, and analogical reasoning methods are now modeled explicitly in advanced systems.
The group debates whether this step-by-step reasoning counts as “thinking” or is merely sophisticated processing.
Beth notes that models lack personal perspective or sensory grounding, limiting comparisons to human thought.
Karl stresses client perception: many non-technical users interpret these models’ behavior as thinking.
Brian draws a line at novel output; until models produce ideas outside their training data, it remains prediction.
Andy argues that if we call human reasoning “thinking,” then machine reasoning using similar steps deserves the label too.
Symbolic reasoning, code execution, and causality representation are key to closing the reasoning gap.
Memory, world models, and external tool access push models toward human-like problem solving.
Yann LeCun’s view that embodied AI will be required for human-level reasoning features heavily in the discussion.
The debate surfaces differing views: practical usefulness vs. philosophical accuracy in labeling AI behavior.
Conclusion: AI as a “process engine” may satisfy both camps, but the line between reasoning and thinking is getting blurry.

Timestamps & Topics

00:00:00 🧠 Reasoning models vs. chain-of-thought prompts
00:02:05 💡 Native deliberation as a breakthrough
00:03:15 🏛️ Thinking Fast and Slow analogy
00:05:14 🔍 Deduction, induction, abduction, analogy
00:07:03 🤔 Does problem-solving = thinking?
00:09:00 📜 Legal hallucination as reasoning failure
00:12:41 ⚙️ Symbolic logic and code interpreter role
00:16:36 🛠️ Deterministic vs. generative outcomes
00:20:05 📊 Real-world use case: invoice validation
00:23:06 💬 Why non-experts believe AI “thinks”
00:26:08 🛤️ Reasoning as multi-step prediction
00:29:47 🎲 AlphaGo’s strange but optimal moves
00:32:14 🧮 Longer processing vs. actual thought
00:35:10 🌐 World models and sensory grounding gap
00:38:57 🎨 Human taste and preference vs. AI outputs
00:41:47 🧬 Creativity as human advantage, for now
00:44:30 📈 Karl’s business growth powered by o3 reasoning
00:47:01 ⚡ Future: lightning-speed multi-agent parallelism
00:51:15 🧠 Memory + prediction defines thinking engines
00:53:16 📅 Upcoming shows preview and community CTA

#ThinkingMachines #LLMReasoning #ChainOfThought #ReinforcementLearning #WorldModeling #SymbolicAI #AIphilosophy #AIDebate #AgenticAI #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Jyunmi Hatcher, and Karl Yeh
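
The episode contrasts prompt-level chain of thought (the "simulated checklist") with models that deliberate natively, and later points to code execution and the invoice-validation use case as the deterministic side of that gap. The sketch below is a rough illustration of those two ideas side by side, not anything demonstrated on the show: call_model and invoice_total are hypothetical names invented here, and you would swap call_model for whatever LLM client you actually use.

# A minimal sketch (assumptions, not from the episode): prompt-level chain of
# thought vs. handing the deterministic part of a task to plain code.
# call_model() is a hypothetical placeholder for an LLM client call.

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's SDK."""
    raise NotImplementedError("wire this to your LLM provider")


def invoice_total(units: int, unit_price: float, tax_rate: float) -> float:
    """Deterministic check: plain arithmetic, nothing generated."""
    subtotal = units * unit_price
    return round(subtotal * (1 + tax_rate), 2)


question = "An invoice lists 12 units at $7.50 each plus 8% tax. What is the total?"

# Plain prompt: the model answers in one shot.
direct_prompt = question

# Chain-of-thought prompt: the model is told to show intermediate steps.
# The steps look like reasoning but are still generated text, which is the
# "simulated checklist" point raised early in the episode.
cot_prompt = question + "\nThink step by step: give the subtotal, the tax, then the total."

# Deterministic ground truth the generated answer can be validated against.
expected = invoice_total(units=12, unit_price=7.50, tax_rate=0.08)

# Example wiring (left commented out because call_model is a placeholder):
# answer = call_model(cot_prompt)
# print(answer, "expected:", expected)
print(f"Deterministic invoice total: ${expected:.2f}")

Running the sketch prints $97.20, the kind of ground truth the hosts suggest a code interpreter can supply so the generated "reasoning" is checked rather than trusted.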

