Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

What started as a simple "let's think step by step" trick has grown into a rich landscape of reasoning models that simulate logic, branch and revise in real time, and now even collaborate with the user. The episode explores three specific advancements: speculative chain of thought, collaborative chain of thought, and retrieval-augmented chain of thought (CoT-RAG).

Key Points Discussed
- Chain of thought prompting began in 2022 as a method for improving reasoning by asking models to slow down and show their steps.
- By 2023, tree-of-thought prompting and other branching approaches began emerging.
- In 2024, tools like DeepSeek and o3 showed dynamic reasoning with visible steps, sparking renewed interest in more transparent models.
- Andy explains that while chain of thought looks like sequential reasoning, it is really token-by-token prediction, with each output influencing the next.
- The illusion of "thinking" is shaped by the model's training on step-by-step human logic and by clever UI elements like "thinking…" animations.
- Speculative chain of thought uses a smaller model to generate multiple candidate reasoning paths, which a larger model then evaluates and improves.
- Collaborative chain of thought lets the user review and guide reasoning steps as they unfold, encouraging transparency and human oversight.
- CoT-RAG combines structured reasoning with retrieval, using pseudocode-like planning and knowledge graphs to boost accuracy.
- Jyunmi highlighted how collaborative CoT mirrors his ideal creative workflow by giving humans checkpoints to guide AI thinking.
- Beth noted that these patterns often mirror familiar roles, like sous chef and head chef, or project management tools like Gantt charts.
- The team discussed the limits of context windows and attention, and how reasoning starts to break down with large inputs or long tasks.
- Several ideas were pitched for improving memory, including token overlays, modular context management, and step weighting.
- The conversation wrapped with a reflection on how each CoT variant addresses different needs: speed, accuracy, or collaboration.

Timestamps & Topics
00:00:00 🧠 What is Chain of Thought evolved?
00:02:49 📜 Timeline of CoT progress (2022 to 2025)
00:04:57 🔄 How models simulate reasoning
00:09:36 🤖 Agents vs LLMs in CoT
00:14:28 📚 Research behind the three CoT variants
00:23:18 ✍️ Overview of Speculative, Collaborative, and RAG CoT
00:25:02 🧑‍🤝‍🧑 Why collaborative CoT fits real-world workflows
00:29:23 📌 Brian highlights human-in-the-loop value
00:32:20 ⚙️ CoT-RAG and pseudocode-style logic
00:34:35 📋 Pretraining and structured self-ask methods
00:41:11 🧵 Importance of short-term memory and chat history
00:46:32 🗃️ Ideas for modular memory and RAG-based workflows
00:50:17 🧩 Visualizing reasoning: Gantt charts and context overlays
00:52:32 ⏱️ Tradeoffs: speed vs accuracy vs transparency
00:54:22 📬 Wrap-up and show announcements

Hashtags
#ChainOfThought #ReasoningAI #AIprompting #DailyAIShow #SpeculativeAI #CollaborativeAI #RetrievalAugmentedGeneration #LLMs #AIthinking #FutureOfAI

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
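The speculative pattern discussed in the episode (a small model drafts candidate reasoning paths, a larger model evaluates and refines the best one) can be sketched in a few lines. This is a minimal illustration, not an implementation from the episode: the functions below are hypothetical stand-ins for real model calls, and the scoring heuristic is a placeholder.

```python
def draft_paths(question: str, n: int = 3) -> list[str]:
    """Stand-in for a small, fast model proposing n candidate reasoning paths."""
    return [f"[strategy {i}] step-by-step reasoning about: {question}" for i in range(n)]

def score_path(path: str) -> float:
    """Stand-in for the large model scoring a candidate path.
    Placeholder heuristic: longer drafts score higher."""
    return float(len(path))

def refine(path: str) -> str:
    """Stand-in for the large model verifying and expanding the chosen path."""
    return path + " -> verified and expanded by the large model"

def speculative_cot(question: str) -> str:
    """Draft several candidate paths cheaply, keep the best, refine it once."""
    candidates = draft_paths(question)
    best = max(candidates, key=score_path)
    return refine(best)

print(speculative_cot("What is 17 * 24?"))
```

The appeal of the pattern is cost: the expensive model runs once on the strongest draft instead of generating every reasoning path itself.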