In this episode, Ryan and Luca explore one of the most practical aspects of working with LLMs: context management. They discuss what tokens are, how context windows work, and why managing context often matters more than crafting perfect prompts. The conversation covers the challenges of context window limitations, the "recency bias" phenomenon, where LLMs pay more attention to information at the beginning and end of their context than to the middle, and practical strategies for keeping your AI assistant focused on the right things.

The hosts share hands-on experiences with context compaction, using agents (or what Luca prefers to call "function calls") to segregate different tasks, and various methods for pulling in external information without overwhelming the LLM. They discuss everything from ingesting log files and design documents to using the Model Context Protocol (MCP) for accessing external services. Throughout, they emphasize the importance of thinking like a product manager or requirements engineer when working with LLMs: breaking down problems into manageable chunks and deliberately curating what information the AI needs at each step.

The episode wraps up with practical advice on avoiding common pitfalls like context pollution, dealing with outdated API knowledge in LLMs, and knowing when to start fresh rather than trying to compact an overloaded context window.

Key Topics:

[00:00] Introduction and defining tokens in LLMs
[03:30] Understanding context windows and their limitations
[07:15] The sweet spot: too little vs. too much context
[10:45] Recency bias: why position matters in your context window
[15:20] Context compaction and when to start fresh
[21:00] Using agents (slash commands) to segregate tasks and manage context
[28:30] Pulling in external context: files, documentation, and selective ingestion
[35:45] Model Context Protocol (MCP) and accessing external services
[40:15] Dealing with outdated LLM knowledge and API versions
[45:00] Ingesting log files and serial output in embedded development
[48:30] Thinking like a product manager: breaking down problems for LLMs

Notable Quotes:

"I find myself worrying about context quite a lot... the explicit instructions that you give to an LLM will often only be a very small part of the overall instructions that you pass to it." — Luca Ingianni

"As you're trying to do work, you can only think about so many things at the same time... I just need to sit down and compress this for a second. Let this kind of sink in and percolate and get rid of this stuff that I don't need to think about anymore." — Ryan Torvik

"If you want the LLM to pay particular attention to something, you should put it either at the beginning or at the end of your prompt. So it will be sort of very fresh in the LLM's mind." — Luca Ingianni

"You as the user need to be a better product manager and think globally about the problem you're trying to solve... Don't give it a task that's going to blow out the context window. Break down the problem into sufficiently small enough steps." — Ryan Torvik

"I used to be a requirements engineer. I find myself going back to my requirements engineering mindset and really thinking about, okay, what am I talking about? What do I need to define? What context do I need to give?" — Luca Ingianni
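Code Sketches:

The ideas from the episode translate into small pieces of code. First, tokens and context windows: the sketch below counts tokens with OpenAI's tiktoken library. The encoding name, window size, and output reserve are illustrative assumptions, not values from the episode, and other model families tokenize differently.

```python
# A minimal sketch of token counting, assuming OpenAI's tiktoken library.
# Token counts are tokenizer-specific, so treat them as estimates for
# other model families.
import tiktoken

def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Count tokens under a specific tiktoken encoding."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

def fits_in_context(text: str, context_window: int = 128_000,
                    reserved_for_output: int = 4_000) -> bool:
    """Check whether a prompt still leaves room for the model's reply.
    The window and reserve values here are placeholders."""
    return count_tokens(text) <= context_window - reserved_for_output

if __name__ == "__main__":
    prompt = "Explain how context windows work."
    print(count_tokens(prompt), "tokens;",
          "fits" if fits_in_context(prompt) else "too large")
```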
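Luca's observation about putting important instructions at the beginning or end of the prompt can be baked into how prompts are assembled. A minimal sketch of that heuristic; the layout reflects the hosts' advice about attention patterns, not a guarantee.

```python
def build_prompt(critical_instruction: str, reference_material: str) -> str:
    """Place the critical instruction at both high-attention positions
    (the very start and the very end), with bulk reference material in
    the less-attended middle."""
    return "\n\n".join([
        critical_instruction,                 # beginning: high attention
        "Reference material:",
        reference_material,                   # middle: bulk context
        "Reminder: " + critical_instruction,  # end: high attention
    ])
```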
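Context compaction, discussed around [15:20], amounts to folding old conversation turns into a summary once the history grows too large. In this sketch, `llm_summarize` is a hypothetical stand-in for whatever model call produces the summary, and the budget and turn counts are arbitrary.

```python
# A sketch of context compaction: when the running conversation exceeds
# a token budget, replace the oldest turns with a single summary and
# keep only the most recent turns verbatim.
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")

def total_tokens(messages: list[dict]) -> int:
    """Rough token count across all message contents."""
    return sum(len(ENC.encode(m["content"])) for m in messages)

def llm_summarize(messages: list[dict]) -> str:
    """Hypothetical: call your LLM of choice to summarize these turns."""
    raise NotImplementedError

def compact(messages: list[dict], budget: int = 8_000,
            keep_recent: int = 6) -> list[dict]:
    """If the history exceeds the budget, fold everything except the
    last few turns into one summary message."""
    if total_tokens(messages) <= budget or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = llm_summarize(older)
    return [{"role": "system",
             "content": "Summary of earlier conversation: " + summary}] + recent
```

As the hosts note, compaction has limits: past a certain point it is better to start a fresh session than to keep summarizing an overloaded history.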
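For ingesting log files and serial output without blowing out the context window, the selective-ingestion idea from [28:30] and [45:00] suggests pre-filtering before handing anything to the LLM. A sketch under that assumption; the patterns and limits are placeholders to adapt to your own toolchain's logs.

```python
# A sketch of selective log ingestion: keep only the lines that matter
# (errors and warnings) plus the most recent output, instead of pasting
# the whole file into the context window.
import re
from pathlib import Path

# Placeholder pattern: adapt to whatever your toolchain actually logs.
INTERESTING = re.compile(r"\b(ERROR|WARN(?:ING)?|FATAL|panic)\b", re.IGNORECASE)

def digest_log(path: str, tail_lines: int = 50, max_matches: int = 100) -> str:
    """Return error/warning lines plus the tail of the log."""
    lines = Path(path).read_text(errors="replace").splitlines()
    matches = [line for line in lines if INTERESTING.search(line)][:max_matches]
    tail = lines[-tail_lines:]
    return ("== Errors and warnings ==\n" + "\n".join(matches) +
            f"\n\n== Last {tail_lines} lines ==\n" + "\n".join(tail))
```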
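Finally, the Model Context Protocol from [35:45] gives the assistant a standard way to fetch external information on demand rather than having it pasted into the prompt up front. A minimal server sketch, assuming the official MCP Python SDK (`pip install mcp`) and its FastMCP helper; the `read_log_tail` tool is made up for illustration.

```python
# A minimal MCP server sketch exposing one tool, assuming the official
# MCP Python SDK's FastMCP helper.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("log-digest")

@mcp.tool()
def read_log_tail(path: str, lines: int = 50) -> str:
    """Return the last `lines` lines of a log file."""
    return "\n".join(Path(path).read_text(errors="replace").splitlines()[-lines:])

if __name__ == "__main__":
    # Runs the server over stdio so an MCP-aware client can connect.
    mcp.run()
```

An MCP-aware client, such as a coding assistant, can then call the tool only when it actually needs log context, which keeps the context window free for the task at hand.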