Tired of AI agents that forget context mid-conversation or drift subtly off course in production? You're not alone. In this episode, the AI, Actually crew unpacks six critical engineering principles for building reliable AI agents: principles that separate proof-of-concepts from production-ready systems.

Pete, Mike, Andy, and Stew break down insights from AI expert Nate B. Jones, translating technical concepts into business-focused guidance. They explore why AI memory isn't just about storage, how to bound uncertainty without killing creativity, and why monitoring AI systems requires a completely different approach than traditional software.

This episode covers:
- Why stateful intelligence and memory management are fundamental to useful AI interactions
- How to engineer controls that bound uncertainty without over-constraining your models
- The shift from binary failures to subtle quality drift in AI systems
- Capability-based routing: matching the right model to the right job
- Post-production monitoring strategies that catch problems before your users do
- Continuous validation techniques for multi-turn agent conversations

This episode of AI, Actually centers around a video by @nate.b.jones about the six principles of AI agents. That video can be watched in its entirety here: I've Built Over 100 AI Agents: Only 1% of Builders Know These 6 Principles

Follow the Gang:
- Mike Finley, CTO, AnswerRocket - https://www.linkedin.com/in/mikefinley/
- Pete Reilly, COO, AnswerRocket - https://www.linkedin.com/in/petereilly
- Andy Sweet, VP Enterprise AI Solutions, AnswerRocket - https://www.linkedin.com/in/andrewdsweet/
- Stew Chisam, Operating Partner, StellarIQ - https://www.linkedin.com/in/stewart-chisam-7242543/

Chapters:
00:00 Introduction to AI Agents and Engineering Principles
01:34 Introducing Nate B. Jones' AI Engineering Principles
03:03 Stateful Intelligence
10:16 Bounded Uncertainty
19:55 Intelligent Failure Detection
20:51 Evaluating LLM Responses
22:16 Monitoring Quality and Performance
23:53 Active Maintenance of LLM Systems
26:18 Understanding Subtle Failures
26:55 Capability-Based Routing
30:22 Aligning Models with Business Processes
33:41 Nuanced Health State Monitoring
37:36 Continuous Input Validation
41:36 Closing Thoughts

Keywords: AI agents, agentic AI, AI engineering, AI memory, stateful intelligence, AI monitoring, capability-based routing, AI evaluation, production AI, enterprise AI, AI agent development, LLM engineering, AI testing, AI agent failures, AI system monitoring