Eye On A.I.

#299 Jacob Buckman: Why the Future of AI Won't Be Built on Transformers

09 Nov 2025

Description

This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

Why do today's LLMs forget key details over long context, and what would it take to give them real memory that scales?

In this episode of Eye on AI, host Craig Smith explores Manifest AI's Power Retention architecture and how it rethinks memory, context, and learning for modern models. We look at why transformers struggle with long inputs, how state-space and retention models keep context at linear cost, and how scaling state size unlocks reliable recall across lengthy conversations, code, and documents. We also cover practical paths to retrofit existing transformer models, how in-context learning can replace frequent fine-tuning, and what this means for teams building agents and RAG systems.

Learn how product leaders and researchers measure true long-context quality, which pitfalls to avoid when extending context windows, and which metrics matter most for success, including recall consistency, answer fidelity, task completion, CSAT, and cost per resolution. You will also hear how to design per-user memory, set governance that prevents regressions, evaluate LLM-as-judge scoring with human review, and plan a secure rollout that improves retrieval, multi-step workflows, and agent reliability across chat, email, and voice.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
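
To make the linear-cost memory idea concrete: the sketch below shows a generic recurrent state update in the spirit of retention and state-space models, where a fixed-size matrix summarizes everything seen so far. This is an illustrative assumption, not Manifest AI's actual Power Retention implementation; the function names, decay factor, and dimensions are invented for the example.

```python
# Minimal sketch of a linear-cost recurrent memory, in the spirit of the
# retention / state-space models discussed in the episode. NOT Manifest AI's
# actual Power Retention: the outer-product update rule and the fixed decay
# factor below are illustrative assumptions (real models learn these).

import numpy as np

def retention_step(state, k, v, decay=0.99):
    """Fold one token into a fixed-size state matrix.

    state: (d_k, d_v) running summary of the sequence so far
    k, v:  (d_k,), (d_v,) key/value vectors for the current token
    decay: scalar forgetting factor (assumed constant here)
    """
    return decay * state + np.outer(k, v)

def retention_read(state, q):
    """Query the state: O(d_k * d_v) work, independent of sequence length."""
    return q @ state

# Process a long sequence with constant memory and O(n) total cost, unlike
# softmax attention's O(n^2) pairwise token comparisons.
d_k, d_v, n = 64, 64, 10_000
rng = np.random.default_rng(0)
state = np.zeros((d_k, d_v))
for _ in range(n):
    k, v = rng.normal(size=d_k), rng.normal(size=d_v)
    state = retention_step(state, k, v)

q = rng.normal(size=d_k)
out = retention_read(state, q)  # same cost at token 10 or token 10,000
print(out.shape)                # (64,)
```

Because the state has a fixed size, reading it costs the same at token 100 or 100,000, which is the property that makes long-context recall cheap in these architectures.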
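On the measurement side, long-context recall consistency is often checked with a "needle in a haystack" test: a key fact is buried at varying depths inside filler text and the model is asked to retrieve it. The sketch below is a hypothetical version of such a check; the filler text, depths, question, and toy_model stub are stand-ins for illustration, not a benchmark from the episode.

```python
# Hedged sketch of a needle-in-a-haystack recall check. Everything here is
# illustrative; swap toy_model for a real LLM call to use it in practice.

def make_haystack(needle: str, filler: str, n_filler: int, position: int) -> str:
    """Bury one key fact (the needle) at a given depth inside filler text."""
    lines = [filler] * n_filler
    lines.insert(position, needle)
    return "\n".join(lines)

def recall_consistency(model_answer, needle, question, expected, depths, trials=3):
    """Fraction of trials, per insertion depth, where the model recalls the fact."""
    n_filler = 1_000
    scores = {}
    for depth in depths:
        hits = sum(
            expected in model_answer(
                make_haystack(needle, "The sky was grey that day.", n_filler,
                              int(depth * n_filler)) + "\n\n" + question
            )
            for _ in range(trials)
        )
        scores[depth] = hits / trials
    return scores

# Toy stand-in for a real LLM call, so the sketch runs end to end.
def toy_model(prompt: str) -> str:
    return "7491" if "secret code is 7491" in prompt else "I don't know."

print(recall_consistency(
    toy_model,
    needle="The secret code is 7491.",
    question="What is the secret code?",
    expected="7491",
    depths=[0.0, 0.5, 1.0],
))
```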

Featured in this Episode

Jacob Buckman (guest) and Craig Smith (host).

