
Chain of Thought

Explaining Eval Engineering | Galileo's Vikram Chatterji

19 Dec 2025

Description

You've heard of evaluations, but eval engineering is the difference between AI that ships and AI that's stuck in prototype.

Most teams still treat evals like unit tests: write them once, check a box, move on. But when you're deploying agents that make real decisions, touch real customers, and cost real money, those one-time tests don't cut it. The companies actually shipping production AI at scale have figured out something different: they've turned evaluations into infrastructure, into IP, into the layer where domain expertise becomes executable governance.

Vikram Chatterji, CEO and Co-founder of Galileo, returns to Chain of Thought to break down eval engineering: what it is, why it's becoming a dedicated discipline, and what it takes to actually make it work. Vikram shares why generic evals are plateauing, how continuous learning loops drive accuracy, and why he predicts "eval engineer" will become as common a role as "prompt engineer" once was.

In this conversation, Conor and Vikram explore:

- Why treating evals as infrastructure, not checkboxes, separates production AI from prototypes
- The plateau problem: why generic LLM-as-a-judge metrics can't break 90% accuracy
- How continuous human feedback loops improve eval precision over time (see the sketch after the show notes below)
- The emerging "eval engineer" role and what the job actually looks like
- Why 60-70% of AI engineers' time is already spent on evals
- What multi-agent systems mean for the future of evaluation
- Vikram's framework for baking both trust and control into agentic applications

Plus: Conor shares news about his move to Modular and what it means for Chain of Thought going forward.

Chapters:

00:00 – Introduction: Why Evals Are Becoming IP
01:37 – What Is Eval Engineering?
04:24 – The Eval Engineering Course for Developers
05:24 – Generic Evals Are Plateauing
08:21 – Continuous Learning and Human Feedback
11:01 – Human Feedback Loops and Eval Calibration
13:37 – The Emerging Eval Engineer Role
16:15 – What Production AI Teams Actually Spend Time On
18:52 – Customer Impact and Lessons Learned
24:28 – Multi-Agent Systems and the Future of Evals
30:27 – MCP, A2A Protocols, and Agent Authentication
33:23 – The Eval Engineer Role: Product-Minded + Technical
34:53 – Final Thoughts: Trust, Control, and What's Next

Connect with Conor Bronsdon:
Substack – https://conorbronsdon.substack.com/
LinkedIn – https://www.linkedin.com/in/conorbronsdon/
X (Twitter) – https://x.com/ConorBronsdon

Learn more about Eval Engineering: https://galileo.ai/evalengineering

Connect with Vikram Chatterji:
LinkedIn – https://www.linkedin.com/in/vikram-chatterji/
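
To make the "continuous human feedback loop" idea concrete, here is a minimal sketch of one way such a calibration loop can be wired up: an LLM-as-a-judge whose prompt accumulates few-shot examples from cases where a human reviewer overruled the judge. It is illustrative only; the names (call_llm, CalibratedJudge, EvalRecord) are assumptions of this write-up, not Galileo's product API, and the eval engineering discipline Vikram describes involves far more than a few-shot prompt.

```python
import json
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder model client. Swap in your real provider SDK here."""
    raise NotImplementedError


@dataclass
class EvalRecord:
    # One judged example: the agent's output, the judge's verdict,
    # and an optional human label used for calibration.
    output: str
    judge_verdict: bool
    human_label: bool | None = None


@dataclass
class CalibratedJudge:
    # LLM-as-a-judge whose prompt grows few-shot calibration examples
    # from cases where a human reviewer disagreed with the judge.
    rubric: str
    calibration: list[EvalRecord] = field(default_factory=list)

    def _prompt(self, output: str) -> str:
        shots = "\n".join(
            f"Output: {r.output}\nCorrect verdict: {r.human_label}"
            for r in self.calibration
        )
        return (
            f"You are grading agent outputs against this rubric:\n{self.rubric}\n\n"
            f"Human-corrected examples:\n{shots or '(none yet)'}\n\n"
            f"Output to grade:\n{output}\n\n"
            'Respond with JSON only: {"pass": true} or {"pass": false}'
        )

    def judge(self, output: str) -> EvalRecord:
        verdict = json.loads(call_llm(self._prompt(output)))["pass"]
        return EvalRecord(output=output, judge_verdict=verdict)

    def record_human_feedback(self, record: EvalRecord, human_label: bool) -> None:
        # Keep only disagreements: each review round folds the judge's
        # mistakes back into its prompt, tightening precision over time.
        record.human_label = human_label
        if human_label != record.judge_verdict:
            self.calibration.append(record)
```

The design choice worth noting: only disagreements are stored, so the prompt stays short while targeting exactly the cases where the judge and the domain expert diverge, which is one way a generic judge can be pushed past the accuracy plateau discussed in the episode.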
