
An Hour of Innovation with Vit Lyoshin

RAG, LLMs & the Hidden Costs of AI: What Companies Must Fix Before It’s Too Late

06 Dec 2025

Description

Most companies have no idea how risky and expensive their AI systems truly are until a single mistake turns into millions in unexpected costs. In this episode of the An Hour of Innovation podcast, host Vit Lyoshin explores the truth about AI safety, enterprise-scale LLMs, and the unseen risks that organizations must fix before it's too late.

Vit is joined by Dorian Selz, co-founder and CEO of Squirro, an enterprise AI company trusted by global banks, central banks, and highly regulated industries. His experience gives him a rare inside look at the operational, financial, and security challenges that most companies overlook.

They dive into the hidden costs of AI, why RAG has become essential for accuracy and cost-efficiency, and how a single architectural mistake can lead to a $4 million monthly LLM bill. They discuss why enterprises underestimate AI risk, how guardrails and observability protect data, and why regulated environments demand extreme trust and auditability. Dorian explains the gap between perceived and actual AI safety, how insurance companies will shape future AI governance, and why vibe coding creates dangerous long-term technical debt. Whether you're deploying AI in an enterprise or building products on top of LLMs, this conversation is for you.

Dorian Selz is a veteran entrepreneur known for building secure, compliant, and enterprise-grade AI systems used in finance, healthcare, and other regulated sectors. He specializes in AI safety, RAG architecture, knowledge retrieval, and auditability at scale, capabilities that are increasingly critical as AI enters mission-critical operations. His work sits at the intersection of innovation and regulation, making him one of the most important voices in enterprise AI today.

Takeaways
* Most enterprises dramatically overestimate their AI security readiness.
* A single architectural mistake with LLMs can create a $4M-per-month operational cost.
* RAG is essential because enterprises only need to expose relevant snippets, not entire documents, to an LLM (see the sketch at the end of this description).
* Trust in regulated industries takes years to build and can be lost instantly.
* Real AI safety requires end-to-end observability, not just disclaimers or "verify before use" warnings.
* Insurance companies will soon force AI safety by refusing coverage without documented guardrails.
* AI liability remains unresolved: should the model provider, the user, or the enterprise be responsible?
* Vibe coding creates massive future technical debt because AI-generated code is often unreadable or unmaintainable.

Timestamps
00:00 Introduction to Enterprise AI Risks
02:23 Why AI Needs Guardrails for Safety
05:26 AI Challenges in Regulated Industries
11:57 AI Safety: Perception vs. Real Security
15:29 Risk Management & Insurance in AI
21:35 AI Liability: Who's Actually Responsible?
25:08 Should AI Have Its Own Regulatory Agency?
32:44 How RAG (Retrieval-Augmented Generation) Works
40:02 Future Security Threats in AI Systems
42:32 The Hidden Dangers of Vibe Coding
48:34 Startup Strategy for Regulated AI Markets
50:38 Innovation Q&A Questions

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Dorian
* Website: https://squirro.com/
* LinkedIn: https://www.linkedin.com/in/dselz/
* X: https://x.com/dselz

Connect with Vit
* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
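To make the RAG takeaway concrete, here is a minimal, purely illustrative sketch of the pattern discussed in the episode: rank document chunks against a query, then send only the top-scoring snippets to the LLM instead of entire documents. All names and the keyword-overlap scoring are hypothetical stand-ins; this is not Squirro's implementation, and a real enterprise system would use embedding-based retrieval, a vector index, access controls, and audit logging.

```python
from collections import Counter

def score(query: str, chunk: str) -> int:
    """Toy relevance score: count how many query terms appear in a chunk."""
    terms = Counter(query.lower().split())
    return sum(terms[word] for word in set(chunk.lower().split()) if word in terms)

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return only the k most relevant chunks; everything else never reaches the model."""
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    return [c for c in ranked[:k] if score(query, c) > 0]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Compose the LLM prompt from the retrieved snippets plus the user question."""
    context = "\n---\n".join(snippets)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    docs = [
        "Quarterly risk report: credit exposure rose 4% in Q3.",
        "Cafeteria menu for the week of 12 June.",
        "Policy: all client data must remain within EU data centers.",
    ]
    question = "Where must client data be stored?"
    print(build_prompt(question, retrieve(question, docs)))
```

The design point is the same one made in the conversation: because only short, relevant snippets are placed in the prompt, both the token cost per query and the amount of sensitive data exposed to the model stay bounded.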


