
AiShed

S1E3 Agentic AI: Beyond Explainability

29 Nov 2025

Description

In this episode, Alessandro and Giovanni explore why explainability is no longer enough for modern AI and agentic systems, and why observability must take center stage. They explain that explainability is retrospective ("why the system thinks it acted"), whereas observability provides real-time insight into what the system is doing, whether it remains within its safety envelope, and how far it is from violating constraints.

Drawing parallels with fly-by-wire and safety-critical software, they show that autonomy increases, not reduces, the need for instrumentation, logging, monitoring, and traceable reasoning. The conversation emphasizes that trustworthy agentic AI requires continuous telemetry, drift detection, guardrail activation logs, and visibility into planning and sub-goal generation.

To illustrate the risks of missing explainability, they recount a striking real-world example: a Boeing 737 automated braking system that behaved "perfectly" according to its internal logic but offered no cues to pilots. This opacity led to confusing and unsafe events until engineers added simple cockpit messages. The system didn't need new logic; it needed to communicate.

The core message of the episode is clear: autonomous systems cannot be trusted unless their behaviour is continuously observable. Explainability is helpful, but observability is essential for safety, certification, and human-machine collaboration.
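The telemetry, guardrail-activation logging, and "distance from violating constraints" ideas discussed above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the episode itself: the class, event names, and the rate-limit guardrail are all assumed for the example.

```python
import json
import time

# Illustrative sketch (all names and thresholds are assumptions, not from the
# episode): an agent wrapper that emits a telemetry record for every step and
# logs guardrail activations instead of silently refusing to act.

class ObservableAgent:
    def __init__(self, max_actions_per_minute=10):
        self.max_actions = max_actions_per_minute  # safety-envelope constraint
        self.telemetry = []  # in-memory event log; a real system would stream this

    def _emit(self, event, **fields):
        record = {"ts": time.time(), "event": event, **fields}
        self.telemetry.append(record)
        return record

    def step(self, action, actions_this_minute):
        # The guardrail check runs BEFORE acting, and its activation is logged
        # as an event, so operators can see why nothing happened.
        if actions_this_minute >= self.max_actions:
            self._emit("guardrail_activated", guardrail="rate_limit",
                       attempted_action=action)
            return None
        # "margin" records how far the agent is from violating the constraint,
        # giving real-time distance-to-envelope visibility.
        self._emit("action_executed", action=action,
                   margin=self.max_actions - actions_this_minute - 1)
        return action

agent = ObservableAgent(max_actions_per_minute=2)
agent.step("search_docs", actions_this_minute=0)
agent.step("call_api", actions_this_minute=1)
blocked = agent.step("call_api", actions_this_minute=2)  # guardrail fires here
for record in agent.telemetry:
    print(json.dumps(record))
```

The design choice echoes the 737 anecdote: the guardrail activation lands in the same event stream as normal actions, so a blocked step is communicated rather than appearing as silent inaction.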


