
Compliance Perspectives

Alessia Falsarone on AI Explainability [Podcast]

23 Oct 2025

Description

By Adam Turteltaub

Why did the AI do that? It's a simple and common question, but the answer is often opaque, with people invoking black boxes, algorithms, and other terms that only those in the know tend to understand. Alessia Falsarone, a non-executive director of Innovate UK, says that's a problem. In cases where AI has run amok, the fallout is often worse because the company cannot explain why the AI made the decision it did or what data it relied on. AI, she argues, needs to be explainable to regulators and the public. That way, all sides can understand what the AI is doing (or has done) and why.

To make AI more explainable, she recommends creating a dashboard showing the factors that influence the decisions made. Teams also need to track changes made to the model over time. That way, when a regulator or the public asks why something happened, the organization can respond quickly and clearly. Moreover, by embracing a more transparent process and involving compliance early, organizations can head off potential AI issues before they take root.

Listen in to hear her explain the virtues of explainability.
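The episode doesn't prescribe an implementation, but as a rough sketch of the kind of record such a dashboard might store, the Python below logs a decision together with the model's factor weights and a model version. Everything here is an assumption for illustration: the scikit-learn stand-in model, the feature names, and the version scheme. The weights shown are the forest's global feature importances, standing in for the per-decision attributions (e.g., SHAP values) a production system would more likely use.

```python
import json
from datetime import datetime, timezone

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in model trained on synthetic data; in practice this would be the
# production model whose decisions the dashboard explains.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "utilization", "delinquencies"]  # hypothetical
model = RandomForestClassifier(random_state=0).fit(X, y)

MODEL_VERSION = "2025.10.1"  # bumped on every retrain so changes are auditable


def explain_decision(model, row, feature_names, model_version):
    """Build a dashboard-ready record: the prediction, the factor weights
    that influence it (here, the forest's global impurity-based importances,
    a simple stand-in for per-decision attributions), and the model version,
    so the organization can later answer which model decided and why."""
    prediction = int(model.predict(row.reshape(1, -1))[0])
    weights = {
        name: round(float(w), 3)
        for name, w in zip(feature_names, model.feature_importances_)
    }
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prediction": prediction,
        "factor_weights": weights,
    }


record = explain_decision(model, X[0], feature_names, MODEL_VERSION)
print(json.dumps(record, indent=2))
```

Stamping each record with a model version is what makes the second half of her recommendation work: when the model changes over time, the log still shows which version produced any given decision.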
