AI Rounds by the Cumming School of Medicine

Teaching Machines to Say "I Don't Know"—The AI Hallucination Problem

24 Sep 2025

Description

Why do GenAI systems confidently state incorrect medical facts instead of saying "I don't know"? Groundbreaking research from OpenAI and Georgia Tech reveals that AI hallucinations aren't bugs to be fixed; they're inevitable consequences of how these systems are trained. This episode explores the "singleton problem" that makes AI systematically unreliable on rare facts, connects to our previous discussion of AI benchmark saturation (Episode 9), and explains why the same evaluation methods that produce impressive test scores actually reward confident guessing over appropriate uncertainty. For medical faculty evaluating AI tools, understanding these statistical realities is crucial for teaching students, conducting research, and developing institutional policies that account for AI's fundamental limitations.

Links from this episode: https://openai.com/index/why-language-models-hallucinate

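To make the incentive concrete, here is a minimal sketch (an illustration, not material from the episode or the paper) of why accuracy-only grading rewards guessing over abstaining. The scoring values and the 30% confidence figure are assumptions chosen for the example.

# Illustrative sketch: expected score per question under two grading schemes.
# Assumed values; only the general argument comes from the episode description.

def expected_score(p_correct, reward_correct, penalty_wrong, score_abstain, abstain):
    """Expected score for one question.

    p_correct      -- model's chance of being right if it answers
    reward_correct -- points for a correct answer
    penalty_wrong  -- points for a wrong answer (can be negative)
    score_abstain  -- points for saying "I don't know"
    abstain        -- whether the model declines to answer
    """
    if abstain:
        return score_abstain
    return p_correct * reward_correct + (1 - p_correct) * penalty_wrong

p = 0.30  # model is only 30% confident, e.g. a rare, rarely-seen fact

# Accuracy-only grading: +1 correct, 0 wrong, 0 abstain.
guess_score   = expected_score(p, 1, 0, 0, abstain=False)  # 0.30
abstain_score = expected_score(p, 1, 0, 0, abstain=True)   # 0.00
print(f"accuracy-only:    guess={guess_score:.2f}  abstain={abstain_score:.2f}")
# Guessing always scores at least as well, so benchmarks reward confident guessing.

# Grading that penalizes confident errors: +1 correct, -1 wrong, 0 abstain.
guess_score   = expected_score(p, 1, -1, 0, abstain=False)  # -0.40
abstain_score = expected_score(p, 1, -1, 0, abstain=True)   #  0.00
print(f"penalized errors: guess={guess_score:.2f}  abstain={abstain_score:.2f}")
# With a penalty for wrong answers, saying "I don't know" is the better strategy
# whenever confidence falls below the break-even threshold (here, 50%).

Under the first scheme a model is never better off admitting uncertainty; under the second, abstention becomes optimal for low-confidence questions, which is the behavioral change the episode description points to.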