Large Language Models (LLMs) seem to know everything—until they don’t. In this deep dive, we explore the fascinating phenomenon of AI hallucinations, where LLMs confidently generate false information. Why does this happen? Enter knowledge overshadowing, a cognitive trap that causes AI to prioritize dominant information while overlooking lesser-known facts.

We break down the groundbreaking log-linear law that predicts when LLMs are most likely to hallucinate and introduce KOD, a contrastive decoding technique designed to make AI more truthful. Plus, we ask the big question: should we always aim for perfect factuality in AI, or is there a place for creative generation?

Join us as we uncover what these AI mistakes reveal—not just about technology, but about the way human cognition works. If you're curious about the future of AI accuracy, misinformation, and ethics, this is an episode you won't want to miss!

Read more: https://arxiv.org/abs/2502.16143
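If you want a concrete feel for what contrastive decoding does before listening, here is a minimal, illustrative sketch. It shows the generic idea—score each candidate token by how much more a prompt with the full, specific context prefers it than a prompt dominated by the overshadowing knowledge—and is not the exact KOD procedure from the paper (see the arXiv link above for that). All names and parameters below (`contrastive_decode_step`, `alpha`, `tau`) are hypothetical.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over a vector of logits.
    shifted = logits - logits.max()
    return shifted - np.log(np.exp(shifted).sum())

def contrastive_decode_step(expert_logits, contrast_logits, alpha=1.0, tau=0.1):
    """Pick the next token by contrasting two next-token distributions.

    expert_logits:   logits from the full, factual prompt
    contrast_logits: logits from a prompt that evokes only the dominant
                     (overshadowing) knowledge
    alpha:           strength of the contrastive penalty
    tau:             plausibility cutoff relative to the expert's best token
    """
    expert_logp = log_softmax(expert_logits)
    contrast_logp = log_softmax(contrast_logits)

    # Plausibility constraint: only keep tokens the expert itself finds
    # reasonably likely, so the contrast term cannot reward noise tokens.
    keep = expert_logp >= expert_logp.max() + np.log(tau)

    # Reward tokens the expert prefers much more than the contrast prompt does.
    scores = expert_logp - alpha * contrast_logp
    scores[~keep] = -np.inf
    return int(np.argmax(scores))
```

In practice both logit vectors would come from the same model run on two different prompts at each generation step; the paper's actual formulation and hyperparameters may differ from this sketch.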