Build Wiz AI Show

😵‍💫 Why Language Models Hallucinate

07 Sep 2025

Description

In this episode, we delve into why language models "hallucinate," generating plausible yet incorrect information instead of admitting uncertainty. We'll explore how these overconfident falsehoods arise from the statistical objectives minimized during pretraining and are further reinforced by current evaluation methods that reward guessing over expressing doubt. Join us as we uncover the socio-technical factors behind this persistent problem and discuss proposed solutions to foster more trustworthy AI systems.
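As a rough illustration of the evaluation incentive mentioned above (a hedged sketch with hypothetical numbers, not material from the episode itself): under a binary, accuracy-only grading scheme that awards one point for a correct answer and zero for both a wrong answer and "I don't know," guessing dominates abstaining in expectation whenever the model has any chance of being right.

```python
# Minimal sketch (hypothetical scoring rule): expected score under a
# binary, accuracy-only grader that gives no credit for abstaining.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points per question: 1 for correct, 0 otherwise."""
    return 0.0 if abstain else p_correct

# A model only 30% sure of the answer still gains by guessing,
# because abstaining is scored the same as being wrong.
print(expected_score(0.30, abstain=False))  # 0.30 -> guessing wins
print(expected_score(0.30, abstain=True))   # 0.00 -> honesty penalized
```

This is one way to see why leaderboards that report plain accuracy can reward confident fabrication over calibrated uncertainty, which is the incentive problem the episode discusses.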
