
Deep Dive - Frontier AI with Dr. Jerry A. Smith

Why AI Hallucinates: The Math OpenAI Got Right and the Politics They Ignored

08 Sep 2025

Description

Medium: https://medium.com/@jsmith0475/why-ai-hallucinates-the-math-openai-got-right-and-the-politics-they-ignored-1802138739f5

The article, by Dr. Jerry A. Smith, argues that AI hallucinations are not merely technical glitches but socio-technical constructs. It highlights two key perspectives. First, Kalai et al. (2025) explain statistically why hallucinations are mathematically inevitable given current training and evaluation methods, and advocate rewarding models for abstaining when uncertain. Second, Smith (2025) introduces a Kantian framework, positing that what counts as a "hallucination" is inherently subjective, shaped by human evaluative choices, including benchmarks that embed specific cultural and political values. The article ultimately calls for moving beyond a "neutrality myth" in AI evaluation, advocating multi-perspective assessments and democratized benchmark governance so that AI systems are more accountable and reflective of diverse human realities.
