Across Acoustics
Why don't speech recognition systems understand African American English?
08 Jul 2024
Most people have encountered speech recognition software in their day-to-day lives, whether through personal digital assistants, auto transcription, or other such modern marvels. As the technology advances, though, it still fails to understand speakers of African American English (AAE). In this episode, we talk to Michelle Cohn (Google Research and University of California Davis) and Zion Mengesha (Google Research and Stanford University) about their research into why these problems with speech recognition software persist and what can be done to make sure more voices are understood by the technology.

Associated paper: Michelle Cohn, Zion Mengesha, Michal Lahav, and Courtney Heldreth. "African American English speakers' pitch variation and rate adjustments for imagined technological and human addressees." JASA Express Letters 4, 047601 (2024). https://doi.org/10.1121/10.0025484

Read more from JASA Express Letters. Learn more about Acoustical Society of America Publications.

Music: Min 2019 by minwbu from Pixabay.