
Across Acoustics

Why don't speech recognition systems understand African American English?

08 Jul 2024

Description

Most people have encountered speech recognition software in their day-to-day lives, whether through personal digital assistants, auto transcription, or other such modern marvels. As the technology advances, though, it still fails to understand speakers of African American English (AAE). In this episode, we talk to Michelle Cohn (Google Research and University of California Davis) and Zion Mengesha (Google Research and Stanford University) about their research into why these problems with speech recognition software seem to persist and what can be done to make sure more voices are understood by the technology.

Associated paper: Michelle Cohn, Zion Mengesha, Michal Lahav, and Courtney Heldreth. "African American English speakers' pitch variation and rate adjustments for imagined technological and human addressees." JASA Express Letters 4, 047601 (2024). https://doi.org/10.1121/10.0025484

Read more from JASA Express Letters. Learn more about Acoustical Society of America Publications.

Music: Min 2019 by minwbu from Pixabay.


