Humans hallucinate. Algorithms lie. At least, that's one difference that Joy Buolamwini and Kyle Chayka want to make clear. When ChatGPT tells you that a book exists when it doesn't, or professes its undying love, that's often called a "hallucination." Buolamwini, a computer scientist, prefers to call it "spicy autocomplete." But not all algorithmic errors are as innocuous. So on today's show, we get into: How do algorithms work? What are their impacts? And how can we speak up about changing them? This is a shortened version of Joy and Kyle's live interview, moderated by Regina G. Barber, at this year's Library of Congress National Book Festival.

If you liked this episode, check out our other episodes on facial recognition in Gaza, why AI is not a silver bullet, and tech companies limiting police use of facial recognition.

Interested in hearing more technology stories? Email us at [email protected]. We'd love to consider your idea for a future episode!