
Odd Lots

The Movement That Wants Us to Care About AI Model Welfare

30 Oct 2025

Description

You hear a lot about AI safety, and the idea that sufficiently advanced AI could pose some kind of threat to humans. So people are always talking about and researching "alignment" to ensure that new AI models comport with human needs and values. But what about humans' collective treatment of AI? A small but growing number of researchers talk about AI models potentially being sentient. Perhaps they are "moral patients." Perhaps they feel some kind of equivalent of pleasure and pain -- all of which, if so, raises questions about how we use AI. They argue that one day we'll be talking about AI welfare the way we talk about animal rights, or humane versions of animal husbandry. On this episode we speak with Larissa Schiavo of Eleos AI. Eleos is an organization that says it's "preparing for AI sentience and welfare." In this conversation we discuss the work being done in the field, why some people think it's an important area for research, whether it's in tension with AI safety, and how our use and development of AI might change in a world where models' welfare came to be seen as an important consideration.

Only Bloomberg.com subscribers can get the Odd Lots newsletter in their inbox — now delivered every weekday — plus unlimited access to the site and app. Subscribe at bloomberg.com/subscriptions/oddlots.

See omnystudio.com/listener for privacy information.
