As huge tech companies race to develop ever more powerful AI systems, the creation of super-intelligent machines seems almost inevitable. But what happens when, one day, we set these advanced AIs loose? How can we be sure they’ll have humanity’s best interests in their cold silicon hearts?

Inspired by Stuart Russell’s fourth and final Reith lecture, AI expert Hannah Fry and AI-curious Adam Rutherford imagine how we might build an artificial mind that knows what’s good for us and always does the right thing.

Can we ‘programme’ machine intelligence to always be aligned with the values of its human creators? Will it be suitably governed by a really, really long list of rules, or will it need a set of broad moral principles to guide its behaviour? If so, whose morals should we pick?

On hand to help Fry and Rutherford unpick the ethical quandaries of our fast-approaching future are Adrian Weller, Programme Director for AI at The Alan Turing Institute, and Brian Christian, author of The Alignment Problem.

Producer - Melanie Brown
Assistant Producer - Ilan Goodman