Guest: Nicholas Carlini, Research Scientist @ Google

Topics:
- What is your threat model for a large-scale AI system? How do you approach this problem?
- How do you rank the attacks? How do you judge whether an attack is something to mitigate? How do you separate realistic attacks from theoretical ones?
- Are there AI threats that were theoretical in 2020 but may become a daily occurrence in 2025?
- What are the threat-derived lessons for securing AI?
- Do we practice the same or different approaches for secure AI and reliable AI?
- How does the relative lack of transparency in AI help (or hurt?) attackers and defenders?

Resources:
- "Red Teaming AI Systems: The Path, the Prospect and the Perils" at RSA 2022
- "Killed by AI Much? A Rise of Non-deterministic Security!"
- Books on Adversarial ML