The podcast episode features a discussion between the host and Ishan Sharma, an AI practitioner, on various facets of artificial intelligence, with a particular focus on AI safety. The episode covers the following key points:

AI Safety Debate: Ishan explains the concept of "Grokking" (an AI's deep understanding of its data) and suggests that it contributes to mistrust in AI systems. He outlines two main camps in the AI safety debate: AI accelerationists, who downplay risks and advocate for rapid progress, and AI doomers, who emphasize the potentially catastrophic risks of AI.

Primary Concerns: Ishan mentions three major concerns regarding AI safety: the unpredictable emergence of capabilities, alignment with human values, and risks from deceptive AI.

Safety Interventions: Various safety measures are proposed, ranging from extreme actions, like pausing AI development, to more moderate ones, like better governance and oversight.

Current Limitations: Ishan points out that current AI systems, such as transformer architectures, are nearing their peak performance and that future advancements might require experiential learning akin to human experience.

Recorded September 10th, 2023.

Other ways to connect:
Follow us on X and Instagram
Follow Shubham on X
Follow Ishan on X