
AI Safety Newsletter

AISN #11: An Overview of Catastrophic AI Risks.

22 Jun 2023

Description

An Overview of Catastrophic AI Risks

Global leaders are concerned that artificial intelligence could pose catastrophic risks. 42% of CEOs polled at the Yale CEO Summit agree that AI could destroy humanity in five to ten years. The Secretary General of the United Nations said we "must take these warnings seriously." Amid all these frightening polls and public statements, there's a simple question worth asking: why exactly is AI such a risk?

The Center for AI Safety has released a new paper to provide a clear and comprehensive answer to this question. We detail the precise risks posed by AI, the structural dynamics making these problems so difficult to solve, and the technical, social, and political responses required to overcome this [...]

Outline:
(00:08) An Overview of Catastrophic AI Risks
(00:56) Malicious actors can use AIs to cause harm.
(02:18) Racing towards an AI disaster.
(04:05) Safety should be a goal, not a constraint.
(05:46) The challenge of AI control.
(07:53) Positive visions for the future of AI.
(09:02) Links

First published: June 22nd, 2023
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-11

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.


