
AI Safety Newsletter

AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level?

06 Jun 2023

Description

Top Scientists Warn of Extinction Risks from AI

Last week, hundreds of AI scientists and notable public figures signed a public statement on AI risks written by the Center for AI Safety. The statement reads:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement was signed by a broad, diverse coalition: a historic group of AI experts, joined by philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists, establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems. The international community is [...]

---

Outline:

(00:10) Top Scientists Warn of Extinction Risks from AI
(03:35) Competitive Pressures in AI Development
(07:22) When Will AI Reach Human Level?
(12:47) Links

---

First published: June 6th, 2023

Source: https://newsletter.safe.ai/p/ai-safety-newsletter-9

---

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.
