
AI Safety Newsletter

AISN #8: Why AI could go rogue, how to screen for AI risks, and grants for research on democratic governance of AI.

30 May 2023

Description

Yoshua Bengio makes the case for rogue AI

AI systems pose a variety of different risks. Renowned AI scientist Yoshua Bengio recently argued for one particularly concerning possibility: that advanced AI agents could pursue goals in conflict with human values. Human intelligence has accomplished impressive feats, from flying to the moon to building nuclear weapons. But Bengio argues that across a range of important intellectual, economic, and social activities, human intelligence could be matched and even surpassed by AI. How would advanced AIs change our world? Many technologies, such as toasters and calculators, are tools that humans use to accomplish our goals. AIs are different, Bengio says. [...]

---

Outline:
(00:11) Yoshua Bengio makes the case for rogue AI
(05:11) How to screen AIs for extreme risks
(09:12) Funding for Work on Democratic Inputs to AI
(10:43) Links

---

First published: May 30th, 2023
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-8

---

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.


