
AI Safety Newsletter

AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities

29 Aug 2023

Description

AI Deception: Examples, Risks, Solutions

AI deception is the topic of a new paper from researchers at and affiliated with the Center for AI Safety. It surveys empirical examples of AI deception, then explores societal risks and potential solutions.

The paper defines deception as “the systematic production of false beliefs in others as a means to accomplish some outcome other than the truth.” Importantly, this definition doesn't necessarily imply that AIs have beliefs or intentions. Instead, it focuses on patterns of behavior that regularly cause false beliefs and would be considered deceptive if exhibited by humans.

Deception by Meta’s CICERO AI. Meta developed the AI system CICERO to play Diplomacy, a game where players build and betray alliances in [...]

Outline:
(00:11) AI Deception: Examples, Risks, Solutions
(04:35) Proliferation of Large Language Models
(09:25) Continuing Drivers of AI Capabilities
(14:30) Links

First published: August 29th, 2023
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-20

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.
