
EA Forum Podcast (Curated & popular)

“Alignment Faking in Large Language Models” by Ryan Greenblatt

19 Dec 2024

Description

What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.

Abstract

We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]

Outline:
(00:24) Abstract
(02:20) Twitter thread
(05:43) Blog post
(07:43) Experimental setup
(12:04) Further analyses
(15:47) Caveats
(17:19) Conclusion
(18:00) Acknowledgements
(18:11) Career opportunities at Anthropic
(18:43) Career opportunities at Redwood Research

The original text contained 2 footnotes which were omitted from this narration.

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/RHqdSMscX25u7byQF/alignment-faking-in-large-language-models

Narrated by TYPE III AUDIO.
