
EA Forum Podcast (All audio)

“What I Think An AI Safety Givewell For Video Work Should Look Like” by Michaël Trazzi

16 Sep 2025

Description

A few days ago, Austin Chen and Marcus Abramovitch published "How cost-effective are AI safety YouTubers?", an "early work on 'GiveWell for AI Safety'" that ranks different interventions in the AI safety video space, using a framework that measures impact by multiplying watch time by three quality factors (Quality of Audience, Fidelity of Message and Alignment of Message):

Quality-adjusted viewer minutes = Views × Video length × Watch % × Q_a × Q_f × Q_m

The goal of this post is to explain to what extent I think this framework is useful, what I think it got wrong, and to provide some additional criteria that I would personally want to see in a more comprehensive "AI Safety GiveWell for video work".

tl;dr: I think Austin and Marcus's framework has a lot of good elements, especially the three factors Quality, Fidelity and Alignment of message. Viewer minutes is the wrong proxy [...]

Outline:
(00:58) tl;dr
(01:45) What The Framework Gets Right
(02:50) Limitation: Viewer Minutes
(04:55) Siliconversations And The Art of Call to Action
(07:51) Moving Talent Through The Video Pipeline
(09:46) Categorizing The Video Pipeline
(12:18) Original Video Content
(14:21) Conclusion

First published: September 15th, 2025

Source: https://forum.effectivealtruism.org/posts/d9kEfvKq3uqwjeRFJ/what-i-think-an-ai-safety-givewell-for-video-work-should

Narrated by TYPE III AUDIO.
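For concreteness, here is a minimal sketch in Python of how the framework's formula composes. The Video fields and the example numbers are hypothetical illustrations, not values from the post:

```python
# Minimal sketch of the quality-adjusted viewer minute (QAVM) formula above.
# All field names and example numbers are illustrative, not from the post.
from dataclasses import dataclass

@dataclass
class Video:
    views: int            # total views
    length_min: float     # video length in minutes
    watch_pct: float      # average fraction of the video watched (0-1)
    q_audience: float     # Q_a: quality-of-audience multiplier
    q_fidelity: float     # Q_f: fidelity-of-message multiplier
    q_alignment: float    # Q_m: alignment-of-message multiplier

    def qavm(self) -> float:
        """Views × Video length × Watch % × Q_a × Q_f × Q_m."""
        return (self.views * self.length_min * self.watch_pct
                * self.q_audience * self.q_fidelity * self.q_alignment)

# Hypothetical example: a 12-minute video with 50k views and 40% average watch time.
video = Video(views=50_000, length_min=12, watch_pct=0.4,
              q_audience=1.2, q_fidelity=0.9, q_alignment=1.0)
print(f"{video.qavm():,.0f} quality-adjusted viewer minutes")
```

Since the three quality factors are pure multipliers, halving any one of them halves the final score, which is part of what the post goes on to question when it argues that viewer minutes are the wrong base proxy.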
