
Astral Codex Ten Podcast

Why Not Slow AI Progress?

09 Aug 2022

Description

Machine Alignment Monday 8/8/22
https://astralcodexten.substack.com/p/why-not-slow-ai-progress

The Broader Fossil Fuel Community

Imagine if oil companies and environmental activists were both considered part of the broader "fossil fuel community". Exxon and Shell would be "fossil fuel capabilities"; Greenpeace and the Sierra Club would be "fossil fuel safety" - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.

This is how AI safety works now. AI capabilities - the work of researching bigger and better AI - is poorly differentiated from AI safety - the work of preventing AI from becoming dangerous. Two of the biggest AI safety teams are at DeepMind and OpenAI, i.e. the two biggest AI capabilities companies. Some labs straddle the line between capabilities and safety research.

Probably the people at DeepMind and OpenAI think this makes sense. Building AIs and aligning AIs could be complementary goals, like building airplanes and preventing the airplanes from crashing. It sounds superficially plausible. But a lot of people in AI safety believe that unaligned AI could end the world, that we don't know how to align AI yet, and that our best chance is to delay superintelligent AI until we do. Actively working on advancing AI seems like the opposite of that plan.

So maybe (the argument goes) we should take a cue from the environmental activists and be hostile toward AI companies. Nothing violent or illegal - doing violent, illegal things is the best way to lose 100% of your support immediately. But maybe glare a little at your friend who goes into AI capabilities research, instead of getting excited about how cool their new project is.
Or agitate for government regulation of AI - either because you trust the government to regulate wisely, or because you at least expect it to come up with burdensome rules that hamstring the industry. While there are salient examples of government regulatory failure, some regulations - like the EU's ban on GMOs or the US restrictions on nuclear power - have effectively stopped their respective industries.


