Scott Alexander (Astral Codex Ten)
The subreddit discusses career planning in a post-GPT world.
L Rudolf L, author of the post on capital/labor in the singularity that I discussed here (link in post), has a proposed history-of-the-future scenario (three links in the post) tracking what he thinks will happen from now to 2040.
Extremely slow takeoff, assumes alignment will be solved, etc.
I want to challenge some of these assumptions, but will hold off until a different scenario I'm waiting on gets published.
The part I found most interesting here is Rudolf's suggestion that there will be neither universal unemployment nor UBI, but a sort of vapid jobs program where, even after AI can make all decisions without human input, the government passes regulations mandating that humans be "in the loop," using safety as a fig leaf.
And we get a world where everyone works 40-hour weeks attending useless meetings where everyone tells each other what the AIs did, then rubber stamps it.
Sort of like the longshoremen's hereditary fiefdoms that were in the news last year.
Boaz Barak, a friend of Scott Aaronson's now working on OpenAI's alignment team, has six thoughts on AI safety.
It's all pretty moderate and thoughtful stuff.
What I find interesting about it is that the acknowledgements say Sam Altman provided feedback, although he "does not necessarily endorse any of its views."
I think this is a useful window into OpenAI's current alignment thinking, or at least into the fact that they currently have alignment thinking.
Not much to complain about in terms of specifics, and glad people like Boaz are involved.
If you ask Grok 3 "who is the worst spreader of misinformation?", it will say Elon.
If you ask it who deserves the death penalty, it will say Trump, with Elon close behind.