
L Rudolf L

Speaker
159 total appearances


Podcast Appearances

LessWrong (Curated & Popular)
"The Possessed Machines (summary)" by L Rudolf L

They met at a house party in the Mission thrown by Researcher C, who left DeepMind last year and now runs a small alignment non-profit.

Researcher D at Google and Researcher E at Meta were roommates in graduate school and still share a group house with three other ML researchers who work at various startups.

The safety lead at one major lab and the policy director at another were in the same MIRI summer program in 2017.

The CEO of one frontier lab and the chief scientist of another served on the same non-profit board.

This is not corruption in any conventional sense.

It is simply how small, specialized communities work.

The official story is that the AI labs are competitors.

But the social topology undermines this story.

When researchers move fluidly between organizations, they carry knowledge, assumptions, and culture with them.

The result is a kind of uniparty: a shared culture that supersedes corporate affiliation.

The uniparty has its own beliefs: that AGI is coming relatively soon, that the current paradigm will scale, that technical alignment work is tractable. It has its own values: intellectual rigour, effective altruism, cosmopolitan liberalism. And it has its own taboos: excessive pessimism, appeals to regulation, anything that smacks of luddism.

These shared beliefs, values, and taboos operate across organisational boundaries, creating a remarkable homogeneity of outlook among people who are nominally competitors.

The AI uniparty's shared premises include: that intelligence is the key variable in the future of civilization; that artificial intelligence will soon exceed human intelligence; that the people currently working on AI are therefore the most important people in history; and that their technical and intellectual capabilities qualify them to make decisions for humanity.

These premises are rarely stated explicitly, but they structure everything.

They explain why the community can tolerate such high levels of risk: the alternative, letting less capable people control development, seems even worse.

One cannot believe that AI development should stop entirely.

One cannot believe that the risks are so severe that no level of benefit justifies them.

One cannot believe that the people currently working on AI are not the right people to be making these decisions.

One cannot believe that traditional political processes might be better equipped to govern AI development than the informal governance of the research community.