L. Rudolph L
They met at a house party in the Mission thrown by Researcher C, who left DeepMind last year and now runs a small alignment non-profit.
Researcher D at Google and Researcher E at Meta were roommates in graduate school and still share a group house with three other ML researchers who work at various startups.
The safety lead at one major lab and the policy director at another were in the same MIRI summer program in 2017.
The CEO of one frontier lab and the chief scientist of another served on the same non-profit board.
This is not corruption in any conventional sense.
It is simply how small, specialized communities work.
The official story is that the AI labs are competitors.
But the social topology undermines this story.
When researchers move fluidly between organizations, they carry knowledge, assumptions, and culture with them.
The result is a kind of Uniparty: a shared culture that supersedes corporate affiliation.
The Uniparty has its own beliefs (that AGI is coming relatively soon, that the current paradigm will scale, that technical alignment work is tractable), its own values (intellectual rigour, effective altruism, cosmopolitan liberalism), and its own taboos (excessive pessimism, appeals to regulation, anything that smacks of Luddism).
These shared beliefs, values, and taboos operate across organisational boundaries, creating a remarkable homogeneity of outlook among people who are nominally competitors.
The Uniparty's shared premises include: that intelligence is the key variable in the future of civilization; that artificial intelligence will soon exceed human intelligence; that the people currently working on AI are therefore the most important people in history; and that their technical and intellectual capabilities qualify them to make decisions for humanity.
These premises are rarely stated explicitly, but they structure everything.
They explain why the community can tolerate such high levels of risk: the alternative, letting less capable people control development, seems even worse.
The taboos, in turn, mark the boundaries of acceptable belief. One cannot believe that AI development should stop entirely.
One cannot believe that the risks are so severe that no level of benefit justifies them.
One cannot believe that the people currently working on AI are not the right people to be making these decisions.
One cannot believe that traditional political processes might be better equipped to govern AI development than the informal governance of the research community.