Dwarkesh Patel
I'll just use some open source model that might not be the smartest thing in the world, but is definitely smart enough to take a camera feed.
The more fundamental problem here is that even if the three leading companies draw a line in the sand, and are even willing to get destroyed in order to preserve that line, the technology just structurally and intrinsically favors uses like mass surveillance and control over the population.
And so then the question is, what do we do about it?
And honestly, I don't have an answer.
You'd hope that there's some symmetric property to this technology, where in the same way that it's helping the government be able to better monitor and control its population, it will help us as citizens better check the government's power.
But realistically, I just don't think that's how it's going to work out.
You can think of AI as just giving more leverage to whatever assets and authority that you already have.
And the government is starting with the monopoly on violence, which they can now supercharge with extremely obedient employees that will never question their orders.
And this gets us to the issue with alignment.
What I just described for you, an army of extremely obedient employees, is what it would look like if alignment succeeded.
That is, at a technical level, we got AI systems to follow somebody's intentions.
And the reason it sounds scary when put in terms of mass surveillance or robot armies is that there's a core question at the heart of alignment that we haven't answered yet.
Because up till now, AIs just have not been smart enough to make this question relevant.
And the question is, to what or to whom should the AIs be aligned?
In what situation should the AI defer to the model company versus the end user versus the law versus to its own sense of morality?
This is maybe the most important question about what happens in the future with powerful AI systems, and we barely talk about it.
And it's understandable why: if you're a model company, you don't really want to be advertising the fact that you have complete control over the preferences and the character of the entire future labor force, not just for the private sector, obviously, but also for the civilian government and for the military.
And we're getting to see with this Department of War and Anthropic spat, an early version of what will be the highest stakes negotiations in human history.