Stuart Russell
You get into trouble with humans.
As I would say with corporations, in fact. Some people argue that, you know, we don't have to look forward to a time when AI systems take over the world.
They already have, and they're called corporations, right?
Corporations happen to be using people as components right now, but they are effectively algorithmic machines, and they're optimizing an objective, quarterly profit, that isn't aligned with the overall well-being of the human race, and they are destroying the world.
They are primarily responsible for our inability to tackle climate change.
So I think that's one way of thinking about what's going on with corporations.
I think the point you're making is valid, that there are many systems in the real world where we've sort of prematurely fixed on the objective and then decoupled the machine from those that it's supposed to be serving.
And I think you see this with government systems.
Government is supposed to be a machine that serves people, but instead it tends to be taken over by people who have their own objective and use government to optimize that objective regardless of what people want.
Yeah, I think that the nature of debate and disagreement and argument takes as a premise the idea that you could be wrong, which means that you're not necessarily absolutely convinced that your objective is the correct one.
Right. If you were absolutely certain, there'd be no point in having any discussion or argument, because you would never change your mind, and there wouldn't be any sort of synthesis or anything like that. So I think you can think of argumentation as an implementation of a form of uncertain reasoning.
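[One standard formalism for uncertain reasoning is Bayesian updating; reading argumentation through that lens is a gloss on the remark above, not something spelled out in the conversation. In the sketch below, H is your current position and E is an opposing argument:

% Bayes' rule: how an opposing argument E should shift credence in H.
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
\]
% Only a listener whose prior credence satisfies P(H) < 1 can be moved:
% if P(H) = 1, then P(E) = P(E | H), so P(H | E) = 1 for any argument E,
% and debate can change nothing. That is exactly the premise described above.]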
You know, I've been reading recently about utilitarianism and the history of efforts to define, in a sort of clear mathematical way, if you like, a formula for moral or political decision-making.
And the parallels between the philosophical discussions going back 200 years and what you see now in discussions about existential risk are really interesting, because it's almost exactly the same debate.
So someone would say, okay, well, here's a formula for how we should make decisions, right?
So utilitarianism is roughly, you know, that each person has a utility function, and then we make decisions to maximize the sum of everybody's utility.
And then people point out: well, you know, in that case, the best policy is one that leads to an enormously vast population, all of whom are living lives that are barely worth living.
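[That objection is Derek Parfit's "repugnant conclusion," and the arithmetic behind it is easy to make concrete. A minimal sketch follows; the population sizes and utility values are illustrative assumptions, not figures from the conversation:

% Total utilitarianism: choose the outcome maximizing the sum of utilities.
\[
  U \;=\; \sum_{i=1}^{N} u_i \;=\; N \cdot \bar{u}
\]
% World A (illustrative): N_A = 10^6 people with very good lives,
%   average utility \bar{u}_A = 100, so U_A = 10^8.
% World Z (illustrative): N_Z = 10^{11} people with lives barely worth
%   living, \bar{u}_Z = 0.01, so U_Z = 10^9.
% Since U_Z > U_A, the sum-of-utilities rule prefers the vast,
% barely-happy population: sheer numbers swamp quality of life.]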