Joshua Greene
So you should save the people in the car.
Right.
But then people pushed back and were like, oh, so you've got these, you know, basically bigoted cars that are only going to care about the people inside them.
And then Mercedes said, no, no, no, no, no.
That's not what we mean.
No car should ever make any value judgments at all.
Which, of course, is actually impossible, but it's good PR, right?
But critically, someone who's inside the car might be much more protected than a pedestrian or a cyclist.
So cars have to deal with this stuff.
Now, it's pretty clear that we're not going to be able to solve these problems with
a hard and fast set of rules.
So if you're training the car in simulation, let's say, and it does different things, say it swerves around a cyclist and doesn't hit the cyclist, but almost does.
Is that a win or a lose?
Is that something you want to reinforce with your machine learning algorithm?
Or is that something you want to dissuade?
So there are value judgments that are made in training.
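The point about value judgments baked into training can be made concrete with a toy reward function. This is a minimal sketch, not any real autonomous-driving stack: the function name, the 1.5 m "safe gap" threshold, and the penalty weight are all illustrative assumptions. The key observation is that whether a near-miss counts as a win or a loss is decided by whoever picks these numbers.

```python
# Hypothetical reward shaping for a simulated maneuver around a cyclist.
# Deciding how to score a near-miss is itself a value judgment, encoded
# here as a chosen threshold and penalty weight (both assumptions).

def step_reward(collided: bool, min_gap_m: float,
                safe_gap_m: float = 1.5,
                near_miss_penalty: float = 5.0) -> float:
    """Score one simulated pass of a cyclist.

    collided          -- True if the vehicle struck the cyclist
    min_gap_m         -- closest distance (meters) to the cyclist
    safe_gap_m        -- gap below which we start penalizing (a judgment)
    near_miss_penalty -- how harshly close passes are scored (a judgment)
    """
    if collided:
        return -100.0  # collisions are unambiguously penalized
    if min_gap_m < safe_gap_m:
        # Swerving past at 0.1 m technically "succeeds", but a linear
        # penalty discourages reinforcing it as good driving.
        return -near_miss_penalty * (safe_gap_m - min_gap_m) / safe_gap_m
    return 1.0  # clean pass with a comfortable margin: reinforce


print(step_reward(collided=False, min_gap_m=0.1))  # near-miss: negative reward
print(step_reward(collided=False, min_gap_m=2.0))  # clean pass: 1.0
```

Shifting `safe_gap_m` or `near_miss_penalty` changes which behaviors the learning algorithm reinforces, which is exactly the sense in which the training pipeline, not just the deployed car, embeds ethical choices.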
Yeah, so this is not research that I've done, but I can tell you, people have looked at lots of different
neurodivergent conditions, some of which would go under the heading of psychopathology and others not.
You mentioned psychopathy.
This is something that's been studied, and what you find is that people who have been diagnosed with psychopathy are more likely to say that it's okay to push the guy off the footbridge.