Adam Kucharski
Let's go back to self-driving cars.
A common thought experiment when it comes to AI is what's known as the trolley problem.
Suppose we have a heavy trolley or a big car and it's going to hit a group of people.
But you have the option of pulling a lever to divert the vehicle so it hits only one person.
Would you pull that lever?
And would it matter whether the people are old or young?
These kinds of decisions can sometimes crop up in real life with human drivers.
In 2020, a car in Michigan swerved to avoid a truck and hit a young couple walking on the pavement, putting them in hospital for several months.
Would AI have reacted differently?
Well, it turned out that the driver was also racing side by side with another vehicle at the time, and didn't have a valid license.
Before we get too deep into theoretical dilemmas, we should remember that humans often aren't very good drivers.
If we could ensure there were far fewer accidents on our roads, would you mind being unable to explain the ones that did happen?
In this complex world of ours, maybe we should just abandon the pursuit of explanation altogether.
After all, many data-driven areas of science increasingly focus on prediction, because prediction is fundamentally an easier problem than explanation.
As with anesthesia, we can often make useful predictions about what something will do without fully understanding it.
But explanation can sometimes really matter if we want a better world.
The focus on prediction is particularly troubling in the field of justice.
Increasingly, algorithms are used to decide whether to release people on bail or parole.
The computer isn't deciding whether they've committed a crime.
In effect, it's predicting whether they'll commit one in the future.