Adam Kucharski
We know what combination of drugs will make a patient unconscious, but it's still not entirely clear why they work.
And yet, you'd probably still get the operation, just like you'd still take that flight.
For a long time, this lack of explanation didn't really bother me.
Throughout my career as a mathematician, I've worked to separate truth from fiction, whether investigating epidemics or designing new statistical methods.
But the world is complicated, and that's something I'd become comfortable with.
For example, if we want to know whether a new treatment is effective against a disease, we can run a clinical trial to get the answer.
It won't tell us why the treatment works, but it will give us the evidence we need to take action.
So I found it interesting that in other areas of life, a lack of explainability does visibly bother people.
Take AI.
One of the concerns about autonomous machines like self-driving cars is that we don't really understand why they make the decisions they do.
There will be some situations where we can get an idea of why they make mistakes.
Last year, a self-driving car blocked a fire truck responding to an emergency in Las Vegas.
The reason?
The fire truck was yellow, and the car had been trained to recognize red ones.
But even if the car had been trained to recognize yellow fire trucks, it wouldn't go through the same thought process we do when we see an emergency vehicle.
Self-driving AI views the world as a series of shapes and probabilities.
With sufficient training, it can convert this view into useful actions, but fundamentally, it's not seeing what we're seeing.
This tension between the benefits that computers can bring and the understanding that humans have to relinquish isn't new.
In 1976, two mathematicians named Kenneth Appel and Wolfgang Haken announced the first ever computer-aided proof.