Alex Lintner
We assume that if we provide the following information to a person as an assistant in their workflow, they are going to use it that way, and therefore it's a good thing.
Well, we're not always right.
You know, I sometimes compare it with this: I drive a Mercedes, and I can talk to the car. It has a map I can talk to and say, hey, Mercedes, tonight I'm seeing the Colorado Avalanche play hockey, take me to Ball Arena in Denver.
And it will put in the directions from where I'm at, and I will be taken there.
I got so used to the tool that I now listen to it all the time, even though I know the area really well, and sometimes it doesn't give me the right route.
Is that good?
Are you describing a good outcome?
No, it's not a good outcome.
And that is the outcome you want to avoid.
That's the answer to your question.
If you trust AI to the point where you blindly follow it, and you don't check it yourself, like the data scientist in the example we discussed a couple of minutes ago,
it carries risk.
So the real job that we have is to make sure that doesn't happen.
And that the interaction with the human still happens.
You can force that interaction in, rather than letting the AI automatically do what it does.
In my car, I can turn on the lights.
I can turn on the radio.