Brian O’Malley
And I think we've all been trained our entire lives that people make mistakes and that that's just a part of living.
And so if you get in a little fender bender with someone else, it's frustrating, but there's some understanding that people are going to screw things up.
When computers screw things up, people don't really know how to process that.
I think there's a similar example: when my flight's been canceled, I call United and I kind of yell "agent" until they finally put me through to an actual human.
And in that case, I've had that person screw up and book me the wrong flight.
But you're more understanding that people are going to make mistakes. When the computer systems make mistakes, I think people ultimately get a lot more frustrated.
And so that's why, for some of these businesses, even having a human front end to provide some level of empathy and some level of connection matters: if you screw it up or get it wrong, there's going to be a greater level of forgiveness than there currently is for these systems.
And so I think you talked about Tesla earlier.
This is something that we're seeing in self-driving, but it's going to move all the way through other areas as people run into more challenges.
And the reality is we're sitting at a point right now where there's a complete lack of trust in big tech.
I would say this is kind of the lowest that it's been in a long period of time.
So people don't fully trust big tech to have their best interests at heart.
And so when it screws up, not only are you frustrated, but you're also wondering, is this one of these examples where I'm not the customer, I'm the product, and there's someone else who's ultimately determining how this plays out?
There's a study from 2023 that pitted AI against cardiologists.
So on the first pass, the AI was off by more.
And then when they got to the final gold standard answer, the cardiologists were off on average by 3.77%.
So again, to this point: is AI perfect? No.
It should be at, you know, hopefully a 1% error rate if it's something as important as somebody's heart.
So how many lives are being hurt by not applying more AI?