Michael Pollan
We would lose control of them completely by giving them rights.
But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people.
Not to mention the animals that we eat that we know are conscious.
So we're going to start worrying about the computers.
That seems like our priorities are screwed up.
Well, yeah.
Think about Frankenstein, right?
I mean, Frankenstein's monster didn't just have human intelligence, which would have been one thing.
It also had consciousness.
And it was the consciousness which got injured by the way he was treated by humans that turned him into a homicidal maniac.
So people in Silicon Valley say, yeah, a conscious AI is going to be more responsible because it'll have empathy.
I don't think we should assume that.
I think Frankenstein is a good cautionary tale about giving consciousness to your creation: why should it have any more conscience than lots of humans do?
So I think they're kidding themselves about that.
Yeah, this was really interesting.
So I heard about this guy named Russell Hurlburt, who's a psychologist at the University of Nevada, Las Vegas.
And for the last 50 years, he's been doing an experiment to sample what he calls people's inner experience.
And the idea is you wear a beeper.
And it goes off at random times during the day.
And he gives you a little pad and you write down exactly what you were thinking at that moment.