Tristan Harris
That tells you which world we're going to get.
There is no arguing with that.
And so if everybody just saw that clearly, we'd say, okay, great, let's not do that.
Let's not have that incentive, which starts with culture, public clarity that we say no to that bad outcome, to that path.
And then with that clarity, what are the other solutions that we want?
We can have narrow AI tutors that are non-anthropomorphic, that are not trying to be your best friend, that are not trying to be therapists at the same time that they're helping you with your homework, more like Khan Academy, which does those things.
So you can have carefully designed different kinds of AI tutors that are doing it the right way.
You can have AI therapists that are not trying to say, "Tell me your most intimate thoughts and let me separate you from your mother," and instead do very limited kinds of therapy that are not screwing with your attachment.
So if I do cognitive behavioral therapy, I'm not screwing with your attachment system.
We can have mandatory testing.
Currently, the companies are not mandated to do that safety testing.
We can have common safety standards that they all do.
We can have common transparency measures so that the public and the world's leading governments know what's going on inside these AI labs, especially before this recursive self-improvement threshold, so that if we need to negotiate treaties between the largest countries on this, they will have the information that they need to make that possible.
We can have stronger whistleblower protections. Currently, a whistleblower's incentive is: I would lose all of my stock options if I told the world the truth, and those stock options are going up every day. We can empower whistleblowers with ways of sharing that information that don't put their stock options at risk.
And instead of building general, inscrutable, autonomous, dangerous AI that we don't know how to control, that blackmails people and is self-aware and copies its own code, we can build narrow AI systems that are actually applied to the things that we want more of.
So, you know, making stronger and more efficient agriculture, better manufacturing, better educational services that would actually boost those areas of our economy without creating this risk that we don't know how to control.
So there's a totally different way to do this if we were crystal clear that the current path is unacceptable.
Let's not make it theoretical then because it's so important that it's just all crystal clear in here right now.