Tristan Harris
You can do all those things, but then the question is, will we be able to control that technology or will it not be hackable?
And right now... Well, the government will control it.
I'd be incredibly obedient in a world where robots are strolling the streets and, if I do anything wrong, they can evaporate me, lock me up, or take me away... We often say that the future right now is sort of one of two outcomes: either you mass-decentralize this technology for everyone...
And that creates catastrophes that rule of law doesn't know how to prevent.
Or this technology gets centralized in other companies or governments and can create mass surveillance states or automated robot armies or police officers that are controlled by single entities that can tell them to do anything that they want and cannot be checked by the regular people.
And so we're heading towards catastrophes and dystopias.
And the point is that both of these outcomes are undesirable.
We have to have something like a narrow path that preserves checks and balances on power, that prevents decentralized catastrophes and prevents runaway power concentration in which people are totally and forever and irreversibly disempowered.
Governments have an incentive to increasingly use AI to surveil and control the population.
If we don't want that to be the case, that pressure has to be exerted now before that happens.
And I think of it as when you increase power, you have to also increase counter rights to defend against that power.
So for example, we didn't need the right to be forgotten until technology had the power to remember us forever.
We didn't need the right to our likeness until AI could just suck up your likeness with three seconds of your voice, or look at all your photos online and make an avatar of you.
We didn't need the right to our cognitive liberty until AI could manipulate our deep cognition because it knows us so well.
So anytime you increase power, you have to increase the oppositional forces of the rights and protections that we have.
That's one of the other aspects of this quasi-religious, godlike framing: that it's not even seen as a bad thing.
The quote I read you at the beginning, about biological life being replaced by digital life — they actually think that we shouldn't feel bad.
Richard Sutton, a famous Turing Award-winning AI scientist who invented, I think, reinforcement learning, says that we shouldn't fear the succession of our species into this digital species.
And that whether we all go away is not actually of concern, because we will have birthed something that is more intelligent than us.