Tristan Harris
Podcast Appearances
But because that power is not bound with responsibility, there's no one preventing people from using that power in dangerous ways.
It's also increasing the risk of cyber hacking, flooding our environment with deepfakes, fraud, and scams, and dangerous things with biology.
Whatever the models can do, there's nothing stopping people from using them that way.
And so the end game of that is what we call chaos.
And that's one of the probable places that this can go.
In response to that, this other community in AI says that we should do this safely.
We should lock this up, have regulated AI control, just have a few trusted players.
And the benefit of that is that it's like a biosafety level four lab.
Like this is a dangerous activity.
We should do this in a safe lockdown way.
But because AI confers all this power, the million geniuses in a data center, and you can make crazy amounts of money with that, it creates the risk of unprecedented concentrations of wealth and power.
So who would you trust to be a million times more wealthy or powerful than anybody else, like any government or any CEO or any president?
So that's a different, difficult outcome.
Yes, exactly.
Yes, yes.
So understandably, people are not comfortable with the outcome, and that's what we call the dystopia attractor.
It's a second, different way to fail.
So there's chaos and dystopia.
But the good news is, rather than there being this dysfunctional debate where some people say acceleration is the answer and other people say safety is the answer, we actually need to walk the narrow path where...
We want to avoid chaos.