Tristan Harris
It only took, what, like 50 Nobel Prize-level scientists to make the Manhattan Project, the nuclear bomb.
And it only took a couple of Nobel Prize-level scientists to make CRISPR, which is the ability to read and write DNA.
So if you can have a hundred million Nobel Prize-winning, sort of, minds working on creating new scientific discoveries, some of those things are going to be insanely dangerous.
And as Tristan says, we can't conceptualize them.
So the bottom line is we need to do...
We need to regulate, we need to have laws, and we need to have international limits, because the whole world has no interest in building dangerous AI that we lose control of.
Think about it: China would not want the US to build dangerous AI that we lose control of.
The US doesn't want China to build AI that they lose control of, meaning that we all- But we're both racing to get to
What?
A crazier, more uncontrollable form of AI.
Because right now, as we're making AIs, there's a 2,000-to-one gap between the money going into making AI more powerful and the money going into making AI more safe or controllable.
2,000 to one.
2,000 to one gap.
You said to me backstage that there's more regulation on a sandwich.
There's more regulation on a sandwich in New York City than there is on building potentially world-ending AGI.
This is not rocket science.
This is very, very basic.
If there's danger up ahead, the point that Asa made is that if we all saw what we're building as dangerous, which it is, then intrinsically everyone would start to take actions.
Actions that we can't even predict.
But I think everybody sort of...