Tristan Harris
It's making decisions.
AI can contemplate AI and ask what would make the code that trains AI more efficient and then generate new code that's even more efficient than the previous code.
AI can be applied to making AI go faster.
So AI can look at the chip design for Nvidia chips that train AI and say, let me use AI to make those chips 20% more efficient, which it's doing.
So, in a way, all technology does improve.
Like, a hammer is a tool you can use to hammer out more efficient hammers.
But AI closes that loop much more tightly: AI itself becomes the basis of the improvement.
And so this is called, in the AI literature, recursive self-improvement.
I mean, Bostrom wrote about this in the early, early days.
And what people are most worried about in AI is that you take the same system you just saw in the Alibaba example,
but now you're running the AI through a recursive self-improvement loop where you just hit go.
And instead of having the engineers, the human engineers at OpenAI or Anthropic do AI research and figure out how to improve AI, you now have a million digital AI researchers that are testing and running experiments and inventing new forms of AI.
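The loop he's describing can be sketched as a toy simulation. To be clear, this is an invented illustration, not anything from the transcript: the functions, the 5% rate, and the step counts are all arbitrary assumptions, chosen only to show why an improvement rate that feeds back on itself compounds faster than a fixed one.

```python
# Toy sketch of recursive self-improvement vs. fixed-rate improvement.
# All numbers here are arbitrary assumptions for illustration only.

def fixed_improvement(capability: float, steps: int, rate: float = 0.05) -> float:
    """Human researchers improve the system by a constant 5% per step."""
    for _ in range(steps):
        capability *= 1 + rate
    return capability

def recursive_improvement(capability: float, steps: int) -> float:
    """The improvement rate itself scales with capability:
    a more capable system is a better AI researcher."""
    for _ in range(steps):
        rate = 0.05 * capability  # each gain raises the rate of the next gain
        capability *= 1 + rate
    return capability

fixed = fixed_improvement(1.0, 10)
recursive = recursive_improvement(1.0, 10)
# The recursive loop pulls ahead because every improvement feeds the next one.
```

Under these toy assumptions, the fixed loop grows geometrically at a constant rate, while the recursive loop's rate keeps climbing, which is the "tighter loop" being described.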
And literally not a single human on planet Earth knows what happens when someone hits that button.
It's like what people worried about with the first nuclear explosion, where there was a chance of igniting the atmosphere, because a chain reaction would be set off,
and no one knew what would happen once that chain reaction got going.
And there's this sort of chain reaction of AI improving itself that leads to a place that no one knows.
And it's not safe.
Like, I think the fundamental thing is:
if people believe that AI is power, that I have to race for that power, and that I can control that power, then the incentive is to race as fast as possible.