Sam Harris
I mean, I know it sounds like you're not worried that LLMs will produce such a thing, but in principle, are you worried?
Do you take I. J. Good's and others' early fears seriously, that once we build AGI, on the basis of whatever platform, we're in the presence of something that can become recursively self-improving and get away from us?
Yes.
Yeah, yeah.
Are you worried that the field is operating under a kind of a system of incentives, essentially an arms race that is going to select for reckless behavior?
If there is this potential failure mode of building something that destroys us, it seems, at least from the statements of the people who are doing this work, you know, the people who are running the major companies,
the probability of encountering such existential risk is, in their minds, pretty high.
I mean, we're not hearing people like Sam Altman say, oh yeah, I think the chances are one in a million that we're going to destroy the future with this technology.
They're putting the chances at like 20%, and yet they're still going as fast as possible.
Doesn't an arms race seem like the worst condition to do this carefully?
But what I find alarming about those utterances is this: imagine if the physicists who gave us the bomb, the Manhattan Project, when asked about their initial concern that it might ignite the atmosphere and destroy all life on planet Earth, had been the ones saying,
yeah, maybe it's 20%, maybe it's 15%, and yet they were still moving forward with the work. That would have been alarming.
But of course, that's not what they were saying.
They did some calculation and they put the chances to be infinitesimal, though not zero.
It just seems bizarre culturally to have the people doing the work expressing these odds.
Fallaciously or not, I'll grant you that all of this is made up and it's hard to come up with a rational estimate. But for the people doing the work, plowing trillions of dollars into the build-out of AI, to be giving numbers like 20% seems culturally strange.
Do you have any thoughts about how a system would have to be built so as to be perpetually aligned with our interests?
I mean, if you're taking intelligence seriously, right?
So we're talking about building an autonomous intelligent system that exceeds our own intelligence and, in the limit, improves itself, one would imagine.