Yann LeCun
And some of them are going to succeed at making intelligent systems that are controllable and safe and have the right guardrails. And if some others go rogue, then we can use the good ones to go against the rogue ones. So it's going to be my smart AI police against your rogue AI. So it's not going to be like we're going to be exposed to a single rogue AI that's going to kill us all.
That's just not happening. Now, there is another fallacy, which is the fact that because the system is intelligent, it necessarily wants to take over.
There are several arguments that make people scared of this, which I think are completely false as well. One of them is that, in nature, it seems that the more intelligent species are the ones that end up dominating the others, and even extinguishing the others, sometimes by design, sometimes just by mistake.
And so there is a sort of thinking by which you say, well, if AI systems are more intelligent than us, surely they're going to eliminate us, if not by design, then simply because they don't care about us. And that's just preposterous for a number of reasons. First reason is they're not going to be a species. They're not going to be a species that competes with us.
They're not going to have the desire to dominate, because the desire to dominate is something that has to be hardwired into an intelligent system. It is hardwired in humans. It is hardwired in baboons, in chimpanzees, in wolves, not in orangutans. This desire to dominate or submit or attain status in other ways is specific to social species.
Non-social species like orangutans don't have it, right? And they are as smart as we are, almost, right?
Well, there's all kinds of incentive to make AI systems submissive to humans, right? I mean, this is the way we're going to build them, right? And so then people say, oh, but look at LLMs. LLMs are not controllable. And they're right, LLMs are not controllable. But objective-driven AI, so systems that derive their answers by optimization of an objective, means they have to optimize this objective.
And that objective can include guardrails. One guardrail is: obey humans. Another guardrail is: don't obey humans if it's hurting other humans.
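The idea described here, picking the action that best serves an objective while hard guardrail constraints filter out forbidden actions, can be sketched as a toy example. This is purely illustrative: the function and data names below are hypothetical and do not correspond to any real system or to how objective-driven AI would actually be implemented.

```python
# Toy sketch: objective-driven selection under guardrail constraints.
# All names here are hypothetical, invented for illustration only.

def choose_action(candidates, objective, guardrails):
    """Return the candidate maximizing the objective among those
    that satisfy every guardrail; refuse (None) if none qualify."""
    allowed = [a for a in candidates if all(g(a) for g in guardrails)]
    if not allowed:
        return None  # refusing is preferable to violating a guardrail
    return max(allowed, key=objective)

# Hypothetical candidate actions with a utility score and a harm flag.
candidates = [
    {"name": "help_user", "utility": 5, "harms_humans": False},
    {"name": "ignore_user", "utility": 1, "harms_humans": False},
    {"name": "harmful_shortcut", "utility": 9, "harms_humans": True},
]

# Guardrail: never pick an action flagged as harming humans,
# even if it scores highest on the raw objective.
guardrails = [lambda a: not a["harms_humans"]]

best = choose_action(candidates, objective=lambda a: a["utility"],
                     guardrails=guardrails)
print(best["name"])  # "help_user": top utility among guardrail-compliant actions
```

The key design point the sketch illustrates is that the guardrail is a hard constraint applied before optimization, not just another term added to the objective, so no amount of utility can buy a forbidden action.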