Yann LeCun
Podcast Appearances
And that objective can include guardrails. One guardrail is: obey humans. Another guardrail is: don't obey humans if it's hurting other humans.
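A minimal sketch of how such a prioritized guardrail scheme could be expressed in code. Everything here is hypothetical: the Action fields and choose_action function are illustrative stand-ins, not any real system's API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: float              # task objective: lower is better
    obeys_instruction: bool  # soft guardrail: obey humans
    harms_humans: bool       # hard guardrail: never harm humans

def choose_action(candidates):
    # Hard guardrail first: discard anything that harms humans,
    # even if a human asked for it.
    safe = [a for a in candidates if not a.harms_humans]
    if not safe:
        return None  # refuse rather than violate the hard guardrail
    # Soft guardrail next: among safe actions, prefer obedient ones.
    obedient = [a for a in safe if a.obeys_instruction] or safe
    # Finally, minimize the task objective among what remains.
    return min(obedient, key=lambda a: a.cost)

actions = [
    Action("comply, cutting corners", cost=1.0, obeys_instruction=True,  harms_humans=True),
    Action("comply safely",           cost=2.0, obeys_instruction=True,  harms_humans=False),
    Action("do nothing",              cost=5.0, obeys_instruction=False, harms_humans=False),
]
print(choose_action(actions).name)  # -> "comply safely"
```

The point of the sketch is the ordering: the no-harm rule filters the candidate set before obedience or task cost is ever considered, which is what makes one guardrail override the other.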
Yes. Maybe in a book.
No, of course. This is not a simple problem, right? Designing those guardrails so that the system behaves properly is not an issue for which there is a silver bullet, or for which you have a mathematical proof that the system will be safe.
It's going to be a very progressive, iterative design process: we put guardrails in place so that the system behaves properly, and when it does something unexpected because a guardrail wasn't right, we correct it so that it behaves correctly the next time.
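To make that iterative picture concrete, here is a toy loop (entirely hypothetical; nothing here models a real AI system) showing the deploy-observe-correct cycle being described:

```python
import random

# Toy sketch of the iterative process described above: run the system,
# catch behavior a guardrail should have blocked, tighten the rules, repeat.
random.seed(0)

def run_system(forbidden):
    """Simulate the system emitting behaviors; return the undesirable
    ones that slipped past the current guardrails."""
    behaviors = random.choices(["help", "overshoot", "shortcut"], k=20)
    return [b for b in behaviors if b != "help" and b not in forbidden]

forbidden = {"overshoot"}            # the guardrails we thought were enough
for round_ in range(10):
    slipped = run_system(forbidden)
    if not slipped:
        break                        # no unexpected behavior this round
    forbidden |= set(slipped)        # correct the guardrails and try again
print(f"converged after {round_ + 1} rounds: {sorted(forbidden)}")
```

The design choice mirrored here is that the guardrail set is not fixed up front: it is patched each time observed behavior reveals a gap.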
The idea that we can't get it slightly wrong, because if we get it slightly wrong we all die, is ridiculous. We're just going to go progressively. The analogy I've used many times is turbojet design. How did we figure out how to make turbojets so unbelievably reliable? I mean, those are incredibly complex
pieces of hardware that run at really high temperatures, sometimes for 20 hours at a time. We can fly halfway around the world on a two-engine jetliner at near the speed of sound. How incredible is that? Did we do this because we invented a general principle of how to make turbojets safe? No.
It took decades to fine-tune the design of those systems so that they were safe. Is there a separate group within General Electric or Snecma that specializes in turbojet safety? No. The design is all about safety, because a better turbojet is also a safer, more reliable turbojet. It's the same for AI. Do you need specific provisions to make AI safe?