Yann LeCun

👤 Person
1086 total appearances

Podcast Appearances

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

And that objective can include guardrails. One guardrail is... Obey humans. Another guardrail is don't obey humans if it's hurting other humans.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

Yes. Maybe in a book.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

No, of course. So this is not a simple problem, right? I mean, designing those guardrails so that the system behaves properly is not going to be a simple issue for which there is a silver bullet, for which you have a mathematical proof that the system can be safe.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It's going to be a very progressive, iterative design system where we put those guardrails in such a way that the system behaves properly. And sometimes they're going to do something that was unexpected because the guardrail wasn't right, and we're going to correct them so that they do it right.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

The idea somehow that we can't get it slightly wrong because if we get it slightly wrong, we all die is ridiculous. We're just going to go progressively. And it's just going to be, the analogy I've used many times is turbojet design. How did we figure out how to make turbojets so unbelievably reliable, right? I mean, those are like, you know, incredibly complex things

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

pieces of hardware that run at really high temperatures for 20 hours at a time sometimes. We can fly halfway around the world on a two-engine jetliner at near the speed of sound. How incredible is this? It's just unbelievable. Did we do this because we invented a general principle of how to make turbojets safe? No.

Lex Fridman Podcast
#416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

It took decades to kind of fine-tune the design of those systems so that they were safe. Is there a separate group within General Electric or Snecma or whatever that is specialized in turbojet safety? No, the design is all about safety because a better turbojet is also a safer turbojet. So a more reliable one. It's the same for AI. Do you need specific provisions to make AI safe?
