Yann LeCun
Big companies are really careful about not producing things of this type, because they don't want to hurt anyone, first of all, and then second, they want to preserve their business.
It's essentially impossible for systems like this, which will inevitably formulate opinions about things that may be political or not, but that people may disagree about: moral issues, questions about religion, or cultural issues.
These are issues that people from different communities will disagree on in the first place. There's only a relatively small number of things that people will agree on, basic principles, but beyond that, if you want those systems to be useful, they will necessarily have to offend a number of people.
That's right. Open source enables diversity.
Yeah. I mean, there are some limits. The same way there are limits to free speech, there has to be some limit to the kind of stuff those systems are authorized to produce, some guardrails.
So that's one thing I've been interested in, in the type of architecture we were discussing before, where the output of the system is the result of an inference to satisfy an objective. That objective can include guardrails. And we can put guardrails in open source systems, if we eventually have systems built with this blueprint.
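The idea described here can be sketched in a few lines: the system does not emit an output directly, but searches for the output that minimizes a total cost combining the task objective with a guardrail penalty baked into that objective. This is a toy illustration, not LeCun's actual architecture; all function names and the string-matching "costs" below are hypothetical stand-ins.

```python
def task_cost(candidate: str, goal: str) -> float:
    """Toy task objective: prefer candidates that mention the goal."""
    return 0.0 if goal in candidate else 1.0

def guardrail_cost(candidate: str, banned=("dangerous",)) -> float:
    """Guardrail term: infinite cost for outputs violating baseline rules,
    so they can never win the inference, regardless of task fit."""
    return float("inf") if any(b in candidate for b in banned) else 0.0

def infer(candidates, goal):
    """Inference = pick the candidate minimizing task cost + guardrail cost."""
    return min(candidates, key=lambda c: task_cost(c, goal) + guardrail_cost(c))

outputs = ["a dangerous plan for travel", "a safe plan for travel", "nothing"]
print(infer(outputs, "travel"))  # the guardrail rules out the first candidate
```

The point of the design is that the guardrail lives in the objective being optimized, not in a filter bolted on afterward, so anyone fine-tuning an open system could add community-specific terms while the baseline penalty stays in place.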
We can put guardrails in those systems that guarantee a minimum set of guardrails that make the system non-dangerous and non-toxic, basic things that everybody would agree on. And then the fine-tuning that people add, or the additional guardrails that people add, will cater to their community, whatever it is.
Right, so the increasing number of studies on this seems to point to the fact that it doesn't help. So having an LLM doesn't help you design or build a bioweapon or a chemical weapon if you already have access to a search engine and a library. And so the sort of increased information you get or the ease with which you get it doesn't really help you. That's the first thing. The second thing is,