Bret Weinstein
And I would expect that that's probably what's going on. There's no — well, if you have a technology as transformative as this, giving it away for free is counterintuitive, which leaves those of us in the public more or less at the mercy of the people who have it. So I don't see the reason for comfort. We are at the dawn of a radical transformation of technology that, by its very nature as a truly complex and emergent innovation, nobody on earth can predict. We're on the event horizon of something. And the problem is, you know, we can talk about the obvious disruptions, the job disruption, and that's going to be massive. And does that lead to
some group of elites to decide, oh, well, suddenly we have a lot of useless eaters and what are we going to do about that? Because that conversation tends to lead somewhere very dark very quickly. But I think that's just the beginning of the various ways in which this could go wrong without the Doomer scenarios coming into play.
This is an uncontrolled experiment in which all of humanity is downstream.
So I don't really understand this. And maybe this is actually, this is the exact discussion that you would expect between somebody at the frontier of the highly complicated staring at a complex system and a biologist who comes from the land of the complex and is looking back at highly complicated systems. In game theory, we have something called a collective action problem.
And in the market that you're describing, an individual company has no capacity to hold back the abuses of AI. The most you can do is not participate in them. You can't stop other people from programming LLMs in some dangerous way. And you can limit your own ability to earn based on your own limitations of what you're willing to do.
And then effectively what happens is the technology gets invented anyway. It's just that the dollars end up in somebody else's pocket. So the incentive is not to restrain yourself so that you can at least compete and participate in the market that's going to be opened. And so the number of ways in which you can abuse this technology, let's take a couple.
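The collective action problem described here can be sketched as a simple two-strategy payoff model. The numbers below are illustrative assumptions, not anything from the conversation; the point is structural: if the technology gets built either way, a firm that restrains itself only forfeits its share of the market, so "build" strictly dominates "restrain."

```python
# Illustrative payoff model of the collective action problem described above.
# Payoffs are invented for illustration only.
# payoffs[my_choice][others_choice] -> my payoff
payoffs = {
    "restrain": {"restrain": 3, "build": 0},  # lone restraint: the tech exists anyway, I earn nothing
    "build":    {"restrain": 5, "build": 2},  # building pays more regardless of what others do
}

def best_response(others_choice):
    """Return the choice that maximizes my payoff, given what others do."""
    return max(payoffs, key=lambda mine: payoffs[mine][others_choice])

# "Build" is the best response whether others restrain or build,
# so individual restraint cannot hold back the outcome.
print(best_response("restrain"))  # build
print(best_response("build"))     # build
```

Under these assumed payoffs, mutual restraint (3, 3) would beat mutual building (2, 2), yet each individual actor still does better by building — the standard prisoner's-dilemma shape of a collective action problem.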
What is to stop somebody from training LLMs on an individual's creative output, and then creating an LLM that can out-compete that individual? It can effectively not only produce what they would naturally produce over the course of a lifetime, but can extrapolate from it, and can even hybridize it with the insights of other people, so that effectively
Those who have the LLM can train it on the creativity of others, not cut them in on the use of that insight. You can effectively end up putting yourself out of business by putting your creative ideas in the world where they get sucked up as training data for future LLMs. That is unscrupulous, but it's effectively guaranteed. In fact, it's already happened. So that's a problem.
And likewise, what would stop somebody from interacting with an individual and training an LLM to become like a personalized con artist, something that would play exactly to your blind spots?