Nathan Lambert
Right? Even copper cables are innovating. You wouldn't think it, but there are innovations happening there, like in how densely you can pack them. And across all of these layers of the stack, all the way up to the models, human progress is moving at a pace that's never been seen before.
There's a big team.
Yeah, thank you.
Generally, humanity is going to suffer a lot less, right? I'm very optimistic about that. I do worry about techno-fascism type stuff arising as AI becomes more and more prevalent and powerful, and those who control it can do more and more. Maybe it doesn't kill us all, but at some point, every very powerful human is going to want a brain-computer interface so that they can interact with the AGI system
and all of its advantages in many more ways, and merge their mind with its capabilities, so that person can leverage those much better than anyone else. It won't be one person ruling them all, but the thing I worry about is that it'll be a few people, you know, hundreds, thousands, tens of thousands, maybe millions of people, ruling whoever's left.
Right. And the economy around it, right? I think the thing that's probably more worrisome is human-machine amalgamations. They enable an individual human to have more impact on the world, and that impact can be both positive and negative, right?
Generally, humans have positive impacts on the world, at least societally, but it's possible for individual humans to have such negative impacts. And AGI, at least as I think the labs define it, which is not a runaway sentient thing, but rather just something that can do a lot of tasks really efficiently, amplifies the capabilities of someone causing extreme damage.
But for the most part, I think it'll be used for profit-seeking motives, which will increase the abundance and supply of things and therefore reduce suffering, right?