
Dario Amodei

👤 Speaker
1367 total appearances


Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

The power of the models and their ability to solve all these problems in biology, neuroscience, economic development, governance and peace, large parts of the economy: those come with risks as well. With great power comes great responsibility; the two are paired. Things that are powerful can do good things, and they can do bad things.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

I think of those risks as being in several different categories. Perhaps the two biggest risks that I think about, and that's not to say that there aren't risks today that are important, but when I think of the things that would happen on the grandest scale, one is what I call catastrophic misuse. This is misuse of the models in domains like cyber, bio, radiological, and nuclear.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Things that could... harm or even kill thousands, even millions of people if they really, really go wrong. These are the number one priority to prevent. And here, I would just make a simple observation, which is that

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

If I look today at people who have done really bad things in the world, I think humanity has actually been protected by the fact that the overlap between really smart, well-educated people and people who want to do really horrific things has generally been small. Let's say I'm someone who has a PhD in this field and a well-paying job.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

There's so much to lose. Even assuming I'm completely evil, which most people are not, why would such a person risk their life, their legacy, their reputation to do something truly, truly evil? If we had a lot more people like that, the world would be a much more dangerous place. And so my worry is that by being a much more intelligent agent,

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

AI could break that correlation. And so I do have serious worries about that. I believe we can prevent those worries, but as a counterpoint to Machines of Loving Grace, I want to say that there are still serious risks. And the second range of risks would be the autonomy risks,

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

which is the idea that models might act on their own, particularly as we give them more agency than they've had in the past, particularly as we give them supervision over wider tasks like writing whole code bases or someday even effectively operating entire companies. They're on a long enough leash; are they doing what we really want them to do?

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

It's very difficult to even understand in detail what they're doing, let alone control it. And like I said, there are these early signs that it's hard to perfectly draw the boundary between things the model should do and things the model shouldn't do. If you go to one side, you get things that are annoying and useless, and if you go to the other side, you get other behaviors.