
Dario Amodei

👤 Speaker
1367 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

In many ways, the manner and personality of these models is more an art than it is a science.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yeah, so there are different processes. There's pre-training, which is just the normal language model training, and that takes a very long time. That uses, these days…

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

…tens of thousands, sometimes many tens of thousands, of GPUs or TPUs or Trainium (we use different platforms, but accelerator chips), often training for months.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

There's then a kind of post-training phase where we do reinforcement learning from human feedback, as well as other kinds of reinforcement learning. That phase is getting larger and larger now. Often, that's less of an exact science; it often takes effort to get it right.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Models are then tested with some of our early partners to see how good they are, and they're then tested both internally and externally for their safety, particularly for catastrophic and autonomy risks. So we do internal testing according to our responsible scaling policy, which I could talk about in more detail.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And then we have an agreement with the US and the UK AI Safety Institutes, as well as other third-party testers in specific domains, to test the models for what are called CBRN risks: chemical, biological, radiological, and nuclear. We don't think that models…

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

…pose these risks seriously yet, but we want to evaluate every new model to see if we're starting to get close to some of these more dangerous capabilities. So those are the phases. And then it just takes some time to get the model working in terms of inference and launching it in the API. So there are just a lot of steps to actually making a model work.