
Dario Amodei

👤 Speaker
1367 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But, you know, clearly it has the same size and shape as previous models. So I think those two together, as well as the timing issues: any kind of scheme you come up with, the reality tends to frustrate that scheme, right? It tends to break out of the scheme. It's not like software where you can say, oh, this is like,

you know, 3.7, this is 3.8. No, you have models with different trade-offs. You can change some things in your models; you can change other things. Some are faster or slower at inference. Some have to be more expensive, some have to be less expensive. And so I think all the companies have struggled with this.

I think we were in a good position in terms of naming when we had Haiku, Sonnet, and Opus. Great start. We're trying to maintain it, but it's not perfect. So we'll try and get back to the simplicity, but it's just the nature of the field; I feel like no one's figured out naming. It's somehow a different paradigm from normal software. And so...

We just, none of the companies have been perfect at it. It's something we struggle with surprisingly much, relative to how trivial it seems next to the grand science of training the models. So from the user side,

Yeah. I definitely think there's this question of: there are lots of properties of the models that are not reflected in the benchmarks. I think that's definitely the case, and everyone agrees. And not all of them are capabilities. Some of them are, you know, models can be polite or brusque. They can be very reactive, or they can ask you questions.

They can have what feels like a warm personality or a cold personality. They can be boring, or they can be very distinctive like Golden Gate Claude was. And we have a whole team focused on what I think we call Claude character. Amanda leads that team, and we'll talk to you about that. But it's still a very inexact science.

And often we find that models have properties that we're not aware of. The fact of the matter is that you can talk to a model 10,000 times and there are some behaviors you might not see. Just like with a human, right? I can know someone for a few months and not know that they have a certain skill or not know that there's a certain side to them. And so I think we just have to get used to this idea.
