
Dario Amodei

👤 Speaker
1367 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And then as time goes on, they become very aware of the limitations. So that may be another effect. But that's all a very long-winded way of saying, for the most part, with some fairly narrow exceptions, the models are not changing.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And now I'm like, I can't get this thing to work. This is such a piece of crap.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yeah. So a couple of points on this. First, one is the things that people say on Reddit and Twitter or X or whatever it is: there's actually a huge distribution shift between the stuff that people complain loudly about on social media and what users statistically actually care about and what drives people to use the models.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

People are frustrated with things like the model not writing out all the code, or the model just not being as good at code as it could be, even though it's the best model in the world on code. I think the majority of things are about that, but certainly a kind of vocal minority does, you know, raise these concerns, right?

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Are frustrated by the model refusing things that it shouldn't refuse, or apologizing too much, or just having these kind of annoying verbal tics. The second caveat, and I just want to say this super clearly because I think some people don't know it, and others kind of know it but forget it.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Like, it is very difficult to control across the board how the models behave, right? You cannot just reach in there and say, oh, I want the model to apologize less. Like, you can do that. You can include training data that says, oh, the model should apologize less.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But then in some other situation, they end up being super rude or overconfident in a way that's misleading people. So there are all these trade-offs, right? For example, another thing is there was a period during which models, ours and I think others as well, were too verbose, right? They would repeat themselves. They would say too much.
