
Dario Amodei

👤 Speaker
1367 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But those are typically very close to when a model is being released, and for a very small fraction of time. So, you know, the day before the new Sonnet 3.5 (I agree, we should have had a better name; it's clunky to refer to it), there were some comments from people that it's gotten a lot better.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And that's because, you know, a fraction were exposed to an A/B test for those one or two days. The other is that occasionally the system prompt will change, and the system prompt can have some effects, although it's unlikely to dumb down models, it's unlikely to make them dumber.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And we've seen that while these two things, which I'm listing to be very complete, happen quite infrequently, the complaints for us and for other model companies about model changes (the model isn't good at this, the model got more censored, the model was dumbed down) are constant.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And so I don't want to say people are imagining it or anything, but the models are, for the most part, not changing. If I were to offer a theory, I think it actually relates to one of the things I said before, which is that models are very complex and have many aspects to them.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And so often, if I ask the model a question, if I say "do task X" versus "can you do task X," the model might respond in different ways. And so there are all kinds of subtle things that you can change about the way you interact with the model that can give you very different results.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

To be clear, this itself is a failing by us and by the other model providers, that the models are just often sensitive to small changes in wording. It's yet another way in which the science of how these models work is very poorly developed.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And so if I go to sleep one night and I was talking to the model in a certain way, and I slightly change the phrasing of how I talk to the model, I could get different results. So that's one possible explanation. The other thing is, man, it's just hard to quantify this stuff. I think people are very excited by new models when they come out.
