
Chris Olah

👤 Speaker
762 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

If I'm using a prompt to classify things or to create data, that's when it's actually worth spending a lot of time really thinking it through.


Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

There's a concern that people over-anthropomorphize models, and I think that's a very valid concern. I also think that people often under-anthropomorphize them, because sometimes when I see issues that people have run into with Claude, say Claude refusing a task that it shouldn't refuse, I look at the text and the specific wording of what they wrote, and I'm like...


Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

I see why Claude did that. If you think through how that looks to Claude, you probably could have written it in a way that wouldn't evoke such a response. This is especially relevant if you see failures or issues. Think about what the model failed at. What did it do wrong?


Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And then maybe that will give you a sense of why. Was it the way that I phrased the thing? Obviously, as models get smarter, you're going to need less of this, and I already see people needing less of it. But that's probably the advice: try to have empathy for the model. Read what you wrote as if you were a person encountering it for the first time. How does it look to you, and what would have made you behave the way the model behaved? So if it misunderstood what kind of...


Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

what coding language you wanted to use, is that because it was just very ambiguous and it kind of had to take a guess? In which case, next time you could just say, "Hey, make sure this is in Python." That's the kind of mistake I think models are much less likely to make now. But if you do see that kind of mistake, that's probably the advice I'd have.


Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yeah, I've done this with the models. It doesn't always work, but sometimes I'll just ask, "Why did you do that?" People underestimate the degree to which you can really interact with models. Sometimes I'll quote, word for word, the part that made you... And you don't know that it's fully accurate, but sometimes you do that and then you change a thing.
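
The "quote the suspect passage and ask why" move can be sketched as a small helper. Everything below (the function name and the prompt wording) is an illustrative assumption, not anything from the episode:

```python
def ask_why(trigger_text: str, observed_behavior: str) -> str:
    """Build a follow-up message that quotes, word for word, the passage
    suspected of causing the behavior and asks the model to explain.
    Hypothetical helper for illustration only."""
    return (
        f"Earlier, {observed_behavior}.\n\n"
        "Here is, word for word, the part of my prompt that seems to have "
        f'triggered it:\n"{trigger_text}"\n\n'
        "Why did that wording lead you to respond that way, and how could "
        "I rephrase it?"
    )

# Example: probing an unexpected refusal.
followup = ask_why(
    trigger_text="Never do anything that could be remotely risky.",
    observed_behavior="you refused a harmless formatting request",
)
print(followup)
```

As the quote notes, the model's self-report isn't guaranteed to be accurate, so the payoff is the edit-and-retry step afterward, not the explanation itself.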


Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

I also use the models to help me with all of this, I should say. Prompting can end up being a little factory where you're actually building prompts to generate prompts. So anything where you're having an issue, ask for suggestions. Sometimes I'll just do that: "You made that error. What could I have said?" That's actually not uncommon for me to do.
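
The "prompts to generate prompts" factory can be sketched as a meta-prompt builder. The function name, parameters, and wording here are illustrative assumptions, not a description of any particular tool:

```python
def build_meta_prompt(task: str, current_prompt: str, observed_error: str) -> str:
    """Assemble a meta-prompt asking the model to repair a failing prompt,
    following the "you made that error, what could I have said?" pattern.
    All names and wording are illustrative."""
    return (
        f"I am writing a prompt for this task: {task}\n\n"
        f"Current prompt:\n{current_prompt}\n\n"
        f"When I used it, you made this error: {observed_error}\n\n"
        "What could I have said instead? Suggest a revised prompt that "
        "avoids this mistake."
    )

# Example: asking for a revision after a misclassification.
meta = build_meta_prompt(
    task="classify customer emails as billing, technical, or other",
    current_prompt="Sort these emails into categories.",
    observed_error="you invented a fourth category called 'misc'",
)
print(meta)
```

The returned string would then be sent as an ordinary message; the model's suggested rewrite becomes the next candidate prompt in the loop.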
