
Chris Olah

👤 Speaker
762 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

So I think of it as patching issues and slightly adjusting behaviors to make it better and more to people's preferences. So yeah, it's almost like the less robust but faster way of just like solving problems.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yeah, no, I think that that is actually really interesting because I remember seeing this happen when people were flagging this on the internet. And it was really interesting because I knew that, at least in the cases I was looking at, it was like nothing has changed. Literally, it cannot. It is the same model with the same system prompt, same everything.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

I think when there are changes, then it makes more sense. So one example is, you know, you can have artifacts turned on or off on claude.ai. And because this is a system prompt change, I think it does mean that the behavior changes a little bit.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And so I did flag this to people where I was like, if you loved Claude's behavior and then artifacts was turned from, I think, something you had to turn on into the default, just try turning it off and see if the issue you were facing was that change.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But it was fascinating because, yeah, you sometimes see people indicate that there's a regression when I'm like, there cannot be one. But again, you should never be dismissive, so you should always investigate. You're like, maybe something is wrong that you're not seeing. Maybe there was some change made.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But then you look into it and you're like, this is just the same model doing the same thing. And I'm like, I think you just got kind of unlucky with a few prompts, and it looked like it was getting much worse when actually it was maybe just luck.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And randomness is the other thing. Just by trying the prompt, you know, four or ten times, you might realize that two months ago you tried it and it succeeded, but actually it would only have succeeded half of the time, and now it still only succeeds half of the time. And that can also be an effect.
