
Dario Amodei

Speaker
1353 total appearances

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

You could train them on one to eight GPUs, whereas, you know, now we train jobs on tens of thousands, soon going to hundreds of thousands of GPUs. And so when I saw those two things together, and, you know, there were a few people like Ilya Sutskever, who you've interviewed, who had somewhat similar views, right?

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

He might have been the first one, although I think a few people came to similar views around the same time, right? There was, you know, Rich Sutton's bitter lesson. There was Gwern, who wrote about the scaling hypothesis. But I think somewhere between 2014 and 2017 was when it really clicked for me, when I really got conviction that, hey, we're going to be able to do these incredible

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

incredibly wide cognitive tasks if we just scale up the models. And at every stage of scaling, there are always arguments. And when I first heard them, honestly, I thought, probably I'm the one who's wrong. And all these experts in the field are right. They know the situation better than I do. There's the Chomsky argument about you can get syntactics, but you can't get semantics.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

There was this idea, oh, you can make a sentence make sense, but you can't make a paragraph make sense. You know, we're going to run out of data or the data isn't high quality enough or models can't reason. And each time, every time, we manage to either find a way around or scaling just is the way around. Sometimes it's one, sometimes it's the other.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And so I'm now at this point, I still think, you know, it's always quite uncertain. We have nothing but inductive inference to tell us that the next few years are going to be like the last 10 years. But I've seen the movie enough times.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

I've seen the story happen for enough times to really believe that probably the scaling is going to continue and that there's some magic to it that we haven't really explained on a theoretical basis yet.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yes. All of those. In particular, linear scaling up of bigger networks, bigger training times, and more data. So all of these things, almost like a chemical reaction. You have three ingredients in the chemical reaction, and you need to linearly scale up the three ingredients. If you scale up one, not the others, you run out of the other reagents and the reaction stops.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But if you scale up everything in series, then the reaction can proceed.
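
To make the "chemical reaction" analogy concrete, here is a minimal sketch (not from the episode) using the Chinchilla-style loss parametrization L(N, D) = E + A/N^alpha + B/D^beta from Hoffmann et al. (2022); the constants below are illustrative fitted values from that paper, not Anthropic's numbers. It shows the run-out-of-reagents effect: scaling parameters N alone leaves the loss stuck near a data-limited floor, while scaling parameters and data together keeps it falling.

    # Sketch of the scaling "chemical reaction" under an assumed
    # Chinchilla-style loss: L(N, D) = E + A / N**alpha + B / D**beta.
    # Constants are illustrative fits from Hoffmann et al. (2022).
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28

    def loss(n_params: float, n_tokens: float) -> float:
        """Predicted pretraining loss for N parameters and D training tokens."""
        return E + A / n_params**alpha + B / n_tokens**beta

    N0, D0 = 1e9, 2e10  # starting point: 1B parameters, 20B tokens
    for k in (1, 10, 100, 1000):
        only_params = loss(k * N0, D0)   # scale one "reagent" only
        both = loss(k * N0, k * D0)      # scale the ingredients together
        print(f"{k:>5}x  params only: {only_params:.3f}   params+data: {both:.3f}")

Under these assumed constants, scaling N alone asymptotes to E + B/D^beta (about 2.23 here) no matter how large the network gets; only scaling both ingredients keeps the loss approaching the irreducible term E. That is the sense in which the reaction stops when one reagent runs out but proceeds when everything is scaled up together.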