
Dario Amodei

👤 Speaker
1353 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

So I can only describe it as it relates to kind of my own experience, but I've been in the AI field for about 10 years. And it was something I noticed very early on. So I first joined the AI world when I was working at Baidu with Andrew Ng in late 2014, which is almost exactly 10 years ago now. And the first thing we worked on was speech recognition systems.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And in those days, I think deep learning was a new thing. It had made lots of progress, but everyone was always saying, we don't have the algorithms we need to succeed. You know, we're not, we're only matching a tiny, tiny fraction. There's so much we need to kind of discover algorithmically. We haven't found the picture of how to match the human brain.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Uh, and when, you know, in some ways it was fortunate. I was kind of, you know, you can have almost beginner's luck, right? I was like a newcomer to the field. And, you know, I looked at the neural net that we were using for speech, the recurrent neural networks. And I said, I don't know, what if you make them bigger and give them more layers and

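A minimal sketch of the two "dials" described here, bigger and more layers, applied to a speech-style recurrent network. The architecture, sizes, and layer counts below are illustrative assumptions, not the actual model used at Baidu.

```python
# Illustrative sketch only: not the actual Baidu speech model.
# It shows the two "dials" from the quote above: depth (num_layers)
# and width (hidden_size) of a recurrent network over audio features.
import torch.nn as nn

class SpeechRNN(nn.Module):
    def __init__(self, n_features=80, hidden_size=512, num_layers=5, n_labels=29):
        super().__init__()
        # Stacked recurrent layers: "more layers" = higher num_layers,
        # "bigger" = higher hidden_size.
        self.rnn = nn.GRU(n_features, hidden_size, num_layers=num_layers,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_size, n_labels)  # per-frame label logits

    def forward(self, x):              # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        return self.out(h)             # (batch, time, n_labels)

# Turning the dials up: a deeper, wider variant of the same network.
small = SpeechRNN(hidden_size=256, num_layers=3)
large = SpeechRNN(hidden_size=1024, num_layers=7)
```
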
Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And what if you scale up the data along with this, right? I just saw these as like independent dials that you could turn. And I noticed that the model started to do better and better as you gave them more data, as you made the models larger, as you trained them for longer.

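The "independent dials" idea can be made concrete with a toy sweep over model size, data size, and training steps. The power-law form below mirrors how this relationship was later formalized in scaling-law work; the exponents and constants are invented purely to show the qualitative trend, not measured values.

```python
# Toy illustration of the three "dials": model size, dataset size, and
# training steps. All constants here are made up for illustration; only the
# qualitative shape (turning any dial up -> lower loss) is the point.
def toy_loss(n_params, n_tokens, n_steps):
    irreducible = 1.7                         # hypothetical loss floor
    return (irreducible
            + (1e9  / n_params) ** 0.076      # larger model    -> lower loss
            + (1e12 / n_tokens) ** 0.095      # more data       -> lower loss
            + (1e6  / n_steps)  ** 0.20)      # longer training -> lower loss

for n_params in (1e6, 1e8, 1e10):
    for n_tokens in (1e8, 1e10, 1e12):
        print(f"params={n_params:.0e}  tokens={n_tokens:.0e}  "
              f"loss={toy_loss(n_params, n_tokens, n_steps=1e5):.3f}")
```
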
Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And I didn't measure things precisely in those days, but along with colleagues, we very much got the informal sense that the more data and the more compute and the more training you put into these models, the better they perform. And so initially my thinking was, hey, maybe that is just true for speech recognition systems, right? Maybe that's just one particular quirk, one particular area.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

I think it wasn't until 2017, when I first saw the results from GPT-1, that it clicked for me that language is probably the area in which we can do this. We can get trillions of words of language data. We can train on them. And the models we trained in those days were tiny.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

You could train them on one to eight GPUs, whereas, you know, now we train jobs on tens of thousands, soon going to hundreds of thousands of GPUs. And so when I saw those two things together, and, you know, there were a few people like Ilya Sutskever, who you've interviewed, who had somewhat similar views, right?
