
Dario Amodei

👤 Speaker
1367 total appearances

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Data limited? So a few things. Now we're talking about hitting the limit before we get to the level of humans and the skill of humans. So I think one that's popular today, and I think could be a limit that we run into (like most of the limits, I would bet against it, but it's definitely possible), is we simply run out of data. There's only so much data on the internet.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And there's issues with the quality of the data, right? You can get hundreds of trillions of words on the internet, but a lot of it is repetitive, or it's search engine optimization drivel, or maybe in the future it'll even be text generated by AIs themselves. And so I think there are limits to what can be produced in this way.
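The quote above points at a standard pre-training concern: raw web text is full of exact and near duplicates. As a toy illustration (not Anthropic's actual pipeline), exact deduplication can be done by hashing normalized text; real pipelines also use near-duplicate methods such as MinHash, which are not shown here.

```python
import hashlib

def dedupe_documents(docs):
    """Drop exact-duplicate documents by hashing whitespace/case-normalized
    text. A sketch only: production data cleaning also filters low-quality
    and near-duplicate text, which this does not attempt."""
    seen, kept = set(), []
    for doc in docs:
        normalized = " ".join(doc.lower().split())
        h = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(doc)  # keep the first copy, skip later repeats
    return kept

print(dedupe_documents(["The cat.", "the  cat.", "a dog"]))
```

Normalizing before hashing means trivially re-spaced or re-cased copies collapse to one entry.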

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

That said, we, and I would guess other companies, are working on ways to make data synthetic, where you can use the model to generate more data of the type that you have already, or even generate data from scratch.
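"Generate more data of the type that you have already" can be illustrated with a deliberately tiny stand-in for a language model: a character-level bigram model fit on a corpus, then sampled to produce new text in the same style. This is purely schematic; the actual synthetic-data methods alluded to above are not public.

```python
import random

def train_bigram(corpus):
    """Fit a character-level bigram model: for each character, record
    every character that followed it in the corpus."""
    model = {}
    for a, b in zip(corpus, corpus[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Sample synthetic text whose adjacent-character statistics match
    the training corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return "".join(out)

corpus = "the cat sat on the mat. the cat ran."
model = train_bigram(corpus)
synthetic = generate(model, "t", 40)
print(synthetic)
```

Every adjacent pair in the output was observed in the corpus, so the synthetic text is "of the type you already have" in a precise, if trivial, sense.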

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

If you think about what was done with DeepMind's AlphaGo Zero, they managed to get a bot all the way from no ability to play Go whatsoever to above human level just by playing against itself. There was no example data from humans required in the AlphaGo Zero version of it.
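The AlphaGo Zero point, learning with no human example data, can be demonstrated on a much smaller game. The sketch below (my own toy, not AlphaGo Zero's architecture) uses tabular Q-learning via self-play on Nim: players alternately take 1–3 stones and whoever takes the last stone wins. Both sides share one value table and learn only from game outcomes.

```python
import random

def selfplay_nim(total=10, episodes=20000, seed=0):
    """Self-play Monte Carlo learning on Nim(take 1-3, last stone wins).
    No human data: both players use the same Q-table, updated toward the
    final outcome of each self-played game (+1 winner's moves, -1 loser's)."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, total + 1) for a in (1, 2, 3) if a <= s}
    alpha, eps = 0.5, 0.2
    for _ in range(episodes):
        s, history = total, []  # history holds (state, action), alternating players
        while s > 0:
            acts = [a for a in (1, 2, 3) if a <= s]
            if rng.random() < eps:
                a = rng.choice(acts)  # explore
            else:
                a = max(acts, key=lambda x: Q[(s, x)])  # exploit
            history.append((s, a))
            s -= a
        # The player who moved last took the final stone and won.
        for i, (st, ac) in enumerate(reversed(history)):
            r = 1.0 if i % 2 == 0 else -1.0
            Q[(st, ac)] += alpha * (r - Q[(st, ac)])
    return Q

def best_move(Q, s):
    """Greedy move from state s under the learned values."""
    return max((a for a in (1, 2, 3) if a <= s), key=lambda a: Q[(s, a)])

Q = selfplay_nim()
```

After training, the learned policy takes the immediately winning move in endgame states, discovered entirely through play against itself.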

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

The other direction, of course, is these reasoning models that do chain of thought and stop to think and reflect on their own thinking. In a way, that's another kind of synthetic data coupled with reinforcement learning. So my guess is with one of those methods, we'll get around the data limitation or there may be other sources of data that are available.
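One published way chain-of-thought traces become synthetic training data is rejection sampling: sample many reasoning attempts, keep only those whose final answer verifies, and train on the survivors. The sketch below is a toy analogue of that loop (the "model" is a random solver with injected arithmetic slips), not the actual method the speaker refers to.

```python
import random

def attempt_chain_of_thought(nums, rng):
    """Stand-in for a model sampling a step-by-step solution: add the
    numbers in a random order, occasionally making an off-by-one slip."""
    order = nums[:]
    rng.shuffle(order)
    steps, total = [], 0
    for n in order:
        total += n
        steps.append(f"running total -> {total}")
    if rng.random() < 0.5:  # imperfect reasoning: sometimes slip by one
        total += rng.choice([-1, 1])
        steps.append(f"(slip) adjusted to {total}")
    return steps, total

def collect_verified_traces(nums, answer, n_samples=100, seed=0):
    """Rejection sampling: keep only traces whose final answer checks out,
    yielding synthetic (question, reasoning, answer) training examples."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        steps, result = attempt_chain_of_thought(nums, rng)
        if result == answer:  # the automatic verification signal
            kept.append((nums, steps, result))
    return kept

data = collect_verified_traces([2, 3, 7], 12)
```

The automatic check plays the role of the reward signal: only reasoning that reaches a verifiable answer is kept, which is the "synthetic data coupled with reinforcement learning" flavor the quote describes.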

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

We could just observe that even if there's no problem with data, as we start to scale models up, they just stop getting better. It seemed to be a reliable observation that they've gotten better. That could just stop at some point for a reason we don't understand. The answer could be that we need to invent some new architecture.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

There have been problems in the past with, say, numerical stability of models, where it looked like things were leveling off, but actually, when we found the right unblocker, they didn't end up doing so. So perhaps there's some new optimization method or some new technique we need to unblock things.
