Dario Amodei

👤 Speaker
1367 total appearances

Podcast Appearances

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And of course, we're always trying to make the processes as streamlined as possible, right? We want our safety testing to be rigorous, but we want it to be rigorous and to be automatic, to happen as fast as it can without compromising on rigor. Same with our pre-training process and our post-training process. So it's just like building anything else. It's just like building airplanes.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

You want to make them safe, but you want to make the process streamlined. And I think the creative tension between those is an important thing in making the models work.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

You would be surprised how much of the challenge of building these models comes down to software engineering, performance engineering. From the outside, you might think, oh man, we had this Eureka breakthrough, right? Like in the movies about science: we discovered it, we figured it out.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

But I think all things, even incredible discoveries, almost always come down to the details, and often super boring details. I can't speak to whether we have better tooling than other companies. I mean, I haven't been at those other companies, at least not recently, but it's certainly something we give a lot of attention to.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yeah, I think at any given stage, we're focused on improving everything at once, just naturally. There are different teams, and each team makes progress in a particular area, in making their particular segment of the relay race better. And it's just natural that when we make a new model, we put all of these things in at once.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Yeah, preference data from old models sometimes gets used for new models, although, of course, it performs somewhat better when it's trained on the new models. Note that we have this constitutional AI method, so we don't only use preference data; there's also a post-training process where we train the model against itself.

Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

And there are new types of post-training the model against itself that are used every day. So it's not just RLHF, it's a bunch of other methods as well. Post-training, I think, is becoming more and more sophisticated.
