Dario Amodei
And I would suspect that if we can get to 90, 95%, that it will represent ability to autonomously do a significant fraction of software engineering tasks.
Not giving an exact date, but as far as we know, the plan is still to have a Claude 3.5 Opus.
Like Duke Nukem Forever.
You know, it's only been three months since we released the first Sonnet.
It just tells you about the pace. Yeah. The expectations for when things are going to come out.
Naming is actually an interesting challenge here, right? Because I think a year ago, most of the model was pre-training. And so you could start from the beginning and just say, okay, we're going to have models of different sizes. We're going to train them all together. And, you know, we'll have a family of naming schemes and then we'll put some new magic into them.
And then, you know, we'll have the next generation. The trouble starts already when some of them take a lot longer than others to train, right? That already messes up your timing a little bit. But yeah, as you make big improvements in pre-training, then you suddenly notice, oh, I can make a better pre-trained model, and that doesn't take very long to do.