Dario Amodei
So in 10 months, we've gone from 3% to 50% on this task. And I think in another year, we'll probably be at 90%. I mean, I don't know, it might even take less than that. We've seen similar things in graduate-level math, physics, and biology from models like OpenAI's o1.
So if we just continue to extrapolate this in terms of skill, I think if we extrapolate the straight curve, within a few years we will get to these models being above the highest professional level in terms of humans. Now, will that curve continue? You've pointed to, and I've pointed to, a lot of possible reasons why that might not happen.
But if the extrapolation curve continues, that is the trajectory we're on.
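As a toy sketch of the extrapolation being described: if a benchmark score follows a logistic (S-shaped) curve, then a straight-line fit in logit space lets you project when it crosses a given threshold. The 3% and 50% data points and the 10-month gap come from the conversation above; the logistic assumption, the function names, and the projected numbers are purely illustrative, not anything stated in the interview.

```python
import math

def logit(p: float) -> float:
    """Map a score in (0, 1) to logit space, where logistic growth is linear."""
    return math.log(p / (1.0 - p))

def logistic(x: float) -> float:
    """Inverse of logit: map a logit value back to a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Two observed points from the conversation: month 0 at 3%, month 10 at 50%.
t0, p0 = 0.0, 0.03
t1, p1 = 10.0, 0.50

# Slope of the straight line through the two points in logit space
# (logits per month) -- this is the "straight curve" being extrapolated.
slope = (logit(p1) - logit(p0)) / (t1 - t0)

def projected_score(month: float) -> float:
    """Score the straight-logit extrapolation predicts at a given month."""
    return logistic(logit(p0) + slope * month)

def months_to_reach(target: float) -> float:
    """Month at which the extrapolated curve crosses a target score."""
    return t0 + (logit(target) - logit(p0)) / slope

print(f"Projected score at month 22: {projected_score(22):.0%}")   # ~98%
print(f"Months until 90%: {months_to_reach(0.90):.1f}")            # ~16.3
```

Under these toy assumptions, the curve crosses 90% around month 16, i.e. roughly six months after hitting 50%, which is consistent with the "another year, maybe less" framing in the conversation, though of course nothing guarantees the curve stays straight.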
Yeah, so I want to separate out a couple things, right? So, you know, Anthropic's mission is to kind of try to make this all go well, right? And, you know, we have a theory of change called race to the top, right? Race to the top is about trying to push the other players to do the right thing by setting an example. It's not about being the good guy.
It's about setting things up so that all of us can be the good guy. I'll give a few examples of this. Early in the history of Anthropic, one of our co-founders, Chris Olah, who I believe you're interviewing soon, co-founded the field of mechanistic interpretability, which is an attempt to understand what's going on inside AI models.
So we had him and one of our early teams focus on this area of interpretability, which we think is good for making models safe and transparent. For three or four years, that had no commercial application whatsoever. It still doesn't today. We're doing some early betas with it, and probably it will eventually. But this is a very, very long research bet, and one in which we've
built in public and shared our results publicly. And we did this because we think it's a way to make models safer. An interesting thing is that as we've done this, other companies have started doing it as well, in some cases because they've been inspired by it, in some cases because they're worried that,
you know, if other companies are doing this and look more responsible, they want to look more responsible too. No one wants to look like the irresponsible actor, and so they adopt this as well. When folks come to Anthropic, interpretability is often a draw. And I tell them, "The other places you didn't go, tell them why you came here." And then you.