Sam Altman
And I also, like, I get why this is such an important issue.
This is a really important issue.
But somehow, this is the thing we get caught up in, versus: what is this going to mean for our future?
Now, maybe you say this is critical to what it's going to mean for our future.
Whether it says more characters about this person than that person, who's deciding that, how it's being decided, and how users get control over it.
Maybe that is the most important issue.
But I wouldn't have guessed it at the time, when I was, like, an eight-year-old.
So we finished last summer.
We immediately started giving it to people to red team.
We started doing a bunch of our own internal safety evals on it.
We started trying to work on different ways to align it.
And through that combination of internal and external effort, plus building a whole bunch of new ways to align the model, we didn't get it perfect by far.
But one thing I care about is that our degree of alignment increases faster than our rate of capability progress.
And that I think will become more and more important over time.
And I think we made reasonable progress there to a more aligned system than we've ever had before.
I think this is the most capable and most aligned model that we've put out.
We were able to do a lot of testing on it.
And that takes a while.