Sam Altman
Well, I can't just pick one AI safety case or AI alignment case, but I think Eliezer wrote a really great blog post.
I think some of his work has been somewhat difficult to follow or had what I view as quite significant logical flaws.
But he wrote this one blog post outlining why he believed that alignment was such a hard problem that I thought was...
Again, don't agree with a lot of it, but well-reasoned and thoughtful and very worth reading.
So I think I'd point people to that as the steel man.
A lot of the formative AI safety work was done before people even believed in deep learning.
And certainly before people believed in large language models.
And I don't think it's updated enough given everything we've learned now and everything we will learn going forward.
So I think it's got to be this...
very tight feedback loop.
I think the theory does play a real role, of course, but continuing to learn what we learn from how the technology trajectory goes is quite important.
I think now is a very good time, and we're trying to figure out how to do this, to significantly ramp up technical alignment work.
I think we have new tools, we have new understanding, and there's a lot of work that's important to do.
So GPT-4 has not surprised me at all in terms of the reception there.
ChatGPT surprised us a little bit, but I was still advocating that we do it because I thought it was going to do really well.
So, you know, maybe I thought it would have been
the 10th fastest-growing product in history, not the number one fastest.
And, okay, I think that's hard.
You should never assume something is going to be the most successful product launch ever.
But many of us thought it was going to be really good.