Sam Altman
There are going to be disinformation problems or economic shocks or something else at a level far beyond anything we're prepared for.
And that doesn't require super intelligence.
That doesn't require a super deep alignment problem in the machine waking up and trying to deceive us.
And I don't think that gets enough attention.
I mean, it's starting to get more, I guess.
How would we know if, on Twitter, we mostly had LLMs directing whatever's flowing through that hive mind?
And then as on Twitter, so everywhere else eventually.
My statement is we wouldn't.
And that's a real danger.
I think there's a lot of things you can try.
But at this point, it is a certainty: there are soon going to be a lot of capable open-source LLMs with very few to no safety controls on them.
And so...
You can try with regulatory approaches.
You can try with using more powerful AIs to detect this stuff happening.
I'd like us to start trying a lot of things very soon.
You stick with what you believe in. You stick to your mission.
I'm sure people will get ahead of us in all sorts of ways and take shortcuts we're not going to take.
And we just aren't going to do that.