Sam Altman
One that comes to mind is that it was revealed this spring that OpenAI had been forcing departing employees to sign non-disclosure agreements, which is somewhat unusual. But then, very unusually, they told those employees: if you do not sign this NDA, we can claw back the equity that we have given you in the company.
It would be impossible. They don't do that. They don't do that? No, they don't do that. So this is just extraordinarily unusual.
You know, sometimes with a C-suite executive or someone very high up in the company, if, say, they're fired but the company doesn't want them running around badmouthing it to competitors, the company might make that person sign an NDA in exchange for a lot of money. But this thing was hitting the rank-and-file employees at OpenAI, and that was really, really unusual.
Yeah. And afterwards, Sam Altman posted on X saying that he would not do this, and that it was one of the few times he had been genuinely embarrassed running OpenAI. He did not know this was happening and he should have, is what he said.
Yeah, absolutely. And, you know, I will say that there has been great reporting over the past year by other journalists who have gotten at what some of those concerns are. And a lot of them wind up being the same thing, which is we launched a product, and I think we should have done a lot more testing before we launched that product, but we didn't.
And so now we have accelerated this kind of AI arms race that we are in, and that will likely end badly because we are much closer to building superintelligence than we are to understanding how to safely build a superintelligence. I see.
Exactly, and we have seen this time and time again. I mean, this is really fundamental to the DNA of OpenAI. When they released ChatGPT, other companies had developed large language models that were just as good, but Sam got spooked that his rival, Anthropic, which had an LLM named Claude, was going to release its product first and might steal all of their thunder.