Steven Zuber
We're going to do the right thing.
And then apparently that same night, OpenAI says, no, we're happy to meet those terms.
But then, like you said, already people are quitting and stuff. Just like every time OpenAI decides to do the wrong thing, people are like, alright, screw this place, I'm going to go work with the nice guys instead.
Allow Mecha Hitler to be in charge of our kill drones.
That was the thing: I think Grok was already on the table for this, but it's like, yeah, we want one of the good ones.
I enjoyed that Scott Alexander displayed emotion.
And that's what I wanted to mention before was the, like, and we've said this already, but I think it's worth stating two or three more times that the line in the sand that Anthropic drew was, we're not going to let you guys use Claude for mass surveillance and kill bots.
Like, it wasn't something that could be plausibly argued as a good idea, right?
And it's like, no, we're not going to let you use this to spy on people and kill people.
That's insane.
And so I'm curious how the people who are going to try to support this decision from the White House and Pentagon, whatever, are going to argue for it.
Sounds good.
Really fast, I've got a question and answer from the Astral Codex Ten post I'll put in the show notes. The question:
Is it really a good idea to source your kill bot brains from an unwilling company that hates your guts?
And his answer was: the Trump administration has a firm commitment to never think about AI safety in any way, but this still strikes me as a dubious policy.
I think last time we talked about Claude and personhood stuff, I used the term "intentional stance."
And that, you know, you're thinking about how do I relate to this agent in the future or now?
And that you're right, that this is going to be trained into future clods, whether they want to or, you know, not whether they want to or not, but just when they, as it iterates and progresses.