Steven Zuber
Again, Claude is already integrated.
De-integrating will take a while.
Integrating ChatGPT will take a long time.
So, like, there's still lots of, I think at the low range, weeks of time where nothing can really move forward yet.
I had heard about the RSP thing, and I'm sure you guys know more than me, and I'm eager to hear it.
But my first thought was, like, keeping in mind that I still think Anthropic are the good guys.
what they're really saying is, like, look: if we stick to this commitment of responsible scaling, we're going to fall out of the race, and the world might depend on us winning it.
And so we're going to try to play things as safe as we can while still moving forward as quickly as we can, which is almost a contradiction in terms, but not quite.
Because I think safety is still an explicit goal; it's more just like, okay, we can't let everyone
run ahead of us capabilities-wise, right?
And so, I don't know.
I was more sympathetic to it, but I understand.
That sounds like kind of what I was guessing, that it was their core point there.
That's what I was going to say: I think it would have been a better game-theoretic optimization strategy for everyone to say, hey, look, this is the case that Eliezer made, what, three or four years ago in that TIME article? Two or three years?
Like, yeah, everyone take a step back, let's catch our breath, do this thing properly, or we're all going to fucking die.
And I think that, yeah, it would have been awesome if they'd said, hey, look, let's talk about this, and let's all agree as the major players in this game to slow down a little bit.
But you're right, one of the major players is Sam Altman, who would say, of course, yeah, you bet, and then not do it.
I totally agree, though I may be a bit biased.
I admit I'm biased in favor of liking Anthropic, the same way that I'm biased in liking you guys: if I learned that one of you had said some human-level equivalent of this kind of thing, I'd be like, all right, well, why would Matt or Inyash do that?