Tamay Besiroglu
Like, class.
Sure, but then I guess I'm still not seeing the... I mean, if the question is just whether it's possible for that to happen, which is like a weaker claim, then yeah, I mean, it seems possible.
But there are, I think, a lot of arguments just pushing back against it.
Probably actually the biggest one is the fact that AI preferences are just not... Like, just look at the AIs we have today.
Like, can you imagine them doing that?
I think people just don't put a lot of weight on that because they think once we have enough optimization pressure and once they become super intelligent, they're just going to become misaligned.
But I just don't see the evidence for that.
No, there's more than some evidence.
Yeah, so imagine that you gave students at a school a test, and then the answer key was, like, 100.
Yeah, so I think, like, it's important to be clear about, like, what is the thing that you're actually worried about?
Like, I think some people just, like, say that, oh, humans are gonna lose control of the future.
Like, we're not gonna be the ones that are, like, making the important decisions.
We...
However conceived, that's also kind of nebulous.
But okay, so is that something to worry about?
If you just think biological humans should remain in charge of all important decisions forever, then I agree.
The development of AI seems like kind of a problem for that.
But in fact, other things also seem like kind of a problem for that.
I just don't expect that to generically be true.
Like, a million years from now, even if we don't develop AI, biological humans, the way we recognize them today, are still making all the important decisions, and they have something like the culture that we would recognize from ourselves today.