Joe Carlsmith
And it's about maintaining a kind of checks and balances and distribution of power, period, not just about humans versus AIs and the differences between human values and AI values.
Now, that said, I do think many humans would likely be nicer if they foomed than certain types of AIs would. But given the conceptual structure of the argument, it's a very open question how much it applies to humans as well.
It's clearly this very janky kind of, well, people maybe disagree about this.
I think it's obvious to everyone, with respect to real-world human agents, that thinking of humans as having utility functions is at best a very lossy approximation of what's going on. And I think it's likely to mislead as you amp up the intelligence of various agents as well, though I think Eliezer might disagree about that.
Right.
I will say, I think there's something adjacent to that that seems more real to me, which is something like: a few years ago, my mom wanted to get a house, and she wanted to get a new dog. Now she has both, you know?
How did this happen?
What is the right action?
It's good, she tried, it was hard. She had to search for the house, and it was hard to find the dog, right? Now she has a house, now she has a dog.
This is a very common thing that happens all the time.