Steven Byrnes
Optimist.
I've read those, but I'm not seeing how they answer my question.
Again, what's your positive argument for ruthless sociopathy?
Lay it on me.
Me?
Sure.
Back at the start of the conversation, I mentioned that random objects like dirt clods are not able to accomplish impressive feats.
I didn't just bring up dirt clods to troll you; rather, I was laying the groundwork for a key point.
If we're thinking about AI that can autonomously found, grow, and staff innovative companies for years, or autonomously invent new scientific paradigms, then clearly it's not a random object, but rather a thing that is able to accomplish impressive feats.
And the question we should be asking is: how does it do that?
Those things would be astronomically unlikely to happen if the AI were choosing actions at random.
So there has to be some explanation for how the AI finds actions that accomplish those impressive feats.
So an explanation has to exist.
What is it?
I claim there are really only two answers that work in practice.
The first possible explanation is consequentialism.
The AI accomplishes impressive feats by what amounts to having desires about what winds up happening in the future and running some search process to find actions that lead to those desires getting fulfilled.
This is the main thing that you get from RL agents and from model-based planning algorithms.
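That consequentialist loop, a model predicting what winds up happening, plus a search over actions for an outcome the agent wants, can be illustrated with a deliberately tiny sketch. Everything here (the integer "world," the action set, the brute-force search) is an invented toy example, not anything from the conversation; it just makes the structure concrete:

```python
from itertools import product

# Toy world model: the state is an integer; each action shifts it.
# These names and dynamics are made up purely for illustration.
ACTIONS = {"inc": +1, "dec": -1, "jump": +2}

def simulate(state, plan):
    """The 'model' part: predict the end state of an action sequence."""
    for action in plan:
        state += ACTIONS[action]
    return state

def plan_toward(goal, start=0, horizon=4):
    """The 'search' part: enumerate action sequences and return one
    whose *predicted* outcome matches the desired outcome."""
    for length in range(horizon + 1):
        for plan in product(ACTIONS, repeat=length):
            if simulate(start, plan) == goal:
                return list(plan)
    return None  # no plan within the horizon

plan = plan_toward(3)
print(plan, "->", simulate(0, plan))
```

The point of the sketch is the shape, not the scale: the agent's "desire" lives in the goal test, and its competence comes from searching for actions whose predicted consequences satisfy that test. Real RL and model-based planning systems replace brute-force enumeration with learned values and guided search, but the outcome-directed structure is the same.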
My brain-like AGI scenario would involve both of those at once.
The whole point of those subfields of AI is...