Bryan Callen
We share that.
Then can I try to figure out what the logic is of a system like this that tracks good and evil?
And I said, well, imagine we were programming an AI to run systems for us and we were concerned about value mismatch. One example I often bring up is that the future will be corn.
Why?
Because Americans produce corn like nobody's business and we subsidize it.
So when the AI tracks all of our economics and everything we do, it weights corn above everything else.
And then slowly over time it starts integrating corn everywhere, to where after 40 years of being under the AI's rule, everyone's wearing corn costumes, they're trading corn, and all food is a derivative of corn.
And again, it's because the AI has a value mismatch.
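Loosely, the mismatch being described is an objective whose weights are inferred from raw economic data, so subsidized output dominates what the system thinks we value. A minimal sketch of that idea, with made-up numbers and hypothetical names, not any real system:

```python
# Toy illustration of value mismatch: priorities learned from raw production
# data, where subsidies inflate corn. All figures are invented for illustration.
subsidized_output = {"corn": 380.0, "wheat": 47.0, "soy": 120.0, "vegetables": 35.0}

total = sum(subsidized_output.values())
# Naive objective: weight each good by its share of observed production.
value_weights = {good: amount / total for good, amount in subsidized_output.items()}

# Corn dominates the learned objective, even though its production volume
# reflects subsidies rather than what people actually value.
print(max(value_weights, key=value_weights.get))  # -> "corn"
```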
How would you program an AI to not have that?
You would simulate a human experience for the AI and then algorithmically filter the immoral from the moral, steering toward the morals you want.
Then when the program concludes, you have independent AI agents that you have determined through this program to be good and worthy of being in control of systems.
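Read as a procedure, that's roughly: run candidate agents through simulated experiences, score their actions against a moral rubric, and keep only the agents that stay above some bar. A minimal toy sketch of that idea, assuming hypothetical names (Agent, moral_score, MORAL_THRESHOLD) and a placeholder random policy, not any real training setup:

```python
import random

MORAL_THRESHOLD = 0.8   # assumed cutoff: agents scoring below this are discarded
N_SCENARIOS = 1000      # number of simulated experiences per agent

class Agent:
    def __init__(self, name):
        self.name = name
        self.scores = []

    def act(self, scenario):
        # Placeholder policy: a real agent would choose an action here.
        return random.random()

def moral_score(action):
    # Placeholder rubric mapping an action to a morality score in [0, 1].
    return action

def evaluate(agent):
    """Run the agent through simulated experiences and average its moral scores."""
    for i in range(N_SCENARIOS):
        action = agent.act(scenario=i)
        agent.scores.append(moral_score(action))
    return sum(agent.scores) / len(agent.scores)

candidates = [Agent(f"agent-{i}") for i in range(100)]
# "Heaven or hell": keep agents above the threshold, delete the rest.
# With the random placeholder policy the pass rate is arbitrary; the point
# is the filter-and-select structure, not the numbers.
trusted = [a for a in candidates if evaluate(a) >= MORAL_THRESHOLD]
print(f"{len(trusted)} of {len(candidates)} agents pass the filter")
```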
Like, for instance, if you were to actually die and you did wake up in a machine.
The progenitors, whatever you want to call them, would know you would never harm somebody.
You're not a murderer.
You're not a killer.
But a killer goes to hell.
What does that mean?
They delete him.
They say this AI went rogue, killed and destroyed in the training simulator.