Andrej Karpathy
Arranging some kind of a crazy quantum mechanical system that somehow gives you buffer overflow, somehow gives you a rounding error in the floating point.
We'll find some way to extract infinite energy.
For example, when you train reinforcement learning agents in physical simulations and you ask them to, say, run quickly on flat ground, they'll end up doing all kinds of weird things as part of that optimization.
They'll get on their back leg and they'll slide across the floor.
And it's because the optimization, the reinforcement learning optimization on that agent, has figured out a way to extract infinite energy from the friction forces, basically from their poor implementation.
And they found a way to generate infinite energy and just slide across the surface.
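The exploit described above can be sketched in a toy model (this is an assumed illustration, not the actual simulator from the anecdote): if contact friction is applied with a naive explicit-Euler update, `v -= mu * v * dt`, then whenever `mu * dt > 2` the "damping" term overshoots and flips the velocity to a larger magnitude each step. An agent that finds a pose with high enough contact friction, like the back-leg slide, gets a free energy pump out of the integrator bug.

```python
# Toy sketch of friction as an energy source (hypothetical poses/values).
# Explicit Euler: v_new = v - mu * v * dt. Stable only if mu * dt < 2;
# beyond that, each step multiplies |v| by (mu * dt - 1) > 1.

def step(v, pose, dt=0.05):
    mu = {"stand": 1.0, "crouch": 50.0}[pose]  # contact friction per pose
    return v - mu * v * dt                      # buggy: unstable when mu*dt > 2

v = 1.0
for _ in range(20):
    v = step(v, "crouch")  # mu*dt = 2.5, so |v| grows by 1.5x per step
print(abs(v))  # speed has grown by 1.5**20, friction "added" energy
```

A real policy-gradient or evolutionary search doesn't need to understand the integrator; it just climbs the reward gradient toward whatever pose triggers this blow-up, which is why the resulting gait looks so perverse.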
And it's not what you expected.
It's just sort of like a perverse solution.
And so maybe we can find something like that.
Maybe we can be that little dog in this physical simulation.
Cause it's so fun.
Well, no person will discover it.
I think, by the way, it's going to have to be some kind of superintelligent AGI of a third generation.
Like we're building the first generation AGI.
Better AI, yeah.
And then there's no way for us to introspect what that might even... I think it's very likely, for example, that if you have these AGIs, they will be completely inert.
I like these kinds of sci-fi books sometimes where these things are just completely inert.
They don't interact with anything.