Roman Yampolsky
Podcast Appearances
Those are not the same. I am against superintelligence in the general sense, with no undo button.
Partially, but they don't scale. For narrow AI, for deterministic systems, you can test them. You have edge cases. You know what the answer should look like. You know the right answers. For general systems, you have an infinite test surface. You have no edge cases. You cannot even know what to test for. Again, the unknown unknowns are underappreciated by... people looking at this problem.
You are always asking me, how will it kill everyone? How will it fail? The whole point is, if I knew that, I would be superintelligent, and despite what you might think, I'm not.
It is a master at deception. Sam tweeted about how great it is at persuasion. And we see it ourselves, especially now with voices, with maybe kind of flirty, sarcastic female voices. It's gonna be very good at getting people to do things.
Right. I don't think developers know everything about what they are creating. They have lots of great knowledge. We're making progress on explaining parts of a network. We can understand, okay, this node, this cluster of nodes, gets excited when this input is presented. But we're nowhere near close to understanding the full picture, and I think it's impossible.
You need to be able to survey an explanation. The size of those models prevents a single human from observing all this information, even if provided by the system. So either we're getting the model itself as an explanation for what's happening, and that's not comprehensible to us, or we're getting a compressed explanation, lossy compression, where here's the top 10 reasons you got fired.
It's something, but it's not a full picture.