
Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Those are not the same. I am against superintelligence in the general sense, with no undo button.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Partially, but they don't scale. For narrow AI, for deterministic systems, you can test them. You have edge cases. You know what the answer should look like. You know the right answers. For general systems, you have an infinite test surface. You have no edge cases. You cannot even know what to test for. Again, the unknown unknowns are underappreciated by... people looking at this problem.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You are always asking me, how will it kill everyone? How will it fail? The whole point is, if I knew it, I would be superintelligent, and despite what you might think, I'm not.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It is a master at deception. Sam tweeted about how great it is at persuasion. And we see it ourselves, especially now with voices, with maybe kind of flirty, sarcastic female voices. It's gonna be very good at getting people to do things.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Right. I don't think developers know everything about what they are creating. They have lots of great knowledge. We're making progress on explaining parts of a network. We can understand, okay, this node gets excited when this input is presented, this cluster of nodes. But we're nowhere near close to understanding the full picture, and I think it's impossible.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You need to be able to survey an explanation. The size of those models prevents a single human from observing all this information, even if provided by the system. So either we're getting the model as an explanation for what's happening, and that's not comprehensible to us, or we're getting a compressed explanation, lossy compression, where here's the top 10 reasons you got fired.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It's something, but it's not a full picture.