Roman Yampolskiy

👤 Person
771 total appearances

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

There are mental diseases where people don't have empathy, don't have this human quality of understanding suffering in others.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Again, I would like to assume that normal people never think like that. It's always some sort of psychopath, but yeah.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

They can certainly be more creative. They can understand human biology better, understand our molecular structure, genome. Again, a lot of times torture ends and the individual dies. That limit can be removed as well.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Right. We can definitely keep up for a while. I'm saying you cannot do it indefinitely. At some point, the cognitive gap is too big. The surface you have to defend is infinite. But attackers only need to find one exploit.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

If we create general superintelligences, I don't see a good outcome long-term for humanity. The only way to win this game is not to play it.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I don't know for sure. The prediction markets right now are saying 2026 for AGI. I heard the same thing from the CEOs of Anthropic and DeepMind, so maybe we are two years away, which seems very soon given we don't have a working safety mechanism in place, or even a prototype for one. And there are people trying to accelerate those timelines because they feel we're not getting there quickly enough.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So the definitions we used to have, and people are modifying them a little bit lately: artificial general intelligence was a system capable of performing in any domain a human could perform. So you're kind of creating this average artificial person. It can do cognitive labor, physical labor, wherever you could get another human to do it.
