Roman Yampolskiy

Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

If we go back to that idea of simulation and this is entertainment kind of giving meaning to us, the question is how much suffering is reasonable for a video game? So yeah, I don't mind a video game where I get haptic feedback, there is a little bit of shaking, maybe I'm a little scared. I don't want a game where kids are tortured, literally. That seems unethical, at least by our human standards.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So we know there are some humans who, because of a mutation, don't experience physical pain. So at least physical pain can be mutated out, re-engineered out. Suffering in terms of meaning, like you burn the only copy of my book, is a little harder. But even there, you can manipulate your hedonic set point, you can change defaults, you can reset.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

The problem with that is if you start messing with your reward channel, you start wireheading and end up blissing out a little too much.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I think we need that, but I would change the overall range. So right now it's negative infinity to kind of positive infinity, pain-pleasure axis. I would make it like zero to positive infinity. And being unhappy is like, I'm close to zero.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So there are many malevolent actors. We can talk about psychopaths, crazies, hackers, doomsday cults. We know from history they tried killing everyone. They tried on purpose to cause the maximum amount of damage: terrorism. What if someone malevolent wants on purpose to torture all humans as long as possible?

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You solve aging, so now you have functional immortality, and you just try to be as creative as you can.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So there are different malevolent agents. Some are maybe just gaining personal benefit and sacrificing others to that cause. Others, we know for a fact, are trying to kill as many people as possible. And we look at recent school shootings: if they had more capable weapons, they would take out not dozens, but thousands, millions, billions.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

There are mental diseases where people don't have empathy, don't have this human quality of understanding suffering in others.