
Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It's not safe to give technology so powerful to those who may misalign it, even if you are successful at somehow getting it to work in the first place in a friendly manner.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It also sets a very wrong precedent. So we open sourced model one, model two, model three, nothing ever bad happened. So obviously we're gonna do it with model four. It's just gradual improvement.


Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So I have a paper which collects accidents through history of AI, and they always are proportional to capabilities of that system. So if you have tic-tac-toe playing AI, it will fail to properly play and loses the game which it should draw. Trivial. Your spell checker will misspell a word, so on.


Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I stopped collecting those because there are just too many examples of AIs failing at what they are capable of. We haven't had... terrible accidents in the sense of billion people get killed. Absolutely true. But in another paper, I argue that those accidents do not actually prevent people from continuing with research. And actually, they kind of serve like vaccines.


Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

A vaccine makes your body a little bit sick, so you can handle the big disease later much better. It's the same here. People will point out, you know that accident, AI accident we had where 12 people died? Everyone's still here. 12 people is less than smoking kills. It's not a big deal. So we continue. So in a way, it will actually be kind of confirming that it's not that bad.


Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So you bring up example of cars. Yes, cars were slowly developed and integrated. If we had no cars, and somebody came around and said, I invented this thing. It's called cars. It's awesome. It kills like 100,000 Americans every year. Let's deploy it. Would we deploy that?


Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

You need data. You need to know. But if I'm right and it's unpredictable, unexplainable, uncontrollable, you cannot make this decision: we're gaining $10 trillion of wealth, but we're losing, we don't know how many people. You basically have to perform an experiment on 8 billion humans without their consent.


Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

And even if they want to give you consent, they can't because they cannot give informed consent. They don't understand those things.