Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

And even if they want to give you consent, they can't because they cannot give informed consent. They don't understand those things.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

We're literally doing it. With the previous model, we learned what it was capable of only after we finished training it. Let's say we stop the GPT-4 training run around human capability, hypothetically. We start training GPT-5 (and I have no knowledge of insider training runs or anything), and we start at that point of about human level, and we train it for the next nine months.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Maybe two months in, it becomes superintelligent. We continue training it. By the time we start testing it, it is already a dangerous system. How dangerous? I have no idea. But neither do the people training it.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

If we had the capability, ahead of the training run, to register exactly what capabilities the next model will have at the end of that run, and we accurately guessed all of them, I would say: you're right, we can definitely go ahead with this run. We don't have that capability.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

We're not talking just about capabilities on specific tasks. We're talking about the general capability to learn. Maybe, like a child, it is still not extremely capable at the time of testing and deployment, but as it is exposed to more data and the real world, it can be trained to become much more dangerous and capable.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So I think at some point it becomes capable of getting out of control. For game-theoretic reasons, it may decide not to do anything right away and for a long time just collect more resources and accumulate strategic advantage. Right away, it may still be a kind of young, weak superintelligence. Give it a decade, and it's in charge of a lot more resources. It has had time to make backups.
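
The game-theoretic point in this quote can be made concrete with a toy expected-value calculation. This is a minimal sketch under invented assumptions (the starting odds, the growth rate, and the detection risk are illustrative numbers, not anything stated in the episode): if the probability of a successful takeover compounds with accumulated resources faster than the risk of being caught while waiting, an expected-value maximizer prefers to wait.

    # Toy "wait vs. strike" model for the quote above. All numbers are
    # illustrative assumptions, not claims from the episode.

    def p_success(years: float) -> float:
        # Assumed: takeover odds start at 5% and compound 35% per year
        # as the system accumulates resources and backups.
        return min(1.0, 0.05 * 1.35 ** years)

    def p_survive(years: float) -> float:
        # Assumed: 2% chance per year of being detected and shut down
        # while biding its time.
        return 0.98 ** years

    # Expected value of striking after waiting t years: payoff 1 if the
    # strike succeeds and the agent was not caught first, else 0.
    for t in (0, 1, 5, 10):
        print(f"strike after {t:>2} years: EV = {p_survive(t) * p_success(t):.2f}")

Under these assumed numbers, striking immediately has expected value 0.05 while waiting a decade has roughly 0.82, so patience dominates; that is the logic behind the next quote.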

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So it's not obvious to me that it will strike as soon as it can.
