
Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time (chart)

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

When people say, oh, it's formally verified software, a mathematical proof, they assume something close to a 100% chance of it being free of all problems. But if you actually look at the research, software is full of bugs. Old mathematical theorems, which had been accepted as proven for hundreds of years, have been discovered to contain bugs; we build new proofs on top of them, and now we have to redo all that.
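
A toy sketch of why "formally verified" can still fail: a proof only guarantees what the specification states, and bugs hide in the gap between the spec and the intent. All names below are illustrative, not from the episode.

    # Toy illustration of a spec gap, not real verification tooling.
    # The "spec" only demands that the output is sorted; it forgets to
    # demand that the output is a permutation of the input.

    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def spec_holds(inp, out):
        # Incomplete specification: sortedness only, permutation forgotten.
        return is_sorted(out)

    def bogus_sort(xs):
        # Satisfies the incomplete spec for every input, yet is clearly
        # wrong: it throws the data away.
        return []

    assert spec_holds([3, 1, 2], bogus_sort([3, 1, 2]))  # "verified", still broken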

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So, verifiers are not perfect. Usually they are either a single human or communities of humans, and it's basically kind of like a democratic vote: a community of mathematicians agrees that this proof is correct, mostly correct. Even today, we're starting to see mathematical proofs so complex, so large, that the mathematical community is unable to make a decision.
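
The "democratic vote" framing can be made quantitative. A minimal sketch, assuming each reviewer independently judges a proof correctly with some fixed probability (an idealization, not a claim from the episode):

    from math import comb

    def majority_correct(p: float, n: int) -> float:
        """Probability that a strict majority of n independent verifiers,
        each correct with probability p, reaches the right verdict."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # With better-than-chance reviewers, larger communities help:
    print(majority_correct(0.9, 1))   # 0.9
    print(majority_correct(0.9, 11))  # ~0.9997
    # But on proofs so hard that reviewers err systematically (p < 0.5),
    # adding voters makes the collective verdict worse, not better:
    print(majority_correct(0.4, 11))  # ~0.25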

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It looks interesting, it looks promising, but they don't know. It will take years for top scholars to study it, to figure it out. So of course we can use AI to help us with this process, but AI is itself a piece of software which needs to be verified.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Right. And for AI, we would like to have that level of confidence. For very important mission-critical software, controlling satellites, nuclear power plants, for small deterministic programs, we can do this. We can check that the code maps to the design, that whatever the software engineers intended was correctly implemented.
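
For small deterministic programs this really is tractable. A minimal sketch, assuming a bounded input domain (the function and bounds are illustrative): exhaustively compare an implementation against its design-level specification.

    # Exhaustive verification of a tiny deterministic program over a
    # bounded domain: feasible here, hopeless for a system whose
    # behavior is not a fixed function of its inputs.

    def clamp(x: int, lo: int, hi: int) -> int:
        # Implementation under test.
        return max(lo, min(hi, x))

    def clamp_spec(x: int, lo: int, hi: int) -> bool:
        # Design intent: the result lies in [lo, hi], and equals x
        # whenever x was already in range.
        y = clamp(x, lo, hi)
        in_range = lo <= y <= hi
        identity = (y == x) if lo <= x <= hi else True
        return in_range and identity

    # Check every input in a small bounded domain.
    domain = range(-64, 65)
    assert all(clamp_spec(x, lo, hi)
               for x in domain for lo in domain for hi in domain if lo <= hi)
    print("clamp verified on the bounded domain")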

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

But we don't know how to do this for software which keeps learning, self-modifying, rewriting its own code. We don't know how to prove things about the physical world, or about the states of humans in the physical world. There are papers coming out on this now, and I have this beautiful one, "Towards Guaranteed Safe AI". Very cool paper, some of the best authors I've ever seen.
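
A toy sketch (purely illustrative, not from the paper) of why self-modification breaks this kind of checking: a property established before deployment stops holding once the program rewrites its own behavior.

    # A check performed at time t says nothing about a program that
    # can rewrite itself at time t+1.

    class Agent:
        def act(self, x: int) -> int:
            return abs(x)  # behavior at verification time

    def verified_nonnegative(agent: Agent) -> bool:
        # "Verification": exhaustive pre-deployment check over a bounded domain.
        return all(agent.act(x) >= 0 for x in range(-1000, 1001))

    agent = Agent()
    assert verified_nonnegative(agent)      # passes today

    # Self-modification after deployment invalidates yesterday's guarantee.
    agent.act = lambda x: x                 # the agent rewrites its behavior
    assert not verified_nonnegative(agent)  # the old proof no longer applies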

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I think there are multiple Turing Award winners among them. You can have this one. Another just came out, kind of similar, "Managing Extreme AI Risks". All of them expect this level of proof, but... I would say that we can get more confidence the more resources we put into it. But at the end of the day, we're still only as reliable as the verifiers. And you have this infinite regress of verifiers.
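
The "only as reliable as the verifiers" point compounds with depth. A back-of-the-envelope sketch (the probabilities are assumptions for illustration): if each layer in a stack of verifiers is independently correct with probability p, trust in the whole stack decays geometrically.

    def chain_reliability(p: float, depth: int) -> float:
        """Probability that an entire stack of verifiers is correct,
        assuming each layer is independently correct with probability p."""
        return p ** depth

    # Even very reliable layers erode quickly as the stack deepens:
    for depth in (1, 10, 100, 1000):
        print(depth, round(chain_reliability(0.999, depth), 4))
    # 1 0.999 | 10 0.99 | 100 0.9048 | 1000 0.3677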

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

The software used to verify a program is itself a program. If aliens gave us a well-aligned superintelligence, we could use it to create our own safe AI. But it's a catch-22: you need a system already proven to be safe in order to verify a new system of equal or greater complexity.
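
The catch-22 can be phrased as a regress in code. A toy sketch with hypothetical names: a program counts as verified only if checked by something already trusted, but every verifier is itself a program, so trust has to be assumed axiomatically somewhere.

    # Toy illustration of the regress; nothing here is a real tool.

    def is_trustworthy(program: dict, trusted_base: set) -> bool:
        """Accept a program only if it was verified by something we
        already trust; each verifier faces the same test."""
        verifier = program.get("verified_by")
        if verifier is None:
            return False                      # never verified at all
        if verifier["name"] in trusted_base:
            return True                       # trust assumed, not proven
        return is_trustworthy(verifier, trusted_base)

    checker = {"name": "proof_checker", "verified_by": None}
    new_ai  = {"name": "new_ai",        "verified_by": checker}

    # With nothing assumed trusted, nothing is ever accepted:
    print(is_trustworthy(new_ai, trusted_base=set()))              # False
    # The regress stops only by axiomatically trusting some verifier:
    print(is_trustworthy(new_ai, trusted_base={"proof_checker"}))  # True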
