
Roman Yampolskiy

👤 Person
771 total appearances

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

We cannot rule out bugs and capabilities just because we haven't found them yet.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Again, we can only ask and test for things we know about. If there are unknown unknowns, we cannot do it. I'm thinking of human savants, right? If you talk to a person like that, you may not even realize they can multiply 20-digit numbers in their head. You have to know to ask.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

So two things. One, we're switching from tools to agents. Tools don't have negative or positive impact. People using tools do. So guns don't kill. People with guns do. Agents can make their own decisions. They can be positive or negative. A pit bull can decide to harm you as an agent. The fears are the same. The only difference is now we have this technology.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Then they were afraid of humanoid robots 100 years ago. They had none. Today, every major company in the world is investing billions to create them. Not every, but you understand what I'm saying? It's very different.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

They are saying they are building superintelligence and have a superalignment team. You don't think they are trying to create a system smart enough to be an independent agent under that definition?

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Those systems are well beyond narrow AI. If you had to list all the capabilities of GPT-4, you would spend a lot of time writing that list.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Not yet. But do you think any of those companies are holding back because they think it may not be safe, or are they developing the most capable system they can, given the resources, and hoping they can control and monetize it?

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I mean, I can't speak for all of them. I think some of them are very ambitious. They fundraise in trillions. They talk about controlling the light cone of the universe. I would guess that they might.