Roman Yampolskiy

👤 Person
771 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

I cannot make a case that he's right. He's wrong in so many ways, it's difficult for me to remember all of them. He's a Facebook buddy, so I have a lot of fun having those little debates with him. So I'm trying to remember the arguments. So one, he says we are not... gifted this intelligence from aliens. We are designing it, we are making decisions about it. That's not true.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It was true when we had expert systems, symbolic AI, decision trees. Today, you set up parameters for a model and you water this plant. You give it data, you give it compute, and it grows. And after it's finished growing into this alien plant, you start testing it to find out what capabilities it has. And it takes years to figure out, even for existing models.
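
A minimal sketch of the contrast drawn in this quote, assuming nothing beyond the quote itself: a hand-coded rule system whose capabilities are fully known in advance, versus a toy model that is "grown" from data and compute and whose behaviour on unanticipated inputs is only discovered by probing it afterward. The one-feature logistic regression and every name and number in it are hypothetical stand-ins for illustration, not anything from the episode.

    # Illustrative only: a toy stand-in for the "grow it, then probe it" workflow.
    import math
    import random

    # Old paradigm: behaviour is hand-coded, so capabilities are known up front.
    def symbolic_router(text: str) -> str:
        if "refund" in text:
            return "billing"
        if "password" in text:
            return "account"
        return "no rule available"

    # New paradigm: choose hyperparameters, pour in data and compute, and let the
    # parameters grow. A one-feature logistic regression stands in for a model.
    def train(data, steps=5000, lr=0.1):
        random.seed(0)
        w, b = 0.0, 0.0
        for _ in range(steps):
            x, y = random.choice(data)
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x      # gradient step on cross-entropy loss
            b -= lr * (p - y)
        return w, b

    # Capability discovery happens after training: query the trained artifact on
    # inputs nobody anticipated and see what it actually does.
    def probe(model, xs):
        w, b = model
        return {x: round(1.0 / (1.0 + math.exp(-(w * x + b))), 3) for x in xs}

    if __name__ == "__main__":
        toy_data = [(x / 10.0, 1 if x > 50 else 0) for x in range(100)]
        model = train(toy_data)
        print(probe(model, [-3.0, 5.0, 42.0]))  # off-distribution behaviour only shows up under testing

The point of the sketch is only the shape of the workflow: the rule system's behaviour can be read off its source, while the trained model's behaviour has to be found empirically, after the fact.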

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

If it's trained for six months, it will take you two, three years to figure out basic capabilities of that system. We still discover new capabilities in systems which are already out there. So that's not the case.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Absolutely. That's what makes it so successful. Back then we had to painstakingly hard-code everything, and we didn't have much progress. Now you just spend more money and more compute and it's a lot more capable.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Let's say there is a ceiling. It's not guaranteed to be at a level that's competitive with us; it may be greatly superior to ours.

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Historically, he's completely right. Open source software is wonderful. It's tested by the community, it's debugged. But we're switching from tools to agents. Now you're giving open source weapons to psychopaths. Do we want to open source nuclear weapons? Biological weapons?

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

It's not safe to give technology so powerful to those who may misalign it, even if you are successful at somehow getting it to work in the first place in a friendly manner.
