
Eliezer Yudkowsky

👤 Speaker
1716 total appearances


Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Like how much more frequent did your genes become in the next generation?

In fact, that just is natural selection.

Natural selection doesn't optimize for that; rather, the process of genes becoming more frequent just is that.

You can nonetheless imagine that there is this hill climbing process, not like gradient descent, because gradient descent uses calculus.
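As an illustrative sketch (not part of the conversation): gradient descent relies on calculus, using the derivative of the objective to pick both the direction and size of each step. A toy example with a made-up quadratic objective:

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    """Follow the negative gradient: the derivative supplies both
    the direction and the magnitude of each update."""
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

# Minimize (x - 3)^2; its derivative is 2 * (x - 3).
x = gradient_descent(lambda x: 2 * (x - 3.0), x0=0.0)
```

With this learning rate the error shrinks geometrically, so `x` converges to the minimum at 3.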

This is just using, like: where are you right now? But it's still hill climbing in both cases, making something better and better over time, in steps.
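The contrast can be sketched as follows (illustrative only; the fitness function, step size, and iteration count are made up): hill climbing needs no derivatives, just the current position and a comparison of candidate values.

```python
import random

def hill_climb(fitness, x0, step=0.1, iters=1000, seed=0):
    """Greedy hill climbing: propose a small random move and keep it
    only if fitness improves. No calculus involved, just
    'where are you?' plus a local comparison."""
    rng = random.Random(seed)
    x, best = x0, fitness(x0)
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        f = fitness(candidate)
        if f > best:  # keep only strict improvements
            x, best = candidate, f
    return x, best

# Toy fitness landscape with a single peak at x = 3.
peak = lambda x: -(x - 3.0) ** 2
x, f = hill_climb(peak, x0=0.0)
```

Unlike the gradient-based version, this works even when the objective is not differentiable, at the cost of blind, incremental search.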

And natural selection was optimizing exclusively for this very simple, pure criterion of inclusive genetic fitness in a very complicated environment.

We were doing a very wide range of things, and solving that wide range of problems led to having more kids.

And this got you humans, which had no internal notion of inclusive genetic fitness until thousands of years later, when they were actually figuring out what had even happened, and no explicit desire to increase inclusive genetic fitness.

So from this important case study, we may infer the important fact that if you do a whole bunch of hill climbing on a very simple loss function, then at the point where the system's capabilities start to generalize very widely, when it is in an intuitive sense becoming very capable and generalizing far outside the training distribution, there is no general law saying that the system even internally represents, let alone tries to optimize, the very simple loss function you are training it on.

I've talked here about the power of intelligence without really getting very far into why it is that, supposing you screw up with AGI and it ends up wanting a bunch of random stuff, it behaves the way it does.

Why does it try to kill you?

Why doesn't it try to trade with you?

Why doesn't it give you just the tiny little fraction of the solar system that it would take to keep everyone alive?

The first thing I would point out is that the vast majority of randomly specified utility functions do not have optima with humans in them.

And then the next question is like, well, if you try to optimize something and you lose control of it, where in that space do you land?