
Eliezer Yudkowsky

👤 Person
1713 total appearances

[Chart: Appearances Over Time]

Podcast Appearances

Lex Fridman Podcast
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

We don't have any other tests.

We don't have any lines drawn in the sand to say, well, when we get this far, we will start to worry about what's inside there.

So if it were up to me, I would say: OK, this far, no further. Time for the summer of AI, where we have planted our seeds and now we wait and reap the rewards of the technology we've already developed, and we don't do any larger training runs than that. Which, to be clear, I realize requires more than one company agreeing to not do that.

That would take decades.

As for having any idea of what's going on in there: people have been trying for a while.

I mean, there are a whole bunch of different sub-questions here. There's the question of: is there consciousness? Is there qualia? Is this an object of moral concern? Is this a moral patient? Should we be worried about how we're treating it? And then there are questions like: how smart is it, exactly? Can it do X? Can it do Y? And we can check whether it can do X and whether it can do Y.
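
That last point, that capabilities are checkable in a way consciousness is not, is the kind of thing a small evaluation harness makes concrete. Below is a minimal sketch of a "can it do X?" check; the ask_model stub and the two toy tasks are hypothetical stand-ins, not anything from the conversation.

    # A toy "can it do X?" check: run the model on a small task suite
    # and score exact-match accuracy. ask_model is a stub; wire it to a
    # real model API. The two tasks are illustrative only.

    def ask_model(prompt: str) -> str:
        # Stub: always answers empty. Replace with an actual model call.
        return ""

    TASKS = [
        {"prompt": "What is 17 * 24? Answer with just the number.", "answer": "408"},
        {"prompt": "Reverse the string 'abcde'. Answer with just the result.", "answer": "edcba"},
    ]

    def evaluate(tasks) -> float:
        # Fraction of tasks answered with an exact string match.
        correct = sum(1 for t in tasks if ask_model(t["prompt"]).strip() == t["answer"])
        return correct / len(tasks)

    if __name__ == "__main__":
        print(f"accuracy: {evaluate(TASKS):.0%}")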

Unfortunately, we've gone and exposed this model to a vast corpus of text of people discussing consciousness on the Internet, which means that when it talks about being self-aware, we don't know to what extent it is repeating back what it was trained on about self-awareness, or whether there's anything going on in there such that it would start to say similar things spontaneously.

Among the things that one could do, if one were at all serious about trying to figure this out, is train GPT-3 to detect conversations about consciousness, exclude them all from the training datasets, and then retrain something around the rough size of GPT-4 and no larger, with all of the discussion of consciousness and self-awareness and so on missing. Although, you know, that's a hard bar to pass.
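
In outline, the filtering step he describes could look something like the sketch below. The real proposal uses a trained classifier (GPT-3) to flag consciousness talk; the keyword scorer here is only a self-contained stand-in, and every name in it is illustrative.

    import re
    from typing import Iterable, Iterator

    # Stand-in "classifier": a crude keyword match. The actual proposal
    # would train a model (GPT-3) to detect consciousness talk far more
    # reliably than any term list can.
    CONSCIOUSNESS_TERMS = re.compile(
        r"\b(?:conscious(?:ness)?|self[- ]aware(?:ness)?|qualia|sentien(?:t|ce))\b",
        re.IGNORECASE,
    )

    def looks_like_consciousness_talk(doc: str) -> bool:
        # Flag any document containing a term from the list above.
        return bool(CONSCIOUSNESS_TERMS.search(doc))

    def filter_corpus(docs: Iterable[str]) -> Iterator[str]:
        # Yield only the documents the detector does not flag; the kept
        # documents would form the training set for the retrained model.
        for doc in docs:
            if not looks_like_consciousness_talk(doc):
                yield doc

    if __name__ == "__main__":
        corpus = [
            "The mitochondria is the powerhouse of the cell.",
            "A dialogue on machine consciousness and self-awareness.",
        ]
        for kept in filter_corpus(corpus):
            print("KEEP:", kept)

The hard part, as the quote notes, is the bar itself: consciousness talk leaks into fiction, philosophy, and casual speech, so a filter like this would have to be far more thorough than a keyword list to make the resulting test meaningful.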