Eliezer Yudkowsky
You take your giant heap of linear algebra, and you stir it, and it works a little bit better, and you stir it this way, and it works a little bit worse, and you throw out that change.
Transformers are the main thing like that.
And various people are now saying like, well, if you throw enough compute, RNNs can do it.
If you throw enough compute, dense networks can do it.
And not quite at GPT-4 scale.
It is possible that like all these little tweaks are things that like save them a factor of three total on computing power.
And you could get the same performance by throwing three times as much compute without all the little tweaks, right?
But the part where it's like running on... So there's a question of like, is there anything in GPT-4 that is like the kind of qualitative shift that transformers were over RNNs?
And if they have anything like that, they should not say it.
If Sam Altman was dropping hints about that, he shouldn't have dropped hints.
Not a specialist in the circuitry.
I certainly pray that Moore's Law runs as slowly as possible, and if it broke down completely tomorrow, I would dance through the streets singing hallelujah as soon as the news was announced.
Only not literally, because, you know, I'm not religious.
So I guess I could, but I would offer to instead say like,
Drop that empathy with me.
I bet you don't believe that.
Why don't you tell me about why you believe that AGI is not going to kill everyone, and then I can try to describe how my theoretical perspective differs from that.
Maybe I was mistaken.
What do you believe?
Just forget the debate and the dualism.