Eliezer Yudkowsky
and more sensible people saying, if aliens were landing in 30 years, you would be preparing right now.
And the world looking on at this and sort of nodding along, being like, ah, yes, the people saying that it's definitely a long way off because progress is really slow, that sounds sensible to us.
RLHF, thumbs up.
Produce more outputs like that one.
I agree with this output.
This output is persuasive.
even in the field of effective altruism.
You quite recently had people publishing papers about like, ah, yes, well, you know, to get something at human-level intelligence, it needs to have this many parameters, and you need to do this much training of it with this many tokens according to the scaling laws, and at the rate that Moore's law is going, at the rate that software is going, it'll be in 2050.
And me going like,
What?
You don't know any of that stuff.
This is like this one weird model with all kinds of assumptions baked in; you have done a calculation that does not obviously bear on reality anyway.
And this is a simple thing to say, but you can also produce a whole long paper impressively arguing out all the details of how you got the number of parameters and how you're doing this impressive, huge, wrong calculation.
And I think most of the effective altruists who were paying attention to this issue (the larger world paying no attention to it at all) were, you know, just nodding along with the giant impressive paper, because, you know, you press thumbs up for the giant impressive paper and thumbs down for the person going, I don't think that this paper bears any relation to reality.
And I do think that we are now seeing, with GPT-4 and the sparks of AGI (possibly, depending on how you define that, even), that
EAs would now consider themselves less convinced by the very long paper arguing from biology that AGI is 30 years off.
But this is what people pressed thumbs up on.