Dwarkesh Patel
And the second is that, well, they have this powerful property that once they achieve a certain waterline, they can fill every single problem that is available at that waterline, which we simply can't do with humans. We can't make a million copies of you and give each of them a million dollars of inference compute and have you do a hundred years of subjective time research on a hundred different problems at the same time, or a million different problems at the same time.
But once AIs reach Terence Tao level, they could do that.
And once they reach intermediate levels, they could do the intermediate version of that.
So the same reason that we should be bearish now is the reason we should be especially bullish, not even when they achieve superhuman intelligence, but just when they achieve human-level intelligence, because their human-level intelligence is qualitatively wider and more powerful than our human-level intelligence.
To this point about complementarity, programmers have noticed that they're way more productive as a result of these AI tools.
And I don't know if you as a mathematician feel the same way, but it does seem like one big difference between vibe coding and vibe researching is that with software, the whole point of the thing is to have some effect on the world through your work. And if it leads to you better understanding a problem or you coming up with some clean abstraction to embody in your code, that is instrumental to the end goal.
Whereas maybe with research, the reason we care about solving the Millennium Prize Problems is presumably that in the process of solving them, we discover new mathematical objects or new techniques, and those advance our civilization's understanding of mathematics. And so the proof is sort of instrumental to the intermediate work.
I don't know if you agree with that dichotomy, or if it in any way explains the relative uplift we'll see in software versus research.
Interesting.
I feel like a big crux in these conversations about how good AI will be for science is, I think you said this, it's like, oh, they're using existing techniques and modifying them.
And it would be interesting to understand how much progress one can make simply from using existing techniques.
Like, if I looked at the top math journals, how many of the papers are coming up with new techniques versus applying existing techniques to new problems? And what is the overhang here: if you just applied every known technique to every open problem, would that constitute a humongous uplift in our civilization's knowledge, or would it not be that impressive and useful?