Ege Erdil
from fairly small scaling and say brain size or something.
And so then you might think, well, if we scale training runs beyond the amount of training compute that the human brain uses, which is maybe on the order of 1e24 FLOP, and which we've recently surpassed, then maybe surpassing it just a little bit more enables us to unlock very sophisticated intelligence, in the same way that humans have much more sophisticated intelligence compared to non-human primates.
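As a rough illustration of where a figure on the order of 1e24 FLOP can come from, here is a back-of-the-envelope calculation. All the input numbers are hypothetical round values chosen for illustration, not figures stated by the speakers:

```python
# Hypothetical back-of-the-envelope estimate of "human brain training compute".
# All inputs are illustrative round numbers, not claims from the conversation.

synaptic_ops_per_second = 1e15  # assumed effective ops/sec of the brain
seconds_per_year = 3.15e7
developmental_years = 30        # assumed length of the "training" period

total_flop = synaptic_ops_per_second * seconds_per_year * developmental_years
print(f"{total_flop:.1e}")  # on the order of 1e24 FLOP
```

Different assumptions about ops per second or the developmental window shift the answer by orders of magnitude, which is why such estimates are usually quoted only to the nearest power of ten.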
And I think part of our disagreement is that intelligence is important, but just having a lot more intelligence and better reasoning isn't something that will accelerate technological change and economic growth very substantially.
It isn't the case that the world today is just totally bottlenecked by not having enough good reasoning.
That's not really what's bottlenecking the world's ability to grow much more substantially.
I think we might have some disagreement about this particular argument, but what's also really important is that we have a different view as to how this acceleration happens: it's not just having a bunch of really good reasoners that gives you technology that then accelerates things very drastically, because that alone is not sufficient.
You need kind of complementary innovations in other industries.
You need the economy as a whole growing and supporting the development of these various technologies; you need the various supply chains to be upgraded; you might need demand for the various products that are being built.
And so we have this view where this very broad upgrading of your technology and your economy is what gives us the acceleration, rather than just having very good reasoners and very good reasoning tokens.
It's like 100 times for the same capability or something.
What is wrong with this logic?
So I think the logic seems fine.
I think this is a decent way to think about this problem.
But I think it's useful to draw on the work economists have done studying the returns to R&D: what happens to the rate of innovation if you 10x your inputs, say the number of researchers.
And there they point out two opposing effects. As more innovation happens, you get to stand on the shoulders of giants: you benefit from past discoveries, and that makes you as a scientist more productive. But there are also diminishing returns: the low-hanging fruit has been picked, and it becomes harder to make progress.