Daniel Kokotajlo
"We can totally scale up the system 100x."
And every single layer of this has been much harder than the strongest optimist expected.
It seems like there have been significant difficulties in scaling up pre-training, at least judging from rumors about failed or underwhelming training runs at the labs.
It seems like building up these RL systems... and I'm speaking from a total outside view here.
I know nothing about the actual engineering involved here.
But just from an outside view, it seems like building up the o1-style RL clearly took... at least two years after GPT-4 was released.
And these things are also... their economic impact, in the kinds of areas where benchmarks would lead you to expect them to be especially capable, isn't overwhelming.
Like, the call center workers haven't been fired yet.
So why not just say, like, look:
At higher scale, it will probably get even more difficult.
Wait a second.
I agree that Robin Hanson in particular has been too pessimistic.
I had this interesting experience yesterday.
We were having lunch with this senior AI researcher, who probably makes on the order of, like, millions a month or something.
And we were asking him, how much are the AIs helping you?
And he said: in domains which I understand well, where it's closer to autocomplete but more intense, it's maybe saving me four to eight hours a week.
But then he said: in domains which I'm less familiar with, if I need to go wrangle up some hardware library or make some modification to the kernel or whatever, where I just know less, that saves me on the order of 24 hours a week. And that's with current models.