Ajeya Cotra
But more generally, at least with LLMs, they produce one token after another.
And having twice as much compute doesn't necessarily let you complete an answer twice as fast, without limit.
How much is that an issue here?
Especially since we're trying to solve problems in a very short calendar time?
Yeah.
Are there any other examples of similar bottlenecks?
I guess in terms of solving theoretical problems, you can speed things up enormously by having many, many different instances of the same model try to brainstorm different solutions and then have them evaluate one another.
And that allows you to kind of have many different efforts in parallel.
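The fan-out-and-evaluate pattern described here can be sketched in a few lines. This is a minimal illustration, not anyone's actual system: `brainstorm` and `evaluate` are hypothetical stand-ins for calls to model instances, and the scoring is a toy.

```python
import concurrent.futures
import random

def brainstorm(problem: str, seed: int) -> str:
    """Stand-in for one model instance proposing a solution.
    (Hypothetical: a real system would call an LLM here.)"""
    rng = random.Random(seed)
    return f"candidate-{seed} for {problem!r} (quality={rng.random():.2f})"

def evaluate(candidate: str) -> float:
    """Stand-in for a model instance scoring a peer's proposal."""
    return float(candidate.split("quality=")[1].rstrip(")"))

def parallel_solve(problem: str, n_instances: int = 8) -> str:
    # Fan out: many instances attempt the problem independently in parallel...
    with concurrent.futures.ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: brainstorm(problem, s),
                                   range(n_instances)))
    # ...then the candidates are cross-evaluated and the best one is kept.
    return max(candidates, key=evaluate)

best = parallel_solve("prove the lemma", n_instances=8)
```

The point is that the brainstorming step parallelizes cleanly across instances even when each individual answer is generated serially, token by token.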
So what sort of stuff do we need to be doing in advance?
I guess, for example, setting up planning meetings ahead of time for diplomats between the US and China.
We need to do that at a very early stage, in anticipation that eventually we might have a deal that they might want to ratify.
I guess that sounds a bit crazy.
But are there other examples of things that you need to do before this all kicks off?
So what should people be doing if they think that this kind of makes sense or it's something that they'd want to contribute to?
Are there other organizations that should similarly be sort of planning ahead and thinking about how this might look for them?
Or could individuals be thinking about how they could contribute to, I guess, adopting this approach for their own particular projects?
Let's talk a bit about the career journey that you've been on since we last did an interview two and a half years ago.
I guess back then you were doing general AI research and strategy for Open Philanthropy.
This is in 2023.