Ajeya Cotra
I guess on the other hand, you could have one leader that starts to keep everything secret.
Do you have a particular take on which of these scenarios you think is more likely to come about?
Kind of describes the present day, more or less.
Yeah, what do you think is the chance that the leading company will try to keep the level that they're reaching secret?
You can't go and tell all of your investors, oh, don't worry, we have a super intelligence, because I think then word will get out.
To what extent do you imagine the companies would be enthusiastically bought in on assisting with this plan?
So this strategy is their predominant approach to AI technical safety.
I think even the optimists agree that there are other issues the society is going to have to deal with.
In fact, the leaders of the companies say this all the time: that we're going to need a new social contract.
It's going to upend everything.
It's going to be a big deal.
I imagine that in as much as they're nervous about the effects that the technology is going to have, they'll be very happy if someone came to them with a pre-prepared plan for here's how we're going to deploy all of this compute in order to solve all these other problems.
I guess most people are not looking to become dictator of the world or to take on huge amounts of power.
But I guess the kinds of people who end up leading very risky technology projects are not typical people.
They're somewhat more ambitious than typical. So I suppose we can't entirely rule that out as a possibility.
So a possible challenge would be that even if you have an enormous amount of compute, there might be a limit to how fast you can go, because you require some sequential steps. So there's some step that is just bottlenecked in time.
I guess people talk about things where you have to do an experiment that just actually takes a certain amount of time to play out.