Ajeya Cotra
So we might think that with AI you'd get more of an early heads-up if you do something more straightforward, like solar panels, but you'd really want to be monitoring across all kinds of different manufacturing: how much difference is any of this making?
Is there anything else we can do?
Do I understand right that last year you put out a request for proposals?
You were at OpenPhil looking to fund people who had ideas for how we would resolve this question?
So as part of this, you've been thinking about one way this could really go wrong: the companies developing cutting-edge AI may begin to see internally how much it's helping them, that perhaps it's speeding them up enormously, and they may decide not to share that information with the rest of the world.
It could afford to, in the sense that it didn't need to make money by selling the product?
They're so far ahead that they can just choose to always basically have their product be somewhat better.
They can just release whatever level of their own internal models would be best to the external world.
But I guess it would be unfortunate if there are people who do know this, but the broader world doesn't get a heads up.
And so we could have known six months or a year earlier in what direction things were going, but that was kept secret.
I mean, I guess maybe for the leading AI company, they'd prefer to keep it secret.
But for the rest of us, I would probably prefer that the government has some idea what's going on.
So you've been thinking about what sort of transparency requirements could be put in place that would require the companies to release information that would give the rest of us clues as to where things are going.
What sort of transparency requirements could those be?
What about just requirements that, inasmuch as they're training future generations of AI models, they have to reveal to at least some people in the government how those models are performing on standard capability evals?
So they can kind of see the line going up, even if they're not releasing it as products for whatever reason.
And if the line starts, you know, if the benchmarks start curving upwards, far above previous expectations, then that could lead them to sound the alarm.
Yeah, but they always have that S-curve shape.
Yeah.