Ege Erdil
Our view as well, actually, again, making contact with the real world and getting a lot of data from experiments and from deployment and so on, it's just very important.
I think there is this underlying latent variable which explains some of this disagreement, both about the policy prescriptions and about the extent to which we should be humble versus ambitious about what we ought to do today, as well as for thinking about the mechanism through which AI has this impact.
And this underlying latent thing is like, what is the power of reason?
Like, how much can we reason about what might happen?
How much can reasoning in general figure things out about the world and about technology?
And, you know, so that is like a kind of core underlying disagreement here.
Yeah, yeah.
I mean, it's unclear whether this trades off against the probability of it being achieved successfully or something.
There might be an alignment tax.
I mean, maybe.
You can also just do the calculation of how much a year's worth of delay costs for current people.
There's this enormous amount of utility that people are able to enjoy, and that gets brought forward by a year or pushed back by a year if you delay things by a year.
And how much is this worth?
Well, you can look at simple models of how concave people's utility functions are, and do some calculations.
And maybe that's worth on the order of tens of trillions of dollars per year in consumption.
That is roughly the amount consumers might be willing to defer in order to bring forward the date of automation by one year.
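The shape of that calculation can be sketched in a few lines. To be clear, every number here is an assumption for illustration only (the speaker gives no specific figures): global consumption of roughly $70 trillion per year, an assumed 10x consumption level after automation, and a CRRA (isoelastic) utility function with curvature parameter eta standing in for "how concave people's utility functions are." We ask what fraction f of one year's current consumption someone would give up to swap one future year at today's level for one year at the post-automation level.

```python
import math

def crra(c, eta):
    """Isoelastic (CRRA) utility; eta controls how concave utility is."""
    return math.log(c) if eta == 1.0 else c ** (1 - eta) / (1 - eta)

def willingness_fraction(ratio, eta, tol=1e-10):
    """Solve the indifference condition u(1-f) + u(ratio) = 2*u(1) for f
    by bisection, with current consumption normalized to 1.
    `ratio` is the assumed post-automation consumption multiple."""
    target = 2 * crra(1.0, eta) - crra(ratio, eta)  # required value of u(1-f)
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # u(1-f) is decreasing in f, so move f up while u(1-f) is above target
        if crra(1.0 - mid, eta) > target:
            lo = mid
        else:
            hi = mid
    return lo

world_consumption = 70e12  # assumed ~$70 trillion/year of global consumption
ratio = 10                 # assumed 10x consumption after automation

for eta in (1.0, 2.0):
    f = willingness_fraction(ratio, eta)
    print(f"eta={eta}: give up {f:.0%} of a year, "
          f"~${f * world_consumption / 1e12:.0f} trillion")
```

Under these toy assumptions, log utility (eta=1) implies giving up about 90% of a year's consumption (~$63T), and eta=2 implies about 47% (~$33T), which is how one lands on "tens of trillions of dollars per year" while still respecting concave utility.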
And again, I think there's this like...
difference in opinion about how broad and diffuse this transformation ends up being, versus how concentrated within a specific lab, where the very idiosyncratic decisions made by that lab will end up having a very large impact.