David Duvenaud
Yeah, well, I will say that.
So I guess one thing you always have to be careful of is you want to be doing things that aren't otherwise incentivized to be done, right?
And so, as I said, there are already incentives to forecast prices, certainly in the short term. The thing that's going to be very valuable is actually, as you said, action-conditional or policy-conditional forecasting: if we take this policy, or if we coordinate, then this is what will happen. And I think that sort of forecast is going to be an undersupplied public good. So that's why I'm not so worried about just copying the work of some other corporation.
Yeah, yeah.
So I guess I'll say I spend a bunch of time at Anthropic working on the more acute, loss-of-control, standard AI safety kind of stuff.
And I guess I am still very worried about this sort of thing.
And as I said, to me, the modal future is that we get some way along gradual disempowerment and then we screw up alignment completely.
Or there's just some much faster takeover.
So I guess I'll say that in absolute terms, normal loss-of-control AI safety research is still massively underinvested in.
In relative terms, I think this kind of more speculative future, how do we align civilization question is even more underinvested in.
With the major caveat that it's just way harder to make progress on.
And in a sense, it's less neglected.
One of the big things I say is that we need to upgrade our sensemaking, governance, forecasting, and coordination mechanisms.
All of these things need to be much better and more reliable before the writing is on the wall that there's no alpha in humans, that no one should listen to humans, and that we lose de facto power.
But that's not a very controversial thing, right?
Like, no one's against better institutions, basically.
And so they're not neglected in that sense.
What I do think is neglected, again, is...