Ryan Kidd
I made an effort.
I tried to make my product not do the heinous thing that the very best model developer is doing.
You know, then everyone has no excuse and they have to do that, right? And governments can compel them to, and so on. So I think, you know, making your model's performance competitive enough that people want to pay the alignment tax, so to speak, seems like a viable strategy from that perspective. Now, of course, none of this is trying to justify the current race of frontier models, which seems very reckless, let's be clear. I think at the current pace of development we're going to be in a lot of trouble. But this is one of those
collective action problems, right?
These companies have to coordinate to slow down.
And there are international things at stake here as well, because now you do have a US model developer versus China model developer kind of race, now that they're in the running.
So it's very complicated, and when you have these collective action problems, I think the main way you solve them is through governance.
And sure, the lab leads could probably be even more collaborative.
And definitely some of them are not advocating as strongly as they should be for slowing down, for this kind of collective sharing in the alignment benefits, and for not pushing the frontier dangerously.
But I do think this is ultimately a job for governments.
Maybe.
I can't really speculate on the psychologies of the leaders of these labs, let alone their shareholders, or not shareholders so much as, I guess, their venture capitalist investors and everyone else they've made promises to: their clients, their employees, and so on.
I can't really speculate about that.
I will say that, given that the value of AGI is estimated at somewhere between $1 and $17 quadrillion, that seems like a lot of money.
It's a pretty big mark on history.
I'm not sure if it even matters whether they're trying to make a big mark on history or not.
make money, you know. We can adopt Dennett's intentional stance about the AI companies, right? Be like, okay, so what does it look like they're doing? If we were to conceptualize them as a coherent agent trying to do a thing, what is the thing they would be trying to do? And to me, it seems a lot like they're trying to make a bunch of money. Though making a mark in history could also be valid.
I don't want to use any specific AI lab as an example, but I think in the world where an AI lab is trying very specifically to make their mark in history and not trying to make a bunch of money, I'd expect it might look identical, actually, to this world right now.
He's a billionaire.