Ryan Kidd

👤 Speaker
958 total appearances

Podcast Appearances

Future of Life Institute Podcast
Can AI Do Our Alignment Homework? (with Ryan Kidd)

I made an effort.

I tried to make my product not do the heinous thing that the very best model developer is doing.

you know, then everyone has no excuse; they have to do that, right? And governments can compel them to, and so on. So I think making your model performance competitive enough that people want to pay the alignment tax, so to speak, seems like a viable strategy from that perspective. Now, of course, none of this is trying to justify the current race of frontier models, which seems very reckless, let's be clear. I think at the current pace of development we're going to be in a lot of trouble. But this is one of those

collective action problems, right?

These companies have to coordinate to slow down.

And there are international things at stake here as well, because you do have a US model developer versus China model developer kind of race, now that they're in the running.

So it's very complicated, and when you have these collective action problems, I think the main way you solve them is through governance.

And sure, the lab leads could probably be even more collaborative.

And definitely some of them are not advocating as strongly as they should be for slowing down, for this kind of collective sharing in the alignment benefits, and for not pushing the frontier dangerously.

But I do think this is ultimately a job for governments.

Maybe.

I can't really speculate on the psychologies of the leaders of these labs, let alone their shareholders, or not shareholders so much as

I guess venture capitalist investors and everyone else; they've made promises to their clients, their employees, and so on.

I can't really speculate about that.

I will say that, given that the value of AGI is estimated at somewhere between $1 quadrillion and $17 quadrillion, that seems like a lot of money.

It's a pretty big mark on history.

I'm not sure if it even matters whether they're trying to make a big mark on history or not.

make money, you know. We can adopt Dennett's intentional stance, right, about the AI companies: okay, so what does it look like they're doing? If we were to conceptualize them as a coherent agent trying to do a thing, what is the thing they would be trying to do? And to me it seems a lot like they're trying to make a bunch of money. But making a mark in history could also be valid, though I would say, I guess, in the world where I expect

I don't want to use any specific AI lab as an example, but I think in the world where an AI lab is trying very specifically to make their mark in history and not trying to make a bunch of money, I'd expect it might look identical, actually, to this world right now.

He's a billionaire.