Ryan Kidd

Speaker
958 total appearances

Podcast Appearances

Future of Life Institute Podcast
Can AI Do Our Alignment Homework? (with Ryan Kidd)

And they typically have AI safety organizations they founded and lead.

Then there are iterators, right?

And it's not just engineering, right?

Iterators are active researchers with strong research taste who are pushing the frontier, but they typically aren't creating novel paradigms based on theoretical models of things.

They're typically advancing empirical AI safety.

And you can even imagine iterators on technical governance agendas as well.

So this is the majority of people working in AI safety today, and also the majority of future hiring needs.

And then there are amplifiers, where I think the closest example is the TPM (technical program manager) archetype.

I'll say this for iterators: prominent examples include Ethan Perez, Neel Nanda, and Dan Hendrycks.

Actually, I think Dan Hendrycks maybe crosses some boundaries there.

But yeah, to distinguish amplifiers: they focus more on amplifying other people.

And typically, you'll find them on large research teams, and they're scaling the number of people that can be effectively managed and contribute to organizations.

So a lot of MATS research managers would fit this category, or TPMs at the various labs.

And interestingly, they're actually quite in demand as well, particularly for labs in the 10-to-30-FTE range.

They're the most in demand archetype because it's very hard to hire great people managers who also have the requisite research experience.

You're trying to hit two bullseyes.

And there are ways around this, of course. Google has a model where your research managers and your people managers, your project managers, are somewhat distinct roles.

And MATS does try to do this with our mentors and our research managers (RMs).

But yeah, I think the need for amplifiers is only going to grow, because as you've said, things like Claude Code and other AI systems are going to erode the minimum technical skill required to contribute.