Ryan Kidd
We don't want to play favorites politically.
Like that's not in anyone's interest.
Right.
I think if people are doing that and they're trying to be the thing we are, they're doing a bad job.
That said, right.
Currently, I believe David Krueger is going to be a mentor in the current program.
And some of the research he's going to be discussing has to do with, I guess, what sort of messaging and what sort of standards are actionable.
But of course, I wouldn't say this is true advocacy.
This is more MATS supporting independent research, working with David Krueger, who has his new org, Evitable.
Not inevitable, Evitable, which is focused on some of these advocacy questions.
So I think MATS has to be pretty careful, you know, in terms of, obviously, our 501(c)(3) spending requirements for advocacy. We haven't spent anything on advocacy, for what it's worth.
And also ensuring this political neutrality, so that our fellows, our mentors, and all of our strategic partners can feel assured that we're solutions-oriented rather than pushing for a particular political outcome, right?
And I think that AI safety being a political football is just a bad idea.
And I applaud advocacy orgs like Encode and plenty of others, perhaps Case, et cetera, for their efforts, but that's not MATS's role as an organization.
Yeah, gotcha.
So, I mean, at MATS, like, I've talked about a mentor selection committee.
Well, we are fundamentally, I think, this massive information processing interface, right?