Ryan Kidd
There's another perspective which says that it's that very gradual release that ensures continual VC reinvestment, driving the engine that actually makes the progress.
Whereas in the other world, actually, you just wouldn't build AGI.
Because perhaps in that world, no one can build it without several hundred billion dollars, maybe a trillion or something.
I don't know.
I can't say.
I certainly think that we're now in the world where it does seem better to have gradual release of models than to have it all kind of hit us at once.
We recently changed up our track descriptions.
So we previously had the standard ones: oversight, control, evals, governance, interpretability, agency (which is sort of a catch-all term for cooperative AI and agent foundations), and AI sentience, digital minds research.
And, of course, security.
But we've recently changed that up because we wanted to reflect less the theory of change underpinning those kinds of things and more the type of process and type of individual that works on this, right?
So we now have the tracks on our website.
Empirical research: this is control, scalable oversight, evals, red teaming, robustness.
A lot of this is very hands-on, coding-heavy, iteration-focused research, right?
We have policy and strategy, which is different, again, right?
That's focused less on arXiv publications and potentially more on modeling, more on adapting technical research into things that are actually actionable by policymakers.
Theory is another track.
So this is a lot of mathematics.
It's foundational research on the concepts of agency and how agents interact.
It does include some of that agent-based modeling for cooperative AI.