Rob Wiblin
And that is sort of the foundation of everything else.
But then there are also other things that are not really about AIs at all that are just about broad societal defenses.
So if we think that the advent of extremely powerful AI will create a flood of newly discovered cyber vulnerabilities in a bunch of critical systems, like weapons systems and the power grid and so on.
Can we preemptively use those same AIs that are good at finding those vulnerabilities to find and patch them before bad actors can use the AIs to find them?
Another thing is biodefense.
So you had my colleague Andrew on your podcast recently, who talked about his ambitious plan to rapidly scale up detection of novel pathogens, rapidly scale up medical countermeasures when they're detected, and rapidly scale up the manufacturing of PPE, clean rooms, and things like that.
If we have AI systems that are good at that kind of research problem, and maybe by that point we also have robots, so that a lot of the manufacturing itself can be automated and go a lot faster than if humans had to do it, that would be a big boon to biodefense.
And then there are somewhat more speculative things that you can think of as a kind of defense, maybe a psychological defense.
But there's stuff around can we use AIs to make our collective decision making a lot smarter, a lot wiser, a lot better?
Can we make it so that we're better at finding truth together?
Can we make it so that we're better at coming to compromise policy solutions that leave lots of people happy?
Or even that, but also more mundanely: over the last 10 or 15 years, social media has led to a degradation of political discourse.
Could AI tools help you just kind of find the policy from among the vast space of possible policies that a large number of people actually like and can credibly put trust in and so on?
Yeah, I agree.
I think all of those problems that Tom and Will highlighted seem like real problems to me.
I think maybe my approach would be to, from our current vantage point, lump a lot of that under AI for helping us think better and helping us find solutions that we're mutually happy with.
So it's like AI for coordination, compromise, negotiation, truth-seeking, that cluster of things.
Because I think of something like the question of space governance: how do we divide up the resources of space if there are existing factions with an existing distribution of power?