Rob Wiblin
Cars created the opportunity for there to be carjackings and drive-by shootings; they empowered bad actors in various ways. But of course, if the police and law enforcement have cars as well, that is a balance.
When you imagine a future with some crazy new advanced technology and you imagine all the problems it creates, it can be hard to imagine, with the same level of detail and fidelity, all the responses to those problems that are also enabled by that technology.
And so you could imagine someone worrying about the rise of fast vehicles, and neglecting to think about how all the ways fast vehicles cause bad things could be kept in check by people using vehicles for law enforcement and similar.
And similarly with computers: you can hack things with computers, but computers also enable a lot of automated monitoring for that kind of hack, and automated vulnerability discovery.
Different kinds of law enforcement.
Yeah, different kinds of law enforcement.
You couldn't imagine a police force not using computers.
I do think the basic principle is sound: if you're worried about problems created by a technology, one of the first things on your mind should be how you can use that same technology to solve those problems.
But I think that this is an especially narrow window to get this right.
And you're not imagining cars creating a broad-based, rapid acceleration of all sorts of new technologies, with potentially just a 12-month or two-year or six-year window before everything goes totally crazy.
So I do think it's important not to blow through that window, to monitor as we're approaching it, and to monitor how long we have.
But yeah, I think I'm fundamentally fairly optimistic about trying to use early transformative AI systems, early systems that automate a lot of things, to automate the process of controlling and aligning and managing risks from the next generation of systems, which then automate the process of managing risks from the generation after, and so on.
So I don't think that I agree with this.
So I do think misalignment, the prospect that these early transformative AIs are misaligned, is a huge obstacle to this plan, one that needs to be shored up and specifically addressed.
And I don't think that...