
Rob Wiblin

👤 Speaker
1910 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

You know, cars created the opportunity for there to be carjackings and drive-by shootings; it empowered bad actors in various ways.

But of course, if the police and law enforcement have cars as well, that is a balance.

When you imagine a future with some crazy new advanced technology and you imagine all the problems it creates, it can be hard to imagine, with the same level of detail and fidelity, all the responses to those problems that are also enabled by that technology.

And so you could imagine someone worrying about the rise of fast vehicles and neglecting to think about how all the ways they cause bad things could be kept in check by people using vehicles for law enforcement and similar.

And similarly with computers: you can hack things with computers, but computers also enable you to do a lot of automated monitoring for that kind of hacking, and automated vulnerability discovery.

Different kinds of law enforcement.

Yeah, different kinds of law enforcement.

You couldn't imagine a police force not using computers.

I do think the basic principle is sound: if you're worried about problems created by a new technology, one of the first things on your mind should be how you can use that same technology to solve those problems.

But I think that this is an especially narrow window to get this right.

And you're not imagining cars creating a broad-based, rapid acceleration of all sorts of new technologies, with potentially just a 12-month or two-year or six-year window before everything goes totally crazy.

So I do think that it's important not to blow through that window, to monitor as we're approaching it, and to monitor how long we have.

But yeah, I think I'm fundamentally fairly optimistic about trying to use early transformative AI systems, early systems that automate a lot of things, to automate the process of controlling, aligning, and managing risks from the next generation of systems, which then automate the process of managing risks from the generation after, and so on.

So I don't think that I agree with this.

So I do think misalignment, the prospect that these early transformative AIs are misaligned, is a huge obstacle to this plan that needs to be shored up, handled, and specifically addressed.

And I don't think that...