Rob Wiblin

👤 Speaker
1910 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

Like, I really want to know that.

But then, of course, it's clear that reporting that is very embarrassing to companies.

So one thing that might help here is that there are a number of companies now, so perhaps they could report their individual data to some sort of third-party aggregator that then reports out an anonymized overall industry aggregate score.

But I don't think that solves all the issues because there are few enough of them that people would be able to guess.

So I think there's a lot of...

competitive challenges, and IP-sensitivity challenges, and just PR challenges to overcome here with some of the more penetrating internal information. But I think it's important enough to the public interest that we should try and find a way to navigate that.

Yeah. So it's not unusual for government agencies to be able to basically demand commercially sensitive information from companies for regulatory or governance purposes. I actually worked at one when I was in the Australian government: I was at the Productivity Commission, which had

And what kinds of things would you ask them?

Yeah.

I think that could be a solution, but I'm a little skeptical.

So I think that releasing this information publicly is probably a lot better than releasing it just to a government body, basically because, you know, we're building the plane of AI

safety research as we're flying it. And it's not like there's a box-checking exercise that any kind of government agency, which is often understaffed, especially with technical staff, could do. It's more that we want this information out there in the open, and then we want people to do some involved analyses of it. And our

sense of what information we even want is probably going to be shifting over time.

And it'll probably go better if there's a robust kind of external scientific conversation about what indicators we want to see, what they would mean, and when we should trigger alarm.

And if that's all being routed through governments, with maybe 10 people, or even 50 people, who have to deal with it, I think it would be very hard for them to

interpret the evidence quickly enough and well enough, and be confident enough to sound the alarm and then have people actually listen to them.

Like, if I imagine sounding the alarm on something like the intelligence explosion, I kind of picture it having to be a society-wide conversation, kind of like sounding the alarm about COVID. Or, something I have in mind is when Joe Biden had that disastrous debate performance, which led to weeks of conversation that ultimately led to him being removed from the ticket.

It would have been very hard, I think, for a small, narrow group of people sort of entrusted with the authority to make the same thing happen.

As well as the opportunity for a bunch of technical experts who may not be paying that much attention now because maybe they...

think this stuff is all science fiction, to jump in at that moment and offer their takes.

I think it would be very powerful if someone like Arvind Narayanan, who's known for being very skeptical of these stories,