Rob Wiblin

Speaker
2780 total appearances

Podcast Appearances

80,000 Hours Podcast
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

...willing to be avant-garde are going to be more intellectually avant-garde, tolerant of quite a lot of philosophical reasoning and speculation. In a sense, I think this might be what a healthy EA community is: an engine that incubates cause areas at a stage when they're not very respected, they're extremely speculative, and the methodology isn't firm yet.

You kind of just have to be extremely altruistic and extremely willing to do unconventional things.

And then matures those cause areas to the point where they can stand on their own, while also being a thing that many EAs work on.

And I think digital sentience, and maybe the other things on Will and Tom's list, like space governance and thinking about value lock-in, are other candidates for EA to incubate the way it incubated worrying about AI takeover, basically.

I think there are some versions of the value lock-in concern that go through something else kind of overtly scary and bad happening.

Like one person getting all of the power, and that's how that person's values get locked in, and that's how we get value lock-in.

But I think there's a whole spectrum of things that are sort of... almost like social media plus plus.

It's sort of like, in this distributed way, this technology has made us meaner to each other and worse at thinking, and has allowed individuals to live in information bubbles of their own creation.

You can imagine AIs getting way better at creating a curated information bubble for each individual person that allows them to continue believing whatever it is they started believing, with superintelligent help preventing them from changing their mind.

And this might be something you think of as an important social problem for the long-run future, even if it doesn't happen via one person getting all the power.

Power is still relatively distributed, but large fractions of society are impervious to changing their mind.

Yeah, absolutely.

And I think even the tamest of EA cause areas, like global health and development, has a huge dose of this.

I think if you look at GiveWell's cost-effectiveness analysis, they have to grapple with how the value of doubling one's income, if you make a very low amount of money, compares to a certain risk of death, or to a certain painful disease you could have.

And they have to try to get their answers based on surveys and weird studies people have done.

It's not very rigorous in the end.

And they have to form their judgments and spell out their judgments.