Rob Wiblin
Yeah, I think that's right.
I'm very glad that I took the sabbatical.
I'm also glad that I didn't leave.
I think a salient alternative for me at the time that I decided to take four months off was to just leave and figure out what I wanted to do next.
And I think it was good, both for my impact and for my personal growth and satisfaction, that I came back. I helped Emily, and now I'm doing a proper job search, which, at the time I left for my sabbatical, wasn't the plan: that period was more about healing and reflecting than searching for a role in any focused way.
Yeah, I mean, I think it kind of depends on the show's goals.
My take is that it's correct and good that you don't need to buy into the whole EA package, with all of its baggage, to worry about misaligned AI taking over the world and do technical AI safety research to prevent that, to worry about AI-driven misuse and do research and policy work to prevent that, or to just generally worry about AI disruption and think about it.
So I think there should be, and there is, a healthy, thriving "AI is going to be a big deal" ecosystem that does not take EA as a premise.
But at the same time, I think EA thinking and EA values
probably do still have a lot to add in the age of AI disruption.
I think it's going to be EAs, for the most part, who are thinking seriously about whether AIs themselves are moral patients and whether they should have protections and rights and how to navigate that thoughtfully against trade-offs with safety and other goals.
It's going to be EAs, by and large, who still take most seriously the possibility that AI disruption could be so extreme that we end up locked into a certain set of societal values, that we gain the technological ability to shape the future for millions or billions of years, and who are thinking about how that should go.
There are a lot of degrees of extremity to the AI worldview. Even if you accept that AI is going to disrupt everything in the next 10 or 20 years, the people thinking hardest about the most intense disruptions are going to be disproportionately EAs, because EA thinking challenges you to attempt that kind of very far-seeing, rigorous speculation. Even though there are a lot of challenges with that, and it's very hard to know the future, I think EAs are the ones who try hardest to peek ahead anyway.