Rob Wiblin
If you look at public communications from at least OpenAI, Anthropic, and Google DeepMind, all of their stated safety plans share this element: as AIs get better and better, they're going to incorporate the AIs themselves into their safety plans more and more.
Figuring out how to create a setup where we use control techniques and alignment techniques and interpretability to the point where we feel good about relying on the AIs' outputs is a crucial step.
Because it either bottlenecks our progress, because we're checking everything all the time and slowing things down, or it doesn't bottleneck our progress, but we hand the AIs the power to take over.
Thank you so much for having me.
Hopefully.
Thank you.
Yeah, so I think a thing that I've been noticing as the concept of AGI has become more and more mainstream is that it's also become more and more watered down.
So like last year, I was on a panel about the future of AI at Dealbook in New York, and it was me and one or two other folks who kind of think about things from a safety perspective and then a number of venture capitalists and technologists.
And the moderator asked at the very beginning of the panel whether we thought it was more likely than not that by 2030 we would get AGI, defined as AIs that can do everything humans can do.
Like seven or eight hands went up, not including mine, because my timelines are somewhat longer than that.
But then he asked a follow-up question a couple questions later about whether we thought that AI would create more jobs or destroy more jobs over the following 10 years.
So 2030 was five years away, and seven out of the 10 people thought that we would have AGI by then.
But then it turned out that eight out of 10 people, not including me, thought that AI would create more jobs than it destroyed over the next 10 years.
And I was a little confused.
I was like, why is it that you think we will have AI that can do absolutely everything that the best human experts can do in five years, but that it will actually end up creating more jobs than it destroys over the following 10 years?
Like what's happening?
And when I poked some people later in the panel about that seeming tension, I think they really quickly backed off and said, you know, what does AGI really mean?
The moderator had defined it as this very extreme thing, but they were like,