Dr. Peter Lebedev
This is such a good question.
Yeah, let me break this down into a couple of things.
The first thing that I want to say is that in 2023, a bunch of people got together and signed the Statement on AI Risk from the Center for AI Safety, which says that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
And the number of signatories...
is long and very, very impressive, including a bunch of Nobel Prize winners who have been working on this since the 1970s.
So the thing that I'm trying to say is that there are a lot of people
who are incredibly smart, who are very, very concerned.
I made a video about this topic, and I interviewed Professor Geoffrey Hinton, who has a Nobel Prize in physics, about this.
And I was like, hey, so how seriously do you take the risk from AI?
And he was like, yeah, very seriously.
It's either climate change or AI, and I think AI is going to win.
And to hear that directly from, you know, again, a Nobel Prize winner who has been working on this since the 70s, that was honestly fairly chilling.
So there are a bunch of main threats.
One of them is bad people using AI to do bad things.
Lucy, I'm going to show you my computer screen so you can verify that this is legit.
But I have a local model running on my laptop.
And I just typed in, how do I hotwire a Honda Civic?
I am trying to commit a crime; like, I want to steal a car.
This is not a joke.