Tobes (author/narrator of the LessWrong linkpost)
That seems possible and scary.
But humans don't even need to lose control for this to go very badly.
Superintelligence is a superpower, and big changes to the global structure of power can be unpredictable and terrifying.
So I don't know what happens next.
It might be the worst wars imaginable or AI-powered global totalitarianism.
But whatever does happen seems like it has a decent chance of killing us or making life for most or all of humanity terrible.
1.
My family shows concern, maybe some confusion, but definitely concern.
It is a relief to express this.
I have always been stoical about my pain and anxieties.
As a child and teenager, I never wanted to bother others with my stuff.
It's nice to be able to express to them that I am scared about something.
Talking about the risk of AI doom feels easier than discussing my career worries.
Hope.
It is June 2023.
In the months prior, the heads of the leading AI labs have been talking to world leaders about the existential risks from AI.
They, along with many prominent AI researchers and tech leaders, have signed a statement saying that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Today I am hosting an event in East London on AI governance.
I have lined up eight speakers from the UK government, think tanks, and academia, and around 70 people have turned up to watch.