Many days I look out over the rooftops and picture the swarms of kill drones on the horizon.
Maybe that's how it will happen.
That, or a mysterious and terrifying pandemic.
I will see my friends freaking out about it on social media one day, and then, a day later, my partner and I will be coughing up blood.
Or maybe it'll be quicker.
I'll be blinded by the initial flash of the bomb, then a fraction of a second of extreme heat before the end.
The fear isn't sharp, just a dull empty sense that there is no future.
It's March 2023 and I am in my flat, talking to a friend.
We have agreed to meet to spend some time figuring out our thoughts on the actual risk from AI.
We have both been reading a lot about it, but we still feel very confused, so we wanted to be more deliberate.
We spend some time together writing down and discussing our thoughts.
I am still very confused.
I mostly seem to be dancing between two ideas.
On one side there is the idea that the base rate for catastrophic risk is low.
New technology is usually good on balance and no new tech has ever killed humanity before.
Good forecasters should need a lot of evidence to update away from that very low prior probability of doom.
There isn't much hard evidence that AI is actually dangerous, and it seems very possible that we just won't be able to create superintelligence for some reason anyway.
On the other side is the idea that intelligence creation is just categorically different from other technology.
Intelligence is the main tool for gaining power in the world.
This makes the potential impact of AI completely historically unprecedented.