Only 6% think it would be a good idea and 75% think it's a bad idea.
I'm curious to see how this one changes over time too.
AI existential risk also doesn't seem to have become politically polarized yet.
Two-thirds of Americans don't associate AI x-risk with any particular political party, and the remaining third is split exactly in half on whether preventing AI x-risk feels like a Democratic or Republican issue.
If we plot this data, we obtain this unusual shape that science has yet to find a name for.
There's an image here.
As for specific risks from AI, Americans are most worried about misinformation and deepfakes (70%), followed by fraud and cybercrime (66%) and privacy and surveillance (59%).
Surprisingly, people are roughly as worried about losing control of AI (57%) as about job loss and lower wages (56%).
I would have thought that job loss would feel very near at hand, whereas loss of control would be a very weird abstract idea to people.
There's a huge drop-off from there to the next biggest worries: military use (37%), mental health (35%), environmental impact (38%), bias (36%), and inequality (36%).
My guess is this is because misinformation and deepfakes feel very visceral.
Fake news is a widespread idea, and you don't have to be an AI connoisseur to notice that large sections of the internet are now filled with AI-generated slop.
There's an image here.
My favorite response was a shockingly accurate description of how hopeless it would be to fight back against superhuman AI.
"How do you hide from a robot that's more intelligent than humans and can see through walls, etc.? You can't hide."
Me too, buddy.
Me too.