Stephen McAleese
๐ค SpeakerAppearances Over Time
Podcast Appearances
2. Singularitarians: Singularitarians, or AI optimists, believe that high levels of general intelligence are extremely impactful and potentially dangerous and that ASI is likely to be created in the near future.
But they believe the AI alignment problem is sufficiently easy that we don't need to worry about misaligned ASI.
Instead, they expect ASI to transform the world in a mostly desirable way, creating a utopian world of material abundance.
3. AI doomers: AI doomers believe that general intelligence is extremely powerful, that ASI is likely to be created in the future, that AI alignment is very difficult to solve, and that the default outcome is a misaligned ASI being created that causes human extinction.
4. AI successionists: Finally, AI successionists believe that the AI alignment problem is irrelevant.
If a misaligned ASI is created and causes human extinction, it doesn't matter because the ASI would be a successor species with its own values, just as humans are a successor species to chimpanzees.
They believe that increasing intelligence is the universe's natural development path that should be allowed to continue even if it results in human extinction.
I created a flowchart to illustrate how different beliefs about the future of AI lead to different camps, each with a distinct worldview. [Flowchart image]
Given the impact of humans on the world and rapid AI progress, I don't find the arguments of AI skeptics compelling and I believe the most knowledgeable thinkers and sophisticated critics are generally not in this camp.
The AI successionist camp complicates things because they say that human extinction is not equivalent to an undesirable future where all value is destroyed.
It's an interesting perspective but I won't be covering it in this review because it seems like a niche view, it's only briefly covered by the book, and discussing it involves difficult philosophical problems like whether AI could be conscious.
This review focuses on the third core claim above: the belief that the AI alignment problem is very difficult to solve.
I'm focusing on this claim because I think the other three are fairly obvious or are generally accepted by people who have seriously thought about this topic: AI is likely to be an extremely impactful technology in the future, ASI is likely to be created in the near future, and human extinction is undesirable.
I'm also focusing on this claim because it seems to be the one most contested by sophisticated critics, and because many of the book's recommendations, such as pausing ASI development, are conditional on it being true.