Tobes (author/narrator of the LessWrong linkpost)
After having many other surprising conversations about AI, like the one I had in the Mendips, I have decided to read more about it.
I am listening to an audiobook of Superintelligence by Nick Bostrom.
As I cycle in loops around the park, I listen to Bostrom describe a world in which we have created super-intelligent AI.
He seems to think the risk that this will go wrong is very high.
He explains how scarily counterintuitive the power of an entity vastly more intelligent than a human would be.
He talks about the concept of orthogonality: the idea that there is no intrinsic reason a system's intelligence should be related to its motivation to do things we want, such as not killing us.
He talks about how power-seeking is useful for a very wide range of possible goals.
He also talks through a long list of ways we might try to stop things from going very wrong.
He then spends a lot of time describing why many of these ideas won't work.
I wonder if this is all true.
It sounds like science fiction, so while I notice some vague discomfort with the ideas, I don't feel that concerned.
I am still sweating, and am quite worried about getting sunburnt.
It's a long way off though.
It's still summer 2018 and I am in an Italian restaurant in West London.
I am at an event for people working in policy who want to have more impact.
I am talking to two other attendees about AI.
Bostrom's arguments have now been swimming around my mind for several weeks.
The book's subtitle is "Paths, Dangers, Strategies", and I have increasingly been feeling the weight of the middle one.