

LessWrong (Curated & Popular)
"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov

Leopold Aschenbrenner stated in an interview that, after a major security incident, he wrote a memo arguing that OpenAI's security was egregiously insufficient to prevent theft of key secrets by foreign actors.

He also said that HR warned him his concerns were racist and unconstructive, and he was later fired.

That's the end of the list.

All these things sound extremely dumb, and yet they are, to the best of my knowledge, true.

Eliezer has been pointing at this general cluster of failures for years, though from a different angle.

His "Death with Dignity" post and, of course, "AGI Ruin" paint some parts of a picture in which AGI alignment is going to be addressed in a very undignified manner.

So, the idea is definitely not new, and yet.

Many existing scenarios and case studies assume relatively high competence.

Many existing scenarios are high-quality, interesting, and may actually be more likely and realistic than low-competence scenarios.

In particular, I am talking about famous pieces like "AI 2027", "It Looks Like You're Trying To Take Over The World", "How AI Takeover Might Happen in Two Years", "Scale Was All We Needed, At First", and "How an AI Company CEO Could Quietly Take Over the World".

It's just that we seem to have no low-competence scenarios at all, although they are not negligibly improbable.

The scenarios that start to focus, to some extent, on the low-competence area are "What Failure Looks Like" by Christiano and "What Multipolar Failure Looks Like" by Critch, although even they don't treat it as a big explicit domain.

Across these otherwise very different vibes (hard-takeoff Clippy horror, bureaucratic AI 2027 doom, multipolar economic drift, CEO-as-shogun power capture), the stories repeatedly converge on a small set of motifs.

Stealth through normality; exploitation of real-world bottlenecks by routing around them socially; replication and parallelization as the decisive advantage; bio- or nanotech as a late-game cleanup tool.

They serve a legitimate educational and modeling purpose, and it may indeed be the case that significantly superhuman competence is needed to successfully execute a full takeover against humanity.

But many of them, in my view, look more like they are trying to persuade a reader who is sceptical about AI takeover succeeding if humans act competently, rather than trying to deliver a realistic scenario in which humans are not that smart, because in reality, they are not.

As a result, the implicit adversary in most of these stories has to be very capable because the implicit defender is assumed to be at least somewhat functional.

The scenarios are answering the question "Could a sufficiently intelligent AI beat a reasonably competent civilization?" rather than the question "Could a moderately intelligent AI cause catastrophic harm in a civilization that is demonstrably bad at responding to novel technological threats?"

John Wentworth, in his post "The Case Against AI Control Research", argues that the median doom path goes through slop rather than scheming.