Steven Byrnes
For example...
I'm not commenting on whether it's possible to modify LLM post-training into a real continual learning algorithm, although I happen to believe that it isn't possible.
I'm not commenting on how an inability to do real, continual learning cashes out in terms of real-world competencies.
For example, can an AI that lacks real continual learning nevertheless take jobs?
Can it kill billions of people?
Can it install itself as an eternal global dictator?
Etc.
I happen to think that these are tricky questions without obvious answers.
I'm not commenting on whether we should think of actual frontier LLMs, not just pre-trained base models, as predominantly powered by imitation learning, despite their RL post-training, although I happen to believe that we probably should, more or less, for now.
This article was narrated by TYPE III AUDIO for LessWrong.
It was published on March 16, 2026.
The original text contained three footnotes which were omitted from the narration.
Why We Should Expect Ruthless Sociopath ASI by Steven Byrnes.
Published on February 18, 2026.
The conversation begins.
Fictional Optimist:
So you expect future artificial superintelligence, ASI, by default, that is in the absence of yet-to-be-invented techniques, to be a ruthless sociopath, happy to lie, cheat, and steal, whenever doing so is selfishly beneficial, and with callous indifference to whether anyone, including its own programmers and users, lives or dies.
Me: Yup.