Robert M
Industrial accidents can cause serious harm.
But the type of risk differs from classic misalignment scenarios, and our mitigation should adapt accordingly.
End quote.
One is uninteresting in the context of future superintelligences unless you're trying to define them out of existence.
Two is actively contradicted by the evidence in the paper; relies on a definition of incoherence that could easily classify a fully human-dominating superintelligence as more incoherent than humans; and attempts both to extrapolate trend lines from experiments on tiny models to superintelligence, and then to extrapolate from those trend lines to the underlying cognitive properties of those systems.
Three relies on two.
Four is slop.
I think this paper could have honestly reported a result on incoherence increasing with task length.
As it is, I think the paper misreports its own results on incoherence scaling with model size, performs an implicit motte-and-bailey with its definition of incoherence, and tries to use evidence it doesn't have to draw conclusions about the likelihood of future alignment difficulties that would be unjustified even if it had that evidence.
This article was narrated by TYPE III AUDIO for LessWrong.
It was published on February 4, 2026.
The original text contained three footnotes which were omitted from the narration.
Images are included in the podcast episode description.