Deep Learning as Program Synthesis by Zach Furman.
Published on January 20, 2026.
Epistemic status.
This post is a synthesis of ideas that are, in my experience, widespread among researchers at frontier labs and in mechanistic interpretability, but rarely written down comprehensively in one place; different communities tend to know different pieces of evidence.
The core hypothesis, that deep learning is performing something like tractable program synthesis, is not original to me; even in my own thinking the ideas are roughly three years old, and I suspect the hypothesis has been arrived at independently many times.
See the appendix on related work.
This is also far from finished research, more a snapshot of a hypothesis that seems increasingly hard to avoid, and a case for why formalization is worth pursuing.
I discuss the key barriers and how tools like singular learning theory might address them towards the end of the post.
Thanks to Dan Murfet, Jesse Hoogland, Max Hennig, and Rumi Salazar for feedback on this post.
Sam Altman: "Why does unsupervised learning work?"
Dan Selsam: "Compression. So, the ideal intelligence is called Solomonoff induction."
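For readers unfamiliar with the reference: Solomonoff induction predicts by weighting every program consistent with the data observed so far, with shorter programs weighted exponentially more heavily. In standard notation (mine, not the post's), the Solomonoff prior over strings is
\[
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},
\]
where \(U\) is a universal Turing machine, \(p\) ranges over programs whose output begins with the string \(x\), and \(|p|\) is the length of \(p\) in bits. Prediction by this prior is the "ideal intelligence" the quote alludes to, and it is uncomputable; the question is what a tractable approximation looks like.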
The central hypothesis of this post is that deep learning succeeds because it's performing a tractable form of program synthesis, searching for simple, compositional algorithms that explain the data.
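To make the shape of the hypothesis concrete, here is a deliberately toy sketch, entirely my own illustration rather than anything from the post: program synthesis as enumerating programs from shortest to longest and returning the first one that explains the data, i.e. search under a simplicity prior. The claim is that deep learning does something functionally like this, tractably, not that it literally enumerates programs.

```python
# Toy illustration (hypothetical, not from the post): program synthesis as a
# search for the shortest program consistent with the data, in the spirit of
# Solomonoff induction / minimum description length.
from itertools import product

# A tiny program space: compositions of primitive functions applied to x.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: 2 * x,
    "sq":  lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names to x, left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_len=4):
    """Return the shortest program (fewest primitives) matching all examples."""
    for length in range(1, max_len + 1):      # shortest first = simplicity prior
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Find a program explaining (1, 4), (2, 9), (3, 16): "increment, then square".
print(synthesize([(1, 4), (2, 9), (3, 16)]))  # ('inc', 'sq')
```

Brute-force enumeration like this scales exponentially in program length; the interesting question, taken up later in the post, is how gradient descent on an overparameterized network could find simple, compositional solutions without paying that exponential cost.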