LessWrong (Curated & Popular)
"Deep learning as program synthesis" by Zach Furman

A natural starting point is to ask what individual neurons are doing. Suppose we take a neuron somewhere in the network. We can find images that make it activate strongly by either searching through a dataset or optimizing an input to maximize activation. If we collect images that strongly activate a given neuron, do they have anything in common?
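The second approach, optimizing an input to maximize a neuron's activation, can be sketched with gradient ascent on the image pixels. Everything here is a hypothetical stand-in (a tiny untrained network, an arbitrary channel index), not the actual models discussed; the point is only the mechanics.

```python
# Minimal sketch of activation maximization: gradient ascent on an input
# image to increase one channel's mean activation. The network, layer, and
# channel index are hypothetical placeholders, not from the original post.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in network: in practice this would be a trained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)
model.eval()

channel = 5  # which feature map in the last layer to maximize

# Start from random noise; record the initial activation for comparison.
img0 = torch.randn(1, 3, 32, 32)
init = model(img0)[0, channel].mean().item()

img = img0.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    act = model(img)[0, channel].mean()  # mean activation of the chosen channel
    (-act).backward()                    # ascend by minimizing the negative
    opt.step()

final = model(img)[0, channel].mean().item()
print(f"mean activation: {init:.3f} -> {final:.3f}")
```

The dataset-search variant replaces the optimization loop with a pass over real images, keeping the ones with the highest `act`.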


In early layers, they do, and the patterns we find are simple. Neurons in the first few layers respond to edges at particular orientations, small patches of texture, transitions between colors. Different neurons respond to different orientations or textures, but many are selective for something visually recognizable.


In later layers, the patterns we find become more complex. Neurons respond to curves, corners, or repeating patterns. Deeper still, neurons respond to things like eyes, wheels, or windows: object parts rather than geometric primitives.


This already suggests a hierarchy: simple features early, complex features later. But the more striking finding is about how the complex features are built.


Olah et al. do not just visualize what neurons respond to. They trace the connections between layers, examining the weights that connect one layer's neurons to the next and identifying which earlier features contribute to which later ones. What they find is that later features are composed from earlier ones in interpretable ways.


There is, for instance, a neuron in Inception V1 that we identify as responding to dog heads. If we trace its inputs by looking at which neurons from the previous layer connect to it with strong weights, we find it receives input from neurons that detect eyes, snout, fur, and tongue. The dog head detector is built from the outputs of simpler detectors.
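The tracing step described above can be sketched mechanically: for a chosen output channel, read the layer's weight tensor and rank the previous layer's channels by how strongly they connect to it. The layer and channel index below are hypothetical stand-ins, not Inception V1 itself.

```python
# Minimal sketch of weight tracing: rank a convolutional layer's input
# channels by the magnitude of their weights into one output channel.
# The layer and "dog head" channel index are hypothetical placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # previous layer -> this layer
target_channel = 7  # the hypothetical "dog head" unit in this layer

# Conv weight shape: (out_channels, in_channels, kH, kW).
# Summing absolute weights over the kernel gives a per-input-channel
# connection strength for the target output channel.
w = layer.weight.detach()[target_channel]  # shape (16, 3, 3)
strength = w.abs().sum(dim=(1, 2))         # shape (16,)

# The strongest inputs play the role of the earlier detectors
# ("eye", "snout", ...) feeding the later feature.
top = torch.topk(strength, k=4).indices.tolist()
print("strongest contributing input channels:", top)
```

In a real trained network, one would then visualize each of these top input channels to check what they detect; here the weights are random, so the ranking is meaningless beyond illustrating the mechanics.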