Nicholas Andresen

LessWrong (Curated & Popular)
"How AI Is Learning to Think in Secret" by Nicholas Andresen

Dancers don't rehearse choreography by thinking, "next I will contract the left quadriceps while rotating the hip flexor 17 degrees."

They just dance.

What's happening is something more like high-dimensional pattern recognition.

Intuitions that fire before language catches up, a felt sense that doesn't decompose into words.

Language often comes after the real work, if it comes at all.

Something similar happens inside models.

As a model computes its next token, information passes through the network as vectors with thousands of dimensions, encoding uncertainty, alternatives, and the half-formed feeling that something might be wrong.

But chain of thought forces the model to squeeze all of that into text, one token after another, an information compression ratio of over 1,000 to 1.

Much of the information is lost.
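The rough arithmetic behind that ratio can be sketched in a few lines. The specific numbers below (hidden-state width, float precision, vocabulary size) are illustrative assumptions, not measurements of any particular model:

```python
import math

# Illustrative assumptions -- not measured values from any specific model.
hidden_dim = 4096       # assumed width of the model's hidden state
bits_per_dim = 16       # assumed 16-bit floats per dimension
vocab_size = 100_000    # assumed vocabulary size

# A full hidden state carries roughly this many bits...
bits_in_hidden_state = hidden_dim * bits_per_dim      # 65,536 bits

# ...while one emitted token can carry at most log2(vocab) bits.
bits_per_token = math.log2(vocab_size)                # ~16.6 bits

ratio = bits_in_hidden_state / bits_per_token
print(f"compression: about {ratio:,.0f} : 1")
```

Under these assumptions the ratio comes out well above 1,000:1, which is consistent with the "over 1,000 to 1" figure: each token in a chain of thought is a drastic summary of the state that produced it.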

So researchers asked, what if we skip chain of thought?

What if models could think in their native format, vectors morphing into vectors, activations flowing through layers, all in some high-dimensional space that was never meant for human consumption and only emit English when they're done?

Projects like Coconut and Huginn are exploring exactly this.
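The core idea can be caricatured as a loop: instead of collapsing the hidden state into a sampled token after every step, the hidden vector is fed straight back in as the next input, and text is produced only once at the end. Everything in this toy (the tiny dimension, the random linear "layer", the decode rule) is an assumption for illustration, not the actual Coconut or Huginn architecture:

```python
import math
import random

random.seed(0)
DIM = 8  # toy hidden-state width; real models use thousands of dimensions

# Stand-in for a transformer layer: a fixed random linear map plus tanh.
W = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def forward(h):
    """One toy forward pass: hidden vector in, hidden vector out."""
    return [math.tanh(sum(W[i][j] * h[j] for j in range(DIM)))
            for i in range(DIM)]

def decode(h):
    """Collapse the latent state to text only at the very end (toy rule)."""
    return "yes" if sum(h) > 0 else "no"

h = [1.0] * DIM            # initial state derived from the prompt (toy)
for _ in range(4):         # latent "thoughts": vectors morphing into vectors
    h = forward(h)         # no token is ever sampled between these steps
answer = decode(h)         # English appears only after the latent loop ends
print(answer)
```

The design point the sketch illustrates: between steps, nothing is ever squeezed through the narrow channel of a token, so no information is lost to that bottleneck; the cost is that there is nothing human-readable to inspect along the way.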

Researchers call it "neuralese."

Neuralese isn't like thinkish.

Thinkish is compressed and strange, but it's still language that you could, in principle, parse with enough effort.

Neuralese isn't a language at all.

The reasoning happens in continuous internal latent states, in thinking that is never words.

You can't read neuralese.

You can only watch what goes in and what comes out.

The good news.