
Nicholas Andresen

👤 Speaker
498 total appearances

Podcast Appearances

LessWrong (Curated & Popular)
"How AI Is Learning to Think in Secret" by Nicholas Andresen

Today, position carries that signal instead.

Transmission fidelity is also why gesælig took a thousand years to become silly. Every incremental change had to preserve enough clarity for children to learn from parents, for merchants to trade, for lovers to court.

Chain of thought has no children. No merchants. No lovers.

Remember, chain of thought isn't communication. It's computational scratch paper that happens to be in English because that's what the training data was. Whether a human can follow the reasoning has no effect on whether it produces correct answers. If anything, human readability is a handicap: English is full of redundancy and ambiguity that waste tokens. The evolutionary pressure is toward whatever representation works best for the model, and human-readable English is nowhere near optimal. That's pressure toward drift, and unlike with human languages, there's little pushing back.
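
To make that concrete, here is a minimal sketch in Python (hypothetical names, not code from the essay) of an outcome-based reward: the chain of thought is produced on the way to the answer, but only the final answer is scored, so a legible trace and an opaque one earn the same gradient signal.

```python
# Hypothetical illustration: an outcome-based RL reward for a reasoning model.
# The chain of thought is generated, but the reward never inspects it, so
# nothing in training pushes the scratch-paper tokens toward readability.

def outcome_reward(chain_of_thought: str, final_answer: str, reference: str) -> float:
    _ = chain_of_thought  # unused by design: CoT readability is never scored
    return 1.0 if final_answer.strip() == reference.strip() else 0.0

# A legible trace and an opaque "Thinkish" one earn identical reward:
print(outcome_reward("First add 19 and 23, then check the sum...", "42", "42"))  # 1.0
print(outcome_reward("p19+23>chk42 q.e.", "42", "42"))                           # 1.0
```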

Old English became unrecognizable despite maximum selection pressure for mutual comprehensibility, though it took a thousand years. Remove that selection pressure, and what happens? Gradient descent can be very, very fast.

This is already happening. OpenAI's o3 drifted furthest into Thinkish, its chain of thought nearly unreadable in places, though more recent releases (GPT-5, GPT-5.1, GPT-5.2) have gotten progressively more legible, suggesting deliberate correction. DeepSeek's chains of thought sometimes code-switch into Chinese mid-proof. Anthropic's and Gemini's remain largely plain English.

For now.