I think overemphasizing next token prediction is a confusion of levels.
On the levels where AI is a next token predictor, you are also a next token predictor (technically, a next sense datum predictor).
On the levels where you're not a next token predictor, AI isn't one either.
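To pin down what "next token prediction" means at the mechanical level, here is a toy sketch, my own illustration rather than anything from the post: a bigram count model that maps the current token to a probability distribution over the next one. Real LLMs do the same job with a transformer instead of a count table, but the input-output contract is the same.

```python
# Toy illustration of "next token prediction": given the current token,
# produce a probability distribution over the next one. The corpus and
# all names here are made up for this sketch.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(token):
    """Return P(next token | current token) as a dict."""
    following = counts[token]
    total = sum(following.values())
    return {t: c / total for t, c in following.items()}

print(next_token_distribution("the"))  # e.g. {'cat': 0.67, 'mat': 0.33}
```

The argument above is that describing a system this way fixes a level of description; it doesn't settle what is happening at the levels above or below it.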
Putting all the levels in graphic form, this is a chart that shows two sequences of steps, one for a human and one for an LLM, with the comparable levels next to each other.
So we have evolution, sex and reproduction, for the human, and for the LLM it's incentives, AI company profit motive.
For the human we have predictive coding, next sense datum prediction.
The LLM has training, next token prediction.
Then for the human, next we have, I don't know, I just thought about it really hard.
And the LLM has question mark, maybe nothing?
Then the human has, for example, high-D toroidal attractor manifolds.
And the LLM has, for example, rotation of 6D helical manifolds.
And then for the human, finally, we have neurons and neurotransmitters.
And for the LLM we have chips and electricity.
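For reference, the chart the speaker is reading can be written out as a simple paired structure. The pairings are the post's own; the tuple representation below is just my rendering of them.

```python
# The chart's rows as (human level, LLM level) pairs, top to bottom.
levels = [
    ("evolution: sex and reproduction",
     "incentives: AI company profit motive"),
    ("predictive coding: next sense datum prediction",
     "training: next token prediction"),
    ("'I just thought about it really hard'",
     "??? (maybe nothing)"),
    ("e.g. high-D toroidal attractor manifolds",
     "e.g. rotation of 6D helical manifolds"),
    ("neurons and neurotransmitters",
     "chips and electricity"),
]

for human, llm in levels:
    print(f"{human:48} | {llm}")
```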
2.
The human brain was designed by a series of nested optimization loops.
The outermost loop is evolution, which optimized the human genome for being good at survival, sex, reproduction, and child-rearing.
But evolution can't encode everything important in the genome.
It obviously can't include individual and cultural features like the vocabulary of your native language, or your particular mother's face.
But even a lot of things that could be there in theory, like how to walk, or which animals are most nutritious, are missing.
The genome is too small for storing them to be worth it.
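Since the post leans on this nested-loop picture, here is a minimal sketch of the structure under toy assumptions of my own: an outer evolutionary loop picks an innate starting value (the "genome"), and an inner lifetime-learning loop adapts from that starting point. Nothing here is the brain's actual algorithm; it only shows the two-loop shape, and it illustrates the point above, that the outer loop doesn't need to encode the final answer because the inner loop can learn it.

```python
# Toy sketch of nested optimization loops: evolution (outer) over a
# learner (inner). TARGET stands in for "what the environment rewards".
import random

TARGET = 3.0

def lifetime_learning(genome, steps=20, lr=0.1):
    """Inner loop: start from the innate value, learn from experience."""
    x = genome
    for _ in range(steps):
        x -= lr * 2 * (x - TARGET)  # gradient step on (x - TARGET)**2
    return x

def fitness(genome):
    """Evolution only sees how well the *learned* adult performs."""
    learned = lifetime_learning(genome)
    return -(learned - TARGET) ** 2

# Outer loop: mutate the genome, keep the fitter variant.
genome = random.uniform(-10, 10)
for generation in range(100):
    mutant = genome + random.gauss(0, 0.5)
    if fitness(mutant) > fitness(genome):
        genome = mutant

print(f"evolved innate value: {genome:.2f}, "
      f"after learning: {lifetime_learning(genome):.2f}")
```

Note that almost any starting genome ends up near the target after the inner loop runs, which is the sense in which evolution can leave things like walking out of the genome and let lifetime learning fill them in.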