But Palmer's approach suggests otherwise.
Experimental history is possible: not in the sense of manipulating the past, but in the sense of systematically exploring its possibility space.
Her simulation is an experiment: controlled conditions, repeated trials, emergent patterns.
It will never achieve the precision of physics, but it is a genuine advance beyond purely descriptive history.
The limitation is obvious.
Palmer can run her simulation perhaps 10 times over the years she teaches the course.
But what if we could run 50 simulations per day, as weather forecasters do?
What if we did that for an entire year?
We'd end up with tens of thousands of simulations and a detailed probabilistic landscape of the political situation of 1492.
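The probabilistic landscape described here is, in essence, a Monte Carlo estimate: run many independent trials and tally how often each outcome occurs. A minimal sketch of the idea, with entirely invented candidate names and weights (these are illustrative placeholders, not historical estimates):

```python
import random
from collections import Counter

def run_trial(rng):
    # Toy stand-in for one simulation run: draw one outcome from a
    # weighted set of candidates. Names and weights are invented for
    # illustration only.
    candidates = ["Borgia", "della Rovere", "Sforza"]
    weights = [0.5, 0.3, 0.2]
    return rng.choices(candidates, weights=weights, k=1)[0]

def probabilistic_landscape(n_trials, seed=0):
    """Estimate outcome frequencies over many repeated trials."""
    rng = random.Random(seed)
    counts = Counter(run_trial(rng) for _ in range(n_trials))
    return {name: counts[name] / n_trials for name in counts}

landscape = probabilistic_landscape(50_000)
```

With tens of thousands of trials, the observed frequencies converge toward the underlying probabilities, which is exactly what ten classroom runs cannot reveal.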
Enter historical LLMs: large language models trained exclusively on texts from specific historical periods.
The idea emerged from a fundamental problem.
Modern LLMs cannot forget.
A generic LLM knows what already happened.
No amount of prompting can remove this hindsight bias, which, by the way, it shares with Palmer's students. A historian studying the Renaissance cannot un-know what came next, and neither can a model trained on Wikipedia.
But what if you could train an LLM only on texts available before a specific date?
Researchers at the University of Zurich recently built RANK4B, a language model trained exclusively on pre-1913 texts.
The model literally doesn't know World War I happened.
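The core preprocessing idea behind such a model is a hard temporal cutoff on the training corpus: a document enters the pretraining set only if it was published before the cutoff year. A minimal sketch of that filter; the record format and field names are assumptions for illustration, not the Zurich team's actual pipeline:

```python
# Hypothetical corpus filter: keep only documents published before a
# cutoff year, so the model never sees text written with hindsight.
CUTOFF_YEAR = 1913

def filter_corpus(records, cutoff=CUTOFF_YEAR):
    """Keep only records whose publication year precedes `cutoff`."""
    return [r for r in records if r["year"] < cutoff]

corpus = [
    {"title": "On the Origin of Species", "year": 1859},
    {"title": "A Treatise of Human Nature", "year": 1739},
    {"title": "The Great Gatsby", "year": 1925},  # excluded: post-cutoff
]
pretraining_set = filter_corpus(corpus)
```

The filter is trivial; the hard part in practice is reliable publication dating and excluding later editions, annotations, and reprints that smuggle in post-cutoff knowledge.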