That brings us to the practical reality section of our deep dive.
And looking at the sources, the short answer seems to be not really.
It's basically a leaky bucket.
It's a constant cat and mouse game.
But there's also a real fragility to the filters themselves.
I loved the specific detail in the notes about Perplexity trying to de-censor DeepSeek.
It honestly sounded like a technical comedy of errors.
Quantization, which is basically compressing the model to make it smaller and faster so normal people can run it.
I want to make sure you listen and get this.
So just making the file smaller somehow brought the CCP censorship back.
How does that even work structurally?
It really shows how unstable these alignment layers really are.
You can strip the rules out, but they might just reappear due to what's essentially a rounding error.
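To make that concrete, here is a minimal sketch of the effect being described, assuming simple symmetric int8 post-training quantization (the names and numbers are illustrative, not taken from Perplexity's or DeepSeek's actual pipeline). The idea: a de-censoring fine-tune only nudges weights by tiny amounts, and quantization rounding introduces errors of a comparable size, so compressing the model can partially undo the edit.

```python
import numpy as np

# Toy illustration (hypothetical values, not real model weights).
rng = np.random.default_rng(0)
original = rng.normal(0, 0.02, size=8).astype(np.float32)              # base weights
finetuned = original + rng.normal(0, 1e-4, size=8).astype(np.float32)  # tiny fine-tune edit

# Symmetric int8 quantization: map the float range onto [-127, 127].
scale = np.abs(finetuned).max() / 127.0
q = np.clip(np.round(finetuned / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

# The per-weight rounding error is up to scale/2, which can be on the
# same order as the fine-tune delta, washing out part of the edit.
print("fine-tune delta  :", np.abs(finetuned - original).mean())
print("quantization err :", np.abs(dequantized - finetuned).mean())
```

In this sketch nothing "remembers" the original behavior; the fine-tune's small weight changes are simply drowned in rounding noise, which is one plausible mechanism for a compressed model drifting back toward the base model's behavior.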
And it's causing real collateral damage along the way.
They provide data tools for academics and researchers.