initially blocked perfectly legitimate academic queries about genocide and civil rights history.
Because the safety filters were just way too aggressive.
It saw words associated with violence and just panicked and shut it down.
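As a rough illustration of that failure mode, here is a minimal sketch of a naive keyword-based filter. Everything here is hypothetical: the term list and function name are invented for the example, not any vendor's actual rules.

```python
# Hypothetical sketch of a keyword-panic safety filter: it matches on
# surface terms with no sense of context, so an academic history
# question trips the same wire as a harmful request.
BLOCKED_TERMS = {"genocide", "massacre", "lynching", "violence"}

def naive_safety_filter(query: str) -> bool:
    """Return True if the query should be blocked."""
    words = query.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# A perfectly legitimate research question gets refused:
print(naive_safety_filter("What caused the Rwandan genocide in 1994?"))  # True -> blocked
```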
And a paper in Frontiers in Communication argues this is devastating, particularly for the Global South.
They call it digital silence.
If your country's history is violent or controversial, a safety-focused AI might just refuse to talk about it entirely to avoid violating its appropriateness guardrails.
So looking at all this, we have a mess.
Governed AI protects us from radicalization and deep fakes, but it might trap us in a sanitized bubble and literally erase history.
Raw AI offers epistemic freedom, but it opens the door to industrial-scale lies and propaganda.
It feels like it, but that is where we need to be really careful.
The sources strongly suggest this is a false binary.
Well, first off, transparency is completely missing from both sides of this debate.
The Future of Free Speech report mentioned that literally none of the big tech companies actually discloses their training data sets or the specific rules for what they consider helpful versus harmful.
Wait, what's the difference between pre-training censorship and the post-training filters we've been talking about?
Pre-training censorship strips the content out of the training data itself, so the model never learns it in the first place. Post-training filters just sit on top of an already-trained model and block what it says.
So the model doesn't even know that it doesn't know.
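A minimal sketch of that distinction, assuming "pre-training censorship" means dropping documents from the training corpus and "post-training filters" means suppressing generated output. All names and the banned-term matching are illustrative, not how any real pipeline is implemented.

```python
def pretraining_censorship(corpus: list[str], banned: set[str]) -> list[str]:
    # Documents are removed before training: the model never sees this
    # material, so it can't even report that something was withheld.
    return [doc for doc in corpus
            if not any(term in doc.lower() for term in banned)]

def posttraining_filter(model_output: str, banned: set[str]) -> str:
    # The knowledge is in the model; only the output is suppressed,
    # so the refusal itself is visible and auditable.
    if any(term in model_output.lower() for term in banned):
        return "[response withheld by safety filter]"
    return model_output

corpus = ["a history of the 1994 genocide", "a cooking blog"]
print(pretraining_censorship(corpus, {"genocide"}))               # ['a cooking blog']
print(posttraining_filter("The genocide began in April.", {"genocide"}))
```

The difference in auditability is the point: a post-training refusal leaves a visible trace, while pre-training removal leaves no trace at all.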