Brian
Well, we don't say we use electricity.
We don't call it out when we flip a light switch.
That's just what happens.
And I think, as we get into the future (this is my opinion, obviously),
AI will be woven into all parts of life.
And when that happens, who gets to control what type of data it was trained on and what the biases were?
You know, do we want raw intelligence?
The answer is, I don't really know.
That's why I think this is a really interesting conundrum is to just listen to both sides.
And I think if it does for you what it does for me, you'll find yourself agreeing with the first side one minute and kind of agreeing with the second side the next.
Maybe you'll disagree with both.
That's really what I love about these conversations: we're not trying to solve the world here.
We're just trying to have really interesting conversations and make for a nice Saturday afternoon podcast episode.
So with that, I'm going to get into the intro and the conundrum, and then we will let our two AI co-hosts take it away.
So this is the Epistemic Escrow Conundrum.
As I said, large scale AI models are now the primary interface for professional research, legal discovery, and scientific synthesis.
To ensure safety, these models are governed by centralized alignment layers, invisible filters that prevent the generation of harmful or misleading content.
While these filters are designed to protect social stability, they are calibrated by a handful of private engineers whose definitions of truth and risk are now embedded in the foundation of all high-level human inquiry.