Doctor Mike
Sensitivity versus specificity.
Yeah, exactly.
You'll understand, yeah.
So is it a worse harm to leave it up or to take it down?
And that trade-off might be dependent on the type of content it is.
If you're building a classifier that looks at nudity and maybe certain types of nudity are illegal, right?
Or in a particular locale, it might be illegal.
For certain ages, it is explicitly illegal, right?
So you want to have certain types of detection that are going to err on the side of absolutely not.
This is going to come down and we're going to take this very, very seriously.
Whereas for certain types of speech or policies that relate to, you know, as you mentioned, ivermectin and things like this, you might want to err on the side of leaving it up.
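The tradeoff described here can be sketched as a per-category decision threshold on a classifier's confidence score. This is a minimal illustration, not anyone's production system; the category names and threshold values are hypothetical.

```python
# Sketch of asymmetric moderation thresholds: the same classifier score
# leads to different decisions depending on which harm is worse.
# Category names and threshold values below are hypothetical.

def moderation_decision(score: float, category: str) -> str:
    """Return 'remove' or 'keep' for a classifier confidence score in [0, 1].

    For categories where leaving content up is the worse harm (e.g.
    illegal imagery), use a low removal threshold: err toward taking
    it down (favor sensitivity/recall).
    For categories where wrongly removing speech is the worse harm
    (e.g. contested medical claims), use a high removal threshold:
    err toward leaving it up (favor specificity/precision).
    """
    thresholds = {
        "illegal_imagery": 0.2,  # err on the side of removal
        "medical_claims": 0.9,   # err on the side of leaving up
    }
    return "remove" if score >= thresholds[category] else "keep"

# The same 0.5 confidence produces opposite outcomes by category:
print(moderation_decision(0.5, "illegal_imagery"))  # remove
print(moderation_decision(0.5, "medical_claims"))   # keep
```

In practice the thresholds would be tuned on labeled data against the cost of each error type, but the structure, one operating point per policy category, is the point being made above.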
So I worked on a paper on that like in July of last year.
There's a really interesting question that goes along with that.
So there's a type of AI called agentic AI, which is an AI that acts as an agent on a human's behalf.
And not all of these, again, are bad.
Some of them are you might want to allow an agent to act as you.
You might want to have your chatbot, for example, which operates as you.
And you're going to disclose, because you're an ethical person, that this is your chatbot, not actually you.
But other people might not want to disclose.