Noah Labhart
And it's not too surprising when you think about the internal folks also being bad actors, right?
Or a lot of the bad actors coming from internal agents there.
It's interesting to think about the, I don't know if I'd call it the attack surface, but the attack surface internally versus externally as well.
I'm curious why traditional security models are insufficient at inference.
This will be interesting, too, because it's a different point of a problem, right?
And clearly you yourself, and probably the industry, have tried these traditional security models at inference, or tried to apply them.
Why are they insufficient?
Yeah, no, that makes total sense.
It's not the before, it's the after.
It's not what you have access to, it's what you can do with it, in some respect.
And I think that's a really great segue into my next question.
And again, from your answers thus far, I could probably elaborate on my own ideas, but what's the impact of these sort of breaches, these inference-time breaches, on AI adoption?
How is this slowing down or otherwise affecting AI adoption?
It's just a roadblock to AI adoption.
So what role does compliance play in shaping inference time guardrails?
Because we talk about the security aspect of it, and not the before but the after of the action, limiting what an agent can do.
But where does compliance come into play?
Yeah, for sure.
Tell me about guardrails then.
So dive into that.