Noah Labhart
Hello, listeners.
Today, we are releasing the final episode in our series entitled "The Gene Simmons of Data Protection: The KISS Method," brought to you by none other than Protegrity.
Protegrity is AI-powered data security for data consumption, offering fine-grained data protection solutions so you can enable your data security, compliance, sharing, and analytics.
In our final, final episode, we are talking with Ave Gatton, Director of Generative AI.
We talk about how AI safety doesn't end with training.
It begins with inference.
We explore the overlooked frontier of AI security, from prompt injection to data leakage and model manipulation.
Ave helps us understand how you can build guardrails that operate in real time and adapt to evolving threats.
Ave, thank you for being on the show today.
Thanks for being on Code Story.
Absolutely.
We've got a jam-packed agenda today on inference-time AI guardrails and safety and all the things.
You're Director of Generative AI at Protegrity, and I know you've got lots of experience and things to speak on.
Before we dive into that, tell me and the audience a little bit about you.
That's a great clarification, because I think most of us on this podcast, myself included, would immediately go to Slack, the communication tool.
I appreciate you clarifying that.
We've done some cool stuff.
You have worked for some interesting folks, and now you're at Protegrity.
I'm excited to dive into our topic today on inference-time AI guardrails, and really, engineering safety beyond training data.