And in this context, as I mentioned, fairness could be a top concern, toxicity could be important, performance of this system is really critical, et cetera.
So we start to lay out all the risks that we find for this particular conversational AI customer service chatbot.
Credo AI right now has the largest repository of AI risks that we not only manage, but actively mitigate.
We have about 600-plus AI risks within our system.
And the way that we bring these risks into the organization is in three categories: well-understood risks like security and fairness; new kinds of risks like prompt injection and adversarial attacks; and emergent threats, which are mostly the risks that we might not yet have mitigations for.
For example, take a random one like sycophancy: what do we do for that, right?
That's an interesting one, yeah.
Yeah, but we do categorize that within Credo AI as well.
So we are tracking these 600-plus risks.
And for this customer service bot that you just created, we start to highlight what risks we see.
We see the performance risk. We see the toxicity risk.
And then we give you a game plan as to how to mitigate those risks.
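To make the idea concrete, here is a minimal sketch of how a risk registry like the one described might be structured: risks tagged by category (understood, new, emergent), matched against a use case, with mitigations attached where they exist. All class and function names here are hypothetical illustrations, not Credo AI's actual API.

```python
# Hypothetical sketch of a categorized AI risk registry. Names are
# illustrative only; this is not Credo AI's actual data model or API.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    UNDERSTOOD = "understood"   # e.g. security, fairness
    NEW = "new"                 # e.g. prompt injection, adversarial attacks
    EMERGENT = "emergent"       # e.g. sycophancy; may lack known mitigations


@dataclass
class Risk:
    name: str
    category: RiskCategory
    applies_to: set[str]                                   # use-case tags this risk matches
    mitigations: list[str] = field(default_factory=list)   # may be empty for emergent risks


REGISTRY = [
    Risk("fairness", RiskCategory.UNDERSTOOD, {"chatbot"},
         ["bias audit", "balanced evaluation data"]),
    Risk("toxicity", RiskCategory.UNDERSTOOD, {"chatbot"},
         ["toxicity classifier", "output filtering"]),
    Risk("prompt_injection", RiskCategory.NEW, {"chatbot"},
         ["input sanitization", "instruction hierarchy"]),
    Risk("sycophancy", RiskCategory.EMERGENT, {"chatbot"}),  # no known mitigation yet
]


def game_plan(use_case: str) -> dict[str, list[str]]:
    """Return the matched risks and their suggested mitigations for a use case."""
    return {r.name: r.mitigations for r in REGISTRY if use_case in r.applies_to}


print(game_plan("chatbot"))
# fairness, toxicity, and prompt_injection come back with mitigations;
# sycophancy comes back with an empty list, reflecting an emergent threat.
```

The point of the sketch is the shape of the output: for a given use case, the governance layer surfaces both the applicable risks and a mitigation plan, and emergent risks are tracked even when the mitigation list is empty.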
Now, one of the things that is really key is that Credo AI is the single system of record for governance that integrates into all your data and AI infrastructure.
However, right now, we are not directly changing your ops layer to enforce policies.
So, as an example, if we believe that this customer service chatbot is really toxic, because it went over the toxicity threshold that you've established, we will flag that threshold violation, but we will not go and change your app ops layer to reform how you're building that system.
Right.
But we will flag it for the data scientists.
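A minimal sketch of that "flag, don't enforce" behavior, assuming a simple numeric toxicity score and threshold: the check produces a flag for the data scientists to review, and never reaches into the deployed system itself. The names are hypothetical, not Credo AI's actual API.

```python
# Hypothetical sketch of threshold-based flagging without enforcement.
# Illustrative names only; not Credo AI's actual API.
from dataclasses import dataclass


@dataclass
class Flag:
    metric: str
    measured: float
    threshold: float
    message: str


def check_toxicity(measured_score: float, threshold: float) -> Flag | None:
    """Return a Flag if the score exceeds the threshold; never touch the ops layer."""
    if measured_score > threshold:
        return Flag(
            metric="toxicity",
            measured=measured_score,
            threshold=threshold,
            message="Toxicity exceeds the established threshold; review the system.",
        )
    return None  # within policy, nothing to report


flag = check_toxicity(measured_score=0.27, threshold=0.10)
if flag is not None:
    # The system of record surfaces the violation; remediation stays with the team.
    print(f"[FLAG] {flag.metric}: {flag.measured:.2f} > {flag.threshold:.2f}")
```

The design choice mirrors what is described above: the governance layer is a system of record, so a violation produces a record and a notification rather than an automatic change to the application.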