Navrina Singh
And how do you manage that reputational risk?
And you want to make sure that this customer service chatbot is not only accurate, but also not toxic.
That it is fair.
That it is not recommending competitor products, which has actually happened with some of our customers.
Yeah.
How do you make sure that this application, which is sort of a representation of the brand interacting with your consumers, is doing the things it is meant to do?
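To make that concrete, here is a minimal Python sketch of the kinds of output checks just described: screening a chatbot response for competitor mentions and toxicity before it reaches the customer. The helper names, the COMPETITORS list, and the threshold are illustrative assumptions, not Credo AI's actual API.

```python
from dataclasses import dataclass

# Hypothetical competitor names; a real deployment would maintain its own list.
COMPETITORS = {"acme corp", "globex"}

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def check_competitor_mentions(response: str) -> CheckResult:
    """Flag responses that steer the customer toward a competitor."""
    hits = [c for c in COMPETITORS if c in response.lower()]
    return CheckResult("competitor_mentions", not hits,
                       f"mentioned {hits}" if hits else "none mentioned")

def check_toxicity(toxicity_score: float, threshold: float = 0.2) -> CheckResult:
    """The score would come from a toxicity classifier; the threshold is illustrative."""
    return CheckResult("toxicity", toxicity_score < threshold,
                       f"score={toxicity_score:.2f} vs threshold {threshold}")

# Vet a single chatbot response before it reaches the customer.
response = "You might prefer Globex for that feature."
for result in (check_competitor_mentions(response), check_toxicity(0.05)):
    print(result)
```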
So this is where Credo AI software comes in.
We do two levels of governance.
The first level of governance is at the step where you're making a determination whether to use a third-party proprietary model or an open-source model to build this chatbot.
So at that level one, what Credo AI does is provide you with model trust scores, as I was mentioning, where we do an evaluation.
Is the o4 model better than, let's say, the Claude 4 Opus model?
Or should you maybe be using a Llama 3.1 model for building this application?
So very quickly, this business can make a determination: if they are planning to use a third-party large language model for this application, how do they make that decision?
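As a rough illustration of that level-one decision, the sketch below aggregates per-model evaluation metrics into a single trust score and ranks the candidates. The metric names, the numbers, and the weighting scheme are all made-up assumptions for illustration, not Credo AI's actual scoring methodology.

```python
candidates = {
    # Metric values would come from running each model through the same
    # evaluation suite; these numbers are purely illustrative.
    "o4":            {"accuracy": 0.91, "safety": 0.88, "fairness": 0.86},
    "claude-4-opus": {"accuracy": 0.93, "safety": 0.92, "fairness": 0.89},
    "llama-3.1":     {"accuracy": 0.87, "safety": 0.84, "fairness": 0.85},
}

# Weights reflect what this use case cares about (reputational risk puts
# extra weight on safety); chosen arbitrarily for illustration.
weights = {"accuracy": 0.4, "safety": 0.4, "fairness": 0.2}

def trust_score(metrics: dict[str, float]) -> float:
    return sum(weights[m] * v for m, v in metrics.items())

ranked = sorted(candidates.items(), key=lambda kv: trust_score(kv[1]), reverse=True)
for model, metrics in ranked:
    print(f"{model}: {trust_score(metrics):.3f}")
```

In practice, each candidate would be run through the same evaluation suite so the scores are directly comparable, with the weights tuned to the use case.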
Now, once they've made that decision to use the model, it's all about the use case.
As I mentioned, Credo AI governs within the context of use case.
So in this case, you build your application, which is a customer service chatbot.
Credo AI's intelligence layer will quickly tell you: hey,
given this is a customer service chatbot, and given we've applied certain policies because you deeply care about reputational risk, here is a set of risks we have identified that you should be managing.
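A hypothetical sketch of that use-case-scoped step might look like the following: given a use case and the policies applied to it, look up the risks that should be managed. The risk registry entries and policy names here are illustrative assumptions, not Credo AI's actual intelligence layer.

```python
# Maps (use case, policy) pairs to the risks a team should manage.
RISK_REGISTRY = {
    ("customer_service_chatbot", "reputational_risk"): [
        "toxic or offensive responses",
        "unfair treatment across customer groups",
        "recommending competitor products",
        "inaccurate answers about company policy",
    ],
}

def identify_risks(use_case: str, policies: list[str]) -> list[str]:
    """Collect the risks registered for this use case under the given policies."""
    risks: list[str] = []
    for policy in policies:
        risks.extend(RISK_REGISTRY.get((use_case, policy), []))
    return risks

for risk in identify_risks("customer_service_chatbot", ["reputational_risk"]):
    print("- manage:", risk)
```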