Navrina Singh
Podcast Appearances
We will flag for your governance teams that, unfortunately, you've exceeded your toxicity threshold.
That means you're not going to meet your reputational requirements that are important for this particular chatbot.
Exactly.
So, Grant, we not only provide early warning systems; once the system has been deployed in production, we are actively monitoring it.
We will connect with your monitoring systems.
And as we are actively monitoring it, let's say there is some sort of data drift and, because of that, your toxicity now went way above the threshold. We will immediately flag it and recommend shutting down that system, because now you are not in alignment with one of your values.
So for us, it is not just at the design or development, but it is also once you've deployed that system, how do you risk manage it?
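The monitoring loop described above can be sketched as follows. This is a minimal illustration, not Credo AI's actual implementation; the threshold value, function names, and the rolling-average check are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical policy value; a real threshold would come from the
# organization's own governance requirements.
TOXICITY_THRESHOLD = 0.2

@dataclass
class MonitorResult:
    avg_toxicity: float
    flagged: bool
    recommendation: str

def check_deployment(toxicity_scores: list[float]) -> MonitorResult:
    """Evaluate recent toxicity scores from a deployed chatbot.

    If the average exceeds the threshold (e.g. after data drift),
    flag it for the governance team and recommend shutdown.
    """
    avg = sum(toxicity_scores) / len(toxicity_scores)
    if avg > TOXICITY_THRESHOLD:
        return MonitorResult(
            avg, True, "recommend shutdown: toxicity above threshold"
        )
    return MonitorResult(avg, False, "within policy")

# Scores that have drifted upward trigger the flag.
result = check_deployment([0.05, 0.31, 0.40])
```

In practice a system like this would subscribe to the organization's existing monitoring pipeline rather than receive scores directly, as described in the conversation.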
And the last thing I want to leave you with is, to Corey's point:
Now, in this case, the reputational risk for this company is massive.
Right.
But if it was, let's say, a simple marketing chatbot that you're using only for internal use, it's not high risk.
In that case, we can actually tone down the dials on governance so that it doesn't require human oversight and is automatically governed.
It basically checks certain errors.
It doesn't flag as many risks.
It is much more automated.
Then, for a high-risk application that you deeply care about, either there is human-over-the-loop oversight or there is AI oversight to make sure that, especially in the case of agents, high-risk applications are managed appropriately.
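The idea of "toning down the dials" by risk tier could be sketched as a simple configuration mapping. This is a hypothetical illustration of the concept, not a real product API; the tier names and settings are assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. an internal-only marketing chatbot
    HIGH = "high"  # e.g. a customer-facing, reputation-critical chatbot

def governance_dials(tier: RiskTier) -> dict:
    """Return governance settings tuned to the application's risk tier.

    Low-risk apps run with automated checks and fewer flags; high-risk
    apps add human- or AI-over-the-loop oversight.
    """
    if tier is RiskTier.LOW:
        return {
            "human_oversight": False,  # automatically governed
            "automated_checks": True,
            "flag_sensitivity": "low",
        }
    return {
        "human_oversight": True,  # human- or AI-over-the-loop
        "automated_checks": True,
        "flag_sensitivity": "high",
    }
```

The design point is that governance is not one-size-fits-all: the same platform applies lighter or heavier controls depending on how much is at stake.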
And then, Corey, we work a lot across different sectors, from health care and pharma to government to HR.
And what you can imagine is in all these organizations, you have.