Jesse Zhang
And so let's flag that.
Let's draft a suggestion for what could go better here based on how the human agents are handling or based on the other procedures that we have.
And here's a suggestion for how you should adjust the agent.
That allows the agent to improve automatically over time.
And that's really critical.
So when you think about moats in the agentic world, a lot of it is around if you've been working with a client for a year, has your agent just continuously gotten better by learning from the data?
And that's a different concept than just training on the data.
But has it continuously gotten better to the point where it's just very difficult for another agent to come in and perform at the same level?
Oh, interesting.
I mean, one framework is similar to the one we talked about before: two ends of a spectrum.
And I would say most leaders we talk to are focused on the more bottom-up end of the spectrum, which is, where are the areas where we just shouldn't have humans doing the work because it's so mundane and repeatable?
And there's tons of cost efficiencies there.
So I would say that's typically where folks are focused.
When we talk to leaders, there are a couple of observations. One, pretty much all AI initiatives are very top-down at this point because it is such a board-level mandate.
So the C-suite is very, very invested in like, okay, where do we deploy AI?
It almost means that if you want to get something going at a larger organization, you have to have buy-in from the top, because it's going to get up there anyway.
And they have to make the decision at the end of the day.
So that's one.
Two, the way they think about the use cases, to your point, is back to ROI.
It's like, where can we either save a lot of money or make a lot of money?