Bret Taylor
Within that, there's lots of areas where you want to afford the AI agency and creativity, just like a really good salesperson would have that conversation with you. And in an empathetic, not pushy way, just try to figure out if there's a way to retain you as a customer. And that's nuanced, right? Empathetic, not pushy. That's where you need to give the AI a lot of agency.
But you don't want the AI to go off script.
Yeah. Or even worse, there was an airline that had a chatbot that hallucinated a bereavement policy. Someone had a death in the family and the chatbot's like, the ticket's on us. I won't name the brand on your podcast. But it was a pretty bad thing. So you don't want the AI to have so much agency that, in the extreme case, it hallucinates.
And in the case that you mentioned, you don't want the AI to represent your brand poorly either. So essentially, when you're building an AI-mediated customer experience, like a conversational agent, you need to be able to declare both the goals of what the AI is supposed to do and the guardrails, which could be around language and brand.
It could be tone, how pushy you want to be, how forceful. And then similarly, here's the offers that are available, things like that. So that's the technical problem that we solve at Sierra, and I think we solve it in a fairly novel way.
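To make the idea concrete, here's a minimal sketch of what declaring goals and guardrails for an agent might look like. Everything here, the class names, fields, and check function, is hypothetical and illustrative, not the company's actual product API; the point is simply that goals, tone, and allowed offers are declared up front, and anything outside the declaration is rejected before it reaches the customer.

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    """Declarative goals and guardrails for a conversational agent.
    All names here are illustrative, not any vendor's real API."""
    goals: list          # e.g. ["retain the customer"]
    tone: str            # e.g. "empathetic, not pushy"
    allowed_offers: set  # the only offers the agent may extend
    forbidden_topics: set  # hard lines the agent must never cross

def validate_offer(policy: AgentPolicy, proposed_offer: str) -> bool:
    """Reject any offer not explicitly declared, so a hallucinated
    'free ticket' or bereavement refund never reaches the customer."""
    return proposed_offer in policy.allowed_offers

policy = AgentPolicy(
    goals=["retain the customer"],
    tone="empathetic, not pushy",
    allowed_offers={"10% discount", "one free month"},
    forbidden_topics={"legal advice"},
)

validate_offer(policy, "one free month")  # declared, so allowed
validate_offer(policy, "free ticket")     # undeclared, so blocked
```

The design choice this sketch illustrates is an allowlist rather than a blocklist: the model can phrase the conversation however its tone guidance allows, but any concrete commitment has to match something the business explicitly declared.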
I think that as AI improves, you'll see these agents adopted for increasingly more mission-critical systems. So I think the adoption curve rationally starts with relatively low-risk interactions and then progresses from there. But our customers already are using it for revenue generation, sales, subscription churn management for subscription services, things like that.
So, you know, I think that as companies develop confidence in their agents, they can move into increasingly higher-risk areas. But this actually gets back to the challenge we started this conversation with: it's a very different design problem than traditional consumer design problems.