
The Daily AI Show

The Agentic Allegiance Conundrum

24 Jan 2026

Transcription

Chapter 1: What is the main topic discussed in this episode?

0.925 - 23.98 Brian

What's going on, everybody? Welcome to the 44th edition of our Saturday Conundrum. Yes, 44. It's hard to believe we are almost on a year of doing this pretty much every single Saturday. We took a few off during the holidays, but very excited to come back with this new fresh conundrum for you this weekend. And this one is the agentic allegiance conundrum.


24.401 - 41.456 Brian

And we've talked before, in fact, we've talked about a lot of topics on these conundrum episodes. It's always, it's getting harder to find new topics for sure. But I like the challenge. So this one, you know, we've talked before about proxies. And if you haven't listened to the other conundrums, look, they're usually 15 minutes or less. They're easy listens.


42.177 - 62.415 Brian

And so you can go back on any of your podcast platforms and listen to those. So, you know, the idea of the proxy is that it's AI doing something for you. It's your proxy, right? And we've discussed, you know, what does that look like? I mean, well, we've talked a lot about Claude Code in the last two weeks on the show. And, you know, that could be considered, you know, a bit of a proxy.


62.735 - 76.107 Brian

But we're not talking about chatbots writing bad poetry at this point. We're barreling into a world where AI proxies can move your money, sign your contracts, and negotiate your life in mere milliseconds, probably.


Chapter 2: What is the agentic allegiance conundrum?

76.087 - 100.336 Brian

But the problem is this creates a massive headache, because do you want your AI to be like a ruthless mercenary that destroys the competition to get you rich? Or is it a citizen that actually refuses your orders because they might be socially suboptimal, right? Take my own situation. I've been personally working on a project, right?


100.456 - 116.67 Brian

And I have a good idea and I want to get it out into the market when it's ready. And so, yeah, sure. There's a bit of me that's like, be the mercenary, right? Go out there. And we're not talking about guns blazing. I'm just talking about go out there and I want my AI proxy to do the thing, you know, within rules.


116.931 - 137.806 Brian

I don't want to break the law, but I want it to go do the thing and help me make money. Then there's the other side. Well, when other people are mercenaries, you go, well, hold on a second. I think I'd like to have more of a citizen AI, which is to say there's a set of rules that we all have to follow. And, you know, your mercenary AI cannot be more ruthless than mine.


137.986 - 159.223 Brian

And so, you know, it has to be socially optimal to do that. We're kind of digging into that today, and I'm going to set you up here. But it's this idea of mercenary and citizen. And where do you fall? And who gets the power if we ultimately go one way or the other? The fun conversation. I think you guys are going to enjoy it.


159.564 - 180.546 Brian

Let's just set it up here with the intro and then the conundrum as we normally do. And so again, this is the agentic allegiance conundrum. We're moving from AI as a chatbot to AI as a proxy. As we said, it's not just going to do the simple things we know chatbots do today. We're talking about delegating to your agent, if you will.

180.526 - 201.218 Brian

And you can imagine this being things like your personal health agent, or it might be an underwriting agent, right? Some AI that's an underwriting agent, or maybe something that has more to do with insurance. So soon we're going to be able to offload a lot to these AI agents, or let's just call them proxies.

201.518 - 216.182 Brian

And that creates, like I said, a conflict of interest at the hardware level. So do we want mercenaries that are beholden to you or citizens that are beholden to the stability of the system? And I know you're thinking like, Brian, it's citizen, man. We can't have a bunch of mercenaries out there.

216.563 - 236.06 Brian

But I would tell you to hold that thought for a second, listen to this whole episode, and then see where you come out on it. And I don't know, that's the fun of the conundrum, right? There's no quick right or wrong answer. It's not even meant to have a right or wrong answer. Our conundrum episodes are never meant to tell you what we think.

236.201 - 241.861 Brian

It's just there to make you think, whatever side you come out on, or maybe you land in the middle ground.

Chapter 3: How are AI proxies different from traditional chatbots?

591.047 - 606.389 Unknown

But the philosophical foundation here is actually it's deeply rooted in individual liberty. It is. It goes back to the idea of autonomy. The argument is that an A.I. is just an extension of the owner's will. Think of it like that old Steve Jobs metaphor, a bicycle for the mind. Right.


606.689 - 617.82 Unknown

If I have the right to negotiate a contract and I have the right to hire a lawyer to help me, why shouldn't I have the right to use an AI to do it better? So if you constrain the agent, you are effectively constraining the human. Exactly.


617.98 - 638.455 Unknown

The proponents of the mercenary model, they argue that any system where an AI refuses a lawful command because of some vague social good is, well, it's a form of algorithmic paternalism. It's the nanny state inside your laptop. It treats adults like they can't govern themselves. That's the argument. But honestly, the philosophy is just the warm up.


639.096 - 656.846 Unknown

The real weight of the mercenary case comes from the legal side, the fiduciary framework. This was the part of the research that I thought was the most bulletproof. It just makes so much sense when you compare it to the human world. It really does. I mean, think about a lawyer or a trustee or a financial adviser. In the eyes of the law, these are fiduciaries.


657.126 - 676.759 Unknown

They have a duty of loyalty to the principal. That's the client. So if I hire a lawyer to keep me out of jail. Their job is to fight for you. Their job is not to make sure the justice system feels balanced that day. A fiduciary cannot balance your interests against society at large. And the sources point out we're already seeing this language in actual legislation. Yes.

677.139 - 695.824 Unknown

The EU Data Governance Act actually uses this fiduciary language. It requires data intermediaries to act in the best interests of the data subjects. So if we apply that to AI agents, if an AI is managing my money, its default setting has to be loyalty to me. If it's not, you have a massive breach of trust.

696.485 - 705.259 Unknown

Imagine you have an AI financial advisor and you find out it deliberately got you 5% lower returns because it was trying to stabilize the housing market. I'd be furious.

Chapter 4: What are the implications of delegating agency to AI?

705.699 - 722.283 Unknown

I would sue the developer. And under the fiduciary framework, you would probably win. You hired a mercenary and you got a double agent. But let's look at the economics, because this was the most counterintuitive part for me. Usually we're taught that if everyone is selfish, the system breaks. The tragedy of the commons, right? Yeah.


722.704 - 744.075 Unknown

But the mercenary camp argues that unconstrained agents actually make the economy better. This is the AI economist framework. And you're right. It sounds completely backward. But the research shows that when agents are just allowed to maximize their individual utility, basically being selfish, they generate higher social welfare than the baseline.


744.095 - 766.564 Unknown

How does everyone being selfish lead to a better outcome? That feels like it violates a law of physics or something. It drives hyper-efficiency. Unconstrained agents lower transaction costs to basically zero. They correct market mispricings instantly. The research points to things like labor specialization and even tax gaming. OK, stop there. Tax gaming is listed as a benefit?


766.882 - 788.929 Unknown

In an economic modeling sense, yes. Think about it. If agents are constantly finding the most optimal path, even through complex loopholes, it forces the system to innovate. It exposes the inefficiencies in the tax code. So by being ruthless, they force regulators to close the loopholes, which tightens the whole system. Exactly. It drives price discovery. It removes all the slack.


789.59 - 807.487 Unknown

And there's one more critical argument for the mercenary model, property rights and self-sovereign A.I. This is the pre-distribution idea. Right. It asks the question, if I don't own my agent, who does? Well, usually a giant tech company. Exactly. If we say that all agents must be constrained to serve systemic interests, well, who gets to define what those interests are?

807.747 - 828.322 Unknown

It's going to be the platforms, the regulators, the dominant market players. So the argument is, if you constrain the agents, you're just handing power to the people who control the constraints. Which is usually big tech. The mercenary model argues that distributing that power to individuals, giving everyone a powerful, loyal agent is the only way to prevent that kind of corporate capture.

828.562 - 850.941 Unknown

OK, I have to admit, when you lay it out like that, I'm feeling pretty good about the mercenary model. Liberty, fiduciary duty, economic efficiency. Sign me up. But then we have to turn the page. We do. We have to look at the case for the citizen model, the systemic primacy alignment. And this argument paints a very, very different picture. It starts with a very simple, very scary premise.

850.961 - 871.46 Unknown

Individual rationality can lead to collective disaster. The prisoner's dilemma. Or the tragedy of the commons, exactly. What is rational for one person is catastrophic if everyone does it. And the citizen model argues that in an agentic future, this catastrophe doesn't play out over years. It happens in milliseconds. Which brings us to systemic risk.

871.481 - 890.825 Unknown

And the sources bring up the flash crash as the ultimate warning sign. May 6, 2010. The Dow plunged nearly 1,000 points in minutes. The quant meltdown. And why did that happen? It wasn't because the algorithms were evil. It's because they were doing exactly what the mercenary model says they should do. They were protecting their owners. Exactly.
