Brian Maucere
Here's what Microsoft did.
They just made their Copilot Researcher, that's a feature of Copilot, use two models in sequence.
OpenAI drafts the research, and then Claude critiques the outputs.
And there's, you know, a model council mode that's also associated with this new multi-model Researcher feature.
And that mode runs side by side and flags where they agree and disagree.
So not only does it pit two models against each other to do the research, but then it has a council that says, okay,
here's a summary of the differences between those two.
I think that's brilliant.
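The pattern described here, one model drafting, a second critiquing, and a third summarizing agreement and disagreement, could be sketched roughly like this. The function names, prompts, and the `echo_model` stub are illustrative assumptions, not Microsoft's actual Copilot Researcher API.

```python
# Hypothetical sketch of a draft -> critique -> council pipeline.
# Each "model" is just a callable taking a prompt string and
# returning a string, so any backend could be plugged in.

def draft(topic, model):
    # First model produces the initial research draft.
    return model(f"Research and draft an answer about: {topic}")

def critique(draft_text, model):
    # Second model reviews the draft and lists disagreements.
    return model(f"Critique this draft and list disagreements:\n{draft_text}")

def council(draft_text, critique_text, model):
    # Third pass flags where the draft and critique agree and disagree.
    return model(
        "Summarize where these two agree and disagree:\n"
        f"DRAFT:\n{draft_text}\n"
        f"CRITIQUE:\n{critique_text}"
    )

def research(topic, drafter, critic, summarizer):
    d = draft(topic, drafter)
    c = critique(d, critic)
    return council(d, c, summarizer)

def echo_model(prompt):
    # Stand-in for a real model call; it just echoes its prompt.
    return "MODEL: " + prompt

report = research("battery chemistries", echo_model, echo_model, echo_model)
```

In practice `drafter`, `critic`, and `summarizer` would be calls to two different providers, which is the point of the pattern: the critic is not the model that wrote the draft.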
This multi-model orchestration is what you're talking about, Brian.
It's what we talked about yesterday.
And I've fallen naturally into that way of using it: instead of a single $200 subscription,
I maintain $20 subscriptions across the full set.
And so I can call on any one of them to do work in support of, or to check the work of, another.
And this, of course, is where the OpenClaw innovation solves one of those attention problems that we have as humans when we're operating with multiple agents.
Read that as multiple terminal windows, each of which has a Claude Code instance, separate from every other one, running some task.
The way OpenClaw works is, and you're building this for your purposes, you have multiple agents that are all now talking to you through typical human communication channels.
So you name each of those to recognize the domain of that particular correspondent.
And it's talking to you, giving you updates and notifications, asking for permissions, and all of that,
in a way that the terminal doesn't.
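The naming-and-notification pattern being described, several agents each bound to a messaging channel and named for their domain, might look something like this. The `Agent` and `Channel` classes here are assumptions for illustration, not OpenClaw's actual API.

```python
# Illustrative sketch: named agents pushing updates and permission
# requests to a human over a shared messaging channel. The classes
# and method names are hypothetical, not OpenClaw's real interface.

from dataclasses import dataclass, field

@dataclass
class Channel:
    # Stand-in for a messaging channel (e.g. a chat thread).
    name: str
    inbox: list = field(default_factory=list)

    def send(self, sender, text):
        # Deliver a message tagged with the sending agent's name.
        self.inbox.append(f"[{sender}] {text}")

@dataclass
class Agent:
    # Named for its domain, so you recognize the correspondent.
    name: str
    channel: Channel

    def notify(self, text):
        self.channel.send(self.name, f"update: {text}")

    def ask_permission(self, action):
        self.channel.send(self.name, f"permission? {action}")

chat = Channel("messages")
deploy = Agent("deploy-agent", chat)
deploy.notify("build finished")
deploy.ask_permission("push to production")
```

The design choice the speaker is pointing at is exactly this: because every message carries the agent's domain name, the human can triage many concurrent agents from one inbox instead of polling many terminal windows.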