Hayden Field
They can easily switch around.
That's something they've always done and probably will always do.
But what I'm seeing for the first time now is that...
Some people are going more off of loyalty when they stay with a model: knowing its tone, liking the rapport they've built up with it, the instructions they've laid into it via special docs and things like that.
I saw that even when OpenAI was beating Anthropic on some coding benchmarks, a lot of people were like, oh, well, I still like working with Claude Code, so I'm just going to keep doing that.
And the same in reverse: when Anthropic seemed to be winning the coding wars, some people wanted to stick with OpenAI because they were like, I just like this model better.
And I'm sure a lot of people feel that way about, you know, some aspects of Grok too.
Like maybe they just like the tone.
They just like this or that.
So they're just sticking with it.
So I think I'm seeing more loyalty among users in a way I hadn't really seen before.
I think every company, including xAI, is trying to create a moat that will keep people in its ecosystem because, I mean, that's the only way to really eventually make money.
Well, they are really on a tear right now.
I recently wrote that story about how big of a moment Claude is having between the success of Claude Code and Anthropic's safety reputation.
And also their Super Bowl ads, which were throwing shade at OpenAI for having ads in the chatbot.
I asked Amanda Askell about this, actually, when I last spoke with her about what responsibility Anthropic has to feed into or not feed into that narrative about Claude's potential consciousness.
Because I thought it was interesting that Anthropic didn't just outright deny it.
They've been toeing the line of, oh, we don't know.
It depends on how you define consciousness.