Grant Harvey
Podcast Appearances
So, like, yeah, use Factory because it'll save you those tokens.
So let's talk about the blog that you just published called Evaluating Compression.
Really, really awesome blog and research that you did.
The hardest part about working on large codebases is context, right?
So why is maintaining context across long agent sessions so difficult?
And how did you attempt to solve this?
So...
Two things.
Number one, I saw Ray Fernando, who's one of my favorite AI coders on YouTube, say he had a coding session with Factory with 7 million token context, which is just gnarly.
I can't believe that.
How is that even possible?
Is it just from applying this technique?
And I guess the other question is just in production, why does this matter, right?
We kind of talked about it, but maybe we could just put a finer point on it.
I was going to ask that.
Would you ever launch a general-purpose agent?
Because I think that's the problem with Claude Code, right?
It says "Claude Code," so people don't think of it in that way.
So, would you ever launch a general-purpose agent?
I was going to say it's actually, I don't know the right phrase for it, but like a business sin that Microsoft and Google, or even OpenAI and Anthropic, have not made an agentic writing tool.