Andy Halliday
The dot Mac slash Claude dash mem.
That's where you can find it.
And maybe you can post that link.
But this is... it's related in my mind to the whole idea of recursive LLMs, which are...
LLMs that use a Python REPL as an outboard memory and then manage a large context window more effectively, because there's this thing called context rot.
And a recursive model takes these things that it's captured in files and feeds them back into the context in a systematic fashion.
And similarly, claude-mem is going to capture the context that is often visible to you when you're using Claude Code, and it does a compaction.
When it reaches the limit of its context window, it says, okay, I'm going to have to stop here, create a little summary, and pass that on to the next iteration of this session.
And then it's going to do even more than that.
It's not going to just do a simple compaction and kind of toss the ball forward.
It's going to build a major repository for your Claude Code sessions that creates a longer-term memory of what you're doing with Claude Code.
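The compact-and-hand-off loop described here can be sketched roughly like this. This is an illustrative toy, not claude-mem's actual implementation: the names (`summarize`, `MAX_TOKENS`, `run_session`) and the word-count tokenizer are assumptions for the sake of the example.

```python
# Hypothetical sketch of compaction with hand-off to the next iteration.
# All names and the token budget are illustrative, not claude-mem's API.

MAX_TOKENS = 100  # tiny context-window budget, for illustration only


def count_tokens(messages):
    # Crude stand-in for a real tokenizer: one "token" per word.
    return sum(len(m.split()) for m in messages)


def summarize(messages):
    # Stand-in for an LLM-generated summary: keep a snippet of each turn.
    return "SUMMARY: " + " | ".join(m[:20] for m in messages)


def run_session(incoming_summary, new_messages, archive):
    """One session iteration: start from the previous summary, append turns,
    compact when the budget is exceeded, and archive each summary so a
    longer-term repository accumulates across sessions."""
    context = [incoming_summary] if incoming_summary else []
    for msg in new_messages:
        context.append(msg)
        if count_tokens(context) > MAX_TOKENS:
            handoff = summarize(context)
            archive.append(handoff)  # longer-term memory repository
            context = [handoff]      # next iteration starts from the summary
    return context, archive
```

Each time the budget is blown, the full context collapses into one summary that both seeds the next iteration and lands in the archive.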
Well, there's definitely a compression that happens, and they use the Claude Agent SDK to do that compression.
So there's AI compression happening on the context that's being retained, and Claude is reviewing all of that at the beginning of each new session that you're doing.
And I haven't implemented it yet, but I think you can actually go in and edit that to some degree.
Because if you're doing Claude sessions every day, a new one for 20 weeks, maybe this thing gets a little large, and it could become costly in terms of the context tokens that are being injected at the beginning of the session.
But it is actively compressing that.
So I think that's one of the advantages of this.
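The token-cost concern above suggests capping how much stored memory gets injected when a new session starts. Here's a minimal sketch of one such policy; the budget, the newest-first selection, and the function name are all assumptions, not how claude-mem actually does it.

```python
# Hypothetical sketch: cap the archived summaries injected at session start,
# so months of accumulated memory can't blow up the per-session token cost.
# Budget and trimming policy are illustrative assumptions.

INJECT_BUDGET = 50  # max "tokens" (words, here) of memory to prepend


def build_session_preamble(archive, budget=INJECT_BUDGET):
    """Pick the most recent summaries that fit within the budget."""
    selected = []
    used = 0
    for summary in reversed(archive):  # walk newest summaries first
        cost = len(summary.split())
        if used + cost > budget:
            break
        selected.append(summary)
        used += cost
    return list(reversed(selected))    # restore chronological order
```

Newest-first selection keeps recent work in scope while silently dropping the oldest summaries once the budget is hit; a real system might instead re-summarize the overflow rather than drop it.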