Taylor Mullen
Yeah.
I like it.
Yeah.
But it's super powerful, because these LLMs will make subtle mistakes, and if they iterate over and over again, they can refine their own work pretty effectively.
Yeah.
So the Ralph Wiggum technique is one of those examples: you're just letting it churn to improve its own answer.
So it's one of those things where, like I was saying, it's a way I can let a thread run for a lot longer and ideally get a better output at the end.
It doesn't always work that way.
Like there are times when the models will go off the rails and they'll go into totally unrelated territory.
It's a very fickle balance: do you want pure autonomy, or do you want to keep the driver in the loop?
We, in Gemini CLI, we try and strike a good balance between the two so it's customizable, so you can kind of do both.
But yeah, always a hard problem.
Everyone has done it a little differently. For us, we feed into a new chat, a new context, every single time.
It's arguably the most pure version.
There are other clients that don't do that.
Like there are other ones that maintain the current thread and they just let the history kind of carry forward.
I think even Claude Code does that.
But the intention is for it to start with a truly fresh mindset each time; that's what the algorithm, or pattern, is supposed to be.
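The loop described above can be sketched roughly like this. This is a minimal illustration, not Gemini CLI's actual implementation: `call_model` is a hypothetical stand-in for any LLM API call, and here it just appends a marker so the sketch runs deterministically. The key point is that every iteration builds a brand-new prompt containing only the task and the latest draft, with no accumulated chat history.

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. to the Gemini API).
    # Here it just "refines" the draft deterministically so the sketch runs.
    return prompt.split("DRAFT:\n", 1)[-1] + " [refined]"

def fresh_context_loop(task: str, iterations: int = 3) -> str:
    """Iteratively refine an answer, starting a fresh context each round."""
    draft = ""
    for _ in range(iterations):
        # Fresh context every time: only the task and the latest draft,
        # never the previous conversation history.
        prompt = f"TASK: {task}\nImprove the draft below.\nDRAFT:\n{draft}"
        draft = call_model(prompt)
    return draft

result = fresh_context_loop("write a haiku about autumn", iterations=3)
```

The alternative pattern mentioned above (carrying the thread forward) would instead append each exchange to a growing message history and resend it all, trading context freshness for continuity.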