Cal Newport
But I do think Gary Marcus, I don't know if it was a scoop, captured something really important in a recent newsletter.
When Anthropic's code for Claude Code, the coding harness that sits on top of their LLMs, leaked, it turns out they've added a huge amount of old-fashioned, hand-coded, symbolic-AI-style rules, pattern recognizers, and special if-thens.
So they've just been sitting there tuning this program for specifically doing computer programming, and the LLM is being a little bit more isolated to just the code production.
So they've kind of just gone back to old-fashioned techniques. It's an old-fashioned system that is plussing up an LLM.
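The hybrid Newport is describing, symbolic rules and pattern matchers routing requests around the LLM so the model handles only raw code production, can be sketched roughly like this. This is a toy illustration, not Anthropic's actual code; every name, rule, and pattern here is invented for the example:

```python
import re

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; just returns some generated code text.
    return "def add(a, b):\n    return a + b\n"

# Hand-coded, symbolic-AI-style rules: (pattern on the request, handler).
# Purely hypothetical examples of the kind of special-casing described.
RULES = [
    (re.compile(r"\brename\b.*\bvariable\b"),
     lambda req: "HANDLED_BY_RULE: symbolic rename refactor"),
    (re.compile(r"\bformat\b|\blint\b"),
     lambda req: "HANDLED_BY_RULE: run deterministic formatter"),
]

def harness(request: str) -> str:
    """Try the if-then rules first; fall back to the LLM for code generation."""
    for pattern, handler in RULES:
        if pattern.search(request):
            return handler(request)
    # No rule fired: isolate the LLM to just producing code.
    return fake_llm("Write code for: " + request)

print(harness("please rename this variable"))  # handled by a symbolic rule
print(harness("write an add function"))        # falls through to the LLM
```

The point of the design is that the deterministic rules absorb the cases where a hand-written check is more reliable than sampling from a model.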
But I'm with you.
Yeah, it's very hard.
Just asking an LLM, "Give me a plan for doing X": for almost any scenario of X, you really can't trust a plan from a model whose goal is primarily to finish text, to finish the story you gave it in a reasonable-sounding style.
That's not how we plan.
That's not how we think about planning.
And it doesn't give you consistently usable plans.
So yeah, but you're right.
It's...
Like the agents are coming.
They've been saying this.
I mean, the article I wrote in January asked, what happened to the year of the agent? 2025 was supposed to be the year of the agent.