Nathaniel Whittemore
Coding agents are the first example.
There are more on the way.
Long horizon agents are functionally AGI, and 2026 will be their year.
Now, in the next section, Pat and Sonya are careful to qualify that they don't claim any sort of scientific authority to propose this definition.
And yet, with that said, they offer what they call a functional definition of AGI.
AGI, they write, is the ability to figure things out.
That's it.
A human who can figure things out has some baseline knowledge, the ability to reason over that knowledge, and the ability to iterate their way to the answer.
An AI that can figure things out has some baseline knowledge (pre-training), the ability to reason over that knowledge (inference time compute), and the ability to iterate its way to the answer (long horizon agents).
The first ingredient, knowledge and pre-training, is what fueled the original ChatGPT moment in 2022.
The second, reasoning and inference time compute, came with the release of o1 in late 2024.
The third, iteration and long horizon agents, came in the last few weeks with Claude Code and other coding agents crossing a capability threshold.
Generally intelligent people can work autonomously for hours at a time, making mistakes, fixing them, and figuring out what to do next without being told.
Generally intelligent agents can do the same thing.
This is new.
So what's an example of this new capability that they're talking about?
They provide an example of a founder telling his agent that he needs a developer relations lead.
He gives a set of qualifications, including the fact that this person needs to enjoy being on Twitter.
The agent starts in an obvious place: LinkedIn searches for "developer advocate," for example.