Andy Halliday
I mean, the combination of features is inadequate for any practical use, in my view.
Yeah.
I'd like to just add that this meta system that Poetiq has, you know, kind of blown the ARC-AGI-2 competition open with is leveraging an existing winner in the ARC-AGI-2 competition, which is Gemini 3 Pro and Gemini 3 Pro Thinking.
And what it does is some kind of magic on the back end that, you know, adapts to that model and then refines it in a way that is inexpensive but dramatically improves the performance.
So the people at Google are probably going, wait, what?
Right.
We've just spent, you know, all this time getting to that level.
And then we put it out, and you can take our model and make it go that much better.
Wow.
But the good news there is that this shows that some kind of orchestration of actions with that model (I'm going to dig into it and try to figure out what Poetiq is doing there) makes it possible to very inexpensively produce much better reasoning results on the toughest exam out there, which is ARC-AGI-2.
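One generic form such an orchestration could take, purely as an illustration and not Poetiq's disclosed method, is a propose-critique-revise loop that wraps a few extra calls around an unmodified base model. The function name solve_with_refinement, the prompt wording, and the call_model callable below are hypothetical placeholders sketched for this transcript.

from typing import Callable

def solve_with_refinement(
    task: str,
    call_model: Callable[[str], str],
    max_rounds: int = 3,
) -> str:
    """Propose-critique-revise loop around a single base model.

    Each round costs a couple of extra model calls rather than any
    retraining, which is how an orchestration layer could improve
    results cheaply on top of an existing model such as Gemini 3 Pro.
    """
    # Initial proposal from the base model.
    answer = call_model(f"Solve this task:\n{task}")
    for _ in range(max_rounds):
        # Ask the same model to audit its own answer.
        critique = call_model(
            f"Task:\n{task}\n\nProposed answer:\n{answer}\n\n"
            "List any mistakes. Reply with exactly 'OK' if there are none."
        )
        if critique.strip().upper() == "OK":
            break
        # Feed the critique back in and request a corrected answer.
        answer = call_model(
            f"Task:\n{task}\n\nPrevious answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nProduce a corrected answer."
        )
    return answer

Anything that takes a prompt string and returns the model's text, for example a thin wrapper around whatever client you use for the underlying model, can be passed in as call_model.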
Despite the limitations of LLMs, we are making substantial progress towards artificial general intelligence.
But there's, at the same time, a whole cadre of people out there saying there's a ceiling on what LLMs can do in this respect, and we have to move forward with neurosymbolic or some other combination of systems that's going to get us there.
But it's a glass ceiling, let's put it that way.
You can break through it with other techniques, but there's a glass ceiling for this class of models.
And then it'll be some other, you know, architectural approach that, in combination with LLMs, will get us into the stratosphere here.