Eve Bodnia
And then you can orchestrate the agentic layers between LLMs, between EBMs (energy-based models), or you can just clone the hybrids of EBMs and LLMs.
And then you set up some sort of game theory situation, transitive games, non-transitive games.
And all of a sudden you have a full evolving AI ecosystem, because it's going to self-train, it's going to self-align, it's going to create something we can't even dream of.
So that's the exciting part.
And I don't know at what moment you call it AGI.
Do you call it AGI when you already have agents, I don't know, bringing you solutions to human hypotheses, or is it just the ability to control the energy grid and a car?
So what is it?
Yeah.
But now we see, you know, if people 10 years ago had known how LLMs perform right now, they would also have called it AGI.
So I think the definition of AGI is going to evolve as well, because we're going to have a new thing, we're going to see the flaws in the new thing, and we're like, oh, no, no, no, this is bullshit.
AGI is something higher.
Move the goalposts.
Yeah, I could see textbooks, the courses on AI and AGI at universities, talking about the different eras of what was called AGI.
Yeah.
Exactly.
Yeah.
Well, it's always up to people, right?
What we need.
It's not up to the AI model.