Eve Bodnia
No, so when we started this company, I just had a theoretical idea, right?
And I was surrounded by talented engineers who brought that idea into the form of a proof of concept.
And that was a few months ago.
And the natural question was: can the proof of concept become a toy model for the model we have today?
The answer was yes, but to get there we had to run a series of experiments to evaluate what's right, what works, and what doesn't.
So once the architecture was fully designed, the next question was: is it compatible with LLMs, or with transformers in general?
Because it's so fundamentally different.
We didn't even know whether it was possible to do.
So the first step was to attach a transformer, try to scale it a little bit, and then shrink it back to the toy-model version.
And we successfully did so.
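A minimal sketch of what that step might look like, assuming the EBM scores inputs with a scalar energy head; the `ToyEBM` class, the dimensions, and the mean-pooling below are hypothetical illustrations, not the actual architecture:

```python
import torch
import torch.nn as nn

class ToyEBM(nn.Module):
    """Hypothetical energy-based module: maps an embedding to a scalar energy."""
    def __init__(self, dim: int):
        super().__init__()
        self.energy_head = nn.Sequential(
            nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 1)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.energy_head(h).squeeze(-1)  # one energy per input

class TransformerEBM(nn.Module):
    """A small transformer encoder attached in front of the toy EBM."""
    def __init__(self, vocab: int = 1000, dim: int = 64, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.ebm = ToyEBM(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(tokens))   # (batch, seq, dim)
        return self.ebm(h.mean(dim=1))         # pool the sequence, score it

# "Scale it a little, then shrink it back": same code, different sizes.
toy    = TransformerEBM(dim=32, layers=1)    # toy-model version
scaled = TransformerEBM(dim=128, layers=4)   # scaled-up version
tokens = torch.randint(0, 1000, (2, 16))
print(toy(tokens).shape, scaled(tokens).shape)  # torch.Size([2]) twice
```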
And then we asked: can we attach not a full LLM, but the simplest version of something LLM-like to the transformer, something small that we understand?
So we attached that, scaled it up, scaled it back down, and okay, that works.
How about we just attach a real LLM to the EBM and see how it works as a user interface?
Can it prompt the EBM in the way we want?
And the answer was yes.
So again we scaled it up and then tested it.
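A hedged sketch of the LLM-as-user-interface idea: the LLM translates a natural-language request into a structured conditioning spec for the EBM. Here `call_llm`, the JSON schema, and its keys are invented stand-ins, and the same pattern would apply to the smaller LLM-like module mentioned earlier:

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in: a real system would call the attached LLM here.
    return json.dumps({"task": "plan", "goal": "stack the blocks", "horizon": 5})

def prompt_ebm(user_request: str) -> dict:
    """Use the LLM to turn a user request into an EBM conditioning spec."""
    instruction = (
        "Translate the user's request into a JSON spec with keys "
        "'task', 'goal', and 'horizon' for the downstream model.\n"
        f"Request: {user_request}"
    )
    spec = json.loads(call_llm(instruction))
    # In the real pipeline this spec would condition the EBM's inference.
    return spec

print(prompt_ebm("Plan how to stack these blocks in five moves."))
```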
We have a set of benchmarks related to spatial thinking and hierarchical planning.
So we had baselines for the smallest version of the model, then the proof of concept, then the real version of the model, and we compared them back and forth, and it seems to be working.
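A sketch of that back-and-forth comparison, assuming a simple accuracy-style evaluation; the task prompts, answers, and model stand-ins below are invented placeholders, not real benchmark data:

```python
from typing import Callable, Dict

def evaluate(model: Callable[[str], str], tasks: Dict[str, str]) -> float:
    """Fraction of benchmark tasks the model answers exactly right."""
    correct = sum(model(prompt) == answer for prompt, answer in tasks.items())
    return correct / len(tasks)

# Invented stand-ins for spatial-thinking / hierarchical-planning tasks.
tasks = {
    "Which block is on top after move A, then move B?": "red",
    "What is the first subgoal for reaching the key?": "open the door",
}

# Invented stand-ins for the three model sizes being compared.
models = {
    "toy": lambda p: "red",
    "proof_of_concept": lambda p: "red",
    "full": lambda p: "open the door" if "subgoal" in p else "red",
}

for name, model in models.items():
    print(f"{name:>18}: {evaluate(model, tasks):.2f}")
```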