Steve Hsu
And so our startup realized early on that in order to make practical applications from these AIs, one would have to solve this problem.
One would have to control
both the behavior of the AI agent, if you want to call it an agent, and also control its fact base, the sort of core knowledge base that it uses to answer questions or conduct operations.
And that cannot be drawn from the pre-training data.
There's just too much junk in the pre-training data.
Or even if it's not junk, it may not be relevant to the specific problem that the AI agent is trying to solve for you.
Yes.
So there's something called the embedding space.
The models are actually working in this abstract space, which is literally a space of concepts.
Oxford and Cambridge are very, very close in that concept space.
And maybe there's no other school that sits exactly in that region of the space.
But, you know, sometimes the details matter.
So if I'm a fundraiser for Cambridge, I actually care whether you went to Cambridge or Oxford in deciding whether to contact you.
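The closeness being described here is usually measured as cosine similarity between embedding vectors. A minimal sketch, using made-up toy 4-dimensional vectors (real embedding models use hundreds or thousands of dimensions, and these particular numbers are purely illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: near 1.0
    for concepts that are close, near 0.0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy "embeddings" chosen so that the two universities
# land close together and an unrelated word lands far away.
embeddings = {
    "Oxford":    [0.92, 0.88, 0.10, 0.05],
    "Cambridge": [0.90, 0.91, 0.12, 0.04],
    "banana":    [0.02, 0.05, 0.95, 0.90],
}

print(cosine_similarity(embeddings["Oxford"], embeddings["Cambridge"]))  # close to 1.0
print(cosine_similarity(embeddings["Oxford"], embeddings["banana"]))     # close to 0.0
```

This is exactly why the fundraiser example matters: to a similarity search, Oxford and Cambridge look nearly identical, so an application that needs the distinction has to keep that fact somewhere more precise than the embedding space.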
Right.
Right, so we actually build systems in which we embed the language models in a larger software platform.
And the language model itself we generally use mainly for its language abilities
and, to some extent, its reasoning abilities.
But the knowledge base is stored separately.
The fancy way we describe it is as a kind of attached memory for the AI.
The AI can rely on that attached memory.
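The "attached memory" pattern described above is often implemented as retrieval-augmented generation: keep the fact base outside the model, retrieve the relevant entries per query, and hand only those facts to the model in its prompt. A minimal sketch, where every name is hypothetical and keyword overlap stands in for a real vector store and embedding search:

```python
# Hypothetical external knowledge base: facts live here, not in the
# model's pre-training data, so they can be curated and updated.
knowledge_base = {
    "cambridge_alumni": "Jane Doe matriculated at Cambridge in 1998.",
    "oxford_alumni": "John Smith matriculated at Oxford in 2001.",
}

def retrieve(question, kb, top_k=1):
    """Return the kb entries sharing the most words with the question.
    (A real system would rank by embedding similarity instead.)"""
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(
        kb.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, kb):
    """Ground the model in retrieved facts rather than pre-training data."""
    facts = "\n".join(retrieve(question, kb))
    return f"Answer using ONLY these facts:\n{facts}\n\nQuestion: {question}"

prompt = build_prompt("Who matriculated at Cambridge?", knowledge_base)
print(prompt)
```

The design point is the one made in the conversation: the model supplies language and some reasoning, while the attached memory supplies the facts, so the fact base can be controlled independently of the model.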