Grant Harvey
Yeah.
That's right.
Because that's sort of like how the hippocampus works, right?
Where it's like...
I'm going to do a terrible job.
Does that mean that it's also more interpretable at some level?
Like you can kind of understand what it's going to do or no?
Well, then I guess what I want to know now is, so, you know, you've proven this BDH works at GPT-2 scale, as I read, with 1 billion parameters.
Is that correct?
What's the path to scaling it to, say, 100 billion parameters?
What needs to happen to get there or grow larger?
Let's say, for example, it's learning something really complicated. I mean, you're a complexity scientist, you know more about this than I do, but something really complicated.
Does it at some point run out of brain power, or how does that work?
Because I'm used to thinking of parameters as this thing that's like, oh, it can retain a lot more information.
And if this thing is just continuously running, at what point does it reach its limit of what it can think of?
Or do we just not know that?
What's the roadmap then?
Are we thinking we're going to be Lego-blocking a bunch of different models together?
I mean, as far as the company goes, where do you see this going in production?