Grant Harvey
Well, then I guess what I want to know now is, you know, you've proven this BDH works at GPT-2 scale, as I read, with 1 billion parameters.
Is that correct?
What's the path to scaling it to, say, 100 billion parameters?
What needs to happen to get there, or to grow even larger?
Let's say, for example, it's learning something really complicated. I mean, you're a complexity scientist, you know more about this than I do, but something really complicated.
Does it at some point run out of brain power?
Because I'm used to thinking of parameters as the thing that determines how much information it can retain.
And if this thing is just continuously running, at what point does it reach its limit of what it can think of?
Or do we just not know that?
What's the roadmap then?
Are we thinking we're going to be Lego blocking a bunch of different models together?
I mean, as far as the company goes, where do you see this going in production?
How do you prevent it from learning something you don't want it to learn?
And maybe you're not at the point where you can do that yet.
You can literally reverse it.
You could quarantine it, essentially.
Yeah, somehow.
With the current architecture?
Some would argue that it can kind of generalize.