Dr. Jeff Beck
And so what do we do?
We run a VAE to do the pre-processing.
And the pre-processing step is completely independent from the actual algorithm that's going to be tasked with solving the problem of interest, and you know, that's not something we necessarily have to stick with, right?
It would be very nice if there was a way of doing it jointly, and we're getting right back to JEPA again.
What we'd like to do is choose our pre-processing algorithm in a manner that's not a priori; we don't want to just do it first and move on.
We'd like to choose the pre-processor that works the best in this space.
And I think the ultimate motivation for a lot of this work is: what's the right embedding?
One of my favorite tricks, of course, is that I pre-process with VAEs all the time.
In fact, every time someone hands me a new neural data set, the first thing I do, I'm not ashamed to admit, is run PCA on it, pass it through a VAE, and then sort of take a look, right?
It's the first thing you do with your data because it gives you a good idea of what the signal-to-noise ratio is in the data set itself.
Then what do I do?
I subsequently do most of my analysis in that discovered embedding space.
I don't see a huge problem with that from a purely pragmatic perspective, but it's certainly cleaner to have a single algorithm and approach and not just be stringing these things together in an ad hoc way.
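A minimal sketch of that first-look workflow, assuming a hypothetical data matrix `X` in place of a real neural recording (the VAE step is elided for brevity; the PCA explained-variance curve provides the quick signal-to-noise read described above):

```python
# A minimal sketch of the "first look" workflow described above.
# X is a hypothetical stand-in for a real neural data matrix of shape
# (n_samples, n_features); the VAE step is elided for brevity.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))  # placeholder for a real recording

pca = PCA(n_components=20)
Z = pca.fit_transform(X)  # the discovered embedding space

# A sharp elbow in this curve hints at a low-dimensional signal sitting
# on top of broadband noise; a flat curve suggests little recoverable
# structure.
print(pca.explained_variance_ratio_)

# Subsequent analysis (decoding, clustering, etc.) then runs on Z, not X.
```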
PCA is a really great example of this.
There's a failure mode for principal component analysis which is actually really common in neural data, because principal component analysis basically says, well, where's the most variability?
Okay, that's what I'm going to worry about.
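A toy demonstration of that failure mode, using hypothetical signal and noise variables: when a task-irrelevant axis carries most of the variance, the top principal component locks onto it rather than onto the signal.

```python
# A toy demonstration of the PCA failure mode just described: the
# task-relevant signal lives on a low-variance axis, an irrelevant axis
# carries far more variance, and the first principal component locks onto
# the noise instead of the signal.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 1000
signal = rng.choice([-1.0, 1.0], size=n)   # task-relevant, variance ~1
noise = 10.0 * rng.normal(size=n)          # task-irrelevant, variance ~100
X = np.column_stack([signal, noise])

pca = PCA(n_components=1).fit(X)
print(pca.components_)  # ~[[0, 1]]: the first PC is the noise axis
```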