Dr. Jeff Beck
And then all the stuff that's not varying very much, I'm just going to throw it away, right?
Dimensions in which there's low variability are not important.
Well, it turns out that in neural data, the dimensions in which there's very little variability are some of the most important dimensions.
And so pre-processing with PCA runs a risk of throwing out the most valuable information in your data set.
And so there's a lot of wisdom in jointly fitting your pre-processing model as well as your inference and prediction model.
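To make the risk concrete, here is a minimal toy sketch (my own illustration, not from the conversation): the task-relevant signal is placed in a low-variance dimension, so truncating to the top principal components largely discards it. All the data and names here are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neural data": 200 trials x 3 dimensions.
# Dims 0 and 1 carry high-variance noise; dim 2 is low-variance
# but carries the task signal -- the case described above.
labels = rng.integers(0, 2, size=200)               # binary task condition
X = rng.normal(0.0, 5.0, size=(200, 3))             # large task-irrelevant variability
X[:, 2] = 0.1 * labels + rng.normal(0.0, 0.02, 200)  # tiny but informative dimension

# PCA via SVD on centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep only the top-2 (highest-variance) components, then project back.
X_trunc = (Xc @ Vt[:2].T) @ Vt[:2]

def corr_with_label(data):
    # Best single-dimension correlation with the task labels.
    return max(abs(np.corrcoef(data[:, d], labels)[0, 1]) for d in range(3))

print("full data :", corr_with_label(Xc))       # task signal clearly present (dim 2)
print("after PCA :", corr_with_label(X_trunc))  # much of the task signal is gone
```

The truncation keeps almost all of the variance yet drops most of the decodable task information, which is exactly why fitting the pre-processing jointly with the prediction model can matter.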
Yes.
And that is a laudable goal, right?
And I certainly share it, right?
Fortunately, networks are so big now that we don't really run the risk of overfitting as much as we used to. But the last thing you want to do is train your network to toss information that you might need down the road, right?
That said, the vast majority of what the brain does, just like these neural networks, is decide what information is currently task-irrelevant.
But that's all the more reason to do things in a self-supervised or unsupervised way, right?
Because you're basically not telling it what's important; you're not telling it what's task-relevant and what's task-irrelevant.
That sounds like another philosophical question.
So yes, is my answer.
There will always be something left over, in the sense that this has been the trajectory things have been on for a really long time.
It's sort of like we get algorithms that do amazing new cool things.
And then someone comes along and says, yeah, but it can't build me... it can't pull a rabbit out of a hat, right?