Nolan Arbaugh
Me and Bliss talked for a long time about the difference between the two on the technical side, so it'd be great to hear your side of the story.

Okay, so open loop is basically: I have no control over the cursor. The cursor will be moving on its own across the screen, and I am following it by intention to different bubbles, and the algorithm is training off of the signals it's getting as I'm doing this. There are a couple of different ways they've done it. They call one "center out targets": there will be a bubble in the middle and eight bubbles around it, and the cursor will go from the middle to one side, so say middle to left, back to middle, then up, back to the middle, like up-right, and they'll do that all the way around the circle. I will follow that cursor the whole time, and then it will train off of what it is expecting my intentions to be throughout the whole process.
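To make the open-loop, center-out calibration he describes concrete, here is a minimal sketch of how such a pass could collect training data. Everything in it is illustrative: `read_neural_features` stands in for the implant's feature stream, and the scripted cursor path and constant per-tick velocity are assumptions, not Neuralink's actual pipeline. The point it shows is that in open loop the labels come from the cursor's own scripted motion, which the user is following by intention.

```python
import math
import numpy as np

# Hypothetical stand-in for the device's feature stream; the real API is not
# public, so this just returns a random feature vector per tick.
def read_neural_features(n_channels: int = 1024) -> np.ndarray:
    return np.random.randn(n_channels)

def center_out_targets(radius: float = 0.4, n_targets: int = 8) -> list[tuple[float, float]]:
    """Eight 'bubbles' evenly spaced on a circle around the screen center."""
    return [
        (radius * math.cos(2 * math.pi * k / n_targets),
         radius * math.sin(2 * math.pi * k / n_targets))
        for k in range(n_targets)
    ]

def open_loop_pass(ticks_per_leg: int = 50):
    """One open-loop pass: the cursor is scripted (the user has no control);
    it glides center -> target -> center for each target in turn, and we log
    (neural features, scripted cursor velocity) pairs as training data."""
    center = np.zeros(2)
    data = []
    for target in center_out_targets():
        target = np.asarray(target)
        for leg_start, leg_end in [(center, target), (target, center)]:
            velocity = (leg_end - leg_start) / ticks_per_leg  # constant per tick
            for _ in range(ticks_per_leg):
                features = read_neural_features()
                # In open loop the label is the cursor's own motion, on the
                # assumption that the user is following it by intention.
                data.append((features, velocity.copy()))
    return data

if __name__ == "__main__":
    pairs = open_loop_pass()
    print(f"collected {len(pairs)} feature/velocity training pairs")
```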
Yes.
Yeah, so generally for calibration, I'm doing attempted movements because I think it works better. I think the better models as I progress through calibration make it easier to use imagined movements.
I've tried doing calibration with imagined movement and it just doesn't work as well for some reason. So that was the center-out targets. There's also one where a random target will pop up on the screen, and it's the same: I just follow along wherever the cursor goes, to that target, all across the screen. I've tried those with imagined movement, and for some reason the models just don't...
They don't give as high a level of quality when we get into closed loop. I haven't played around with it a ton, so maybe the different ways that we're doing calibration now might make it a bit better. But what I've found is there will be a point in calibration where I can use imagined movement. Before that point, it doesn't really work.
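For contrast, the closed-loop stage he mentions is the point where the decoder's output actually drives the cursor. The sketch below assumes a simple linear decoder matrix `W` already fit from open-loop pairs like those above; the real system's decoder and smoothing are not public, so this only shows the shape of the loop.

```python
import numpy as np

def closed_loop_step(W: np.ndarray, features: np.ndarray,
                     cursor: np.ndarray, dt: float = 0.02) -> np.ndarray:
    """One closed-loop tick: the decoded velocity moves the cursor, so the user
    sees the decoder's output and can correct it, unlike in open loop where the
    cursor ignores the user entirely."""
    velocity = features @ W  # W: (n_channels, 2) linear decoder, assumed pre-fit
    return np.clip(cursor + velocity * dt, -1.0, 1.0)  # keep the cursor on screen
```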
So if I do calibration for 45 minutes, for the first 15 minutes I can't use imagined movement. It just doesn't work for some reason. And after a certain point, I can just sort of feel it. I can tell it moves differently. That's the best way I can describe it. It's almost as if it is anticipating what I am going to do before I go to do it.
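One way to picture the progression he describes, where imagined movement only starts working partway through calibration, is a loop that refits the decoder as each open-loop block comes in and checks its quality on the newest block. The ridge-regression decoder, the correlation score, and the switch threshold below are all illustrative assumptions; the actual criterion for when imagined movement becomes viable is whatever his own feel for the model tells him.

```python
import numpy as np

def fit_linear_decoder(X: np.ndarray, Y: np.ndarray, reg: float = 1e-3) -> np.ndarray:
    """Ridge-regularized least squares mapping neural features to 2-D cursor velocity."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)

def decoder_score(W: np.ndarray, X: np.ndarray, Y: np.ndarray) -> float:
    """Crude quality score: correlation between decoded and intended velocities."""
    return float(np.corrcoef((X @ W).ravel(), Y.ravel())[0, 1])

def run_calibration(blocks, switch_threshold: float = 0.8) -> None:
    """blocks: list of (features, intended_velocity) arrays from successive
    open-loop passes. Train on everything seen so far, score the newest block,
    and flag the point where imagined movement might become usable.
    The score and threshold are illustrative, not Neuralink's actual criteria."""
    seen_X, seen_Y = [], []
    for i, (X, Y) in enumerate(blocks):
        if seen_X:  # need at least one earlier block to train on
            W = fit_linear_decoder(np.vstack(seen_X), np.vstack(seen_Y))
            score = decoder_score(W, X, Y)
            mode = ("imagined movement may work now" if score >= switch_threshold
                    else "stay with attempted movement")
            print(f"block {i + 1}: score={score:.2f} -> {mode}")
        seen_X.append(X)
        seen_Y.append(Y)
```

Called with a list of (features, intended_velocity) blocks from successive passes, it prints a score per block and flags the first block that clears the threshold, mirroring the "after a certain point I can just sort of feel it" transition he describes.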