Jun Li
So going through and creating any kind of thing I want. Like right now, I've been doing a lot of testing of different models in Higgsfield, right?
Seeing how well that process goes.
Can I do character generation?
Can I do character consistency?
This is the same kind of thing, but...
But local.
But local, right?
Right now, for this highlight, I loaded up the WAN 2.2.
I wanted to try a model that was fairly new and fairly large, but I could still run locally.
Now, my system on the GPU side only has a 3070 with, I think, 8 gigs of VRAM on board. Yeah, NVIDIA. And then I have 32 gigs of normal RAM, normal system memory.
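For context, a quick way to confirm numbers like those, assuming PyTorch with CUDA and the psutil package are installed (this script is illustrative, not from the episode):

```python
# Sanity-check the hardware described above: GPU model, VRAM, system RAM.
import torch
import psutil

print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3070"
vram = torch.cuda.get_device_properties(0).total_memory
print(f"VRAM: {vram / 2**30:.0f} GiB")                           # ~8 GiB on a 3070
print(f"RAM:  {psutil.virtual_memory().total / 2**30:.0f} GiB")  # ~32 GiB here
```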
And so going through the process, it'll push everything it can onto the GPU, offload as much as it can to the GPU, just like LM Studio, and then use my onboard memory for the difference.
It usually pushes usage way up.
It's highly intensive. It's going to eat up a bunch of memory.
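That offload pattern looks roughly like this in code. A minimal sketch, assuming the Hugging Face diffusers WanPipeline; the checkpoint id, prompt, and generation settings are assumptions, not details from the episode:

```python
# Minimal sketch: run a Wan 2.2-style text-to-video model locally,
# spilling whatever doesn't fit in 8 GB of VRAM into system RAM.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed repo id
    torch_dtype=torch.bfloat16,
)

# Like LM Studio's partial offload: keep the active component on the GPU
# and park the rest in system RAM. If VRAM is still too tight,
# pipe.enable_sequential_cpu_offload() is more aggressive (and slower).
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="a consistent character walking through a rainy street",
    num_frames=33,
    num_inference_steps=20,
).frames[0]
export_to_video(frames, "wan_test.mp4", fps=16)
```

The tradeoff is exactly what's described here: the more weights live in system RAM, the slower each step runs, but the model fits on an 8 GB card.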
But I was still able to watch YouTube while it was running in the background.
So not too bad.
It's not network intensive.
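For a sense of how much headroom a run like that actually leaves, a small sketch for sampling GPU load while it generates, assuming the nvidia-ml-py (pynvml) package:

```python
# Sample GPU utilization and VRAM use once a second during a run.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
for _ in range(10):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {util.gpu:3d}% | "
          f"VRAM {mem.used / 2**30:4.1f} / {mem.total / 2**30:4.1f} GiB")
    time.sleep(1)
pynvml.nvmlShutdown()
```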