GIFs of pose estimation on VR video

I trained a few models in the posts above, but they're not useful for various reasons.

I was looking at some image preprocessing methods the other day, and I think the lineart model (the kind used as an image annotator for image generation) might be a better fit than the EfficientNets I've been using.

(image: comparison grid of annotator outputs)

From top to bottom, left to right:
original, canny, hed, lineart
lineart anime, midas, normal bae, oneformer
openpose (the subject is too close for it to make anything out), pidinet, zoe, blank
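
For reference, a grid like the one above can be put together with the annotator implementations in the controlnet_aux package. That package, the `lllyasviel/Annotators` checkpoint, and the file names below are my assumptions about how you'd reproduce it, not necessarily what I ran:

```python
# Sketch: run a handful of annotators on one frame and tile the results.
# Assumes the controlnet_aux package; "frame.png" is a placeholder input.
from PIL import Image
from controlnet_aux import (
    CannyDetector, HEDdetector, LineartDetector, MidasDetector, OpenposeDetector,
)

frame = Image.open("frame.png").convert("RGB")

canny = CannyDetector()  # no pretrained weights needed
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

outputs = {
    "original": frame,
    "canny": canny(frame),
    "hed": hed(frame),
    "lineart": lineart(frame),
    "midas": midas(frame),
    "openpose": openpose(frame),
}

# Tile into a simple 3x2 comparison grid.
w, h = frame.size
grid = Image.new("RGB", (w * 3, h * 2))
for i, (name, img) in enumerate(outputs.items()):
    grid.paste(img.resize((w, h)), ((i % 3) * w, (i // 3) * h))
grid.save("output.png")
```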

I’ve been trying to simplify the image fed to the model to lighten the training and inference workload (although I still have dataset challenges). Lineart seems really consistent and fast, at about 30 ms per frame unbatched, so I want to try using it as a feature extractor.
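
A minimal sketch of what I mean by running it frame-by-frame as a preprocessing step, with a rough timing check (assumes controlnet_aux and OpenCV; the clip name is a placeholder, and the ~30 ms figure obviously depends on GPU and resolution):

```python
# Sketch: run the lineart annotator unbatched over video frames and time it.
import time
import cv2
from PIL import Image
from controlnet_aux import LineartDetector

lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")

cap = cv2.VideoCapture("clip.mp4")  # hypothetical clip
times = []
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    frame = Image.fromarray(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))

    start = time.perf_counter()
    features = lineart(frame)  # simplified line drawing used as the model input
    times.append(time.perf_counter() - start)

cap.release()
if times:
    print(f"mean per-frame time: {1000 * sum(times) / len(times):.1f} ms")
```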

If you’re looking for something usable, I’d say check out Motion Tracking Funscript Generator v0.5.x - Software - EroScripts or How to use FunscriptToolBox MotionVectors Plugin in OpenFunscripter - howto - EroScripts, or pretty much any other posts under the software category. At some point I’d be curious to see how a custom model would do if it could use the motion vectors generated by the latter, but there wouldn’t be any pretrained feature extractors for that kind of input, so the initial training phase would be slower and it would require a fair bit of reprocessing of my current dataset.
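
I haven't tried that, but as a rough stand-in for the codec-level motion vectors the FunscriptToolBox plugin extracts, dense optical flow gives a similar two-channel motion field per frame. This is purely a sketch of the idea, not the plugin's actual output:

```python
# Sketch: approximate per-frame motion features with dense optical flow.
# NOT the FunscriptToolBox codec motion vectors, just a stand-in that
# produces a comparable HxWx2 (dx, dy) motion field per frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")  # hypothetical clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback params: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flows.append(flow.astype(np.float16))  # per-pixel (dx, dy)
    prev_gray = gray

cap.release()
# flows could then be downsampled/stacked as input features for a custom model.
```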
