Hi.
Spent the last couple days working on a project to automate funscript generation. Tried several different ways and so far this is the best one I’ve managed.
It’s not perfect by any means, but it’s functional. You can use the command line or a Gradio UI.
It can do single videos, or you can point it at a directory and let it do its thing. It works best with vertical movements; horizontal movements give much more mixed results, sometimes pretty good and sometimes not so much. It also uses OpenCL for acceleration, since that’s about the only option I have. Testers and suggestions would be appreciated.
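If you’re curious how the OpenCL side works from Python: OpenCV’s transparent API (UMat) routes supported operations to an OpenCL device when one is available. A rough sketch of the idea, not the actual project code (the filename and the Farneback parameters are just placeholders):

```python
import cv2

# ask OpenCV to use its OpenCL backend where kernels exist
cv2.ocl.setUseOpenCL(True)
print("OpenCL available:", cv2.ocl.haveOpenCL())

cap = cv2.VideoCapture("clip.mp4")  # placeholder path
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # passing UMat pushes the flow computation to the OpenCL device
    flow = cv2.calcOpticalFlowFarneback(
        cv2.UMat(prev), cv2.UMat(gray), None,
        0.5, 3, 15, 3, 5, 1.2, 0)
    prev = gray
cap.release()
```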
Edit: GitHub support is being as slow as possible, so here is a GitLab repo.
Yeah, not sure what happened there. It’s there and public, but apparently only I can see it? I dunno, trying to figure it out. I can only assume I screwed something up somehow.
Apparently back in May my GitHub account got hacked. Waiting on support now ;.; Hopefully it doesn’t take too long to get back up and running.
Though that may be a good thing, since it gives me more time to improve the script before people test it. The output is looking a bit better now.
It looks to be passing the eyeball test most of the time, though.
Is this meant for IRL video scripting, or did you also train it on hentai/furry animations? I see it’s in Python, so I could look through the code, since I’m personally interested in a tool like this as well, mainly for use in furry animations, hence why I’m asking.
Funnily enough, I did try to train an AI model for the job, but it failed horribly in training. Maybe someone smarter can pull it off; I gave up on that path.
I don’t know how it would work with PMV/HMV; since it uses motion tracking, it wouldn’t follow the beat at all, just the movements in the video. There’s probably some way to do it, but I’m not sure how, and at the moment I’m just trying to improve the tracking. I’ve tested it on hentai clips and it seems to be working fairly well, with some exceptions: movements in small sections of the video aren’t being picked up right.
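To give an idea of why it tracks on-screen action rather than beats, the core of the approach is roughly this (a simplified sketch, not the repo’s actual code; `flows` would be per-frame optical flow fields and `fps` the video frame rate):

```python
import json
import numpy as np

def flows_to_actions(flows, fps):
    # mean vertical flow per frame, integrated into a displacement signal
    dy = np.array([float(np.mean(f[..., 1])) for f in flows])
    pos = np.cumsum(dy)
    # normalize to the funscript 0-100 position range
    pos = pos - pos.min()
    if pos.max() > 0:
        pos = pos / pos.max() * 100.0
    # one action per frame; "at" is in milliseconds
    return [
        {"at": int(i / fps * 1000), "pos": int(round(p))}
        for i, p in enumerate(pos)
    ]

def write_funscript(path, actions):
    with open(path, "w") as fh:
        json.dump({"version": "1.0", "actions": actions}, fh)
```

A real version needs smoothing and peak detection rather than emitting one action per frame, but that’s the gist: everything comes from the motion in the frame, so a beat track never enters the picture.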
This is direct output from the script loaded into OpenFunscripter. It’s a basic video, but it does work. Trying to get the timing a bit better now and work out some of the remaining issues.
In theory it would? Resolution increases generation time by quite a bit. Haven’t tested on a VR video yet, to be honest with ya.
Recent changes for accuracy have bogged down generation times, so I’m trying to sort that out now. If you happen to have a link to a short sample, I can give it a try.
I actually don’t mind that it follows the video action rather than the PMV music; in that case it’s what I want. For beat-based funscript generation there are other tools like FunscriptDancer and whatnot. I very much prefer action-based and accuracy over everything, given I’m planning to use this with a smart fuck machine setup.
I got it to work fine; results have been iffy, but it could create a good starting point for scripters. Tried it on two videos so far: one animated video with a lot of jump cuts, and one IRL POV video where the dominant party was in reverse cowgirl a lot of the time.
In the POV video with a lot of riding and throwing it back in doggy, it got really confused. If someone is riding, it should be inverting the parabola; instead, when the rider was at the end of their stroke, where it should have been the deepest, the parabola was at its shallowest.
This is easily fixed by going in and reversing it yourself with a Lua script, but I wanted to report it! This is the closest I’ve seen to fully automated scripting, and I’m interested to see how it progresses. I think it would be awesome when you want a starting point for a script, or just a quick and dirty script for a video that just dropped.
That’s a “feature”, not a bug. It’s because all these scripts are made for the damn Handy masturbator, which works on an inverted measurement axis compared to what you’d logically assume: 0% is the fully down position and 100% is the fully up position on that device. It’s stupid tbh, but what can you do.
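If you’d rather not touch Lua, flipping the axis is also a few lines of Python over the .funscript JSON, since positions just run 0-100 (filenames are placeholders):

```python
import json

# load the generated script
with open("scene.funscript") as fh:
    script = json.load(fh)

# mirror every position around the 0-100 axis
for action in script["actions"]:
    action["pos"] = 100 - action["pos"]

with open("scene_inverted.funscript", "w") as fh:
    json.dump(script, fh)
```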
Yeah, it’s iffy. Trying to figure out how to improve it more, but something like cowgirl is just confusing the heck out of the script; vertical tracking is not working quite right yet. If you’re running the command line, you can add --generate-debug (possibly --generate_debug; working on both versions at once is causing some confusion on my part) to see how the video is tracking.
I’ve also noticed that tracking gets loose from time to time, and I’m trying to figure that out. The cowgirl video I’ve been using for testing hasn’t had great results.
I started working on the PyTorch version, and it seems I haven’t ported some of the new things back to the OpenCL version. I noticed the inversion problem and added an option for it, but didn’t backport it. Gonna fix that soon and update it.
Also, thanks for the feedback. Which version are you using? They don’t quite track the same, and I haven’t figured out why yet.
New version is out, and so far it’s looking better. --reduction-factor 4 and under seems to be more accurate.
The only proper solution to motion tracking in moving video is deep learning. Virtually all object and motion tracking nowadays is done with AI, while OpenCV is used mainly for image processing tasks, because it is typically faster and easier than training models to do the same things.
See for example:
Secondly, for better penetration depth estimation you will need an additional model trained to predict the length of the penetrating object. For IRL human penises it may be somewhat easier, but when you get into CGI, especially furry stuff, it’s pretty much going to require training on image/video datasets. With Python you should have no problem training it on e621’s database.
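The rough shape of that depth model would be something like a pretrained backbone with a single regression head. Everything below (the architecture choice, the normalization, the loss) is just an assumption to illustrate, not a recipe:

```python
import torch
import torch.nn as nn
from torchvision import models

class DepthRegressor(nn.Module):
    """Illustrative only: predicts a normalized depth/length value per frame."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        # swap the classifier head for a single-value regression head
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        # x: (N, 3, H, W) frames -> (N, 1) depth estimate in [0, 1]
        return torch.sigmoid(self.backbone(x))

model = DepthRegressor()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```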
That really isn’t a feasible option currently. I’ve done some testing with it, and the current version on the repo is tracking better. Also, PIPs is CUDA-exclusive, which rules me out from working on it; PIPs is pretty GPU-intensive and I just don’t have the resources to run and develop it. If someone wants to try that approach, good luck to them.
Even on small videos I got it running at a solid 2 fps, and the accuracy isn’t what I would hope for if it’s going to be that slow.
I am definitely going to experiment with it at some point, when I get more time for hobbies and more experience with machine learning. I do have a 3080 Ti which could really use some leg stretching, as I haven’t had much time to even game on it properly…
Also, be sure to check out the MTFG Python application, since I think that’s currently the main motion tracking extension people are talking about on this forum.