Automated Python funscript generator

I got it to work fine; results have been iffy, but it could create a good starting point for scripters. Tried it on two videos so far: one animated video with a lot of jump cuts, and one IRL POV video where the dominant party was the one in reverse cowgirl a lot of the time.

In the POV video I tried, with a lot of riding and throwing it back in doggy, it got really confused. If someone is riding, it should have been inverting the parabola; instead, when the person riding was at the end of their stroke, where it should have been the deepest, the parabola was at its shallowest.

This is easily fixed by going in and reversing it yourself with a Lua script, but I wanted to report it! This is the closest I've seen to a fully automated script, and I'm interested to see how it progresses. I think it would be awesome for when you want to create a starting point for a script, or you just want a quick and dirty script for a video that just dropped.

That's a "feature", not a bug. It's because all these scripts are made for the damn Handy, which works on an inverted measurement axis compared to what we would logically assume: 0% is the full-down position and 100% is the full-up position on that device. It's stupid tbh but what can you do.
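If you ever need to flip a script yourself, a quick Python one-off does it. This is just an illustration assuming the standard funscript layout (an `actions` list of `at`/`pos` entries); the filenames are placeholders:

```python
import json

# Placeholder filenames; point these at the script you want to flip.
with open("generated.funscript", "r", encoding="utf-8") as f:
    script = json.load(f)

# A funscript is JSON with an "actions" list of {"at": ms, "pos": 0-100} entries,
# so inverting it just means mirroring every position around the midpoint.
for action in script.get("actions", []):
    action["pos"] = 100 - action["pos"]

with open("generated_inverted.funscript", "w", encoding="utf-8") as f:
    json.dump(script, f)
```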

Yeah, it's iffy. Trying to figure out how to improve it more, but something like cowgirl is just confusing the heck out of the script; vertical tracking is not working quite right yet. If you're running it from the command line you can add --generate-debug (possibly --generate_debug; working on both versions at one time is causing some confusion on my part) to see how the video is tracking.
I've also noticed that tracking gets loose from time to time and I'm trying to figure it out. The cowgirl video I've been using for testing hasn't had great results.

I started working on the PyTorch version and it seems I haven't ported some of the new things back to the OpenCL version. I noticed the inversion problem and added an option for it but didn't backport it. Gonna fix that soon and update it.

Also, thanks for the feedback. Which version are you using? They don't quite track the same and I haven't figured out why yet.

New version is out and so far it's looking better. --reduction-factor 4 and under seems to be more accurate.

The only proper solution to motion tracking in moving videos is deep learning. Virtually all object and motion tracking nowadays is done with AI, while OpenCV is used mainly for image processing tasks because it is typically faster and easier than training models to do the same things.

See for example:

Secondly, for better penetration depth estimation you will need an additional model trained to predict the length of the penetrating object. For IRL human penises it may be somewhat easier, but when you get into CGI, especially furry stuff, it's pretty much going to require training on image/video datasets. With Python you should have no problems training it on e621's database.

That really isn't a feasible option currently. I'm doing some testing with it, and the current version on the repo is tracking better. Also, PIPs is CUDA-exclusive, which rules me out from working on it, and it's pretty GPU-intensive. I just don't have the resources to run and work on it. If someone wants to try that approach, then good luck to them.

Even on small videos I got it running at a solid 2 FPS, and the accuracy isn't what I would hope for if it's going to be that slow.

I am definitely going to experiment with it at some point when I get more time for hobbies and experience with machine learning. I do have a 3080 Ti which could really use some leg stretching, as I haven't had much time to even game on it properly…

Also be sure to check out the MTFG Python application, since that's, I think, currently the main motion tracking extension people are talking about on this forum.

Awesome! Will definitely check it out! :slight_smile:

When trying out different tracking methods I accidentally recreated something like that extension. I tried some ROI tracking; it didn't work well, but it was funny to stumble onto at the very least a very similar method to the one that extension uses.

If you would like, I can upload the script so you can tinker with it and see how it works. I've got a 6600 XT with 8 GB of VRAM and it was making it cry. Getting everything in order is a bit of a pain though.

Really, I welcome ideas and suggestions. The better it gets for everyone, the better it gets for me. This is the 4th or 5th time I've attempted this project, and so far it's the only one that can even be called functional, let alone alright.

In my opinion, the best kind of automation for funscripting isn’t actually to make the entire thing automated, because it’s never going to work for 100% of cases even with machine learning.

I think it should be done as a sort of copilot experience. You load in a video or folder, set the initial points to track (e.g. bottom of penis, top of penis (human- or machine-estimated), and hole entrance), and let the automated tracker do its job using PIPs or similar tech to properly account for camera movement and soft (same-context) scene changes. On top of that you'd want an intuitive automatic recovery behavior: it notifies you whenever it's uncertain or can't find the tracked points you've set (for instance on total scene changes such as in edited PMVs, or when it detects multiple penetrative actions in one scene and can't decide which to follow), lets you specify new points, and merges the two segments in the resulting script. You should also be able to preview the script while it's working.

Effectively this would combine the best of both worlds: automating the boring frame-by-frame scripting we’re all so weary of, while still allowing you direct control over the resulting content.

This isn’t me requesting this behavior of your project btw, it’s moreso my idea for a list of project requirements that I think would work best for most people. Ideally all this would be combined in a more modern editor than OFS which would still allow you to extend its functionality via community plugin scripts. That would basically be the ultimate funscript editor.
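To make that loop a bit more concrete, here's a purely hypothetical sketch of what the copilot pass could look like. None of these names come from the project; `track_frame` and `ask_user` are stand-ins for the actual tracker (PIPs or similar) and whatever UI collects fresh points:

```python
def copilot_track(frames, seeds, track_frame, ask_user, fps=30.0, min_confidence=0.5):
    """Semi-automated pass: track until confidence drops, then hand back to the user.

    Hypothetical sketch only. `track_frame(frame, seeds)` stands in for PIPs or a
    similar point tracker and returns (pos_0_to_100, confidence); `ask_user(frame_idx)`
    stands in for the UI that collects new seed points when tracking is lost.
    """
    actions = []
    for idx, frame in enumerate(frames):
        pos, confidence = track_frame(frame, seeds)
        if confidence < min_confidence:
            # Total scene change, occlusion, or multiple candidate actions:
            # pause, ask for new points, and retry this frame before carrying on.
            seeds = ask_user(idx)
            pos, confidence = track_frame(frame, seeds)
        actions.append({"at": int(idx * 1000 / fps), "pos": int(pos)})
    return actions
```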


What does the reduction factor parameter actually do? I am getting an issue with it adding strokes when there is clearly no action; can adjusting this help?

Documentation is just a little bit lacking. The reduction factor essentially controls how many points are generated in the funscript: lower = more points, higher = fewer points. At reduction 1 you will get a point every couple of frames.

From my testing, fewer points also makes the generation less sensitive. The way the script works currently is motion tracking, so using it for a full scene is going to leave some parts to clean up after the fact. It's one of the big problems I haven't figured out how to deal with yet: parts that shouldn't be scripted are.
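Conceptually the reduction factor just thins out the tracked samples. This is a simplified illustration of the idea rather than the exact logic in the repo:

```python
def reduce_points(actions, reduction_factor):
    """Keep every Nth tracked sample: a higher factor means a sparser funscript.

    `actions` is a list of funscript-style {"at": ms, "pos": 0-100} dicts.
    Purely illustrative; the generator's real logic may differ.
    """
    if reduction_factor <= 1:
        return list(actions)
    return actions[::reduction_factor]

# Example: 9 raw samples at ~30 FPS become 3 points with a factor of 4.
raw = [{"at": i * 33, "pos": p} for i, p in enumerate([0, 20, 55, 80, 100, 85, 50, 20, 5])]
print(len(reduce_points(raw, 4)))  # -> 3
```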

Awesome!! I will give it a try.

Pairing this with a basic machine learning model that can separate frames with action from frames without is probably the easiest route to solving this.
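Something along these lines is what I mean; `has_action` here is just a stand-in for whatever trained classifier you end up with, not anything that exists yet:

```python
def filter_idle_sections(actions, has_action, fps=30.0):
    """Drop generated points that fall in frames the classifier marks as idle.

    `actions`: funscript-style {"at": ms, "pos": ...} dicts from the generator.
    `has_action`: callable(frame_index) -> bool, i.e. whatever trained model
    ends up making the action/no-action call. Purely illustrative.
    """
    kept = []
    for action in actions:
        frame_index = int(action["at"] / 1000 * fps)
        if has_action(frame_index):
            kept.append(action)
    return kept
```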

I've been testing some things out, but nothing has panned out yet. I did finally get my Handy fixed, so now I can actually test the output and try for some fine-tuning to get the motions better.

Do you know of any models that might be worth testing out? I've tried YOLO, NudeNet, and a couple of others with underwhelming results so far, but that could just be my own limited knowledge causing said results.

Could you let me know how the output actions are? My Handy has been wonky and finicky for a long while now, so I can't really trust the final output on my hardware.

Been doing some testing with this script!

1-minute video
Video is in third person, all doggy, animated, shot from about 4 or 5 angles

Tried to see if lowering the point skip from 4 to 1 made it more accurate
Tried to see if upscaling the video from 720p 30 FPS to 8K 30 FPS made it more accurate

Accuracy increases towards the end of the video, especially after returning to a previous angle.

Upscaling to see whether pixel density did anything for accuracy was inconclusive.

Lowering the skip count did seem to be slightly more accurate, but there are some dead zones where it completely lost focus, and they are much more visible.

It is at its most accurate in doggy, in third person, from the side so far.

Might try a longer looping video; I want to see if it's even more accurate after 4 loops of a video. If any of my testing methodology seems silly, correct me. I'm throwing things at the wall to see what sticks because this tool is cool. :joy: I just want to see where it goes.

The methodology sounds pretty sound to me. Upscaling should give it better tracking, since the script doesn't like smaller areas. With the changes I made last night it seems to be doing quite a bit better with things like cowgirl and such. Haven't tried a missionary scene yet, but I can imagine it could have some issues.

Losing focus is one of the big issues from what I've noticed, and I'll probably try to get that a bit better sometime tomorrow. I believe the current versions have a look-ahead feature that in theory could help with scene changes, but the testing is very limited. I kinda just tossed it in there, then went back to working on other things that ended up breaking. So much breaking.

Are you using the main or the torch edition?


Did my own testing with this. I was able to get some decent results by splitting the video up into scenes using a Python tool I found today called SceneDetect, which detects changes in the video so it can be split into separate scenes. The goal is to minimize the effect of camera and scene changes.

Parameters I used for splitting the video are below; this split a 6:33 video into 50 parts ranging from just over 0:01 to 1:21.

scenedetect -i "video.mp4" -o ".\split" split-video detect-adaptive --min-content-val 3 --frame-window 3 --threshold 2
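(As an aside, if you'd rather keep it all in one Python script, PySceneDetect also has a Python API that should do roughly the same split. The parameter names below are my best guess at the equivalents of those CLI flags, so double-check them against the docs for your version:)

```python
from scenedetect import detect, AdaptiveDetector
from scenedetect.video_splitter import split_video_ffmpeg

# Rough Python-API equivalent of the CLI command above; parameter names are
# assumptions mapped from the detect-adaptive flags and may differ by version.
scenes = detect(
    "video.mp4",
    AdaptiveDetector(adaptive_threshold=2, window_width=3, min_content_val=3),
)
split_video_ffmpeg("video.mp4", scenes, output_dir="split")
```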

After processing, I then ran the paths of the 50 videos through the Python Funscript Generator.

The next part was a bit tricky, but I used ChatGPT to whip up a PowerShell script that merges the funscripts together with an offset based on manual adjustment per number of files.

PowerShell script uploaded as a .txt; rename it to .ps1 if you intend to use it. The target path is hard-coded, so it'll need to be updated to the path of the .funscripts being merged.

Funscriptmerger.txt (1.9 KB)
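For anyone who'd rather skip PowerShell, a rough Python equivalent of the merge is only a few lines. It assumes standard funscript JSON and just derives each offset from the last timestamp of the previous part, which is cruder than the manual per-file adjustment my script uses (the real clip durations, e.g. from ffprobe, would be more accurate):

```python
import json
from pathlib import Path

merged = {"actions": []}
offset_ms = 0

# Parts must sort into playback order, e.g. part-001.funscript, part-002.funscript, ...
for part in sorted(Path("split").glob("*.funscript")):
    data = json.loads(part.read_text(encoding="utf-8"))
    for action in data.get("actions", []):
        merged["actions"].append({"at": action["at"] + offset_ms, "pos": action["pos"]})
    if merged["actions"]:
        # Crude offset: shift the next part to start where this one's last point ended.
        offset_ms = merged["actions"][-1]["at"]

Path("merged.funscript").write_text(json.dumps(merged), encoding="utf-8")
```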

Scripts uploaded for comparison. The video used in testing was the Mantix-x Liz video, with a length of 6:33. Neither is usable, but it shows an improvement.

Whole video processed.funscript (151.9 KB)
Parts processed and merged.funscript (361.5 KB)

Trust me, I understand how easy it is to bust Python code; Claude AI and I have been best buds over my busted code as of late :joy:. You're doing great though! The newest iteration does seem to be working better than the first, and I'm sure you'll get it even closer with time!

And I'm using the main edition for now.

How well does it handle multi-actor scenes?