While trying out different tracking methods I accidentally ended up remaking that extension. I tried some ROI tracking; it didn't work well, but it was funny to stumble onto a method at the very least very similar to the one the extension uses.
If you'd like, I can upload the script so you can tinker with it and see how it works. I've got a 6600 XT with 8 GB of VRAM and the script was making it cry. Getting everything set up is a bit of a pain, though.
I really do welcome ideas and suggestions; the better it gets for everyone, the better it gets for me. This is the 4th or 5th time I've attempted this project, and so far it's the only attempt that can even be called functional, let alone alright.
In my opinion, the best kind of automation for funscripting isn’t actually to make the entire thing automated, because it’s never going to work for 100% of cases even with machine learning.
I think it should be done as a sort of copilot experience. You load in a video or folder, set the initial points to track (e.g. bottom of penis, top of penis (human- or machine-estimated), and hole entrance), and let the automated tracker do its job, using PIPs or similar tech to properly account for camera movement and soft (same-context) scene changes. On top of that, there should be an intuitive automatic recovery behavior: whenever the tracker is uncertain or can't find the points you've set (for instance on total scene changes such as in edited PMVs, or when it detects multiple penetrative actions in one scene and can't decide which to follow), it notifies you, lets you specify new points, and merges the two segments in the resulting script. You should also be able to preview the script while it's working.
Effectively this would combine the best of both worlds: automating the boring frame-by-frame scripting we’re all so weary of, while still allowing you direct control over the resulting content.
This isn't me requesting this behavior of your project, btw; it's more my idea of the set of project requirements that I think would work best for most people. Ideally all of this would be combined in a more modern editor than OFS, one that would still let you extend its functionality via community plugin scripts. That would basically be the ultimate funscript editor.
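To make the recovery idea concrete, here's a tiny sketch of that copilot loop in Python. The tracker here is a stand-in dummy (a real one would be PIPs or similar), and every name in it is hypothetical:

```python
# Hypothetical sketch of the "copilot" recovery loop described above.
# `tracker` is any callable mapping (frame, points) -> (points, confidence);
# a dummy stands in here for a real point tracker such as PIPs.

CONFIDENCE_FLOOR = 0.5  # below this, stop and ask the user to re-seed

def script_video(frames, initial_points, tracker, ask_user):
    """Track points frame by frame; on low confidence, ask the user for
    new points and merge the segments into one tracked sequence."""
    points = initial_points
    track = []
    for i, frame in enumerate(frames):
        points, confidence = tracker(frame, points)
        if confidence < CONFIDENCE_FLOOR:
            # recovery: notify the user, re-seed, and continue
            points = ask_user(i, frame)
            points, confidence = tracker(frame, points)
        track.append(points)
    return track

# Toy demo: "frames" are brightness values, "points" a single coordinate.
def dummy_tracker(frame, points):
    conf = 0.0 if frame is None else 1.0  # None simulates a hard cut
    return points, conf

frames = [10, 12, None, 40, 41]
track = script_video(frames, 50, dummy_tracker, ask_user=lambda i, f: 75)
print(track)  # the segment after the cut continues from the new seed
```

The merge step here is trivially just appending; a real editor would stitch two partial funscripts at the recovery timestamp instead.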
What does the reduction factor parameter actually do? I'm getting an issue where it adds strokes when there is clearly no action; can adjusting this help?
The documentation is just a little bit lacking. Reduction factor essentially controls how many points are generated in the funscript: lower = more points, higher = fewer points. At reduction 1 you'll get a point every couple of frames.
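For intuition only, a reduction factor could act like plain decimation of the generated points. This is a hypothetical sketch, not how the script actually implements it:

```python
# Hypothetical illustration of a reduction factor: keep one generated
# point out of every `reduction`, so higher values mean fewer points.
# Actions use the funscript shape: {"at": milliseconds, "pos": 0-100}.

def reduce_actions(actions, reduction):
    if reduction <= 1:
        return list(actions)
    # always keep the last point so the stroke endpoint survives
    kept = actions[::reduction]
    if actions and actions[-1] not in kept:
        kept.append(actions[-1])
    return kept

actions = [{"at": i * 33, "pos": (i % 2) * 100} for i in range(10)]
print(len(reduce_actions(actions, 3)))  # fewer points than the original 10
```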
From my testing, fewer points also makes the generation less sensitive. The script currently works on motion tracking, so using it on a full scene is going to leave some parts to clean up after the fact. Parts that shouldn't be scripted end up scripted anyway; it's one of the big problems I haven't figured out how to deal with yet.
Pairing this with a basic machine learning model that can separate frames with action from frames without is probably the easiest path to solving this.
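Short of a trained model, even a crude non-ML baseline could flag low-motion stretches by frame differencing. This is purely an illustrative sketch on toy pixel lists, not the script's actual pipeline:

```python
# Crude non-ML baseline for the idea above: mark a frame as "action"
# when its mean absolute difference from the previous frame crosses a
# threshold. Frames are flat lists of pixel intensities; illustrative only.

def action_mask(frames, threshold=10.0):
    mask = [False]  # no previous frame to compare the first one against
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        mask.append(diff >= threshold)
    return mask

still = [[50] * 4] * 3                     # three identical frames
moving = [[0, 0, 0, 0], [80, 80, 80, 80]]  # a big jump in brightness
print(action_mask(still + moving))
```

In practice you'd compute this on downscaled grayscale frames and then only generate strokes inside the flagged spans.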
I've been testing some things out, but nothing has panned out yet. I did finally get my Handy fixed, so now I can actually test the output and try some fine-tuning to get the motions better.
Do you know of any models that might be worth testing out? I've tried YOLO, NudeNet, and a couple of others with underwhelming results so far, but that could just be my own limited knowledge.
Could you let me know how the output actions are? My Handy has been wonky and finicky for a long while now, so I can't really trust the final output on my hardware.
1 minute video.
Video is in third person, all doggy, animated, shot from about 4 or 5 angles.
Tried to see if changing the point skip from 4 to 1 made it more accurate.
Tried to see if upscaling the video from 720p 30 fps to 8K 30 fps allowed it to be more accurate.
Accuracy increases towards the end of the video, especially after returning to a previous angle.
Upscaling to see if pixel density did anything: inconclusive.
Increasing the skip count did seem slightly more accurate, but there are some dead zones where it completely lost focus, and they are much more visible.
It's at its most accurate in doggy, in third person, from the side, so far.
Might try a longer looping video; I want to see if it's even more accurate after 4 loops of a video. If any of my testing methodology seems silly, correct me. I'm throwing things at the wall to see what sticks because this tool is cool, and I just want to see where it goes.
The methodology sounds pretty solid to me. Upscaling should give it better tracking, since the script doesn't like smaller areas. With the changes I made last night it seems to be doing quite a bit better with things like cowgirl. I haven't tried a missionary scene yet, but I can imagine it could have some issues.
Losing focus is one of the big issues from what I've noticed, and I'll probably try to improve that a bit some time tomorrow. I believe the current versions have a look-ahead feature that in theory could help with scene changes, but the testing is very limited; I kinda just tossed it in there, then went back to working on other things that ended up breaking. So much breaking.
Did my own testing with this. I was able to get some decent results by splitting the video up into scenes using a Python tool I found today called SceneDetect, which detects cuts in the video so it can be split into separate scenes. The goal is to minimize the effect of camera and scene changes.
The parameters I used for splitting the video are below; this split a 6:33 video into 50 parts ranging from >0:01 to 1:21.
After processing, I then ran the paths of the 50 videos through Python Funscript Generator.
The next part was a bit tricky, but I used ChatGPT to whip up a PowerShell script that merges the funscripts together, with an offset based on a manual adjustment per number of files.
PowerShell script uploaded as a .txt; rename it to .ps1 if you intend to use it. The target path is hard-coded, so it'll need to be updated to the path of the .funscripts being merged.
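For anyone who'd rather stay in Python, here's a minimal sketch of the same merge idea. `merge_funscripts` and the caller-supplied list of clip durations are my own invention; the uploaded PowerShell script uses a manual per-file adjustment instead:

```python
# Minimal Python sketch of merging per-scene funscripts back into one.
# A funscript is JSON shaped like {"actions": [{"at": ms, "pos": 0-100}]}.
# Each clip's actions are shifted by a running offset built from the
# clip durations, then concatenated in order.

import json

def merge_funscripts(paths, clip_durations_ms):
    merged, offset = [], 0
    for path, duration in zip(paths, clip_durations_ms):
        with open(path) as f:
            actions = json.load(f)["actions"]
        merged.extend({"at": a["at"] + offset, "pos": a["pos"]}
                      for a in actions)
        offset += duration
    return {"actions": merged}
```

Clip durations could come from ffprobe or from SceneDetect's own scene list, so the offsets would line up exactly with the cuts.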
Scripts uploaded for comparison. The video used in testing was the Mantix-x Liz video, with a length of 6:33. Neither is usable, but it shows an improvement.
Trust me, I understand how easy it is to bust Python code; Claude AI and I have been best buds over my busted code as of late. You're doing great though! The newest iteration does seem to be working better than the first, and I'm sure you'll get it even closer with time!
Roadblocked to heck and back currently. Gonna have to think of some creative ways around a few problems, but hopefully you're right and it'll be improved soon ;.;
What's the roadblock? Eroscripts is no Stack Overflow, but there are some pretty competent coders here. If you explain the issue you're running into, someone might know a way to fix it.
I honestly haven't found a way to improve it yet without drastic losses elsewhere, which got a bit demotivating. I spent a couple of days trying new approaches, different tracking methods, and combos, and, uh, haven't found an improvement yet. Then my Handy died, so I have to go completely off eyeballing the output scripts. That was about the point it was time to take a break.
I know there's a way past the plateau; I just have no idea how yet. Maybe training a custom YOLO model is the way. Looking at the screenshots, that seems to be the route SLR went with their auto tracker.
I'm assuming you're trying to use PyTorch, right? The base script uses OpenCL, which should work with NVIDIA too, but I'm not certain. There's a PyTorch script on the repo, so between the two it should cover most GPUs. The PyTorch version is a little outdated, but it should work about the same.
Weird, then. Can you send me the terminal output? Does NVIDIA have a way to monitor usage, to see if it's just running slow on the GPU or not using it at all?
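NVIDIA does: `nvidia-smi` can poll utilization from a terminal (`nvidia-smi -l 1` loops once a second). A small Python wrapper that degrades gracefully when it isn't installed might look like this:

```python
# Small helper to check whether the GPU is actually being used:
# polls nvidia-smi once and returns the utilization line, or None
# when nvidia-smi isn't available (non-NVIDIA box, missing driver).

import shutil
import subprocess

def gpu_utilization():
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

print(gpu_utilization() or "nvidia-smi not found")
```

Run it (or just plain `nvidia-smi`) while the script is generating: near-zero GPU utilization with high CPU usage would mean it's fallen back to the CPU entirely.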