Automated Python funscript generator

What am I doing wrong if I get the following error?

File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.1776.0_x64__qbz5n2kfra8p0\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified

Hi. I sent a message yesterday but haven’t heard back. If possible, please send me the full output from the start of the command up until the error.

I sent you the log. Thanks for looking into this!

Alright, 95% sure it’s figured out now. I forgot to include ffprobe, which screwed things up. I’m really good at forgetting things, it seems. Update from the repo and run ffmpeg.bat, which will grab the ffprobe file, put it into the root directory, and get everything fixed up.
If anyone would prefer to do it manually, here is a link to FFmpeg:
https://www.ffmpeg.org/download.html
Unzip the ffprobe file from the bin directory and place it in the folder with the main.py/gui.py scripts.


compare_data.py seems to be missing.

“point factor” is confusing. Just call it ‘processing framerate’ (with options for 30, 20, 10, 5 fps, etc.) or ‘subsample framerate’.
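
For what it’s worth, the mechanic behind it is just frame subsampling; something like this (a tiny sketch with made-up names, not the repo’s):

    # Sketch: pick every step-th frame to hit a target processing framerate.
    def subsample_step(src_fps: float, target_fps: float) -> int:
        """E.g. a 60 fps source at a 10 fps processing rate -> every 6th frame."""
        return max(1, round(src_fps / target_fps))

    # usage: frames_to_analyze = all_frames[::subsample_step(60, 10)]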

I built a similar app a long time ago. One thing I noticed is that in most videos you can get much better results by selecting a small part of the video frame. For example, in a pillow fuck scene the pillow/legs often move in the opposite direction to the body, and this throws the motion detection off. Selecting only the upper or lower body resulted in much better output scripts.
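
The region selection can be as simple as OpenCV’s built-in ROI picker; a rough sketch (it assumes a GUI-capable OpenCV build, and the names are illustrative):

    import cv2

    cap = cv2.VideoCapture("scene.mp4")
    ok, first = cap.read()
    # Drag a box over the upper or lower body, then press Enter.
    x, y, w, h = cv2.selectROI("select region", first)
    cv2.destroyAllWindows()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]  # motion detection runs only on this crop
        # ... feed roi into the motion extractor ...
    cap.release()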

I had ideas to implement co-scripting. This involved pre-computing the motion vectors and asking the user to script a small section. I would then correlate the motion vectors with the scripted section and generate the rest of the actions automatically (call it funscript autocomplete), but I never went through with it because GUI programming sucks.
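
In sketch form, the correlation step could be as simple as a linear fit from the motion signal to the user’s hand-scripted section (everything here is hypothetical, since I never built it):

    import numpy as np

    def autocomplete(motion: np.ndarray, user_pos: np.ndarray,
                     start: int, end: int) -> np.ndarray:
        """Fit the user-scripted positions (0-100) for frames start..end
        against the motion signal, then apply that map to every frame."""
        a, b = np.polyfit(motion[start:end], user_pos, 1)  # user_pos ~ a*motion + b
        return np.clip(a * motion + b, 0, 100)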

Uh…whoops. I did indeed forget to include the debug script. That was one part I didn’t have to keep messing with to get working, so I completely forgot it existed. It’s on the repo now.

Point factor isn’t as confusing as it would seem, really. Higher = fewer points generated, lower = more. Not the best name, I’ll admit, but given how much motion data is extracted I had to figure out something, and at the time that was the most effective means I could think of. I have no doubt it could be designed and implemented better, but the dev process is essentially: get an idea that sounds like an improvement, become obsessed with it, and keep going until it’s passable. Then break absolutely everything somehow and spend a couple of hours to days trying to get back to where I started so I can try the idea again. That’s a pretty large part of why I keep forgetting things like the compare script.

Cropping the frames would be an effective way to do it, but I haven’t figured out how to automate it. Early on I did try it, but I just couldn’t get it working right, so I went on to other methods. I have no doubt that if someone wanted to, they could make an amazing helper system that gets it fairly close to 1:1, but as far as I can figure that would take manual input, which is outside the spirit of the project. In the future, who knows, but for now I want to see how far this way can go.

I have a “functional” CoTracker version, but the VRAM usage is pretty high, so I can’t take it too far. The install is…not pleasant, and I have to figure out a way to streamline it enough for others who happen not to be masochistic enough to spend hours fighting with it. That version has the same issue this version does.

The big problem isn’t the motion tracking but the conversion. I figured motion extraction would be the big problem, but regardless of method that part is fairly easy. Converting that data to a usable funscript has been a goddamned nightmare. So far, amping then clamping within the range has been the most effective, but it still has drawbacks and cases it doesn’t suit well.
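
For anyone wondering what “amping then clamping” means, roughly this (a sketch, not my actual code; the gain value is arbitrary):

    import numpy as np

    def amp_clamp(signal: np.ndarray, gain: float = 3.0) -> np.ndarray:
        centered = signal - signal.mean()   # center the motion around zero
        amped = centered * gain + 50        # amplify and shift to mid-range
        return np.clip(amped, 0, 100)       # clamp into the funscript range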

This was my method for converting the motion data to funscript (a rough code sketch follows the list):

  • Collect the motion data as a 2D vector (x, y) for every frame with optical flow. This can be the full frame, or a rectangular section of the frame selected by the user.
  • Show a user interface in which the user can select the motion direction (a 2D vector); by default this is the first PCA component of the motion data.
  • The funscript is the result of a bandpass filter (scipy.signal.filtfilt) over the motion data. For example, in slow scenes you might opt for a 0.5-2 Hz bandpass; for fast scenes, 2-4 Hz. This could be selected in a user interface that also shows the frequency spectrum of the motion data.
  • Finally, I used RDP (pybind11-rdp) to reduce the number of points down to manageable levels.
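
Stitched together, the list above looks roughly like this (a sketch under assumptions of my choosing: Farneback optical flow, SVD for the PCA step, and illustrative parameter values; not my original code):

    import cv2
    import numpy as np
    from scipy.signal import butter, filtfilt
    from rdp import rdp  # pybind11-rdp

    def extract_motion(video_path):
        """Mean optical-flow vector (x, y) per frame over the full frame."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        ok, prev = cap.read()
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        vectors = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            vectors.append(flow.reshape(-1, 2).mean(axis=0))
            prev = gray
        cap.release()
        return np.array(vectors), fps

    def to_funscript(vectors, fps, low_hz=0.5, high_hz=2.0, epsilon=2.0):
        # Project onto the first PCA component (the default motion direction).
        centered = vectors - vectors.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        signal = centered @ vt[0]
        # Zero-phase bandpass (scipy.signal.filtfilt); e.g. 0.5-2 Hz for slow scenes.
        b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
        filtered = filtfilt(b, a, signal)
        # Rescale into the 0-100 funscript range.
        pos = np.interp(filtered, (filtered.min(), filtered.max()), (0, 100))
        # RDP to thin the points down to manageable levels.
        t_ms = np.arange(len(pos)) * 1000.0 / fps
        points = rdp(np.column_stack([t_ms, pos]), epsilon=epsilon)
        return [{"at": int(t), "pos": int(p)} for t, p in points]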

The process was repeated on a per-scene basis. Different scenes/positions have different motion directions / areas of interest / frequencies.

Adjusting the scale was usually done manually in OpenFunscripter.

I noticed your code uses cv2.absdiff; I would recommend using optical flow followed by PCA. It simplifies the motion extraction process a lot.

I don’t think there’s a way to avoid user input if you want decent results, unless you’re using some kind of pose-extraction machine learning.

Found a memory error that happens on longer videos. Working on a fix now.
Edit: still trying to figure it out. My head hurts.


Hi there, I stumbled upon your generator yesterday, when I posted about my own approach in that same section.
I am not equipped with a very strong or Nvidia-powered machine; I am using a Mac Studio M2 Max, so I was looking for a compatible and light CPU/GPU solution (I think this makes CoTracker a no-go).
Anyway, while checking your script, I got an error that I think was memory-related (based on asitop monitoring).
So I tried just processing slices of frames of a VR video instead of storing the whole video as frames in memory, and it went through.
OK, enough talking, sorry, now straight to the point: couldn’t it process the video 1000 frames (or whatever) at a time, and then reassemble the data at the end, or just merge the funscript files it would have generated?
Might be a moronic comment, if so, please disregard :confused:
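
Something in the spirit of this sketch is what I mean; only the per-frame motion values accumulate, so there is nothing to re-merge at the end (cv2.absdiff stands in for the real extractor, and all the names are made up):

    import cv2
    import numpy as np

    def motion_in_chunks(video_path, chunk_size=1000):
        """Decode the video one slice of frames at a time; keep only
        per-frame motion scalars, never the frames themselves."""
        cap = cv2.VideoCapture(video_path)
        motion, prev = [], None
        while True:
            frames = []
            for _ in range(chunk_size):        # read one slice
                ok, frame = cap.read()
                if not ok:
                    break
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            if not frames:
                break
            for gray in frames:                # process the slice
                if prev is not None:
                    motion.append(cv2.absdiff(prev, gray).mean())
                prev = gray
            # the slice is dropped on the next pass, keeping memory bounded
        cap.release()
        return np.array(motion)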

It’s not a dumb comment at all. I tried that very method, and I swear the project is cursed. Every attempted fix broke something else until I just said screw it. Slicing and putting the script back together turned out to be far more annoying than it’s worth.

Don’t Macs use unified memory? It may not be beefy, but if I understood correctly, essentially all of the RAM should be usable as VRAM, so slicing shouldn’t even be needed. AI things are pretty neat, so I try to keep up to date on them.

If you look on the Git repo, there should be a release with the older version that may very well work out for you. The speed gains in the newer versions ultimately broke everything on larger videos.

Thank you for your answer :slight_smile:

Indeed, it has unified memory, but it still capped out at some point. I was working on a large video file, though (VR).

Your comment kind of resonates with me, as I am dealing with the same kind of frustration on my own project. I left it where it was and might just go back to it whenever I no longer feel like throwing the computer out the window.

Do you know if there are any tools for creating Funscripts utilizing this technology? :slight_smile: <3

I’m just now starting on my funscript creation journey and would like to start learning the cutting edge of what AI models can do to assist me and make creation as streamlined as possible.

That’s why I found my way to this tool and all of your awesome conversation, which I’m now binge reading :smiley:

Also, as a whole other comment: how is this project doing after the last three or so months of no conversation? :slight_smile:

Hey! This particular tool has evolved further in a different thread; you can now follow it here:

But yeah, that tool is mainly meant for real human porn in VR POV, and it’s meant for use on the Handy as well. So none of that is of interest to me. I actually described what I’d be looking forward to prototyping in a different thread, about a tool I created:

And if this is done, why stop here? I have a more moonshot idea: build an entire funscripting IDE in Python/PyQt as far as possible, with maybe some C++ if it’s really needed where Python would bottleneck. I want to analyze and take the best parts of OFS and other funscripting tools, make it super easy to extend with community-driven Python plugins, and make it easy enough to contribute to, with good documentation and code architecture. And by extensions I mean I’d love to see someone build a DL funscript copiloting plugin right into this bitch, as well as this here extension and more, and hopefully make it easy for the other Python projects that have been popping up lately on this forum to integrate with it too.

Again, very moonshot, and I don’t know if there’s already a project that attempts this, but I personally dream of a future where I can open up a scripting tool and script a huge video in no time with user-friendly, or maybe even fully automatic, motion extraction and post-processing. This whole ecosystem of pre-computed sex-tech sync to content is just a deep-learning video motion/context tracking problem in my view, and it shouldn’t require manual slavery of hours per minute of video depending on action complexity.

So in short, no, such a tool doesn’t yet exist to my knowledge, but I would 100% be down to collaborate with people here to make it happen, because it would highly benefit my own funscripting work also.

Yeah, I’ve been checking out that project, but it is not for me at the moment since it is for VR scenes. I think I might try it later on when the project evolves; it seems cool! :slight_smile:

Thanks for your reply; it helps me understand and verify that I’m pretty up to date after a few weeks of reading. I just got my Handy a bit over a week ago, so I’m very new to this space. :slight_smile:

I’m getting these problems when trying to install this project btw:

ERROR: Failed building wheel for pyopencl
Failed to build pyopencl
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (pyopencl)

ChatGPT is out of options and so am I, so feel free to help me out; it would be cool to try this. :smiley:

Edit: still not working, and stuck at the same error after trying a bunch of ideas from ChatGPT.

“Hibernating and stunlocked” about sums it up. Weird bugs that don’t really make sense got it shelved for a bit. It’s not dead, but it was put on a shelf for a while.

Trying to figure out the weird bugs that still don’t make any dang sense. Starting over from scratch would probably be easier at this point. Like how there is a tiny desync that puts the script off by fractions of a second on loops. You probably wouldn’t even notice it during use, but it’s pretty clear in the debug graphs.
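
For a sense of scale, plain timestamp truncation alone would produce drift of that order; this is only a guess at the kind of cause, not the confirmed bug:

    # At 29.97 fps a frame lasts ~33.367 ms; truncating each step to 33 ms
    # loses ~0.367 ms per frame, which adds up fast over a loop.
    fps = 29.97
    frames = 1800                       # about one minute of video
    rounded = frames * int(1000 / fps)  # 1800 * 33 = 59400 ms
    exact = frames * 1000 / fps         # ~60060 ms
    print(exact - rounded)              # ~660 ms of drift per minute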

As for AI models, I have no idea; I’m completely out of the loop. I wonder myself if anyone has managed to train something up, besides, uh, SLR I think it was?


Are you using Python or conda, and which version? This sounds very familiar, but danged if I can recall it right. So let’s start with some basic hope-it-works steps.
In whatever env you’re using, try this first:
pip install -U wheel
then try to reinstall. If that doesn’t do it, try
pip install -r requirements.txt --force-reinstall
Without more details that’s the best I have offhand. But if I had to guess, it may be a Python version issue. I could very well be wrong though; I have been many times before, but at least I know all the required files are there this time x.x


Thank you for helping! <3 I’ll be back at my computer early next week and will try again and report back around then! :slight_smile:

Thank you for the amazing work on this, and I totally understand the frustrations. I wish you well and good luck with your endeavours, whatever they may be as of now! :slight_smile:


I got it working with Claude!

I am pretty sure it had to do with non-ASCII letters in my Windows username! I changed to another account and also did the install with Miniconda, using Python 3.10 (an older version than before), installing pyopencl first, then taking it out of requirements.txt and installing everything else.

Then everything worked. Except there was an error in the browser, and it had to do with the fact that, if I remember correctly, ffmpeg or something was not installed.

But now everything works; it is chugging along and doing the first scripts as I speak! :slight_smile:

Other than that, is it supposed to use so much RAM but no GPU, for example? I’m a total noob; I don’t really know why I thought it would use a GPU lol :smiley:

Anyways, thanks so much, and I’ll report back to you later! :slight_smile:

Now I just need to figure out what tools to use to improve the funscripts this generator spits out! :o No idea, as this is the first time I get to make scripts. I’m excited. :star_struck:

Still one comment: I tried making scripts for multiple videos, but the funscripts are just a mess of action from the first frame to the last. I have no idea why; even in parts where there is absolutely nothing moving in the video, the script just goes fn ham :smiley:

I fed it still-camera videos and moving-camera videos, and all of the videos’ scripts had the same randomly changing intensity from start to finish, no matter what happened in the video.

I have no idea what I did wrong or what’s going on, but I’ll come back to this later, maybe at a better time hahha