You made a script generator. That's something I could never do. You're definitely not dumb. You're smart as fuck.
WizTree worked, thank you. I finally found the files.
Edit: checked it and the GUI is working properly; update for the toggle.
I ran into this the first time I tried. When I ran a small video later, you could see in the CMD output that it stores a copy in the temp files somewhere.
After way longer than I had ever expected it to take, there is a new update up: updates to the motion extraction, motion conversion, and funscript generation. Reduction factor has been replaced with point factor, which should be easier to understand and hopefully work better. It still runs backwards, but after how long I spent today trying to fix that particular quirk, I figured out that the script has its own personality and it's wrong to try to make it make sense. (If any wizard coders out there want to fix it, please do. I'm not trying that shit again...ever.) The number of points generated should be more compatible with devices by default.
I wrote this a couple of days ago but then found some very confusing bugs. I think I got it all sorted out? I'll call this an experimental release. There is a small delay before processing starts, and then it should start zooming.
There was a weird bug where 30fps videos processed properly but 60fps somehow broke everything. I believe I have it fixed now.
I learned from last time and fully tested with a fresh install, and it's working.
Hopefully it is an improvement. A major thank you to Nodude, who pointed me in great optimization directions.
What am I doing wrong if I get the following error?
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.1776.0_x64__qbz5n2kfra8p0\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
Hi. I sent a message yesterday but haven't heard back. If possible, please send me the full output from the start of the command up until the error.
I sent you the log. Thanks for looking into this!
Alright, I'm 95% sure it's figured out now. I forgot to include ffprobe, which screwed things up; I'm really good at forgetting things, it seems. Update from the repo and run ffmpeg.bat, which will grab the ffprobe file, put it into the root directory, and get everything fixed up.
If anyone would prefer to do it manually, here is a link to ffmpeg:
https://www.ffmpeg.org/download.html
Unzip the ffprobe file from the bin directory and place it into the folder with the main.py/gui.py scripts.
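If you want to guard against this in the scripts themselves, something like the following would at least give a clearer error than the bare WinError 2 from subprocess. Just a sketch; find_ffprobe is a hypothetical helper, not a function in the repo:

```python
import os
import shutil
from pathlib import Path

def find_ffprobe(script_dir: Path) -> str:
    """Return a usable ffprobe path, preferring a copy next to the scripts."""
    local = script_dir / ("ffprobe.exe" if os.name == "nt" else "ffprobe")
    if local.exists():
        return str(local)
    # Fall back to whatever is on PATH before failing with a clearer
    # message than the "[WinError 2]" that subprocess raises on its own.
    on_path = shutil.which("ffprobe")
    if on_path:
        return on_path
    raise FileNotFoundError(
        "ffprobe not found: place it next to main.py/gui.py or add it to PATH"
    )
```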
compare_data.py seems to be missing.
"Point factor" is confusing. Just call it "processing framerate" (give options for 30, 20, 10, 5 fps, etc.) or "subsample framerate".
I built a similar app a long time ago. One thing I noticed is that in most videos you can get much better results by selecting a small part of the video frame. For example, in a pillow fuck scene, the pillow/legs often move in the opposite direction to the body, and this throws the motion detection off. Selecting only the upper or lower body resulted in much better output scripts.
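If anyone wants to play with that, OpenCV's built-in ROI picker is enough to test it. A minimal sketch (the file name is just an example, and none of this is taken from the generator's code):

```python
import cv2

cap = cv2.VideoCapture("scene.mp4")              # hypothetical input file
ok, frame = cap.read()

# Let the user drag a box around the part of the body to track (e.g. upper body).
x, y, w, h = cv2.selectROI("pick region", frame, showCrosshair=False)
cv2.destroyWindow("pick region")

while ok:
    roi = frame[y:y + h, x:x + w]                # crop before motion extraction
    # ... feed `roi` into the motion-extraction step instead of the full frame
    ok, frame = cap.read()
```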
I had ideas to implement co-scripting. This involved pre-computing the motion vectors and asking the user to script a small section. I would then correlate the motion vectors with the script and generate the rest of the action automatically (call it funscript autocomplete), but I never went through with it because GUI programming sucks.
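The correlation step itself could be as simple as a least-squares fit between the pre-computed motion signal and the hand-scripted section, then applying that mapping to everything else. A rough sketch of the idea (all names made up):

```python
import numpy as np

def autocomplete(motion: np.ndarray, scripted_pos: np.ndarray,
                 lo: int, hi: int) -> np.ndarray:
    """motion: 1D pre-computed motion value per frame.
    scripted_pos: user-scripted positions (0-100) for frames lo..hi."""
    sample = motion[lo:hi]
    # Fit position ~= a * motion + b on the hand-scripted span.
    a, b = np.polyfit(sample, scripted_pos, deg=1)
    # Apply the same mapping to the whole video and clamp to the valid range.
    return np.clip(a * motion + b, 0, 100)
```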
Uh…whoops. I did indeed forget to include the debug script. That was one part I didn't have to keep messing with to get working, so I completely forgot it existed. It's on the repo now.
Point factor isn't as confusing as it would seem, really. Higher = fewer points generated, lower = more. Not the best name, I'll admit, but given how much motion data is extracted I had to figure out something, and at the time that was the most effective means I could think of. I have no doubt that it could be named and implemented better, but the dev process is essentially: get an idea that sounds like an improvement, become obsessed with it, and keep going until it is passable. Break absolutely everything somehow, then spend a couple of hours to days trying to get back to where I started so I can try the idea again. That's a pretty large part of why I keep forgetting things like the compare script.
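Purely to illustrate the relationship (this is not how the script actually implements it), think of it like a decimation step:

```python
# Hypothetical illustration only: keep every Nth generated point,
# so a higher point factor means fewer points in the output.
def thin_points(actions, point_factor):
    return actions[::max(1, int(point_factor))]
```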
Cropping the frames would be an effective way to do it, but I haven't figured out how to automate it. Early on I did try it out, but I just couldn't get it working right, so I went on to try other methods. I have no doubt that if someone wanted to, they could make an amazing helper system to get it fairly close to 1:1, but as far as I can figure that is going to take manual input, which is outside the spirit of the project. In the future, who knows, but for now I want to see how far this way can go.
I have a "functional" CoTracker version, but the VRAM usage is pretty high, so I can't take it too far. The install is…not pleasant, and I've got to figure out a way to streamline it enough for others who happen to not be masochistic enough to spend hours fighting with it. That version has the same issue that this version does.
The big problem isn't the motion tracking but the conversion. I figured the motion extraction would be the big problem, but regardless of method that part is fairly easy. Converting that data to a usable funscript has been a goddamned nightmare. So far, amping then clamping within the range has been the most effective, but it still has drawbacks and cases it doesn't suit well.
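To make "amping then clamping" concrete, it's roughly this on a per-frame 1D motion signal. Just a sketch; the gain number is arbitrary and not the script's real value:

```python
import numpy as np

def amp_and_clamp(motion: np.ndarray, gain: float = 3.0) -> np.ndarray:
    """Scale motion around its mean, then clamp into funscript's 0-100 range."""
    centered = motion - motion.mean()
    # Normalize so the typical swing fills the range, then amplify past it.
    scaled = 50 + gain * 50 * centered / (np.abs(centered).max() + 1e-9)
    return np.clip(scaled, 0, 100)
```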
This was my method for converting the motion data to funscript (a rough code sketch follows below):
- Collect the motion data as a 2D vector (x, y) for every frame with optical flow. This can be the full frame, or a rectangular section of the frame selected by the user.
- Show a user interface where the user can select the motion direction (a 2D vector); by default this is the first PCA component of the motion data.
- The funscript is the result of a bandpass filter (scipy.signal.filtfilt) over the motion data. For example, in slow scenes you might opt for a 0.5-2 Hz bandpass; for fast scenes, 2-4 Hz. This could be selected in a user interface that also shows the frequency spectrum of the motion data.
- Finally, I used RDP (pybind11-rdp) to reduce the number of points down to manageable levels.
The process was repeated on a per-scene basis, since different scenes/positions have different motion directions, areas of interest, and frequencies.
Adjusting the scale was usually done manually in OpenFunscripter.
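For reference, here is roughly what that pipeline looks like stitched together in code: Farneback optical flow, PCA via SVD, a Butterworth bandpass with scipy.signal.filtfilt, then RDP point reduction. Treat it as a sketch rather than my exact old tool; the band edges, epsilon, and the rdp import are placeholders:

```python
import cv2
import numpy as np
from scipy.signal import butter, filtfilt
from rdp import rdp  # assuming the rdp(points, epsilon) call from the rdp / pybind11-rdp packages

def extract_motion(path: str) -> tuple[np.ndarray, float]:
    """Mean optical-flow vector (x, y) per frame over the whole frame."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    vectors = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        vectors.append(flow.reshape(-1, 2).mean(axis=0))
        prev = gray
    return np.array(vectors), fps

def to_funscript(path: str, low_hz: float = 0.5, high_hz: float = 2.0,
                 epsilon: float = 2.0) -> list[dict]:
    vectors, fps = extract_motion(path)
    # Project onto the first principal component (dominant motion direction).
    centered = vectors - vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    signal = centered @ vt[0]
    # Zero-phase bandpass around the expected stroke frequency.
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
    signal = filtfilt(b, a, signal)
    # Rescale to 0-100 and timestamp each frame in milliseconds.
    pos = np.interp(signal, (signal.min(), signal.max()), (0, 100))
    t_ms = np.arange(len(pos)) * 1000.0 / fps
    # Ramer-Douglas-Peucker to thin the points down to something usable.
    kept = rdp(np.column_stack([t_ms, pos]), epsilon=epsilon)
    return [{"at": int(t), "pos": int(round(p))} for t, p in kept]
```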
I noticed your code uses cv2.absdiff; I would recommend using optical flow followed by PCA. It simplifies the motion extraction process a lot.
I don't think there's a way to avoid user input if you want decent results, unless you're using some kind of pose-extraction machine learning.
Found a memory error that happens on longer videos. Working on a fix now.
edit: still trying to figure it out. My head hurts.
Hi there, I stumbled upon your generator yesterday when I posted about my own approach in that same section.
I am not equipped with a very strong or Nvidia-powered machine (I am using a Mac Studio M2 Max), so I was looking for a compatible, lightweight CPU/GPU solution (I think this makes CoTracker a no-go).
Anyway, while checking your script, I got an error that I think was memory-related (based on asitop monitoring).
So I tried just processing slices of frames of a VR video instead of storing the whole video as frames in memory, and it went through.
OK, enough talking, sorry, now straight to the point: couldn't it process the video 1000 frames (or whatever) at a time, for instance, and then reassemble the data at the end, or just merge the funscript files that it would have generated?
Might be a moronic comment, if so, please disregard
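To illustrate what I mean: read a fixed number of frames at a time, run the extraction on each slice, and stitch the results back together at the end. A minimal sketch only; extract_motion_from_frames is a stand-in absdiff placeholder, not your actual extraction code, and the chunk borders are probably where it gets fiddly since the diff across the seam is lost:

```python
import cv2
import numpy as np

def extract_motion_from_frames(frames) -> np.ndarray:
    """Placeholder extraction step: mean absolute frame difference per frame."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    vals = [0.0]
    for a, b in zip(grays, grays[1:]):
        vals.append(float(cv2.absdiff(a, b).mean()))
    return np.array(vals)

def process_in_chunks(path: str, chunk_size: int = 1000) -> np.ndarray:
    """Extract motion data chunk by chunk so the whole video never sits in RAM."""
    cap = cv2.VideoCapture(path)
    pieces, frames = [], []
    while True:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
        if frames and (len(frames) == chunk_size or not ok):
            pieces.append(extract_motion_from_frames(frames))
            frames = []
        if not ok:
            break
    # Stitch the per-chunk results back into one continuous motion signal.
    return np.concatenate(pieces)
```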
It's not a dumb comment at all. I tried that very method and I swear the project is cursed; every attempted fix broke something else until I just said screw it. Slicing and putting the script back together turned out to be far more annoying than it is worth.
Don't Macs use unified memory? It may not be beefy, but if I understood it correctly, it should essentially have all the RAM usable as VRAM, so slicing shouldn't even be needed. AI things are pretty neat, so I try to keep up to date on them.
If you look on the git, there should be a release with the older version that may very well work out for you. The speed gains in the newer versions ultimately broke everything on larger videos.
Thank you for your answer
Indeed, it has unified memory, but it still capped out at some point. I was working on a large video file, though (VR).
Your comment kind of resonates with me, as I am dealing with the same kind of frustration on my own project. I left it where it was and might just go back to it whenever I no longer feel like throwing the computer out the window.
Do you know if there are any tools for creating Funscripts utilizing this technology? <3
I'm just now starting on my funscript creation journey and would like to start learning the cutting edge of what AI models can do to assist me and make creation as streamlined as possible.
That's why I found my way to this tool and all of your awesome conversation, which I'm now binge reading.
Also, as a whole other comment: how is this project doing after the last three or so months of no conversation?
Hey! This particular tool has evolved further in a different thread, you can now follow it here:
But yeah, that tool is mainly meant for real human porn in VR POV, and it's meant for use on the Handy as well, so none of that is of interest to me. I actually described what I'd be looking forward to prototyping in a different thread, for a tool I created:
And if this is done, why stop here? I have a more moonshot idea to build an entire funscripting IDE in as much Python/PyQt as possible, with maybe some C++ if really needed for stuff where Python may bottleneck. I want to analyze and take the best parts of OFS and other funscripting tools, make it super easy to extend with community-driven Python plugins, and hopefully make it easy enough to contribute to with good documentation and code architecture. And by extensions I mean I'd love to see someone build a DL funscript copiloting plugin right into this bitch, as well as this here extension and more, and to hopefully make it easy for the other Python projects that have been popping up lately on this forum to integrate with it too.

Again, very moonshot, and I don't know if there's already a project that attempts this, but I personally dream of a future where I can open up a scripting tool and script a huge video in no time with user-friendly or maybe even fully automatic motion extraction and post-processing. This whole ecosystem of pre-computed sex tech synced to content is just a deep learning video motion/context tracking problem in my view, and it shouldn't require hours of manual slavery per minute of video depending on action complexity.
So in short, no, such a tool doesn’t yet exist to my knowledge, but I would 100% be down to collaborate with people here to make it happen, because it would highly benefit my own funscripting work also.
Yeah, I've been checking out that project, but it is not for me at the moment since it is for VR scenes. I think I might try it later on when the project evolves; it seems cool!
Thanks for your reply; it helps me understand and verify that I'm pretty up to date after a few weeks of reading. I just got my Handy a bit over a week ago, so I'm very new to this space.
I'm getting these errors when trying to install this project, btw:
ERROR: Failed building wheel for pyopencl
Failed to build pyopencl
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (pyopencl)
ChatGPT is out of options and so am I, so feel free to help me out; it would be cool to try this.
edit: Still not working, and I'm at the same error after trying a bunch of ideas from ChatGPT.