You made a script generator. That's something I could never do. You're definitely not dumb. You're smart as fuck
Wiztree worked, thank you. I finally found the files.
edit: checked it and the GUI is working properly. Update for the toggle.
I ran into this the first time I tried. When I ran a small video later, I could see in the CMD that it stores a copy in the temp files somewhere.
After way longer than I had ever expected it to take, there is a new update up: updates to the motion extraction, motion conversion, and funscript generation. Reduction factor has been replaced with point factor, which should make it more understandable and hopefully work better. It still runs backwards, but after how long I spent today trying to fix that particular quirk, I figured out that the script has its own personality and it's wrong to try to make it make sense (if any wizard coders out there want to fix it, please do. I'm not trying that shit again…ever). The number of points generated should be more compatible with devices by default.
I wrote this a couple of days ago but then found some very confusing bugs. I think I got it all sorted out? I'll call this an experimental release. There is a small delay before processing starts, and then it should start zooming.
There was a weird bug where 30fps videos processed properly but somehow 60fps just broke everything. I believe I have it fixed now.
I learned from last time and fully tested with a fresh install, and it's working.
Hopefully it is an improvement. A major thank you to Nodude, who pointed me in great optimization directions.
What am I doing wrong if I get the following error?
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.1776.0_x64__qbz5n2kfra8p0\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
Hi. I sent a message yesterday but I haven't heard back. If possible, please send me the full output from the start of the command up until the error.
I sent you the log. Thanks for looking into this!
Alright, I'm 95% sure it's figured out now. I forgot to include ffprobe, which screwed things up. I'm really good at forgetting things, it seems. Update from the repo and run ffmpeg.bat, which will grab the ffprobe file, put it into the root directory, and get everything fixed up.
If anyone would prefer to do it manually, here is a link to ffmpeg:
https://www.ffmpeg.org/download.html
Unzip the ffprobe file from the bin directory and place it into the folder with the main.py/gui.py scripts.
compare_data.py seems to be missing.
“point factor” is confusing. Just call it ‘processing framerate’ (give options for 30, 20, 10, 5fps etc…) or ‘subsample framerate’.
I built a similar app a long time ago. One thing I noticed is that in most videos you can get much better results by selecting a small part of the video frame. For example, in a pillow fuck scene the pillow/legs often move in the opposite direction from the body, and this throws the motion detection off. Selecting only the upper or lower body resulted in much better output scripts.
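The ROI step itself is tiny; something like this is all it takes (a rough OpenCV sketch, not code from this repo — the motion step is whatever the pipeline already uses):

```python
import cv2

# Rough sketch: let the user drag a box on the first frame, then only feed
# that region to the motion step.
cap = cv2.VideoCapture("scene.mp4")
ok, first = cap.read()

# selectROI returns (x, y, w, h); ENTER/SPACE confirms the box.
x, y, w, h = cv2.selectROI("pick the body region", first, showCrosshair=True)
cv2.destroyAllWindows()

prev = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # run cv2.absdiff(prev, crop) or optical flow on just this cropped region
    prev = crop
cap.release()
```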
I had ideas to implement co-scripting: pre-compute the motion vectors, ask the user to script a small section, then correlate the motion vectors with the script and generate the rest of the actions automatically. Call it funscript autocomplete. I never went through with it because GUI programming sucks.
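The correlation part would have been little more than fitting a mapping from the motion signal to the user's few hand-placed points and applying it everywhere else. A toy sketch of the idea (plain numpy, all names made up):

```python
import numpy as np

# Toy sketch of the "funscript autocomplete" idea. `motion` is a per-frame 1D
# motion signal (e.g. the first PCA component); `scripted` is the short
# hand-scripted section as (frame_index, position 0-100) pairs.
def autocomplete(motion, scripted, fps=30.0):
    frames = np.array([f for f, _ in scripted])
    positions = np.array([p for _, p in scripted], dtype=float)

    # Fit position ≈ a * motion + b on the hand-scripted frames only.
    A = np.column_stack([motion[frames], np.ones(len(frames))])
    (a, b), *_ = np.linalg.lstsq(A, positions, rcond=None)

    # Apply the same mapping to every frame and clamp to the funscript range.
    full = np.clip(a * motion + b, 0, 100)
    return [{"at": int(round(i / fps * 1000)), "pos": int(round(p))}
            for i, p in enumerate(full)]
```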
Uh…whoops. I did indeed forget to include the debug script. That was the one part I didn't have to keep messing with to get working, so I completely forgot it existed. It's on the repo now.
Point factor isn't as confusing as it would seem, really: higher = fewer points generated, lower = more. Not the best name, I will say, but given how much motion data is extracted I had to figure out something, and at the time that was the most effective means I could think of. I have no doubt that it could be better and implemented better, but the dev process is essentially: get an idea that sounds like an improvement, become obsessed with the idea, and keep going until it is passable. Then break absolutely everything somehow and spend a couple of hours to days trying to get back to where I started so I can try the idea again. That's a pretty large part of why I keep forgetting things like the compare script.
Cropping the frames would be an effective way to do it, but I haven't figured out how to automate it. Early on I did try it out, but I just couldn't get it working right, so I went on to try other methods. I have no doubt that if someone wanted to, they could make an amazing helper system to get it fairly close to 1:1, but as far as I can figure that would take manual input, which is outside the spirit of the project. In the future, who knows, but for now I want to see how far this way can go.
I have a "functional" CoTracker version, but the VRAM usage is pretty high so I can't take it too far. The install is…not pleasant, and I've got to figure out a way to streamline it enough for others who happen to not be masochistic enough to spend hours fighting with it. That version has the same issue that this version does.
The big problem isn't the motion tracking but the conversion. I figured the motion extraction would be the big problem, but regardless of method that part is fairly easy. Converting that data to a usable funscript has been a goddamned nightmare. So far, amplifying and then clamping within the range has been the most effective, but it still has drawbacks and cases it doesn't suit well.
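For reference, the amp-then-clamp step is roughly this (a simplified sketch, not the exact code from the repo):

```python
import numpy as np

def amp_and_clamp(signal, gain=1.8):
    """Simplified sketch of amp-then-clamp: centre the motion signal, stretch
    it around the midpoint, then clamp into the 0-100 funscript range. The
    gain default is only illustrative."""
    centred = signal - np.median(signal)
    scaled = 50 + gain * 50 * centred / (np.abs(centred).max() + 1e-9)
    return np.clip(scaled, 0, 100)
```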
This was my method for converting the motion data to funscript:
- Collect the motion data as a 2D vector (x, y) for every frame with optical flow. This can be the full frame, or a rectangular section of the frame selected by the user.
- Show a user interface in which the user can select the motion direction (a 2D vector); by default this is the first PCA component of the motion data.
- The funscript is the result of a bandpass filter (scipy.signal.filtfilt) over the motion data. For example, in slow scenes you might opt for a 0.5-2 Hz bandpass filter; for fast scenes, 2-4 Hz. This could be selected in a user interface that also shows the frequency spectrum of the motion data.
- Finally, I used RDP (pybind11-rdp) to reduce the number of points down to manageable levels.
The process was repeated on a per-scene basis. Different scenes/positions have different motion directions, areas of interest, and frequencies.
Adjusting the scale was usually done manually in openfunscripter.
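Put together, the conversion side looked roughly like this (a sketch from memory; a fixed stride stands in here for the RDP step):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def flow_to_funscript(flow_xy, fps, low_hz=0.5, high_hz=2.0, stride=4):
    """Sketch of the conversion: project per-frame (x, y) motion onto its
    first PCA component, bandpass it, rescale to 0-100 and thin the points.
    I used RDP (pybind11-rdp) for the thinning; a fixed stride is used here
    just to keep the sketch short."""
    flow = np.asarray(flow_xy, dtype=float)
    centred = flow - flow.mean(axis=0)

    # First PCA component = dominant motion direction.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    signal = centred @ vt[0]

    # Zero-phase bandpass around the stroke frequency (0.5-2 Hz for slow scenes).
    b, a = butter(2, [low_hz, high_hz], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, signal)

    # Rescale to the funscript 0-100 range.
    span = filtered.max() - filtered.min() + 1e-9
    pos = (filtered - filtered.min()) / span * 100

    return [{"at": int(round(i / fps * 1000)), "pos": int(round(pos[i]))}
            for i in range(0, len(pos), stride)]
```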
I noticed your code uses cv2.absdiff; I would recommend using optical flow followed by PCA. It simplifies the motion extraction process a lot.
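Getting that per-frame motion vector with Farneback flow is only a few lines as well (again, a rough sketch):

```python
import cv2
import numpy as np

def extract_flow(path):
    """Rough sketch: dense Farneback optical flow, averaged over the frame
    (or over a user-selected crop), gives one (x, y) motion vector per frame
    to feed into the PCA / bandpass step above."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    vectors = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        vectors.append(flow.reshape(-1, 2).mean(axis=0))  # mean (dx, dy)
        prev = gray
    cap.release()
    return np.array(vectors)
```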
I don't think there's a way to avoid user input if you want decent results, unless you're using some kind of pose-extraction machine learning.
Found a memory error that happens on longer videos. Working on a fix now.
edit: still trying to figure it out. My head hurts.
Hi there, I stumbled upon your generator yesterday, when I posted about my own approach in that same section.
I am not equipped with a very strong or Nvidia-powered machine; I am using a Mac Studio M2 Max, so I was looking for a compatible and light CPU/GPU solution (I think this makes CoTracker a no-go).
Anyway, while checking your script, I got an error that I think was memory-related (based on asitop monitoring).
So I tried just processing slices of frames of a VR video, instead of storing the whole video as frames in memory, and it went through.
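Roughly what my slicing looked like (a quick sketch; the mean absdiff is just a stand-in for whatever your script actually does per frame):

```python
import cv2
import numpy as np

def motion_in_chunks(path, chunk_size=1000):
    """Quick sketch of the slicing idea: hold at most chunk_size frames at a
    time, reduce each slice to per-frame motion numbers, then drop the frames
    before reading the next slice."""
    cap = cv2.VideoCapture(path)
    motion = []
    prev = None
    chunk = []

    def flush(frames):
        nonlocal prev
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                motion.append(float(cv2.absdiff(prev, gray).mean()))
            prev = gray

    while True:
        ok, frame = cap.read()
        if ok:
            chunk.append(frame)
        if chunk and (len(chunk) == chunk_size or not ok):
            flush(chunk)  # reduce this slice, then let the frames go
            chunk = []
        if not ok:
            break
    cap.release()
    return np.array(motion)
```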
Ok, enough talking, sorry, now straight to the point: couldn't it process the video 1000 frames (or whatever) at a time, and then reassemble the data at the end, or just merge the funscript files it will have generated?
Might be a moronic comment; if so, please disregard.