Automated Python funscript generator

Funscript_gui is trying to load a file that no longer exists; it was most likely renamed.

from Funscript import run_script

Change this to:

from main import run_script

and it should work :slight_smile:

That would work, yes, but that’s the old GUI. Pull the latest and run gui.py; it has a lot more options and such.

Thank you. That was about a day and a half of work saved x.x

1 Like

Please pass the fps to process_motion_data so the resulting length is correct.
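
To illustrate what I mean (a hypothetical sketch, since process_motion_data’s real signature may differ): each action’s timestamp has to come from the video’s actual fps, or the script’s length drifts away from the video’s.

    def process_motion_data(motion_data, fps):
        # Hypothetical sketch: derive each frame's action timestamp
        # (in milliseconds) from the video's real fps.
        actions = []
        for frame_idx, pos in enumerate(motion_data):
            at_ms = round(frame_idx * 1000 / fps)
            actions.append({"at": at_ms, "pos": pos})
        return actions

    # fps comes from the capture, e.g. fps = cap.get(cv2.CAP_PROP_FPS)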

Some funny code
    # (assumes earlier in the file: from queue import Queue, Empty,
    #  plus import threading and from tqdm import tqdm)
    buffer = Queue(maxsize=20)
    def fillbuffer():
        while True:
            # Read each frame
            ret, curr_frame = cap.read()

            if not ret:
                # Note: Queue.task_done() doesn't belong here -- it pairs with
                # get() on the consumer side and raises ValueError otherwise.
                print("Reached end of video in thread")
                break

            # Append the current frame to the buffer
            buffer.put(curr_frame)

    threading.Thread(target=fillbuffer, daemon=True).start()

    # Process frames
    with tqdm(total=frame_count - 1, desc="Processing frames") as pbar:
        for _ in range(1, frame_count):
            try:
                curr_frame = buffer.get(True, 10)
            except Empty:  # a bare except here would also hide real errors
                print(f"Reached end of video after {len(motion_data) + 1} frames")
                break
2 Likes

Oh boy, that caused some havoc, but I think it should be right now. The last frame is skipped quite simply because I could never figure out how to keep it from crashing once it hit the end of the video; essentially a bandaid fix. But I did work on that today and I believe it’s working right. Going to do a bit more testing before slinging the next version out the door, to avoid issues like the last release.
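
For reference, the usual trick to stop cleanly without the timeout bandaid is a sentinel value: the reader thread puts None into the queue when the capture runs dry, and the consumer exits when it sees it. A rough sketch (not necessarily what ended up in the repo):

    buffer = Queue(maxsize=20)

    def fillbuffer():
        while True:
            ret, curr_frame = cap.read()
            if not ret:
                buffer.put(None)  # sentinel: signals end of video
                break
            buffer.put(curr_frame)

    threading.Thread(target=fillbuffer, daemon=True).start()

    while True:
        curr_frame = buffer.get()
        if curr_frame is None:  # producer is done; no timeout or bare except needed
            break
        # ... process curr_frame, including the very last one ...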

Also, are you a wizard or something? That is the second massive speedup thanks to you, and I honestly don’t even know how. The resource usage is about the same, but it’s running at double the speed or more. I’m honestly feeling bad for not working on the PyTorch version more, just to get it up to date for you.

720p videos are running at some 600 fps with very little resource usage. I assumed something broke, but the output was the same (until I completely broke processing, of course). 180 fps on 1080p videos, on CPU alone.

1 Like

Where are the videos stored after being processed in the GUI? I ran one overnight (haven’t had a chance to look or test yet) but I have limited drive space and need to reclaim it lol

Edit: Scratch that, I found it. The script didn’t come out very well with a VR video; I used default settings. Upside, though: it gives you something to go in and edit afterwards… maybe that’s a good use :slight_smile:

When I use this it creates way too many keyframe points. Is there a way I can change this or edit it? If I use the script with the Handy it’s no good.

If you’re using the command line, use --reduction-factor 4 or higher; the GUI has a slider for it. A factor of 4 should in theory leave few enough points to work well, but depending on the video it could still be too many.
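
For example (a hypothetical invocation; the exact argument layout may differ in your checkout):

    python main.py your-video.mp4 --reduction-factor 4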

1 Like

How did it do on VR? I know “great” won’t be the word for it, without question, but hopefully it’s somewhat accurate. I don’t have a VR setup, so I don’t have any videos to test with. In theory it should work, since a side-by-side video is just the same image twice, but I could see vertically stacked videos destroying the tracking.

If you have FFmpeg, you can try pre-processing the VR video with something like this complex filter:

ffmpeg -i VR-video.mp4 -filter_complex "fps=30,crop=in_w/2:in_h:0:0,scale=-1:1080:flags=lanczos,v360=input=hequirect:output=flat:d_fov=140:pitch=-35,crop=in_w/2:in_h:in_w/4:in_h,unsharp=3:3:1" -c:a copy -c:v libx264 -preset ultrafast -tune zerolatency -y VR-video-out.mp4

That should work for VR180 LR videos and give you back only one eye, undistorted and scaled down to 1080p 30 fps. Sadly, the value for the “pitch” parameter has to be fiddled with depending on the scene, i.e. lying may be better at -40 while standing/sitting at -60 (it’s like looking up/down in VR).
The conversion should run relatively fast; I’ve seen from half to a quarter of the original video’s runtime. Bonus point: running hinro’s script on a 1080p video should also be way faster than on an 8K 60 fps native VR one.
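
Since the pitch (and maybe the FOV) has to be re-tweaked per scene, wrapping the command makes re-runs less painful. A hypothetical Python sketch around the exact filter chain above (function name and parameters are mine, not part of any tool):

    import subprocess

    def flatten_vr(src, dst, pitch=-35, fov=140, height=1080, fps=30):
        # Hypothetical wrapper around the FFmpeg command above, with the
        # scene-dependent knobs (pitch, fov) exposed as parameters.
        vf = (
            f"fps={fps},crop=in_w/2:in_h:0:0,"
            f"scale=-1:{height}:flags=lanczos,"
            f"v360=input=hequirect:output=flat:d_fov={fov}:pitch={pitch},"
            "crop=in_w/2:in_h:in_w/4:in_h,unsharp=3:3:1"
        )
        subprocess.run([
            "ffmpeg", "-i", src, "-filter_complex", vf,
            "-c:a", "copy", "-c:v", "libx264",
            "-preset", "ultrafast", "-tune", "zerolatency",
            "-y", dst,
        ], check=True)

    # e.g. flatten_vr("VR-video.mp4", "VR-video-out.mp4", pitch=-40)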

1 Like

I opened it in OFS to have a look at the output and it was far from usable tbh. Perhaps it was the high resolution, the double SBS screen, or some other issue like it being a full movie not broken into scenes… but yeah, not great at all lol.

During the action scenes there was very little movement, all clustered around the centre; and towards the end, where there was very little movement in the scene itself, there were big movements in the script… like it was tracking something that wasn’t there.

I’m gonna give it another go with SlowTap’s suggestion and see how that goes. I’ll look into adding screenshots (I’m kinda new to using the board) to illustrate what I mean in a bit :+1:

So, unrelated to this prototype specifically but some insanely good food for thought:

Seriously, deep learning motion tracking is hands down the future of funscripting. This could, I think, easily be turned into a funscripter copilot tool. They even have a website demo set up; go play with it. From my first few tests, it will track almost flawlessly as long as you don’t outright give it something like a 360-degree turntable lmao (yes, I tried one out of curiosity; it still managed to track a few points throughout the whole rotation, which is impressive, but most were garbled after about half a rotation).

Just as a quick example of a short loop made by Kx2-SFM:

The dick itself isn’t tracked beyond entering the orifice, but the character motion is tracked flawlessly even outside of frame, so you could easily have movement-vector aggregation within an ROI and distance approximation between two binned ROIs, as sketched below. So much potential aaaa
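
To make the ROI idea concrete: with per-point tracks like CoTracker’s output, you can average the points that start inside an ROI and compare two ROIs’ centers per frame. Everything below is a hypothetical sketch, names and shapes included.

    import numpy as np

    def roi_motion(tracks, roi):
        # tracks: array of shape (num_frames, num_points, 2) with (x, y) positions
        # roi: (x0, y0, x1, y1) region selected on the first frame
        x0, y0, x1, y1 = roi
        first = tracks[0]
        inside = (
            (first[:, 0] >= x0) & (first[:, 0] <= x1)
            & (first[:, 1] >= y0) & (first[:, 1] <= y1)
        )
        # Mean position of the ROI's points in every frame
        return tracks[:, inside, :].mean(axis=1)  # shape (num_frames, 2)

    # Per-frame distance between two binned ROIs, e.g. hips vs. base:
    # dist = np.linalg.norm(roi_motion(tracks, roi_a) - roi_motion(tracks, roi_b), axis=1)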

4 Likes

To be clear, I never doubted that deep learning would far surpass anything I have been making. The issue was figuring out how to train a model; that was a nightmare and never went anywhere. Far too many sleepless nights trying to get something working and failing.

I’m testing out the co-tracking and giving it a run. The first step is to address the insane memory issues; the system instability when running a different video through the demo script was something. The weird thing is that I was using a smaller video than the demo video, so you’d think it would be less intensive, but alas, it was far more intensive.

Looking at the output, it may be feasible. Going to have to boot into Linux to give it a real test with the ROCm support. It’s running at about 11 s/it, which would probably make it faster to script by hand.

Assuming performance can be made usable and the movement data can be converted properly, it could be pretty nice and would likely be a big improvement over pretty much every automated option that’s out now. I can see it being incorporated into the OFS funscript generator as a tracking method.

The downside for anyone without Nvidia cards is, well, that no acceleration is really possible. Backends that aren’t CUDA have pretty much been abandoned on Windows. ROCm is a possibility under Linux, but that setup can be a nightmare, and I am not troubleshooting that.

I’ll play around with cotracker and see what I can get working.

edit: This has been a very unpleasant experiment… memory issues everywhere. I might have something sorta working? It’s processing, so I dunno yet. Anyone got a spare A100 they wanna donate to the cause? x.x

edit 2: Still ongoing. Turns out I forgot a step and was tracking some 57k points in a 320x180 video. Whoops.

edit 3: Last edit. Tracking seems to work now, though it’s still slow. Now to figure out how to convert the movements to pos values.
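For that conversion step, the obvious starting point is min-max normalizing the tracked vertical motion into the funscript 0-100 pos range. A rough sketch, not the repo’s actual code:

    import numpy as np

    def motion_to_pos(y_track):
        # y_track: per-frame vertical coordinate of the tracked motion
        y = np.asarray(y_track, dtype=float)
        lo, hi = y.min(), y.max()
        if hi - lo < 1e-6:  # no movement: hold a neutral position
            return np.full(len(y), 50.0)
        # Image y grows downward, so invert to make "up" a higher pos
        return (1.0 - (y - lo) / (hi - lo)) * 100.0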

2 Likes

Can this generator work with Jav-VR videos?

Something is taking up all of the space on my C: drive. I’ve been using gui.py and then copying and pasting the IP address into a browser to open the web interface. When I start, I set the output folder to one of my other drives, not my C: drive. I’ve looked in the folder I have the generator in and couldn’t really find anything other than a few cache files, which are just a few KB and couldn’t account for the amount of space taken up, and I’ve looked in my user folders and haven’t found anything. The generator itself isn’t big according to the Properties tab: 176 KB.

Start menu > Settings > System > Storage. Have fun.

Use WinDirStat or TreeSize Free to scan your C: drive; make sure to run it as admin. You can get WinDirStat off Ninite and TreeSize from the JAM Software site.

1 Like

If you set the output to the input directory, it should prevent such issues. I’ll fix it shortly and update. In hindsight, that should have been a toggle. In my, uh, lack of foresight, the way the script processes folders is to copy each video into the output folder for ease of testing and such.
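
Roughly the kind of toggle I mean (names below are hypothetical, not the current code):

    import shutil
    from pathlib import Path

    def prepare_output(video_path, output_dir, copy_video=False):
        # Only duplicate the source video into the output folder when the
        # user explicitly opts in -- copying by default is what eats disk space.
        video_path, output_dir = Path(video_path), Path(output_dir)
        output_dir.mkdir(parents=True, exist_ok=True)
        if copy_video and output_dir != video_path.parent:
            shutil.copy2(video_path, output_dir / video_path.name)
        return output_dir / (video_path.stem + ".funscript")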

I never claimed to not be dumb :x

1 Like

WinDirStat is incredibly slow nowadays. I don’t know about TreeSize, but WizTree is a massive upgrade over WinDirStat; so much faster it’s not even a comparison, really.

1 Like

I tried it but can’t find the files.