Automated Python funscript generator

Unfortunately, I haven’t been able to cap the usage. After more testing and tweaking than I care to admit, I could never get the usage capped correctly, so it was either running below target or blazing fast until it burned out and crashed on my 8 GB of VRAM.

I’m not sure what you mean by different postures and models. Could you explain a bit more? I can’t think of a way to get past the current limitations despite smashing my head against that wall for days on end, so a new perspective could be exactly the magic that’s needed.

On an unrelated note, dang, that’s a nice card. I dunno if you keep up to date with it, but you could use the PyTorch version with a bit of setting up, since the 7900 is, as far as I know, the only AMD card that supports ROCm under Windows. I’ve got a 6600 XT, so I’m S.O.L. on that front. That should improve the usage and speeds by a fair bit. I haven’t messed with the PyTorch version in a couple of days, but I believe I was getting 100 or so fps on 1080p videos. Assuming it works right, your card should be substantially stronger.
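
If you do try it, a quick sanity check that PyTorch actually sees the card is worth doing first. ROCm builds of PyTorch report the GPU through the same torch.cuda API as NVIDIA, so something like this (just a sketch, not part of the script) tells you whether you’re on GPU or falling back to CPU:

    import torch

    # ROCm builds of PyTorch expose supported AMD GPUs through the
    # regular torch.cuda API, so one check covers both vendors.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Running on: {device}")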

I get about a 50% boost from using this class as the video reader: imutils/imutils/video/filevideostream.py at master · PyImageSearch/imutils · GitHub
With the PyTorch edition I go from about 100 fps to 150 fps when processing frames.
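
For reference, a rough sketch of how I’m using it (the exact wiring inside the script will differ):

    from imutils.video import FileVideoStream

    # FileVideoStream decodes frames on a background thread and keeps a
    # queue of them ready, so the processing loop never waits on disk I/O.
    fvs = FileVideoStream("video.mp4", queue_size=128).start()

    while fvs.more():
        frame = fvs.read()
        if frame is None:  # defensive check near the end of the stream
            break
        # ... per-frame motion analysis goes here ...

    fvs.stop()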

2 Likes

Dang, that’s a really good improvement. Time to play around with it and see how well it can be incorporated.

edit: That was surprisingly painless, and it worked on the OpenCL generation also. I’ll have the updates pushed shortly. Thank you. Nodude is a hero for performance.

1 Like

I guess maybe we could add key points on the video progress bar, such as blowjob start, blowjob end, cowgirl start, cowgirl end, and use different analysis methods for different poses?

1 Like

I tried to get the porndb timestamp markers incorporated into the script, but it never really got past being an idea.

The hard part would be getting the script to detect them. I’ve tried about ten different methods and models, and none of them are really fit for the job. It’s completely possible to create a model that accurately detects what we need, but the giant roadblock is the dataset required to train it.

Getting pictures of a handjob and such is easy. Properly tagging those images and videos is the really tedious and time-consuming part, and frankly it’s not worth it, I’m sad to say. On the low end, each class would need 100k properly tagged frames, and that would be for each thing we’d want to track.

If there were an available dataset, odds are fair that I’d have at least a proof of concept by now. I’m sure there are other methods, but I don’t know them.

Did you have a look at miles-deep (GitHub - ryanjay0/miles-deep: Deep Learning Porn Video Classifier/Editor with Caffe) or P-HAR (GitHub - rlleshi/phar: deep learning sex position classifier)? Both do classification of porn videos. The former was used for the “Auto PMV Generator” (Auto PMV Generator V4.3). The latter is quite a bit newer and has more classes, but can only be accessed via a commercial API. There’s an older, less accurate model available for free, though. Maybe you could also try to contact the author, rlleshi (his contact details are on his GitHub); he spent a lot of time preparing a massive dataset for his model.

1 Like

I… did not know about those. There is always a git repo that does what I want or need, and every time it gets found after I’ve spent quite a lot of time trying to reinvent the wheel. I’ll check them out and see how things work out in the morning.

phar is sounding very interesting. I won’t know how it turns out until I try, but based on the readme it could be exactly what’s needed to make things a lot more accurate. Looking at the repo, there’s going to be a lot to go through.

miles-deep, if I’m understanding it correctly, might be interesting, but I really don’t know C++, so I can’t really say whether it is or isn’t.

edit: phar is being an absolute nightmare.

phar didn’t work out. After four hours or so of messing with it trying to get it to work, I gave up on that path.
On the plus side, the new version is up. I reworked a lot of things and it seems to be working better. It isn’t one jumbled script anymore; launch with main.py, same command-line args. I’m honestly not even sure how, but it runs considerably faster now. I seem to have broken something in a good way? Still need to play around with the funscript creation part, but it should be about the same as the previous version. So much debugging went into that one script, it’s not even funny. It still isn’t where I want it, but I really don’t know how long until it gets there. Woohoo, version 0.2 is a go!

I still need to update the PyTorch version, and I’m working on the Gradio front end now. Hopefully it will be up tonight; PyTorch will come at a later time.

On a side note, the --generate-debug flag makes some really damn neat Virtual Boy-like videos.
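
If anyone wants to drive it from a terminal instead of the GUI, the entry point just parses command-line args. Here’s an illustrative sketch of the shape; only --generate-debug above is a real flag from this post, the other names are placeholders, so check the repo for the actual args:

    import argparse

    def parse_args():
        # Illustrative only: names other than --generate-debug are
        # placeholders, not the script's real interface.
        parser = argparse.ArgumentParser(description="Automated funscript generator")
        parser.add_argument("video", help="path to the input video")
        parser.add_argument("--generate-debug", action="store_true",
                            help="also render a debug visualization video")
        return parser.parse_args()

    if __name__ == "__main__":
        args = parse_args()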

Woo! Can’t wait to test the new version!

Once you get a chance to test it, could you let me know if it’s better or worse compared to before, please?
edit: Apparently I forgot to upload a folder when I updated the repo. It’s fixed now ;.;

Git pulled this morning, Tuesday, Aug 27, 2024.

Had some trouble starting funscript_gui, but the GUI opened immediately.
The new GUI has some neat options! (I’m a sucker for a good GUI.)

  • Reduction Factor - I’m not sure what this does yet
  • Generate Debug Video - a returning feature, but it feels like it tells me less now. Gonna play with it more.
  • Invert Output - NEAT little option for when you know your script is gonna come out upside down
  • Look Ahead - I’m assuming this is a feature that lets it read through the video early
  • Timing Offset - super cool if you already know your offset
  • Amplification - increases the parabola size for devices that don’t have stroke limitation! Super cool! (My guess at the math is in the sketch after this list.)
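
For what it’s worth, here’s my guess at what Invert and Amplification boil down to. Funscript positions run 0–100, so this is pure speculation, not the script’s actual code:

    # Speculative sketch: how Invert / Amplification could transform
    # funscript actions. Positions run 0-100, so amplify around the
    # midpoint and clamp back into range.
    def postprocess(actions, amplification=1.0, invert=False):
        out = []
        for a in actions:
            pos = a["pos"]
            if invert:
                pos = 100 - pos                    # flip an upside-down script
            pos = 50 + (pos - 50) * amplification  # stretch the stroke range
            out.append({"at": a["at"], "pos": int(max(0, min(100, pos)))})
        return out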

I WOULD ask if you’re open to doing more with the GUI:

  • a return-to-default-settings button for the sliders
  • something to save prior output locations
  • a log exporter, so that if there are recurring issues among the user base, they can send you a log of what happened

Other than that, everything seems baller!

I do miss the point skip slider; I did think it could have some interesting use cases.

My testing videos consist of 4 IRL and 4 animated videos ranging from 3 minutes to 15.

First video: 3m IRL POV BJ 720p

  • test 1: lookahead 5, timing default, amp 1.1
    Timing was way off here; it ghosted like crazy. It was moving sometimes when there was no action. I loved that it did increase the parabola size.
  • test 2: default settings
    Realized I shouldn’t have touched any settings, so I retried with defaults. Similar result.
  • test 3: lookahead 0, timing default, amp default
    Similar result.
    Going to try a higher look ahead next.

Conclusion:
It really struggled here; tracking was all over the place.

Similar results with a 3m 720p animated video: a 2D X-ray video, all third-person missionary.

Next I tried a 14m 720p third-person IRL video.

It’s mostly third-person BJ. SUPER hit or miss here, accuracy is not great… but then it switched to third-person riding and the accuracy got much better.

HOWEVER, here is where I think the point skip would’ve been handy. At the top and bottom points of the stroke it stays still for 4 frames, then moves. It’s accurate-ish, but it’s not measuring the full stroke.

So once again, longer videos are doing the best on my side, WHICH MAKES SENSE. I kinda wanna try a longer video, like 30-45 minutes, to see how well it does.

I’ll be testing a 2160p video as well

If anyone has suggestions on what worked best for them I’d appreciate seeing what you came up with!

The old script format of just having funscript.py and funscript_gui.py has been retired. Working on one giant script became a nightmare. It’s now more structured, with a main.py that has been broken down into much more manageable smaller scripts, so I no longer have to keep messing with the entire script when I want to change and test things.

I believe you’re thinking about the reduction factor. That was the one that reduced/added points.

The GUI has been updated with a reset button; pull it and you should be good to go. I’m going to have to look into saving paths. That will need some kind of config, which will probably take a little bit.
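
Probably something simple like a small JSON file; this is just the shape I’m considering, nothing committed yet, and the file name is a placeholder:

    import json
    from pathlib import Path

    # Hypothetical sketch: persist last-used paths and slider values
    # between runs. The file name/location are placeholders.
    CONFIG_PATH = Path.home() / ".funscript_generator.json"

    def load_config():
        if CONFIG_PATH.exists():
            return json.loads(CONFIG_PATH.read_text())
        return {}

    def save_config(cfg):
        CONFIG_PATH.write_text(json.dumps(cfg, indent=2))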

The weird thing is that it isn’t really the tracking messing up (usually); it’s the conversion that’s having issues. After accidentally nuking my other project and not having a backup, I decided to give it a few days before going back at the conversion script, lest it end up nuked also. So much work lost ;.;. Anyhow, about the debug video: I’m going to try to work out how to give an option for how it’s displayed, so if desired you could get the Virtual Boy version, or something similar to the previous version with the moving arrow, or another method altogether if anyone has ideas for helpful visualization styles. Can’t say it will work because I haven’t tried yet, but it should.
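
For the arrow-style view, the per-frame draw would be something like this (a sketch of the idea, not the current debug renderer):

    import cv2

    def draw_debug_overlay(frame, pos):
        # Draw the current stroke position (0-100) as a marker on a
        # vertical track at the right edge of the frame.
        h, w = frame.shape[:2]
        x = w - 40
        cv2.line(frame, (x, 20), (x, h - 20), (255, 255, 255), 2)  # track
        y = int((h - 40) * (1 - pos / 100)) + 20  # 0 = bottom, 100 = top
        cv2.circle(frame, (x, y), 8, (0, 0, 255), -1)  # position marker
        return frame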

Thank you for the suggestions.

1 Like

If you work in VS Code, there is a file history in the editor where, if you click it, it will show you a lot of the past modifications you’ve made to your files, including ones that have been deleted. It’s extremely clutch when you accidentally delete your files; it saved one of my coworkers’ asses not too long ago.

1 Like

Funscript_gui is trying to load a file that is now nonexistent, most likely renamed.

    from Funscript import run_script

change this to

    from main import run_script

and it should work 🙂

That would work, yes, but that’s the old GUI. Pull the latest and run gui.py; it has a lot more options and such.

Thank you. That was about a day and a half of work saved x.x

1 Like

Please pass the fps to process_motion_data so the resulting length is correct.
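
Something along these lines, i.e. derive each action’s timestamp from the frame index and the real fps (illustrative; process_motion_data’s actual signature is in the repo):

    # Illustrative sketch: convert per-frame positions into funscript
    # actions whose timestamps match the video's real frame rate.
    def motion_data_to_actions(motion_data, fps):
        return [
            {"at": int(i / fps * 1000),  # milliseconds into the video
             "pos": int(pos)}            # 0-100 funscript position
            for i, pos in enumerate(motion_data)
        ]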

Some funny code
    import threading
    from queue import Queue, Empty

    from tqdm import tqdm

    # Bounded queue so the reader thread stays ahead of processing
    # without buffering the whole video in memory.
    buffer = Queue(maxsize=20)

    def fillbuffer():
        # Runs on its own thread; `cap` is the cv2.VideoCapture from
        # the surrounding code.
        while True:
            # Read each frame
            ret, curr_frame = cap.read()

            if not ret:
                print("Reached end of video in thread")
                break

            # Append the current frame to buffer
            buffer.put(curr_frame)

    threading.Thread(target=fillbuffer, daemon=True).start()

    # Process frames
    with tqdm(total=frame_count - 1, desc="Processing frames") as pbar:
        for _ in range(1, frame_count):
            try:
                # Time out instead of hanging forever if the reader stops early
                curr_frame = buffer.get(True, 10)
            except Empty:
                print(f"Reached end of video after {len(motion_data) + 1} frames")
                break
            # ... per-frame processing continues here ...
            pbar.update(1)
2 Likes

Oh boy, that caused some havoc, but I think it should be right now. The last frame was skipped quite simply because I could never figure out how to keep it from crashing once it hit the end of the video; essentially a bandaid fix. But I did work on that today and I believe it’s working right. Going to do a bit more testing before slinging the next version out the door, to avoid issues like the last release.
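
For anyone hitting the same end-of-video crash, the usual clean pattern is a sentinel in the queue; this is a sketch of the general technique, not necessarily what’s in the repo:

    import threading
    from queue import Queue

    import cv2

    # A sentinel tells the consumer the stream ended, so the last
    # frame gets processed and nothing crashes or times out.
    SENTINEL = None

    def read_all_frames(path):
        buffer = Queue(maxsize=20)
        cap = cv2.VideoCapture(path)

        def fillbuffer():
            while True:
                ret, frame = cap.read()
                if not ret:
                    buffer.put(SENTINEL)  # signal end-of-stream
                    break
                buffer.put(frame)

        threading.Thread(target=fillbuffer, daemon=True).start()

        while (frame := buffer.get()) is not SENTINEL:
            yield frame  # every frame, including the last one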

Also, are you a wizard or something? That’s the second massive speedup thanks to you, and I honestly don’t even know how. The resource usage is about the same, but it’s running at double the speed or more. I’m honestly feeling bad for not working on the PyTorch version more, just to get it up to date for you.

720p videos are running at some 600 fps with very little resource usage. I assumed something broke, but the output was the same (until I completely broke processing, of course). 180 fps on 1080p videos, just on CPU.

1 Like

Where are the videos stored after being processed in the GUI? I ran one overnight (haven’t had a chance to look or test yet), but I have limited drive space and need to reclaim it lol

Edit: Scratch that, I found it. The script didn’t come out very good with a VR video; I used default settings. Upside though, it gives you something to go in and edit afterwards… maybe this is a good use 🙂

When I use this it creates way too many keyframe points; is there a way I can change this or edit it? If I use the script with the Handy it’s no good.