Automated Python funscript generator

@hinro Here’s the terminal output after running the command `python funscript_gui.py`:

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
OpenCL is not available. Using CPU.
OpenCL status: Disabled
Extracting motion vectors using OpenCL with OpenCV...
Successfully read the first frame. Shape: (1080, 1920, 3)
Video properties: 1920x1080 @ 29fps, 126922 frames
Processing frames:  66%|█████▉   | 83690/126922 [15:15:32<9:14:45, 1.30frame/s]

Check your GPU usage when generating, please. That does seem to be pretty slow. The big issue is I’m using the ROCm implementation for the PyTorch operations, which you would think would be straightforward, but much like dang near everything else with this project, it’s never that simple :smiling_face_with_tear:

What kind of GPU are you using? And please let me know if there is no GPU load while the script is running.
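
A quick way to sanity-check that PyTorch can actually see the GPU (the ROCm build exposes it through the same `torch.cuda` API, so this works there too):

```python
# Check whether PyTorch sees a GPU; ROCm builds also report through torch.cuda.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible to PyTorch; it will fall back to the CPU.")
```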

@hinro I figured out the issue. It was related to some Nvidia driver package updates!!
I am using an RTX 2060 GPU (6 GB), which gives a speed of 30 frames/sec. Is this speed optimal for this GPU, or am I lacking something again??

That speed looks about right. It could be faster, but I haven’t managed to find the middle ground between actually using the GPU and blowing it up. It is a major pain that I simply haven’t figured out. I have managed to get 100% usage, but then VRAM runs out in a second or so and crashes the script.
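
For anyone curious, the usual compromise looks something like fixed-size batching so VRAM stays bounded; this is just a rough sketch of the idea, and `model` and `frames` are stand-ins, not the actual script’s objects:

```python
# Rough sketch: process frames in fixed-size batches to bound VRAM usage.
import torch

def process_in_batches(frames, model, batch_size=16, device="cuda"):
    results = []
    with torch.no_grad():
        for i in range(0, len(frames), batch_size):
            batch = torch.stack(frames[i:i + batch_size]).to(device)
            results.append(model(batch).cpu())  # pull results off the GPU immediately
            del batch                           # drop the reference before the next batch
    torch.cuda.empty_cache()                    # release cached blocks back to the driver
    return torch.cat(results)
```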

I spent yesterday trying to figure it out, to no avail. I’m honestly happy it was just a driver update problem and not script related.

Thank you for your hard work. It is much appreciated!

Thank you :grin:

Great software, thank you for your development! I am using an AMD 7900 XTX, and the graphics card usage rate is only 5%. Is this normal? I think the software could incorporate different postures to adapt to different models; would this be more accurate?

I haven’t been able to cap the usage, unfortunately. After more testing and tweaking than I care to admit to, I could never get the usage capped correctly, so it either ran at low utilization or blazed through until it exhausted my 8 GB of VRAM and crashed.

I’m not sure what you mean by different postures and models. Could you explain a bit more? I can’t think of a way to get past the current limitations despite smashing my head against that wall for days on end, so a new perspective could be the magic that is needed.

On an unrelated note, dang, that’s a nice card. I dunno if you keep up to date with it, but you could use the PyTorch version with a bit of setting up, since the 7900 is, as far as I know, the only AMD card that supports ROCm under Windows. I got a 6600 XT, so I’m S.O.L. on that front. That should improve the usage and speeds by a fair bit. I haven’t messed with the PyTorch version in a couple of days, but I believe I was getting 100 or so fps on 1080p videos. Assuming it works right, your card should be substantially stronger.

I get about a 50% boost from using this class as the video reader: imutils/imutils/video/filevideostream.py at master · PyImageSearch/imutils · GitHub
Using the PyTorch edition, I go from about 100 fps to 150 fps when processing frames.
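
For anyone wanting to try it, a minimal sketch of the swap; `FileVideoStream` decodes frames on a background thread and hands them over through a queue ("video.mp4" and the per-frame work are placeholders):

```python
# Threaded video reading with imutils instead of a bare cv2.VideoCapture loop.
from imutils.video import FileVideoStream

fvs = FileVideoStream("video.mp4", queue_size=128).start()
while fvs.more():
    frame = fvs.read()
    if frame is None:
        break
    # ... per-frame motion analysis goes here ...
fvs.stop()
```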

Dang, that’s a really good improvement. Time to play around with it and see how well it incorporates.

edit: That was surprisingly painless, and it worked on the OpenCL generation also. I’ll have the updates pushed shortly. Thank you. Nodude is a hero for performance.

I guess maybe we could add key points on the video progress bar, such as blowjob start, blowjob end, cowgirl start, cowgirl end, using different analysis methods for different poses?
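
Something like a segment table on the timeline that dispatches each span to a different analysis method could express that; a rough sketch, where the times, labels, and analyzer stubs are all hypothetical:

```python
# Rough sketch of per-segment dispatch: pick an analysis method per pose span.
def analyze_oral(frames):    # placeholder for one tracking method
    return []

def analyze_riding(frames):  # placeholder for another
    return []

SEGMENTS = [
    (0,       95_000, analyze_oral),    # (start_ms, end_ms, analyzer)
    (95_000, 240_000, analyze_riding),
]

def analyze(get_frames):     # get_frames(start_ms, end_ms) -> list of frames
    actions = []
    for start, end, analyzer in SEGMENTS:
        actions.extend(analyzer(get_frames(start, end)))
    return actions
```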

I tried to get the PornDB timestamp markers incorporated into the script, but it never really got off the ground as an idea.

The hard part would be for the script to detect them. I’ve tried about ten different methods and models, and none of them are really fit for the job. It is completely possible to create a model that accurately detects what we would need, but the giant roadblock would be the dataset to train it.

Getting pictures of a handjob and such is easy. Properly tagging said images and video is the really tedious and time-consuming part, and it is frankly not worth it, I’m sad to say. On the low end, each class would need 100k properly tagged frames, and that is for each thing we would want to track.

If there were an available dataset, odds are fair that I would have at least a proof of concept by now. I’m sure there are other methods, but I don’t know them.

Did you have a look at Miles Deep (GitHub - ryanjay0/miles-deep: Deep Learning Porn Video Classifier/Editor with Caffe) or P-HAR (GitHub - rlleshi/phar: deep learning sex position classifier)? Both do classification of porn videos. The former was used for the “Auto PMV Generator” (Auto PMV Generator V4.3). The latter is quite a bit newer and has more classes, but can only be accessed via a commercial API. There’s an older, less accurate model available for free, though. Maybe you could also try to contact the author rlleshi (his contact details are on his GitHub); he spent a lot of time preparing a massive dataset for his model.

I… did not know about those. There is always a git repo that does what I want or need, and every time it gets found after I spend quite a lot of time trying to reinvent the wheel. I’ll check them out and see how things work out in the morning.

phar is sounding very interesting. I won’t know how it will turn out until I try, but based on the readme it could be exactly what is needed to make things a lot more accurate. Looking at the repo, there is going to be a lot to go through.

Miles Deep, if I’m understanding it correctly, might be interesting, but I really don’t know C++, so I can’t say whether it is or isn’t.

edit: phar is being an absolute nightmare.

phar didn’t work out. After four hours or so of messing with it and trying to get it to work, I gave up on that path.
On the plus side, the new version is up. I reworked a lot of things, and it seems to be working better. It isn’t one jumbled script any more: launch with main.py, same command line args. I’m honestly not even sure how, but it runs considerably faster now. I seem to have broken something in a good way? I still need to play around with the funscript creation part, but it should be about the same as the previous version. So much debugging went into that one script, it’s not even funny. It still isn’t where I want it, but I really don’t know how long until it gets to that point. Woohoo, version 0.2 is a go!

I still need to update the PyTorch version, and I’m working on the Gradio front end now. Hopefully it will be up tonight. PyTorch will come at a later time.

On a side note, the --generate-debug flag makes some really damn neat Virtual Boy-like videos.

Woo! Can’t wait to test the new version!

Once you get a chance to test it, could you let me know if it’s better or worse compared to before, please?
edit: apparently I forgot to upload a folder when I updated the repo. It’s fixed now ;.;

Git pulled this morning, Tuesday, Aug 27, 2024.

Had some trouble starting funscript_gui, but the new GUI opened immediately.
The new GUI has some neat options! (I’m a sucker for a good GUI)

  • Reduction Factor - I’m not sure what this does yet
  • Generate Debug Video - a returning feature, but it feels like it tells me less now. Gonna play with it more.
  • Invert Output - NEAT little option for if you know your script is gonna come out upside down
  • Look Ahead - I’m assuming this is a feature that allows it to read through the video early
  • Timing Offset - super cool if you already know your offset
  • Amplification - increases the parabola size for devices that don’t have stroke limitation! Super cool! (rough sketch of these last three after the list)
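
For anyone curious, a rough guess at what Timing Offset, Amplification, and Invert Output do to the generated script. The JSON layout (“actions” with “at” in ms and “pos” 0-100) is the standard funscript format; the function and parameter names just mirror the GUI labels and are assumptions:

```python
# Post-process a funscript: shift timing, scale strokes, optionally invert.
import json

def adjust(path, offset_ms=0, amplification=1.0, invert=False):
    with open(path) as f:
        script = json.load(f)
    for a in script["actions"]:
        a["at"] += offset_ms                         # shift every point in time
        pos = 50 + (a["pos"] - 50) * amplification   # scale stroke around the midpoint
        if invert:
            pos = 100 - pos                          # flip top and bottom
        a["pos"] = max(0, min(100, round(pos)))      # clamp to the valid range
    return script
```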

I WOULD ask if you are open to doing more with the GUI:

  • a return-to-default-settings button for the sliders
  • something to save prior output locations
  • a log exporter, so that if there are recurring issues from the user base, they could send you a log of what happened (a minimal sketch follows this list)
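
A bare-bones sketch of what that log exporter could look like with the standard logging module (the filename is a placeholder):

```python
# Write all script events to a file users can attach to bug reports.
import logging

logging.basicConfig(
    filename="funscript_generator.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)
logging.info("processing started")  # example event
```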

Other than that, everything seems baller!

I do miss the point skip slider. I did think it could have some interesting use cases.

My testing videos are composed of 4 IRL and 4 animated videos ranging from 3 minutes to 15.

First video: 3m IRL POV BJ 720p

  • Test 1: lookahead 5, timing default, amp 1.1
    Timing was way off here; it ghosted like crazy, moving sometimes when there was no action. I loved that it did increase the parabola size.
  • Test 2: default settings
    Realized I shouldn’t have touched any settings, so I retried. Similar result.
  • Test 3: lookahead 0, timing default, amp default
    Similar result.
    Going to try a higher look ahead next.

Conclusion:
It really struggled here; the tracking was all over the place.

Similar results with a 3m 720p animated video: an X-ray 2D video, all 3rd-person missionary.

Next I tried a 14m IRL video, 3rd person, 720p.

It’s mostly third-person BJ, and it was SUPER hit or miss here; accuracy is not great… but then it switched to third-person riding and the accuracy got much better.

HOWEVER, here is where I think the point skip would’ve been handy: at the top and bottom points of the stroke it stays still for 4 frames, then moves. It’s accurate-ish, but it’s not measuring the full stroke.

So once again, longer videos are doing the best here on my side. WHICH MAKES SENSE. I kinda wanna try a longer, like 30-45 minute, video to see how well it does.

I’ll be testing a 2160p video as well

If anyone has suggestions on what worked best for them I’d appreciate seeing what you came up with!

The old script format of just having funscript.py and funscript_gui.py was retired. Working in one giant script became a nightmare. It’s now more structured, with a main.py entry point broken down into much more manageable smaller scripts, so I no longer have to keep messing with the entire script when I want to change and test things.

I believe you’re thinking of the reduction factor. That was the one that reduced/added points.
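
For the curious, one common way point thinning like that can work is to keep only the local peaks and valleys and then enforce a minimum time gap; this is just an illustrative sketch, not the script’s actual implementation:

```python
# Thin a point list: keep endpoints plus local extrema, spaced >= min_gap_ms.
import numpy as np
from scipy.signal import find_peaks

def reduce_points(times_ms, positions, min_gap_ms=100):
    pos = np.asarray(positions, dtype=float)
    peaks, _ = find_peaks(pos)        # local maxima
    valleys, _ = find_peaks(-pos)     # local minima
    keep = sorted({0, len(pos) - 1, *peaks, *valleys})
    out = [(times_ms[keep[0]], positions[keep[0]])]
    for i in keep[1:]:
        if times_ms[i] - out[-1][0] >= min_gap_ms:
            out.append((times_ms[i], positions[i]))
    return out
```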

The GUI has been updated with a reset button. Pull it and you should be good to go. I’m going to have to look into saving paths; I’ll need to add some kind of config for it, but that will probably take a little bit.

The weird thing is that it isn’t really the tracking messing up (usually); it’s the conversion that is having issues. After accidentally nuking my other project and not having a backup, I decided to give it a few days before going back at the conversion script, lest it end up nuked also. So much work lost ;.;. Anyhow, about the debug video: I’m going to try to work out an option for how it is displayed, so if desired you could get the Virtual Boy version, or something similar to the previous version with the moving arrow, or another method altogether if anyone has ideas for helpful visualization styles. Can’t say it will work, because I haven’t tried yet, but it should.
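
As one possible style, a bare-bones sketch of overlaying the predicted position (0-100) as a filled bar on each frame with OpenCV; purely illustrative, not the current debug renderer:

```python
# Draw the predicted stroke position as a green bar on the right frame edge.
import cv2

def draw_position(frame, pos):
    h, w = frame.shape[:2]
    y = int(h - (pos / 100) * h)  # map position to a pixel row (100 = top)
    cv2.rectangle(frame, (w - 40, y), (w - 20, h), (0, 255, 0), -1)
    cv2.putText(frame, f"{pos:3.0f}", (w - 110, max(y, 20)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```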

Thank you for the suggestions.

If you work in VS Code, there is a local history in the editor where, if you click it, it will show you a lot of past modifications you’ve made to your files, including files that have been deleted. It’s extremely clutch when you accidentally delete your files; it saved one of my coworkers’ asses not too long ago.
