We want to address this installation matter and ensure that it remains transparent, with no unauthorized use of your resources (no hidden coin miner or anything like that).
However, as a two-person team, with little to no help, balancing daily jobs and family responsibilities, we currently have limited capacity.
Regrettably, this means we cannot prioritize this issue at the moment.
So here is my first AI-generated, human-perfected script.
Here are my thoughts:
Am I a fan of this project? Yes, definitely, I am officially a fan. It's great.
Immediately rename Funscript AI Generator (FAG) to AI Funscript Generator (AFG) before FAG sticks.
Is it better than MTFG? Yes, by far.
Does it save time scripting? Definitely. There are large parts that need little or no change. How much time it saves, I don't know yet. You still have to go through most points, but the changes are minimal. As I see it, this will keep getting better.
How good are the generated scripts? I would say a lot better than a beginner scripter's work, or work from a scripter who doesn't give a shit about endpoints. AFG sets endpoints well. Not perfectly yet, and not much advanced scripting yet, but the generated script is usable, which I can't say for some scripts that get published.
Can it handle all scenes? No. Scenes you have difficulty scripting yourself won't come out much better in AFG either.
How good is the detection of body parts? It's amazing to watch live how good the detection is.
Does it get better? As far as I can see, it's still on an exponential curve. It will slow down once limits are hit.
How fast is conversion? With a strong NVIDIA GPU, conversion is really fast at the moment: less than an hour for a 40-minute 8K movie. It will slow down as more gets processed, but who cares, when scripting manually takes 8 hours.
Installation is doable. Don't forget to change to the install directory in the Anaconda/Miniconda terminal.
GUI is easy to use.
They want to keep it public. I can't imagine it will be free forever, but at least it won't be shut away.
The Discord is super nice and super helpful. Oh look, a scripter! They are really happy about people joining. The funny thing is, they don't seem to know this will skyrocket fast.
Scripts coming from AFG are free and can't be sold, which is great. So they are doing a big service for the community.
Weāre back with an update packed with improvements, fixes, and exciting new features to make your experience smoother than ever!
Whatās New? App Renamed to FunGen ā A fresh identity to match its evolving capabilities! TensorRT Support ā Faster inference, lower latency, and better performance on NVIDIA GPUs! Portrait Video Support ā Full compatibility for vertical formats. Edit Video Settings Popup ā Is our app not detecting the right format? Now this can be overwritten. Help Page Added ā Get the guidance you need, right inside the app.
Bug Fixes & Optimizations
- Fixed the generated debug video not adhering to the start frame
- Fixed CLI argument handling for folders, plus improved logging
- Disabled YOLO analytics for better privacy
- Auto-load reference scripts for a smoother workflow
- Expanded debug overlays for deeper analysis
- General performance tweaks and minor bug fixes
New YOLO v12 Models! More accurate detection, improved recognition, and better tracking stability, especially in tricky scenes!
Sneak Peek: Tracking 2.0 for Enhanced Funscript Generation!
Weāre making big improvements to our funscript generation!
It's a work in progress (and not released), but early results look fantastic: expect better scripts, better handling of tricky positions, and overall higher quality.
Stay tuned for more updates!
Give it a spin and let us know your feedback!
Note: make sure to re-install the requirements if updating:
pip install -r core.requirements.txt
and if on NVIDIA:
pip install -r cuda.requirements.txt
pip install tensorrt
It would be cool to just show the boxes without the movie and see if the viewer can imagine a funscript from those boxes, since those boxes are all FunGen sees.
I can get this running, but I can't figure out the .bat file for batch production.
If I try to do anything else on my system, it freezes and crashes, so this is really resource-intensive and could maybe use more efficient error checking in the code. I'm sure that isn't easy; what might be easy is a pause button for processing. Right now it seems to be all or nothing: hitting "Stop processing" after 40 minutes looks like it will lose my progress. But I'd like to have this run while I'm not at the computer, so I'd pause it when I'm here.
If I hit "Stop processing" and start again, will it pick up where I left off?
Well, we would need a bit more info on your hardware setup, the model version and code version you are using, etc. before knowing where the issue might be located.
Happy to discuss this in the Discord.
Anyway, at the moment, you cannot pause the processing of a single video and resume it afterwards.
However, if you stop a batch processing, re-running it will have the program skip all the videos for which a funscript was generated.
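The skip-on-rerun behavior described above can be sketched roughly as follows. This is an illustrative snippet, not FunGen's actual code: the helper name `videos_to_process` and the assumption that a generated `.funscript` sits next to its video file are mine.

```python
from pathlib import Path

def videos_to_process(video_paths):
    """Return only the videos that still need a funscript.

    Illustrative sketch of batch-resume: a video is skipped when a
    .funscript file with the same stem already exists beside it.
    """
    pending = []
    for video in map(Path, video_paths):
        funscript = video.with_suffix(".funscript")
        if funscript.exists():
            continue  # already generated on a previous run
        pending.append(video)
    return pending
```

So stopping a batch mid-way only loses the video currently being processed; finished videos are detected by their output file and skipped on the next run.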
Also, we have less time to work on this type of feature for now, as we are trying to focus on the new major version.
A very quick update for you folks, while spatialflux is away for a couple of days of very well-deserved rest.
I decided to dive back into the prod version, as the new version is gonna take a damn effin' while to release.
I wanted to see if I could improve some stuff in there before we end up giving birth to the huge baby coming ahead.
First, performance-wise, the new version under development is going to be a killer, specifically on the YOLO inference side. I have no gains to offer on that part for the prod version, but the second stage of generation is now down from 8-10 minutes to barely 15 seconds.
Second, in terms of signal quality: this quick screenshot shows that we now ditch some of the annoying unwanted moves (hands or feet passing over, etc.). Here's an example, the blue line being the base prod version, the green one being the tweaked prod version I am referring to. It still needs quite some work, but it's looking good!
It even looks like we might have a fix here for the issue @roa mentioned of endpoints being a couple of frames late (prod funscript in blue, tweaked version signal in green).
Regarding the YOLO model, you need to use the menu View → Settings and select the right model file from there. Ideally, restart the app after selecting the model.