Most existing AI-based script generation systems rely on object recognition, which limits their versatility. DeepFunGen instead uses deep learning to map video frame sequences directly to script position values.
Although the model was trained only on live-action videos, it can also handle animated videos to some extent.
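For illustration, the sketch below shows the general shape of such a frame-to-position model: a per-frame CNN encoder followed by a temporal convolutional network that regresses a 0–100 stroke position per frame. The layer sizes and structure are assumptions for the example, not DeepFunGen's actual conv_tcn models.

```python
# Illustrative sketch only: per-frame CNN encoder + temporal conv network (TCN)
# regressing one 0-100 position per frame. Sizes are assumptions, not the
# actual DeepFunGen architecture.
import torch
import torch.nn as nn

class FrameToPosition(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Per-frame encoder: (B*T, 3, H, W) -> (B*T, embed_dim)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Temporal model: dilated 1D convolutions over the frame axis.
        self.tcn = nn.Sequential(
            nn.Conv1d(embed_dim, embed_dim, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(embed_dim, embed_dim, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(embed_dim, 1, 1),  # one position value per frame
        )

    def forward(self, frames):            # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        pos = self.tcn(feats.transpose(1, 2)).squeeze(1)   # (B, T)
        return 100 * torch.sigmoid(pos)   # funscript positions in [0, 100]

# Example: a window of 16 downscaled frames -> 16 position predictions.
dummy = torch.rand(1, 16, 3, 96, 96)
print(FrameToPosition()(dummy).shape)     # torch.Size([1, 16])
```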
Update Notes
v1.2.0 (2025-11-05)
VR Model Added: vr_conv_tcn_8
You can enable it by checking the VR checkbox located to the right of the model selection dropdown.
UI Updates
Default language set to English
Added brief usage descriptions (to be replaced with a more detailed guide later)
Added an estimated-time-to-completion indicator
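The estimated-time indicator is conceptually just a linear extrapolation from elapsed time and progress. A minimal sketch of that idea (not the app's actual code):

```python
# Minimal ETA sketch: extrapolate remaining time from elapsed time and the
# fraction of frames processed so far. Illustrative only.
import time

def eta_seconds(start_time, done, total):
    if done == 0:
        return None                          # no estimate yet
    elapsed = time.time() - start_time
    return elapsed * (total - done) / done

start = time.time()
time.sleep(0.1)
print(eta_seconds(start, done=25, total=100))  # roughly 0.3 s remaining
```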
v1.1.0 (2025-11-02)
Pipeline Improvements
Added automatic Y-axis normalization based on the inference range (a minimal sketch appears after this list).
Improved the pre-processing pipeline for faster and more stable performance.
Increased the inference output to 10 frames per step, resulting in more stable predictions.
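As an illustration of the Y-axis normalization step, the sketch below rescales raw predictions so their observed range spans the full 0–100 funscript scale. The percentile clipping is an assumption for the example, not necessarily what the pipeline does.

```python
# Minimal sketch of Y-axis normalization based on the inference range.
import numpy as np

def normalize_positions(raw, low_pct=1, high_pct=99):
    raw = np.asarray(raw, dtype=float)
    lo, hi = np.percentile(raw, [low_pct, high_pct])
    if hi - lo < 1e-6:                       # flat output: nothing to stretch
        return np.full_like(raw, 50.0)
    scaled = (raw - lo) / (hi - lo) * 100.0  # stretch observed range to 0-100
    return np.clip(scaled, 0.0, 100.0)

print(normalize_positions([42, 45, 50, 48, 44]))
```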
UI Enhancements
Improved progress display.
Enabled auto-refresh.
Added absolute path display for browser-based users.
uv Support
uv is now shipped as a bundled executable
CLI Mode
Added command-line interface mode for running inference without the desktop UI.
Execution history is still recorded and visible in the UI.
VR videos
We’re still struggling to train the model on VR videos
Previous Updates
v1.0.0 (2025-10-26)
Official release
Added improved model: conv_tcn_56
Switched distribution to Python (uv)
v0.2.0
Added improved model: conv_tcn_49
Fixed issue where peak polarity was reversed during training
Heuristically corrected peak drift artifacts
Improved accuracy with FFT (Fast Fourier Transform) noise reduction (a minimal sketch appears after this list)
Added video preview feature to the program
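For reference, here is a minimal sketch of FFT-based noise reduction on a predicted position signal. The sample rate and cutoff are illustrative values, not the actual parameters used.

```python
# Minimal sketch: zero out high-frequency components of the position signal
# with an FFT low-pass filter, then transform back. Values are illustrative.
import numpy as np

def fft_denoise(positions, fps=30.0, cutoff_hz=4.0):
    x = np.asarray(positions, dtype=float)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum[freqs > cutoff_hz] = 0.0        # drop high-frequency noise
    return np.fft.irfft(spectrum, n=len(x))

noisy = 50 + 40 * np.sin(np.linspace(0, 20 * np.pi, 300)) + np.random.randn(300) * 5
smooth = fft_denoise(noisy)
print(noisy.std(), smooth.std())             # smoothed signal has lower variance
```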
v0.1.0
Initial test release
Known Issues
In scenes without insertion, meaningless strokes are generated because idle segments are not handled properly
→ Need to test whether training on idle segments can fix this
Lower recognition accuracy for animation compared to live-action
→ Need to verify whether a dedicated model trained on animation data can improve results
I tested a few videos chosen from this week’s Top list.
I uploaded the results only to check the model’s performance.
If the original creators feel uncomfortable about this, I sincerely apologize.
Please let me know and I will remove the scripts immediately.
What’s the speed supposed to be on this? I’m using a 13-minute test VR video, and after 20 minutes it’s at 25%. I have a 12 GB RTX 4070, 64 GB of RAM, and a good CPU.
Edit: It took over an hour and the result was horrible. Maybe post some examples you have done, thanks.
I wasn’t able to test VR videos properly.
It seems the model is trying to process too large an area of each frame, which likely caused incorrect output.
I will run additional VR tests later and adjust the pipeline to handle them correctly.
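For example, one way to narrow the processed area would be to keep only the central region of one eye from a side-by-side VR frame before inference. This is an illustrative sketch of that idea, not the final pipeline change.

```python
# Hedged sketch: for side-by-side (SBS) VR frames, keep the left eye and crop
# its central region so the model sees a smaller, more relevant area.
import numpy as np

def crop_vr_center(frame, keep=0.5):
    h, w = frame.shape[:2]
    left_eye = frame[:, : w // 2]                 # SBS layout: left half = left eye
    ch, cw = int(h * keep), int((w // 2) * keep)
    y0, x0 = (h - ch) // 2, (w // 2 - cw) // 2
    return left_eye[y0 : y0 + ch, x0 : x0 + cw]

frame = np.zeros((2048, 4096, 3), dtype=np.uint8)  # dummy SBS VR frame
print(crop_vr_center(frame).shape)                 # (1024, 1024, 3)
```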
The long processing time is also unusual. My guess is either:
Very high video resolution, or
High FPS, either of which would slow down preprocessing (see the sketch below).
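To illustrate why resolution and FPS matter: preprocessing typically decodes, samples, and resizes every frame, so the cost scales with both. A rough sketch of that kind of loop, with illustrative settings rather than the actual pipeline values:

```python
# Rough sketch of video preprocessing cost: decode each frame, keep every Nth
# frame to hit a target rate, and resize. An 8K/60fps VR file is far more
# expensive than 1080p/30fps. Target size and rate here are assumptions.
import cv2

def preprocess(path, target_fps=10, size=(96, 96)):
    cap = cv2.VideoCapture(path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(src_fps / target_fps))   # keep every Nth frame
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.resize(frame, size))  # resizing dominates at high resolution
        idx += 1
    cap.release()
    return frames

# frames = preprocess("video.mp4")  # placeholder path
```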
Also, the UI did not auto-refresh, so I had to manually click to update the status.
If you tested with a video that is already posted in the forum, please share the link — I would appreciate it.
For standard (non-VR) content, I will attach example scripts in the main post using a few suitable videos from this week’s Top list.
(There is currently an issue with forum uploads, so I will upload it as soon as the problem is resolved)
I’ll upload my test-generated scripts as soon as the upload problem is resolved.
Perhaps this model could even be used in FunGen as a fallback when no objects are detected. I have just uploaded the license file to GitHub under the MIT License, so feel free to use it.
Tried it out; it seems to require a lot more training/tuning on the model side. Currently it’s losing thrusts here and there, random patterns are added when there’s no action on screen (no worries here, I can just delete them), and the output is inverted for certain positions such as missionary (but filmed from below/under the nutsack POV), etc.
That said, the rest of the software/pipeline is well built, so it’ll just require more training. Any thoughts on something like federated learning to allow users to improve the model over time?
You can use the application through pywebview instead of a browser. If you run into issues, version 1.1.0 added support for manually specifying the absolute path.
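For reference, the general pywebview pattern looks like this. This is a minimal sketch with an assumed local URL and window title, not the app's actual startup code.

```python
# Minimal pywebview sketch: wrap a locally served UI in a native window
# instead of opening it in a browser tab. URL and title are assumptions.
import webview

window = webview.create_window("DeepFunGen", "http://127.0.0.1:8000")
webview.start()  # blocks until the window is closed
```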
I’m a dumbass with no clue what I’m doing and figured I could try this out to see if it could script a video I posted in Requests that didn’t get picked up, but when I try to run the “run.bat” file I get this message:
error: Distribution onnxruntime-directml==1.23.0 @ registry+https://pypi.org/simple can’t be installed because it doesn’t have a source distribution or wheel for the current platform
hint: You’re using CPython 3.14 (cp314), but onnxruntime-directml (v1.23.0) only has wheels with the following Python ABI tags: cp310, cp311, cp312, cp313
Already spoke to you in DMs, but just wanted to say, as first-use feedback, I had the following things to highlight:
The app starts in Korean by default; I would suggest changing the default language to English so that it’s more accessible to everyone.
Currently there is no explanation of what the various processing settings do; it would be really helpful if they had hover hints or a more in-depth description somewhere, either in-app or on your GitHub page.
I’m not exactly sure how to make sure it runs through pywebview; I’m just supposed to use the run.bat file, right? I do see in the console that a drag/drop hook is being skipped, not sure if it’s related. The absolute path works for the time being, thank you.
I’ve tested it on a few MMD videos (3D animation videos) and the scripts were quite good. Any chance of having it work on audio, just to get an upgraded version of Funscript Dancer?