DeepFunGen 1.2.0 Release (Deep Learning–Based Funscript Generation)

Download

:backhand_index_pointing_right: https://github.com/oddish-s/DeepFunGen/releases/tag/1.2.0

How to Use

Please refer to the GitHub README:
:link: https://github.com/oddish-s/DeepFunGen


Project Overview

Most existing AI-based script generation systems rely on object recognition, which limits their versatility.
DeepFunGen instead uses deep learning to map video frame sequences directly to script position values.

Although the model was trained only on live-action videos, it also generalizes to animated videos to some extent.
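
To make this concrete, here is a minimal PyTorch sketch of this kind of frame-sequence-to-position model: a per-frame convolutional encoder feeding a temporal convolutional network (TCN) that regresses one position value per frame. All layer names and sizes below are illustrative assumptions, not the actual conv_tcn architecture.

```python
# Illustrative sketch only; not the real conv_tcn_56 / vr_conv_tcn_8 model.
import torch
import torch.nn as nn

class FrameToPosition(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Per-frame encoder: RGB frame -> feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal stack: dilated 1-D convolutions along the frame axis
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(feat_dim, 1, kernel_size=1),  # one value per frame
        )

    def forward(self, frames):                        # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))    # (B*T, feat_dim)
        feats = feats.view(b, t, -1).transpose(1, 2)  # (B, feat_dim, T)
        return torch.sigmoid(self.tcn(feats)).squeeze(1)  # (B, T) in [0, 1]
```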


Update Notes

v1.2.0 (2025-11-05)

  • VR Model Added: vr_conv_tcn_8
    • You can enable it by checking the VR checkbox located to the right of the model selection dropdown.
  • UI Updates
    • Default language set to English
    • Added brief usage descriptions (to be replaced with a more detailed guide later)
    • Added estimated time to completion indicator

v1.1.0 (2025-11-02)

  • Pipeline Improvements
    • Added automatic Y-axis normalization based on the inference range (see the sketch after this list).
    • Improved pre-processing pipeline for faster and more stable performance.
    • Increased frame inference output to 10 frames, resulting in more stable predictions.
  • UI Enhancements
    • Improved progress display.
    • Enabled auto-refresh.
    • Added absolute path display for browser-based users.
  • uv Support
    • uv is now shipped as a bundled executable.
  • CLI Mode
    • Added command-line interface mode for running inference without the desktop UI.
    • Automatically loads saved configuration (model path, post-processing options).
    • Execution history is still recorded and visible in the UI.
  • VR Videos
    • We’re still struggling to train on VR videos.
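
My reading of the Y-axis normalization bullet above, as a minimal numpy sketch: raw model outputs are rescaled so that the range actually observed during inference is stretched to the full 0–100 funscript range. The percentile choice is an assumption, not the project’s exact code.

```python
import numpy as np

def normalize_positions(raw, lo_pct=1, hi_pct=99):
    """Stretch raw predictions so the observed inference range fills 0-100.

    Percentiles instead of min/max keep the rescaling robust to outlier
    frames. Illustrative sketch only, not DeepFunGen's actual code.
    """
    lo, hi = np.percentile(raw, [lo_pct, hi_pct])
    if hi - lo < 1e-6:                      # flat signal: nothing to stretch
        return np.full_like(raw, 50.0)
    return np.clip((raw - lo) / (hi - lo) * 100.0, 0, 100)

# Example: outputs that only span ~0.3-0.7 get expanded to the full range.
raw = np.array([0.31, 0.55, 0.68, 0.42, 0.30])
print(normalize_positions(raw).round(1))
```
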
Previous Updates

v1.0.0 (2025-10-26)

  • Official release
  • Added improved model: conv_tcn_56
  • Switched distribution to Python (uv)

v0.2.0

  • Added improved model: conv_tcn_49
  • Fixed issue where peak polarity was reversed during training
  • Heuristically corrected peak drift artifacts
  • Improved accuracy with noise reduction using FFT (Fast Fourier Transform); a sketch of the idea follows this list
  • Added video preview feature to the program
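
In case the FFT bullet above is unclear, here is a minimal numpy sketch of FFT-based low-pass smoothing of a position signal. The cutoff frequency is an assumed value; this is not the project’s actual filter.

```python
import numpy as np

def fft_lowpass(signal, fps, cutoff_hz=4.0):
    """Zero out frequency components above cutoff_hz and reconstruct.

    Stroke motion lives at a few Hz, so discarding higher frequencies
    removes per-frame jitter. Illustrative sketch with an assumed cutoff.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum[freqs > cutoff_hz] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a noisy 2 Hz stroke sampled at 30 fps comes out smooth.
t = np.arange(300) / 30.0
noisy = 50 + 40 * np.sin(2 * np.pi * 2.0 * t) + np.random.normal(0, 5, t.size)
smooth = fft_lowpass(noisy, fps=30)
```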

v0.1.0

  • Initial test release

Known Issues

  • In scenes without insertion, meaningless strokes are generated because idle segments are not handled yet
    → Need to test whether training on idle segments can fix this (a possible post-processing workaround is sketched after this list)
  • Lower recognition accuracy for animation compared to live-action
    → Need to verify whether a dedicated model trained on animation data can improve results
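
For the first issue, a post-processing workaround (not something DeepFunGen currently does; the window size and threshold below are assumptions) could flatten segments where the predicted motion stays within a small range:

```python
import numpy as np

def suppress_idle(positions, fps, window_s=2.0, min_amplitude=10.0):
    """Hold a constant position in windows whose stroke range is tiny.

    If predictions barely move within a window, treat the segment as
    idle instead of emitting noise strokes. Thresholds are assumed.
    """
    positions = positions.astype(float)
    w = max(1, int(window_s * fps))
    for start in range(0, len(positions), w):
        seg = positions[start:start + w]
        if seg.max() - seg.min() < min_amplitude:
            positions[start:start + w] = seg.mean()
    return positions
```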

Test Videos

I tested a few videos chosen from this week’s Top list.
I uploaded the results only to check the model’s performance.
If the original creators feel uncomfortable about this, I sincerely apologize.
Please let me know and I will remove the scripts immediately.

1 HEAVEN _ PMV [Arckom]_68f31d66759b8455de739d1c

  • Video length: 3:46 (472 MB)
  • Processing time: 3:42

HEAVEN _ PMV [Arckom]_68f31d66759b8455de739d1c.funscript (79.4 KB)

2 Fast and Slow until you can’t Hold it Anymore - Stop and Go Teasing Handjob

  • Video length: 13:12 (278 MB)
  • Processing time: 5:23

Fast and Slow until you can’t Hold it Anymore - Stop and go Teasing Handjob_TheMagicMuffin_1080p.funscript (132.9 KB)

3 Meiilyn (yuumeilyn) Cosplay Nicole Demara - Zenless Zone Zero

  • Video length: 6:40 (785 MB)
  • Processing time: 8:38

Meiilyn (yuumeilyn) Cosplay Nicole Demara - Zenless Zone Zero [Crucial].funscript (87.7 KB)

4 Octokuro VR: Moo Means Fuck Me In The Ass, Stepbro

  • Video length: 42:35 (2.75 GB)
  • Processing time: 40:10

Octokuro - Moo Means In The Ass, Stepbro_2160_180_LR.funscript (311.3 KB)


Test Machine Specs

  • CPU: i5-12500
  • GPU: RTX 3060 Ti
  • Provider: DmlExecutionProvider (DirectML)
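
For reference, this is how the DirectML provider is selected in onnxruntime (requires the onnxruntime-directml package); the model path here is a placeholder, not the shipped file name:

```python
import onnxruntime as ort

# Prefer DirectML, fall back to CPU if it is unavailable.
# "model.onnx" is a placeholder path.
session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which provider was actually chosen
```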

What’s the speed on this supposed to be? I’m testing a 13-minute VR video, and after 20 minutes it’s at 25%. I have an RTX 4070 with 12 GB of VRAM, 64 GB of RAM, and a good CPU.

Edit: It took over an hour and the result was horrible. Maybe post some examples you have done, thanks.

I wasn’t able to test VR videos properly.
It seems the model is trying to process too large an area of each frame, which likely caused incorrect output.
I will run additional VR tests later and adjust the pipeline to handle them correctly.
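
For context, a typical way to shrink the processed area for side-by-side VR video is to keep only one eye and its central region before inference. A hedged numpy sketch; the crop fraction is an assumed value, not what DeepFunGen does today:

```python
import numpy as np

def crop_vr_center(frame, crop_frac=0.5):
    """Keep the left eye of a side-by-side VR frame, then its center.

    VR frames are huge and most of the sphere is irrelevant to the
    action, so cropping before inference cuts both compute and noise.
    The crop fraction is an assumption.
    """
    h, w = frame.shape[:2]
    left_eye = frame[:, : w // 2]          # side-by-side layout: left half
    ch, cw = int(h * crop_frac), int((w // 2) * crop_frac)
    y0, x0 = (h - ch) // 2, (w // 2 - cw) // 2
    return left_eye[y0 : y0 + ch, x0 : x0 + cw]
```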

The long processing time is also unusual. My guess is:

  • Very high video resolution, or
  • High FPS, which slowed down preprocessing

Also, the UI did not auto-refresh, so I had to manually click to update the status.

If you tested with a video that is already posted in the forum, please share the link — I would appreciate it.

For standard (non-VR) content, I will attach example scripts in the main post using a few suitable videos from this week’s Top list.
(There is currently an issue with forum uploads, so I will upload them as soon as the problem is resolved.)

How well does this work for non-VR scenes? Can you maybe show some comparisons between human-created scripts and the AI version?

Also, how does it compare to the FunGen project? You should reach out to @k00gar, maybe you can join forces?


I’ll upload my test-generated script as soon as the upload problem is resolved.

Perhaps this model could even be used in FunGen as a fallback when no objects are detected. I have just uploaded the license file to GitHub under the MIT License, so feel free to use it.


My bad, I wasn’t aware this wasn’t for VR; I might have missed that being mentioned somewhere.

Tried it out; it seems to require a lot more training/tuning on the model side. Currently it loses thrusts here and there, random patterns are added when there is no action on screen (no worries here, I can just delete them), strokes are inverted for certain positions such as missionary (but filmed from below/under the nutsack POV), etc.

That said, the rest of the software/pipeline is well built, so it’ll just require more training. Any thoughts on something like federated learning to allow users to improve the model over time?

I wanted to give it a go, but whenever I tried adding a video I would get this error:


Drag and drop didn’t work either.

You can use the application through pywebview instead of a browser. In case of issues, we added support for manually specifying the absolute path in version 1.1.0.
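
For anyone curious what the pywebview path looks like, a minimal sketch: the desktop window simply wraps the locally served UI, which is why native file dialogs work there. The URL and port below are assumptions, not necessarily what run.bat uses.

```python
import webview

# Wrap the locally served UI in a native window so OS file dialogs and
# drag-and-drop hooks are available. URL/port are assumed values.
window = webview.create_window("DeepFunGen", "http://127.0.0.1:8000")
webview.start()
```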

I’m a dumbass with no clue what I’m doing and figured I could try this out to see if it could script a video I posted in requests that didn’t get picked up, but when I try to run the “run.bat” file I get this message:

error: Distribution onnxruntime-directml==1.23.0 @ registry+https://pypi.org/simple can’t be installed because it doesn’t have a source distribution or wheel for the current platform

hint: You’re using CPython 3.14 (cp314), but onnxruntime-directml (v1.23.0) only has wheels with the following Python ABI tags: cp310, cp311, cp312, cp313

My mistake: the Python version needed to be pinned to 3.12 in pyproject.toml.
Thanks for reporting this.

requires-python = "==3.12.*"

Replaced the existing line in the file with the one you supplied, and it seems to be working, thanks!

Already spoke to you in DMs, but just wanted to say, as first-use feedback, I had the following things to highlight:

  • The app starts in Korean by default; I would suggest changing the default language to English so that it’s more accessible to everyone
  • Currently there is no explanation of what the various processing settings do; it would be really helpful if they had hover hints or a more in-depth description somewhere, either in-app or on your GitHub page

We’ve just added VR-support models. Thanks for the report!


I’m not exactly sure how to make sure it runs through pywebview. I’m just supposed to use the run.bat file, right? I do see a drag/drop hook being skipped in the console; not sure if it’s related. The absolute path works for the time being, thank you.


I’ve tested it on a few MMD videos (3D animation), and the scripts were quite good. Any chance of getting it to work on audio, just to get an upgraded version of Funscript Dancer?