Funscript AI Generation - VR (& 2D POV now?) - Join the Discord :)

Ok, my head hurts already… :exploding_head: :exploding_head: :rofl:

OrgyVR -  Linda Lan, Lulu Chu, Ember Snow, Nicole Doshi - The Pussycat Girls

3 Likes

Lol, maybe you should focus on BG to begin with.

2 Likes

lol, you are so right, I actually lost myself in this video, no wonder why :rofl: :rofl: :rofl:

3 Likes

So, since I could not focus on anything and needed some rest from this, I played with something else…

VRCONK_Kiara Cole_game_of_thrones_daenerys_targaryen_a_porn_parody_8K_180x180_3dh

2 quick notes:

  • This is quick-and-dirty, cheap passthrough AI (needs refinement)
  • I think I may have ADHD
3 Likes

Well, two quick updates…

1. Regarding the OBB model training.

Epoch 92 / 200 :

  • I still get a lot of flickering on the orientation (check the ass in the GIF, it looks like a Rubik’s cube :sweat_smile:)
  • I am pretty sure it is related to my training set (see below why), but I still reckon there is something to leverage here to get at least a second axis, since the penis box seems to be properly oriented.
  • Why? Because it was done in a quick’n’dirty PoC way: I reused my initial dataset with classic boxes and applied a random -90°/+90° rotation to both the images and the matching annotations (see the sketch below). However, my dataset already contained classes (faces, breasts, penis, etc.) that were not upright to begin with.
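
For the curious, the augmentation amounts to something like the minimal sketch below (function and variable names are mine, not from the repo): rotate the image by a random angle and push each axis-aligned box’s corners through the same affine transform to obtain oriented-box corners.

```python
import cv2
import numpy as np

def rotate_sample(image, boxes_xyxy, max_deg=90):
    """Rotate an image and derive OBB corners from its axis-aligned boxes."""
    h, w = image.shape[:2]
    angle = np.random.uniform(-max_deg, max_deg)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))
    obbs = []
    for x1, y1, x2, y2 in boxes_xyxy:
        corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=np.float32)
        ones = np.ones((4, 1), dtype=np.float32)
        obbs.append(np.hstack([corners, ones]) @ M.T)  # 4 rotated corners per box
    return rotated, obbs
```

The catch, as noted above: if an object was already tilted in the source image, its axis-aligned box produces corners that do not hug the object after rotation, which would explain the orientation flicker.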

OrgyVR -  Linda Lan, Lulu Chu, Ember Snow, Nicole Doshi - The Pussycat Girls 2

2. Regarding the Python VR funscript generation helper itself.

As I will have less time to focus on it in the coming days (work, family, etc.), it is unlikely that I will make a breakthrough on my own.

Therefore, I decided to open access to the GitHub repository in the next 12 hours or so (either upon request, or by just making it public; I am still wondering what to do, since it is adult-content related), hoping this will foster a fruitful collaboration and somehow end up being useful to the community.

I will try to add a couple of features by then and commit them before opening the repo.

Miscellaneous

A Ko-fi or Patreon might be opened for whoever feels like giving a dime for a refreshing beer.
A Patreon could come with some added features (like further fine-tuned models, specific support, draft funscript generation; no clue yet what would be acceptable or wished for), but still with a free tier, as promised.

And a basic Discord channel will be opened for discussion.

If this gets any traction, development will carry on.

Thank you all :slight_smile:

3 Likes

Finally… As promised, and earlier than I initially envisioned, time has come :watch:

I just committed this version in a state that allows it to be tested and enhanced together.

:tada: It’s time to share the repository and collaborate further! :tada:

It took me some time to do some cleaning, fixing and tuning, and I also created a plain simple interface this afternoon to make it easier to use…

Load your video file, and optionally a reference funscript if you want some benchmark information at the end.

If the video is large (in resolution), you will be offered a downsampling that the tool can do for you, using ffmpeg.
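
For reference, a minimal sketch of what such an ffmpeg downscale can look like (the exact filter and target size the tool uses may differ):

```python
import subprocess

def downscale(src: str, dst: str) -> None:
    """Halve the resolution (keeping even dimensions) and copy the audio."""
    subprocess.run(
        ["ffmpeg", "-i", src, "-vf", "scale=iw/2:-2", "-c:a", "copy", dst],
        check=True,
    )
```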

The YOLO detections will be saved to a JSON file, so they can be reused.
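
Something along these lines, as a sketch (field names are illustrative, not the repo’s actual schema):

```python
import json

def save_detections(path, detections):
    # e.g. {frame_idx: [[cls, conf, x, y, w, h], ...]}
    with open(path, "w") as f:
        json.dump(detections, f)

def load_detections(path):
    # JSON keys are strings; restore the integer frame indices
    with open(path) as f:
        return {int(k): v for k, v in json.load(f).items()}
```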

Debug mode will save the tracking to a JSON file; you can then click the Debug button with a specific start-frame value, and the video and all debug information will be played and displayed from that frame onwards.

Live Display Mode will display the video during the YOLO inference. It is more for testing: it uses a lot of resources and prints a lot of debug info to the console, so I advise you not to use it.


:man_technologist: Repository: :warning: Tested on Apple Silicon only :warning:

There still is soooo much to do :slight_smile:


:dancer: Yolo models
The models are available in the Discord below


:love_letter: Discord: where you will get support, love, and the yolo models…


:beer: Beer Time

And last, but not least, I opened those two accounts if anyone feels like it:

Buy Me a Coffee at ko-fi.com


:superhero: Patreon

I opened a Patreon, though I still need to think about what a nice offer could be: maybe fine-tuned models for specific needs, generating funscripts upon request, pre-releases… Open to suggestions… not a priority.

https://www.patreon.com/c/k00gar


Ok, now, I really need a beer. Off to the next door supermarket to grab a couple of fresh ones… :beers:

Will be back online shortly…

@jcwicked @ellequadro @lsp888 fyi

12 Likes

Heh, fun. I am currently doing my own passthrough version, or rather some clips of that one (along with some others done over the past weeks), using After Effects with rotoscoping. Pretty easy, but power hungry.

2 Likes

Thanks for the feedback @Shayuki, my passthrough approach was raw, dirty, and lame hahaha.

The quality of your PT segmentation is awesome, congrats! :medal_sports:

This project looks awesome

Sounds like you and @geogan should be teaming up and making a full 6-dof AI tracking system for 2d and 3d video :slight_smile:

Oh, thank you, I really appreciate your comment!

@geogan 's work is awesome, but I have no clue if we could apply his algorithm to VR video as it mainly focuses (if I am not mistaken) on BJ FPV non VR vid (while VR does not only focus on BJ, and presents distortion in the image).

Anyway, very nice comment, and happy to work with whomever might feel like it!

Though, now that you mention 6dof, just got a new idea.

What if I ran detection and tracking on both the left and right frames and then rebuilt a 3D position :exploding_head: :exploding_head: :exploding_head:

My head hurts already :rofl:

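For reference, the underlying geometry is the classic stereo relation. A back-of-the-envelope sketch, assuming an idealized pinhole model (real fisheye VR frames would need undistortion first), with focal_px and baseline_m as assumed calibration values:

```python
def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float = 0.065) -> float:
    """Z = f * B / d, with the disparity d in pixels."""
    disparity = x_left - x_right  # horizontal shift of the same point
    if disparity <= 0:
        return float("inf")  # at or beyond the horizon
    return focal_px * baseline_m / disparity
```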

4 am here, I need some rest, will think this through haha, thank you for your message!

Added this testing to the to-do list; we’ll see if I can make anything out of it :grinning:

1 Like

Niceee! Cool to see you releasing the project. We need more community collaboration like this.

2 Likes

Hello k00gar, first of all thank you for the work done in developing an automatic script generator program in Python. I am a novice and I would like to try the program with a video, but for the moment all I get is a black window (like a Windows console) that instantly opens and closes. I can’t start your program, even though I have the latest version of Python installed. If someone could help me, thank you. How do I do it?

Hi there,

Thank you for your message.

I should be able to help you troubleshoot in 2 to 3 hours, in the Discord mentioned a few posts above.

Need to go afk for now.

Btw, I was thinking: you could probably use alpha passthrough videos, as these already have a mask. Then take a number of common background plates without people in them and place those backgrounds behind the masked frames at random. You could fairly easily generate a huge amount of training data this way.
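
A minimal sketch of that compositing step, assuming BGRA frames extracted from an alpha passthrough video (names are illustrative):

```python
import random
import cv2
import numpy as np

def composite(frame_bgra, backgrounds):
    """Blend a masked frame over a random background plate.

    The alpha channel doubles as the ground-truth segmentation mask.
    """
    h, w = frame_bgra.shape[:2]
    bg = cv2.resize(random.choice(backgrounds), (w, h)).astype(np.float32)
    alpha = frame_bgra[:, :, 3:4].astype(np.float32) / 255.0
    fg = frame_bgra[:, :, :3].astype(np.float32)
    return (fg * alpha + bg * (1.0 - alpha)).astype(np.uint8)
```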

1 Like

@fenderwq : thank you for the coffee! Really appreciated :slight_smile:

Also:

You and I need to talk, haha, I like that :slight_smile:

2 Likes

And a quick shot on an AR video below…

Heatmap:

Report:

Extract:

More details in the discord channel :slight_smile:

Well done k00gar! This is very interesting.
One question though, do you think this would also be usable for non-VR videos?

Hi there, and thank you for your message!

That’s a tricky question as of now, as the whole game here for VR POV video consists of detecting and tracking specific body parts frame by frame.

I will try to detail my answer below, but to summarize: yes and no (with a bigger weight on the no). And pardon the long answer…

First, in 2D:

  1. Body parts are not always visible, depending on the camera angle (doggystyle with the woman facing the camera and the man behind her, for instance).
  2. Body parts are not always aligned on a vertical axis like in the POV VR this solution was initially designed for (doggystyle filmed from a side view, for example).

But there could be a difference between regular 2D and POV 2D clips…

Please check below for results on regular 2D and POV 2D.

Regular 2D:

Let me illustrate that with the processing of an “oldie” stored for posterity.

Spoiler alert: it’s broken, and seemingly so even on the “scripted parts”. I would need to troubleshoot, but unfortunately this is not my top priority right now.

The solution could partly consist in the following approach:

  • Apply a dynamic rotation to the frame to get a vertical alignment of the “penetration” / “interaction with the penis”
  • Train a YOLO OBB model like the one showcased here, and work with the Euclidean distance instead of the y-axis differential (I initially worked with distance, but finally retained the y-axis approach for POV VR); see the sketch below
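
To make the distinction concrete, a minimal sketch of the two signals, with box centers given as (cx, cy) tuples in pixels (names are mine, not the repo’s):

```python
import math

def y_differential(penis_center, contact_center):
    # Signal currently used for POV VR: vertical offset only
    return contact_center[1] - penis_center[1]

def euclidean_distance(penis_center, contact_center):
    # Orientation-agnostic alternative for arbitrary camera angles
    return math.hypot(contact_center[0] - penis_center[0],
                      contact_center[1] - penis_center[1])
```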

POV 2D:

Here an example with Mr Lucky POV / Valerica Steele:

So, in theory it could somehow work for POV scenes at least, but the code is not fine-tuned for 2D (yet), so the algorithm would sometimes behave very weirdly.

Also, in VR, the female is doing most of the job; the stunt cock moves less often than it would in 2D, and from a quick check here, I can see I have issues related to its fast movements.

Why is that? Because the “locked penis box” is not adjusting quickly enough, due to an anti-jittering system I set up which is too strong in this case. I need to loosen it up so we can detect more movement.

Last but not least, I need to scale the moving-average logic with the frame rate, as this clip is 30 fps while most VR videos are 60 fps.
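
A minimal sketch of that fix, assuming the smoothing window is counted in frames and was tuned on 60 fps material (the base values are illustrative):

```python
def smoothing_window(fps: float, base_window: int = 10, base_fps: float = 60.0) -> int:
    # Shrink the window for lower frame rates so a 30 fps clip is not
    # smoothed twice as aggressively (in wall-clock time) as a 60 fps one.
    return max(1, round(base_window * fps / base_fps))
```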

Not sure if it would help, but here is how I did it using Miniconda, which lets you isolate Python applications (i.e. use different versions of a library for each application):

  1. Install miniconda
  2. Start a miniconda command prompt
  3. Execute the following (assuming you have already cloned VR-Funscript-AI-Generator and copied the model into the models folder):
conda create -n VRFunAIGen python=3.11
conda activate VRFunAIGen
pip install numpy opencv-python tqdm ultralytics scipy matplotlib simplification
pip uninstall torch torchvision torchaudio
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
cd <VR-Funscript-AI-Generator folder>
python FSGenerator.py

While executing, you’ll need to answer “yes” a few times. The “pip uninstall / pip3 install” lines replace the CPU version of torch with a CUDA-enabled GPU version (you might need to install NVIDIA’s CUDA toolkit for it to work, I’m not sure).

3 Likes