🤖 FunGen - AI-Powered Funscript Generation - VR & 2D POV - Join the Discord

Of course, thank you for your hard work. I have uploaded 3 scripts to the robot and hope to make more contributions. However, I found that a script cannot be re-uploaded after it has been modified. :smile:

Get in touch with me through Discord about that, I will look into it.

Guys, what about the hardware requirements?
Been following the topic for a long time - but maybe I missed it.
How many cores, how much memory, and which video card are needed for comfortable use - for example, to script a video within 24 hours?

Hi @flowerstrample,

We offer 3 versions of the model.

The first one is optimized for CUDA (NVIDIA), the second one is optimized for Apple Silicon, and the third one is for CPU inference.

Depending on your NVIDIA GPU, you can expect to process an 8K 60fps video in real time, and reach 100+ fps when processing multiple videos in parallel.

I am running it on an Apple Silicon Mac, and I am getting close to real time on 8K 60fps.

CPU inference will be slower, but AMD GPU owners can expect better results with a ROCm-optimized version of PyTorch.
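As an aside for anyone wiring this up themselves, picking between those three backends is usually a runtime check. This is only a sketch of the typical PyTorch-style preference order (CUDA > MPS > CPU) - the function name and fallback logic are my assumption, not FunGen's actual detection code:

```python
import importlib.util

def pick_backend() -> str:
    """Rough preference order: CUDA (NVIDIA) > MPS (Apple Silicon) > CPU.
    Backend names follow PyTorch conventions; FunGen's real logic may differ."""
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
        if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
            return "mps"
    return "cpu"

print(pick_backend())
```

On a machine without PyTorch installed this simply falls back to `"cpu"`, which matches the thread's advice that CPU-only boxes still work, just slower.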

3 Likes

Yeah. Okay.
I just have a mini-PC with an i5-6500T - theoretically I don’t care about speed - let it crunch for as long as it needs, sitting in the pantry. As for the OS - is it better to use Ubuntu, or can I keep Windows 10?

1 Like

Given that it relies on Python, it should be platform agnostic, so you can keep Win10 :slight_smile:

Curious to know what perf you can get out of a light setup.

You can also go with smaller files (in terms of resolution); it could help a bit, since we do an on-the-fly resize to 640p anyway. Before all the optimization, I was feeding it 1080p or 1440p versions of the videos.
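For reference, an aspect-ratio-preserving downscale to a 640-pixel target can be computed like this. The 640p target comes from the post above; the exact resize rule (fit-to-width, never upscale) is my assumption, not necessarily what FunGen does:

```python
def fit_to_width(width: int, height: int, target: int = 640) -> tuple[int, int]:
    """Pick the on-the-fly downscale size, keeping the aspect ratio.
    Never upscales a frame that is already narrower than the target."""
    if width <= target:
        return width, height
    scale = target / width
    return target, round(height * scale)

print(fit_to_width(1920, 1080))  # 1080p source -> (640, 360)
print(fit_to_width(2560, 1440))  # 1440p source -> (640, 360)
```

Both 1080p and 1440p land on the same 640x360 input, which is why feeding a smaller file mostly saves decode time rather than inference time.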

1 Like

Hello,

Just found this project by accident and want to express my appreciation for this wonderful project!

I have a bit of a research background in CV, specializing in human pose estimation, etc.

So I have a couple of questions, hope u don’t mind :stuck_out_tongue:

  • Why use YOLO? In my past experience, the NMS layer of the model wasn’t great for accuracy or for real-time use.
  • Have you tried DeepSORT? Or maybe even a Transformer-based end-to-end model such as TrackFormer or MOTR? That would get rid of a lot of the side effects of the NMS layer.
  • Any thoughts on adding segmentation to the pipeline? My idea is to segment out the actresses and actors in the scene and then run tracking on specific body parts, which in my experience increases both accuracy and performance.
  • I had a similar personal project in the past, but based more on human-skeleton methods, and I ended up giving up on it because of dataset limitations. Maybe that could also be an idea for the pipeline?
  • On the topic of the dataset: was the model trained on NSFW-specific frames or just generic human-interaction frames? I know there are some egocentric datasets out there that might be useful, such as egoMe. Since I am not sure a suitable dataset exists, I am assuming ur creating it urself - have u thought about synthetically generating a dataset? Or maybe even a model that uses self-supervised learning?

Sorry for such a long reply; you don’t have to answer all of them. I am just curious, purely out of personal interest.

And again thank for all your time, and efforts.

1 Like

You can try WSL (Windows Subsystem for Linux); that’s how I run it on my home PC. The newer versions on Windows 10/11 support some extent of GUI as well.

And big fan of ur work too!

@flowerstrample @illegalwaterdealer you don’t necessarily have to use WSL; I am running it on Windows 11 with an old GTX graphics card and can confirm it runs normally!

2 Likes

Hey @illegalwaterdealer , thanks for your interest and nice message! I really appreciate it, especially coming from someone with CV experience.

I went with YOLO mainly for its speed and the availability of pre-trained models, and because it gave me good results (better and faster than anything OpenCV-related that I had tried earlier in the thread). But once again, I started with basically no experience in CV at all.

So I started with YOLO, moved to YOLO + tracking (I also tried DeepSORT for the tracking step, but was disappointed with its performance on my setup), and am now moving back to YOLO only in our rebuilt-from-scratch approach.
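For readers unfamiliar with the tracking step being discussed: the core idea behind SORT-style trackers (the family DeepSORT builds on) is associating detections across frames by bounding-box overlap. A toy greedy IoU matcher - purely illustrative, not FunGen's actual tracker, and real trackers add motion prediction and appearance features on top:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedy matching: each existing track claims its best-overlapping
    unclaimed detection, if the overlap beats the threshold."""
    matches = {}
    used = set()
    for tid, tbox in tracks.items():
        best, best_iou = None, threshold
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Greedy IoU matching is cheap per frame, which is one reason a lightweight tracker can beat a heavier appearance-based one on throughput, at the cost of more ID switches.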

I experimented with basic segmentation, but was not happy with both the erratic results and the impact on performance.

I also added a YOLO pose model at some point, to deal with grinding or leaning in cowgirl when there is no noticeable movement between body parts, but it was either highly unreliable for that use case at nano size, or computationally counterproductive at large size.

The model was trained on NSFW frames, indeed, mainly built out of my video library, before getting some help from @StillHorizon to incorporate more JAVR.

Sorry I might not have answered all your questions properly, but we would be really happy to have you in the Discord if you want to challenge the approach and discuss potential areas of enhancement with @spatialflux and me - we would really enjoy that.

2 Likes

Hey guys, I currently have a prototype for a new device that I could relatively easily expand to include multiple simultaneous independent strokers e.g. to simulate hand and mouth at different positions at the same time.

Could your system generate a funscript file for each contact type occurring in the video (left hand, right hand, mouth, etc.), with not just the usual bottom point of the contact but also the physical length of the contact (in the same units as the stroke position)? I guess that would need something added to the funscript format for the length.

That would need some adaptations to output, but it should be doable, since the current version already tracks the interacting body parts independently.

K, great, thanks! Will get back to you when I have the prototype ready.

1 Like

:star2: Exciting Bot Update! :star2:

Hey everyone! :wave:

While @spatialflux and I are working hard (still in the little free time we have outside of work) on the new version of the program, I took a bit of time for the bot today, and we have news for you! :tada:
This update is packed with features designed to make your experience smoother, faster, and way more fun. Let’s dive in! :rocket:

What’s New?

:one: Smart Scene Suggestions
When you upload a .funscript, the bot will now automatically search the internet using relevant keywords and suggest the best studio scene link for you! :mag::sparkles: Say goodbye to manual searching and hello to convenience!

:two: Full Control Over Suggestions
Not happy with the suggestion? No problem! You can decline the suggestion and provide your own link instead. :paperclip: It’s all about giving you the flexibility you need.

:three: Link + Funscript Combo
Already have the perfect link? Just send it directly in the same message as your .funscript file, and the bot will handle the rest! :rocket:

:four: Mass Uploads Are Here!
Got multiple .funscripts to upload? Now you can send them all at once! The bot will process them one by one: suggest a link, let you validate or decline, and boom—done! :white_check_mark: This is a game-changer for bulk uploads.

:five: ThePornDB Integration
Thanks to the awesome ThePornDB API, to which the bot is now plugged, it will soon be able to retrieve even more accurate and relevant data for your funscripts! :clapper::boom: Expect better suggestions and a smoother experience overall.

What’s Coming Next?

  • Search funscripts by actress :dancer::eyes:
  • And much more!

Stay tuned, this bot is only getting better! :wink:

I can’t wait to hear what you think! Drop your feedback, suggestions…
…or just say hi to the bot :robot::wave:

4 Likes

First off, I have zero experience with GitHub and git, zero experience with Python and Miniconda, and very limited OFS scripting experience. I found this software intriguing, so I decided to give it a go.

I joined the Discord, found and downloaded all the required files, and got everything installed without too much hassle. I had a few file path issues, and figured out how to correct those in the config files.
With a little searching through the general help and reading a bunch of other people’s questions and problem solving, I was able to get Conda to install FunGen. After a few more rounds of troubleshooting I was finally able to get the GUI to open. I had a couple more issues with ffmpeg not cooperating - the common input/output error. I found the suggestion about changing a line in the constants.py file, and that worked. I now have a VR video live view running and doing its thing. Running on the CPU (AMD 7900 XT graphics card, 7800X3D at a pretty steady 100%, 67.7-70 °C).

Overall this looks amazing and I can’t wait to see the result (about an hour left for the analyzing part, I think; 8.8 GB file, about 40 minutes long). Will post an update with my impressions of the generated script of course, but I applaud your work on this. I’m going to need a bigger XXX hard drive :grin:

1 Like

Please review my script, which is based on the script created by Funscript AI Generation

1 Like

After some “testing” I have discovered that the AI script generator performs extremely well with VR video. It’s not 100% perfect, and certainly no match for some of the talented scripters we have here in our community, but still more than acceptable for sure. It seemed to add a few strokes when there wasn’t any action happening, but I accept those as filler strokes; I’m OK with that. It followed riding scenes extremely well. Even a handjob/blowjob didn’t really confuse it much. At this early stage of the software, the authors should be very happy with their results. As it matures, this is going to be an awesome thing for the scene.

Now, I also ran a shorter 2D POV vid for testing. It didn’t do nearly as well with that, and I’m going to have to make some manual adjustments to that script, BUT I’m certain it’s going to cut the time required to script that vid WAY down.

I’m definitely excited to see how this project evolves as they update it and the software learns more. If they can get this down to a much simpler EXE install, that’s going to help a lot of users try it out. The install isn’t the most straightforward thing, and I learned almost as much just getting it running. Still, I had an easier time getting this to produce a pretty good script than I have with the motion-tracking software in OFS.

Looking forward to working with this a lot more.

3 Likes

Ran it on a file with a really badly made, unusable script, and wow, FAG easily beat it. The output is really good. I also very much like the fact that it is for non-paid scripts only. Please keep it public, don’t sell it to some studio. It’s really good already, as it’s very much in sync and saves a lot of time. The install manual could be a bit more specific at some points, but hell, it’s installable. I needed some help on Discord - thanks @StillHorizon!

Scripters should install and try it.

5 Likes

@Mruno69o @Alexus @roa

A massive thanks to you all for your recent comments!

Your feedback truly means a lot, it made our day! :heart: :star:

@spatialflux and I are very grateful. :rocket: :man_astronaut: :man_astronaut:

And buckle up, because the next big update is going to be HUGE in both performance and quality!

We’re taking the time to do it right, this is a full overhaul of the codebase.

Stay tuned! :fire:

8 Likes

I also hope that @k00gar and @spatialflux will make the installation process more user-friendly or provide a detailed step-by-step manual. BUT we must understand that they are running this project purely on their enthusiasm and investing a lot of time to make it even better. So, I believe that once they reach a certain level and take a little break, they will make the installation more accessible to regular users, which will bring in more new script creators.

@k00gar @spatialflux Once again, thank you for your efforts!

2 Likes