WIP Script Tester Wanted (JAV)

Hi everyone,
I am experimenting with some script-creation methods in Python and am looking for someone who can give an unbiased opinion on different variations. I am currently working on some JAV scenes, and in time I would like to cover multi-axis with as much auto-generation as possible while still keeping crisp, hand-tuned sync where it counts.

I am only producing a few minutes at a time, so it is a bit too early to release for general consumption.
If you are a potential (and patient) beta tester, please get in touch. Ideally you will be able to spot sections that are out of sync, or where the movement is too little or too much, and help dial in the process over time.

I am working from an OSR2 which I received as an unexpected bonus recently … what a strange new world this is.

Thank you in advance for your interest.


Some follow-up:
Below are a few screenshots from the various Python processes I am currently tinkering with (mostly OpenCV (cv2), dlib, etc.).
Samples from video https://spankbang.com/4dire/video/serina+hayakawa+1




The Python output is a full runnable script which can be edited/run in OpenFunscripter or MultiFunPlayer.
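For context, a funscript is just a JSON file using the common `at` (timestamp in milliseconds) / `pos` (0–100 position) convention, so writing one from generated points is straightforward. A minimal sketch (the helper name is mine, not from the actual pipeline):

```python
import json

def write_funscript(actions, path):
    """Write a list of (time_ms, pos) pairs as a .funscript JSON file.

    Funscript convention: "at" is the timestamp in milliseconds,
    "pos" is the stroke position from 0 (bottom) to 100 (top).
    """
    script = {
        "version": "1.0",
        "inverted": False,
        "range": 100,
        "actions": [{"at": int(t), "pos": int(p)} for t, p in actions],
    }
    with open(path, "w") as f:
        json.dump(script, f)

# Example: a simple two-stroke pattern
write_funscript([(0, 10), (250, 90), (500, 10), (750, 90)], "demo.funscript")
```

The resulting file loads directly into OFS or MultiFunPlayer for inspection and editing.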

Currently working mostly on up-down, but with a view to automating management of pitch and roll to fill in the quiet parts. I am also keen to factor in some growth ranges so the experience builds gradually instead of going full bore in the first ten seconds. From reading the forums, it seems some adjustment will also be needed to avoid over-tracking small movements.
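The "growth range" idea could be as simple as a time-based envelope that scales stroke amplitude around the midpoint, ramping from a fraction of full range up to 100% over the opening stretch. A hypothetical sketch (the function and parameters are illustrative, not the actual code):

```python
def apply_growth(actions, ramp_ms=60_000, floor=0.3):
    """Scale stroke amplitude around the midpoint (pos 50) so the range
    grows linearly from `floor` of full range at t=0 to full range
    at t=ramp_ms. `actions` is a list of (time_ms, pos) pairs."""
    out = []
    for t, pos in actions:
        scale = floor + (1.0 - floor) * min(t / ramp_ms, 1.0)
        out.append((t, round(50 + (pos - 50) * scale)))
    return out
```

Applied as a post-processing pass, this leaves the timing untouched and only softens the early amplitude.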

May peace prevail on Earth


This is amazing! Excited and looking forward to the potential JAV boom. I've got a lot of 2D JAV vids I'd want to script. I would want to test it, but I'm still trying to learn how to use OFS. Been using JFS this entire time.

The “Image Difference” image (above) contains more white pixels when the frame reflects a greater amount of movement. Counting the white-pixel percentage of each frame, followed by several rounds of simplification to remove points (frame numbers) which are similar to the previous frame, yields the following:


Note that low points on the chart represent moments of relatively little motion (not a low funscript position), and it does not identify the direction of movement. With additional logic, however, this might help establish a usable rhythm. It's also very easy to generate a lot of points quickly, as it isn't relying on AI libraries to make smart decisions.
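The white-pixel counting and simplification described above can be sketched roughly as follows. This uses NumPy for the differencing step; on real decoded video frames, `cv2.absdiff` plus `cv2.threshold` would do the same job. The function names and thresholds are mine, for illustration only:

```python
import numpy as np

def motion_series(frames, threshold=25):
    """Percentage of 'white' (changed) pixels per consecutive frame pair.

    frames: iterable of greyscale frames as 2-D uint8 arrays.
    A pixel counts as changed when its absolute difference from the
    previous frame exceeds `threshold`.
    """
    series = []
    prev = None
    for frame in frames:
        if prev is not None:
            diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
            series.append(100.0 * np.mean(diff > threshold))
        prev = frame
    return series

def simplify(series, min_change=5.0):
    """One simplification round: drop points that barely differ
    from the previously kept point, keeping (index, value) pairs."""
    kept = []
    for i, v in enumerate(series):
        if not kept or abs(v - kept[-1][1]) >= min_change:
            kept.append((i, v))
    return kept
```

As noted, this gives an activity rhythm, not positions or direction, so it would feed a later stage rather than produce a script by itself.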

This might also be useful for having pitch and roll react to action on screen.

Hi there! I'm also an ex-scripter and stopped scripting some time ago due to burnout. Would love to see if I could help you out with this automation process. Do you mind if I PM you via Discord? Do send me a PM with your ID! :slight_smile:

SpankBang.com_serina+hayakawa_240p.funscript (59.4 KB)

A version based on image difference with various developmental processes applied. Far from perfect, but generated without any manual intervention.

Some rudimentary audio (volume) analysis (thanks to moviepy and librosa).
Volume peaks do appear to align with the peaks in the action (predictable genre, I guess).
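A dependency-free sketch of that volume analysis is below. In the real pipeline `librosa.feature.rms` computes the frame-wise RMS envelope from the decoded waveform; this reimplements the same idea with plain NumPy, and the peak-picking rule (above 1.5× the mean) is an arbitrary assumption of mine:

```python
import numpy as np

def rms_envelope(samples, frame_len=2048, hop=512):
    """Root-mean-square volume per frame (what librosa.feature.rms
    computes from a decoded mono waveform)."""
    return np.array([
        np.sqrt(np.mean(samples[i:i + frame_len] ** 2))
        for i in range(0, max(len(samples) - frame_len, 1), hop)
    ])

def volume_peaks(env, ratio=1.5):
    """Frame indices where the envelope rises above ratio * its mean:
    candidate 'loud' moments to compare against the action peaks."""
    return [i for i, v in enumerate(env) if v > ratio * env.mean()]
```

Cross-referencing those loud frames against the image-difference rhythm is one cheap way to sanity-check the alignment.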


Apologies in advance: my scripts are likely to be complete rubbish for a good long while…


Tonight’s automated version:

SpankBang.com_serina+hayakawa_240p.funscript (70.5 KB)

SpankBang.com_serina+hayakawa_240p.funscript (138.8 KB)

Testers please
Added auto-generated roll script
SpankBang.com_serina+hayakawa_240p.roll.funscript (139.5 KB)

Summary of Device Feedback Received:
Handy: script stalls out; a jerky, stuttery mess.


I would love to help move the censored side of scripting forward. Also @xephaolic (did you ever get to download the cosplay VR vid?). But am I just testing for accuracy? Is there a certain part you're looking at, or do you just need a review as a whole?

The last script was produced completely hands-off (no manual edits, though as a final step it was loaded manually into OFS and exported back from there). It would be great if anyone could go through it end to end and say whether there are any sections that actually work for them (and if so, which device they are using).

I made a short list of times (start and end) during which the action is categorised as high, medium, or low, and applied that as a sort of multiplier to the activity captured from the facial tracking. I realise this is never going to perfectly match the activity, and I guess I prefer scripts that don't stop completely just because the actress has turned her attention to gazing at her co-worker or whatever.
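That interval multiplier might look something like the sketch below. The level labels match the post, but the numeric multipliers and the non-zero default (which keeps a gentle baseline motion outside any labelled interval, rather than stopping) are assumptions of mine:

```python
def intensity_at(t_ms, intervals, default=0.5):
    """Look up the intensity multiplier for a timestamp.

    intervals: list of (start_ms, end_ms, level) with level one of
    'high', 'medium', 'low'. Outside any interval, `default` keeps
    a gentle baseline instead of stopping completely.
    """
    levels = {"high": 1.0, "medium": 0.6, "low": 0.3}
    for start, end, level in intervals:
        if start <= t_ms < end:
            return levels[level]
    return default

def scale_actions(actions, intervals):
    """Scale each (time_ms, pos) around the midpoint (pos 50)
    by the multiplier in force at that moment."""
    return [
        (t, round(50 + (pos - 50) * intensity_at(t, intervals)))
        for t, pos in actions
    ]
```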

I hope it's not total rubbish.



Gotcha. Document "script bangs from XX:XX until XX:XX" / "script has trouble during this range" type stuff. You shall have it! Keep doing the Lord's work, please. And once you figure out how to auto-generate censored videos, please keep me somewhere in the back of your mind.

Sounds perfect, thank you :slight_smile:
"Censor videos" being heavy mosaics? Got an example link? I've already spent WAY too long looking at sweet Serina this week (time to switch to fresh material).

I should clarify: "once you figure out how to auto-generate scripts for censored and/or heavily mosaicked videos," please keep me in mind.

I think you need a tester with the OSR2. It was very jerky/stuttery for me. I think the way your code generates the script causes the Handy to stall out. The general motions matched up, but the finer motions were mostly off. Hope that helps some.

Back to the drawing board then.

It seems the devices do behave quite differently; obviously it's not ideal to be scripting for one and not the other(s).

Do you think it's too many points being mapped per second? The actress is moving pretty fast…

Have a great day.


I’ll have to actually load the script and take a look. But I’ll do that and get back to you. It may just be the Handy. I’m scripting something right now where I’m going to have to find some workaround because the action is simply too fast for the Handy to endure. I’m interested in getting an OSR2, but I’m not in a space where the noise would be acceptable, sadly.

You need to simplify the points if it's generating a lot of them: generally it should be one point per stroke (up or down) if the movement is really fast.


Agreed, each stroke should only record two points:
==> 1 point going to the top, and
==> 1 point at the bottom, just before it leaves his man tool.
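The "one point per stroke" advice above amounts to keeping only the local extrema of the position series, i.e. the points where the direction of travel reverses. A minimal sketch of that reduction (function name is mine, for illustration):

```python
def strokes_only(actions):
    """Reduce (time_ms, pos) actions to direction changes: one point
    at each top and each bottom of a stroke, dropping the intermediate
    samples that can make fast scenes stutter on some devices."""
    if len(actions) < 3:
        return list(actions)
    kept = [actions[0]]
    for prev, cur, nxt in zip(actions, actions[1:], actions[2:]):
        # Keep `cur` only if the direction of movement reverses there.
        if (cur[1] - prev[1]) * (nxt[1] - cur[1]) < 0:
            kept.append(cur)
    kept.append(actions[-1])
    return kept
```

Note this simple version also drops points on flat plateaus (zero change); whether that is desirable depends on the device.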

I definitely agree that the pitch and roll function is something everyone wants! I'm thankful for your initiative in starting on it as well! Not many people have the skills to do it well, considering SLR themselves have invested a lot of money in this and have not been able to replicate the work of a human scripter.

One thing for sure: we would like to master the up-and-down motion (Phase 1) before moving on to the pitch and roll function (Phase 2)!