The problem I'm facing is that OFS won't see the modification to the .funscript file. To update it in OFS you have to open the .funscript in a new project each time.
I don't know why the mode with 2 boxes sometimes loses focus on the man. I guess it happens when the movement is really fast or close to the edge of the video.
@Cerbere Yes, the main problem is that the distortion is highest at the edge and the tracker can’t find the features anymore. A possible improvement for the algorithm would be to first calculate a projection of the video and then apply the tracker to it.
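For anyone curious what that projection idea could look like in practice, here is a rough sketch (not the project's actual code; the equirectangular source format, the FOV and the output size are all assumptions): build remap tables once and warp each frame into a flat perspective view before handing it to the tracker.

```python
# Rough sketch of "compute a projection first, then track": build remap tables that
# turn the center of an (assumed) equirectangular VR frame into a normal perspective
# view, so the tracker sees less distortion near the frame edges.
import cv2
import numpy as np

def equirect_center_view_maps(src_w, src_h, out_size=640, fov_deg=90.0):
    f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)    # pinhole focal length
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2,
                         np.arange(out_size) - out_size / 2)
    zs = np.full_like(xs, f, dtype=np.float64)
    rays = np.stack([xs, ys, zs], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)     # unit view rays

    lon = np.arctan2(rays[..., 0], rays[..., 2])             # longitude of each ray
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))        # latitude of each ray
    map_x = ((lon / (2 * np.pi)) + 0.5) * src_w              # sign conventions may need
    map_y = ((lat / np.pi) + 0.5) * src_h                    # flipping for a given layout
    return map_x.astype(np.float32), map_y.astype(np.float32)

# usage: warp each frame before handing it to the tracker
# map_x, map_y = equirect_center_view_maps(frame.shape[1], frame.shape[0])
# flat_frame = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```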
Something that is not easy to work with is the minimum tracking time. Is it possible to allow shorter tracking? For example, when there are 10 seconds of HJ and then 10 seconds of BJ … I can't track just 10 seconds; I have to wait longer to be able to complete the process.
Even a very simple seek bar like this one made this FAR more usable. Thank you!
I know this is only a small app you put together in your own time but the potential here is tremendous. There are so many things that I want to suggest. I genuinely believe that something like this is the future of scripting!
First suggestion - it looks like the algorithm sets a point the instant movement in one direction stops. The timing would be even more accurate if the algorithm set a new point the instant movement begins in the opposite direction.
So a point would not be set at the end of an up thrust, it would be set at the beginning of the next down thrust, if that makes sense. Is it possible to do this?
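Just to illustrate the idea (a toy sketch, not the tool's actual algorithm; the eps threshold and the function name are made up for the example): given the tracked position per frame, a point would be placed at the frame where movement resumes in the opposite direction instead of at the frame where the old movement stops.

```python
def direction_change_points(positions, eps=0.5):
    """positions: one tracked y-value per frame.
    Returns frame indices where movement starts in the opposite direction."""
    points = []
    last_dir = 0                              # +1 / -1 once moving, 0 = not moving yet
    for i in range(1, len(positions)):
        delta = positions[i] - positions[i - 1]
        if abs(delta) < eps:                  # still resting, no point yet
            continue
        direction = 1 if delta > 0 else -1
        if direction != last_dir:             # movement just started the other way
            points.append(i - 1)              # frame where the new stroke begins
            last_dir = direction
    return points

# e.g. a pause at the top of a stroke: the point lands where the next stroke starts,
# not where the previous one ended
print(direction_change_points([0, 40, 80, 80, 80, 40, 0]))   # -> [0, 4]
```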
I have now finished the OFS integration for Windows. You need the new release v0.0.3 and the install instructions. I have also created a video showing how to install the software: install OFS integration.
Currently there is a problem in OFS where the VideoFilePath variable in the Lua script is not always set, so the integration does not always work. Sometimes it helps to create a new project in OFS. I haven't figured out exactly when the variable is set by OFS.
Thanks a lot for all these updates. I will try this soon and give some feedback.
Feedback: OFS integration works, it’s amazing. Thanks so much for all your efforts and lightning fast updates. Everyone should try this!
Since all UI needs are basically handled by OFS this project has just taken a huge leap.
I currently have 2 suggestions:
- Please set the algorithm to set a point at the start of a new movement, rather than the end of the old one (I went into more detail further up)
- When setting the area to track, the ability to resize the window or zoom in would be very useful
I can’t get over how well this works!
Okay, this is going to be crazy. Working great in OFS. I have already done a script using your tool, but I will not be able to try it for weeks …
Just tried it with OFS. It's working pretty well! We just need the x-axis for roll.
This is incredible. Can’t wait to use this to try and churn out scripts faster.
Two suggestions to make this more functional, if you don't mind.
- Ability to resize the window, as mentioned earlier. When the new window opens for me, it's too large and I can't reach the female's lower anatomy.
- Ability to slow playback in the new window that opens. This feature would be really helpful for scripting repetitive motions, but it moves too fast at normal speed and the tracking box gets lost quickly.
Thanks and definitely look forward to any updates you make!
works great for hentai too!
I have done one script with the tool, but it will be my first multi-axis script, so I want to try it before posting. I will make one with the video Kayla Kayden - Neighbor Affair in 4K and count the time spent on it, then post it as a showcase to let people see what it does and in how much time.
If you need to change the window size, there is already a function available to adjust the preview size. To use it, you have to edit the configuration file, which in release v0.0.3 is located at funscript-editor/funscript_editor/config/video_scaling.json. These values define the preview size.
Currently the following 3 default entries are defined: {"1920": 1.0, "3500": 0.5, "5000": 0.25}. The entries always consist of a pair of values, e.g. "1920": 1.0. Each pair defines which scaling should be used for which video resolution. The first value refers to the video width in pixels. Videos wider than 1920 pixels use a scaling of 1.0, videos with 3500 pixels and more are scaled with 0.5, and from 5000 pixels with 0.25. All videos which are smaller than the smallest value (in this case 1920) are always scaled with 1.0 (original size). You can enter as many values as you want and change the existing scalings.
It's best to look at your screen resolution and calculate which scaling you need for which video size so that the window fits on the monitor. E.g. if you have a 1920x1080 screen and a 5400x2700 video, you can divide 1920 / 5400 = 0.36 → add "5000": 0.36 to the config (the key has to be a little bit smaller than the video width so that the correct scaling is applied).
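To make the lookup behaviour concrete, here is a small sketch of how such a width-threshold config could be interpreted (this is my assumption about the logic, not the editor's actual code, and the exact boundary handling may differ):

```python
def preview_scaling(video_width, config):
    """config maps a width threshold (as string) to a scaling factor,
    e.g. {"1920": 1.0, "3500": 0.5, "5000": 0.36}."""
    scaling = 1.0                                        # default for small videos
    for threshold, scale in sorted(config.items(), key=lambda kv: int(kv[0])):
        if video_width >= int(threshold):                # boundary handling is assumed
            scaling = scale
    return scaling

cfg = {"1920": 1.0, "3500": 0.5, "5000": 0.36}           # the example from the post above
print(preview_scaling(5400, cfg))   # 0.36 -> a 5400px wide video fits a 1920px screen
print(preview_scaling(1280, cfg))   # 1.0  -> smaller than the smallest key, original size
```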
@poet145x For suggestion 1, see my answer above.
- The playback speed has no influence on the tracking accuracy. Currently we try to track the video as fast as possible, but if the preview runs very fast you can't really see whether the tracking point has drifted (which is the manual abort criterion for better quality). Therefore I added a function to set the maximum speed via a config file, e.g. 90 frames per second. The function will be available in the next release; the code is already in the GitHub repository.
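As an illustration of what such a cap could look like (a rough sketch under my own assumptions, not the actual implementation; show_frame is just a placeholder callback):

```python
# Cap the preview speed: sleep just long enough so that no more than max_fps frames
# are displayed per second, regardless of how fast the tracking itself runs.
import time

def play_with_fps_cap(frames, show_frame, max_fps=90):
    """frames: iterable of video frames; show_frame: callback that tracks/displays one frame."""
    min_frame_time = 1.0 / max_fps
    last = time.perf_counter()
    for frame in frames:
        show_frame(frame)                              # tracking + preview for this frame
        elapsed = time.perf_counter() - last
        if elapsed < min_frame_time:                   # finished too quickly -> wait
            time.sleep(min_frame_time - elapsed)
        last = time.perf_counter()
```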
@Gandalf I implemented the function to track the x-movement, but the results are unfortunately not as good as for the y-direction.
The code to track the x-movement is already in the GitHub repository. Due to the poor results, it is currently not certain whether it will make it into a later release version.
The algorithm currently searches for the local max and min, as shown in the image below.
Should we only slightly delay (move to the right) the upper points?
We could include a function that shifts the points by a few frames depending on the speed (derivative of the movement).
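Here is a toy sketch of that idea (not the current implementation; the max_shift_frames value and the normalization by the maximum speed are assumptions): find the local maxima of the tracked signal and move each one a few frames to the right, proportional to the local speed.

```python
import numpy as np

def shift_upper_points(signal, max_shift_frames=3):
    """signal: tracked positions per frame. Returns (index, value) pairs for the local
    maxima, each moved slightly to the right depending on the local speed."""
    signal = np.asarray(signal, dtype=float)
    speed = np.abs(np.gradient(signal))                   # frame-to-frame speed
    points = []
    for i in range(1, len(signal) - 1):
        if signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:   # local max
            shift = int(round(max_shift_frames * speed[i] / (speed.max() + 1e-9)))
            j = min(i + shift, len(signal) - 1)            # delayed point position
            points.append((j, signal[i]))
    return points
```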
Hey buddy,
Could you give me a little guidance as to how to install this and get it running?
Should it be running natively inside OFS, or do I need to run a separate app in the background?
I have all the files and extracted everything. I put the Lua file in the right place.
- I don't know where the Settings.TmpFile is located, as there isn't one on my system.
- I use the shortcut keys to launch the generator, but nothing happens.
- I'm getting the following error since I unpacked the v0.0.3 Python-funscript-editor:
Cannot load nvcuda.dll
Cannot load nvcuda.dll
INFO: Active hardware decoder: no
INFO: MPV (cplayer): (+) Video --vid=1 () (h264 1920x1080 59.939fps)
INFO: MPV (cplayer): (+) Audio --aid=1 () (aac 2ch 44100Hz)
INFO: MPV (cplayer): AO: [wasapi] 48000Hz stereo 2ch float
INFO: MPV (cplayer): VO: [libmpv] 1920x1080 yuv420p
ERROR: ! MPV (libmpv_render): after creating texture: OpenGL error INVALID_VALUE.
I would really appreciate any help.
Thanks