How to use FunscriptToolBox MotionVectors Plugin in OpenFunscripter

Introduction

For those who don’t know, one of the ways that compression formats (MPG, MP4, etc) achieve a high level of compression is by detecting motion from frame to frame. Basically, instead of encoding the whole image, algorithms can instead reference similar blocks from previously encoded images, which take a lot less space to encode.
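As a rough illustration of the idea (a toy sketch, not the actual codec internals), block-based motion estimation can be written like this:

```python
# Toy block-matching sketch: find where an 8x8 block of the current frame
# came from in the previous frame. Real codecs (MPEG, H.264, etc.) do this
# far more efficiently; this only illustrates what a "motion vector" is.

def block_diff(a, b):
    """Sum of absolute differences between two equally sized 2D blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def get_block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def find_motion_vector(prev_frame, cur_frame, y, x, size=8, search=4):
    """Return the (dy, dx) offset into prev_frame that best matches the
    block of cur_frame at (y, x), searched in a small window."""
    target = get_block(cur_frame, y, x, size)
    best = (0, 0)
    best_score = block_diff(get_block(prev_frame, y, x, size), target)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sy, sx = y + dy, x + dx
            if sy < 0 or sx < 0 or sy + size > len(prev_frame) or sx + size > len(prev_frame[0]):
                continue
            score = block_diff(get_block(prev_frame, sy, sx, size), target)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best
```

The encoder then stores only the vector and a small residual instead of the whole block, which is where the compression gain comes from.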

I just added new features in FunscriptToolbox (FSTB) that take advantage of this to simplify the creation of scripts. In short, FSTB can extract motion vectors from a video and create an optimized .mvs file that can be used by a new OFS plugin. The plugin can learn “rules” from already scripted actions and then generate new actions based on those rules, usually in less than a second. In a way, this is almost like an ‘auto-complete’ feature for scripting. The plugin is also able to adjust all the generated actions as ‘one unit’ instead of individually, which is a lot more efficient.

In short, the process looks like this:

Note:
Like all tools that try to generate .funscript automatically, it will not create perfect scripts. The goal is to automate things that a computer can do better, like detecting movement, while leaving the rest of the decisions to the scripter.

Initial setup

  1. Download FunscriptToolbox from GitHub (version 1.2.0 or later).

  2. Unzip the archive somewhere (for example, C:\Tools\FunscriptToolbox).

  3. Double-click on --FSTB-Installation.bat

    This will install the plugin in the OFS extensions directory.
    This will also create a few ‘use-case’ folders.
    The one that is needed for the plugin is “FSTB-PrepareVideoForOFS”.
    You can leave the folders there or move them somewhere else; the scripts inside will still work if they are moved.

    Note: If you move the “FunscriptToolbox” folder in the future, you will have to re-run --FSTB-Installation.bat to update the plugins and ‘use-case’ scripts.

Demo

  1. Download the files from (please import if you can):

    Demo files

  2. Open OFS

  3. Enable the plugin and windows (menu: Extensions\FunscriptToolbox.MotionVectors)

  4. Open ‘Position-CowGirlUpright-MenLaying-C .mp4’ in OFS
    This should also load the prepared .funscript (which contains only a few actions).

  5. Select the menu “Project\Pick different media”.

  6. Select “Position-CowGirlUpright-MenLaying-C .mvs-visual.mp4”. This is optional, but it will allow you to see the ‘motion vectors’ and understand a little better how the plugin makes its decisions.

  7. Place the video at the end of the first series of strokes that I included and click on “Create”.

  8. This should open this window:


    Note: the virtual canvas is kind of crappy but you should be able to pan and zoom.

    It shows the analysis of the scripted strokes. The green vectors shown are the ones that “agree” with the script over 90% of the time (i.e. script was going up or down, and the motion vectors extracted from the video were going in the same direction as the arrow or in the opposite direction accordingly).

    If you want, you can place all sliders to the left (value 0, 50, 0) and move them to see how it impacts the rules (i.e. 1 green arrow => 1 rule that will be used to generate the script).

    • Activity: percentage of time where there was movement in the square. For example, in the background, the activity is usually really low.
    • Quality: % of the time that the motion vectors agreed with the script.
    • Min %: If the activity and quality filter leave less than this % of the rules, add the best quality until we reach this %.

    There is also a manual mode where you draw a square that you want to track and input the direction of the movement that you expect (value: 0 to 11, think of the hours on a clock). So far, I haven’t found a great use for the manual mode. I find the ‘learn from script’ mode more useful.

  9. When done, click on “Accept”.

  10. Back in OFS, the application will have generated the actions based on the rules extracted from the scripted actions.

    Those actions are ‘virtual’, in the sense that you can adjust them as a ‘unit’ using the “Virtual Actions” settings (top point offset, bottom point offset, etc). You can also hide/show/delete them if needed.

    An action stops being “virtual” as soon as a change is made later in the video. For example, if you move past the first 10 generated actions and then change one of the settings (e.g. ‘Min Pos’), the first 10 actions will not be changed; only the following actions will be affected by the change.

  11. You can “play” with the Virtual Actions settings to see how they impact the virtual actions.

  12. Move to the last actions in the timeline. I included a few scripted actions in the original script because the movement changes from a basic up-and-down to a grinding motion. This is something that you would have to detect and correct yourself, as I did, while validating the generated actions.

    By default, the plugin analyses the last 10 seconds of actions (configurable). But, in this case, this is not what we want because we only have 1-2 seconds of grinding actions and 8 seconds of the ‘wrong type of movement’. To fix that, you need to select the ‘grinding’ actions in OFS and then click on Create. This will tell the plugin to use the selected actions instead of the last 10 seconds.

  13. Once you get the generated points back, you’ll notice that the quality of the generated script degrades after 10-15 strokes. When that happens, I usually get to where the script is degrading and try to redo a “Create”. This will relearn rules based on the last 10 seconds (instead of only 1 or 2 strokes) which usually makes it better.
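The filtering done by the three sliders in step 8 might be sketched roughly like this (my own reconstruction of the described behavior, not the plugin's actual code; the function and field names are illustrative):

```python
# Hypothetical sketch of the rule-filtering logic: keep the motion-vector
# "rules" whose activity and quality pass the sliders, then, if too few
# survive, add back the best-quality rejected rules until the "Min %"
# floor is reached.

def filter_rules(rules, min_activity, min_quality, min_percent):
    """rules: list of dicts with 'activity' and 'quality' in 0..100."""
    kept = [r for r in rules
            if r["activity"] >= min_activity and r["quality"] >= min_quality]
    floor = max(1, round(len(rules) * min_percent / 100))
    if len(kept) < floor:
        # Backfill with the best-quality rules that were filtered out.
        rejected = [r for r in rules if r not in kept]
        rejected.sort(key=lambda r: r["quality"], reverse=True)
        kept += rejected[:floor - len(kept)]
    return kept
```

Each rule that survives corresponds to one green arrow in the UI, and only those arrows are used to generate the new actions.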

Getting Started with a real video

  1. Move the video that you want to script into the “FSTB-PrepareVideoForOFS” folder.

  2. If it’s a 2D scene, change in the .bat file,
    --ffmpegfilter=VRMosaic
    to
    --ffmpegfilter=2D

  3. Double-click on: --FSTB-PrepareVideoForOFS.version.bat

  4. This will prepare the video for the plugin.
    Depending on your machine, it can take a while. On my old machine (Intel i7-7700), converting a VR video took about 7 times the duration of the video (30 min => 3h30). On my new machine (Intel i9-13900K), the same video takes about 75% of the duration of the video (30 min => ~23 min), almost 10 times faster.

    It will:
    a. Reencode the video with mostly P-frames only (which give better motion vectors).
    b. Extract the motion vectors from the P-frames into a .mvs file (a custom format that I created).
    c. Reencode the video with the ‘visual motion vectors’ drawn on it, using only I-frames, for OFS.

  5. Open your video in OFS, like the demo above.
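Step (a) of the preparation, the P-frame re-encode, might look roughly like the following ffmpeg invocation (a sketch under my own assumptions; these are standard ffmpeg/x264 options, but the exact flags FunscriptToolbox uses may differ):

```python
# Hypothetical re-encode with B-frames disabled and a very long GOP, so
# that almost every frame is a P-frame referencing the previous frame.
import subprocess

def build_prepare_command(src, dst, gop=9999):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-bf", "0",        # no B-frames: vectors only reference past frames
        "-g", str(gop),    # huge GOP: as few I-frames as possible
        "-an",             # motion analysis doesn't need audio
        dst,
    ]

# Example usage (commented out; requires ffmpeg on the PATH):
# subprocess.run(build_prepare_command("scene.mp4", "scene.p-frames.mp4"), check=True)
```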

FAQ (that no one really asked)

The original video already contains motion vectors, why do we need to reencode it?

The problem is that the original video is compressed to favor size and convenience. It has a lot of I-frames that can be decoded as-is so that a video player can skip ahead without having to decode multiple frames to show an image. And it has a lot of B-frames that reduce the file size because they take up the least amount of space.

In short, the compression works like this:

  • I-frames don’t use blocks from other frames. So, for our purpose, they don’t contain motion vectors, which is bad.
  • P-frames use blocks from the previous I- or P-frame, which might be more than one frame away; that’s also bad (there might be a direction change within those frames).
  • B-frames use blocks from both the previous and the next I- or P-frames, which again might be more than one frame away, which is also bad.

To get the best results, we need motion vectors that always refer to the immediately previous frame. Reencoding the video without any B-frames and with as few I-frames as possible gives us that.


How does adjusting the virtual actions work, exactly?

When the server creates virtual actions, it only generates the direction and amplitude of a movement on a 100 points scale (i.e. 100 points up, 50 points down, etc) without any information on the exact starting or ending position. The server doesn’t know if it’s from 0 to 50 or 10 to 60 or 50 to 100.

It’s the job of the plugin, with the scripter’s help, to create the final positions for the actions. Using the default settings of Min Pos=0, Max Pos=100, and Center Pos %=50, the plugin will simply create a wave centered in the middle.

To be able to shape the generated actions, the scripter can:

  • Change ‘Top Points offset’ to move only the top points to the left (value < 0) or right (value > 0). ‘Bottom Points offset’ does the same for bottom points.
  • Change the minimum or maximum position of the wave.
  • ‘Center Pos %’ will change the center position of the wave. More precisely, it will place the specified % of ‘empty space’ below the wave.
  • ‘Min % filled’ will expand the length of all movements by a specific value.
  • ‘Extra %’ is similar to ‘Min % filled’ but is a multiplication. So, small values will expand a little bit, and bigger values will expand more.

For example:

  • Min Pos = 0, Max Pos = 80, Center Pos % = 0, Min Filled % = 20 would give this:

  • Min Pos = 10, Max Pos = 90, Center Pos % = 20 would give this:

    Notice how the points are all shifted toward the lower part of the range but they don’t create a ‘hard line’ at the minimum position.
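A rough sketch of that mapping (my reconstruction of the described behavior, not the plugin's actual code; it only models Min Pos, Max Pos and Center Pos %, not the offsets or the 'Min % filled' / 'Extra %' expansions):

```python
# The server only produces signed stroke amplitudes on a 100-point scale;
# this sketch turns them into absolute 0-100 positions, placing the
# requested percentage of the 'empty space' below the wave.

def place_wave(amplitudes, min_pos=0, max_pos=100, center_pct=50):
    """amplitudes: signed stroke sizes, e.g. [+100, -50, +50, -100]."""
    positions, cur = [0], 0
    for a in amplitudes:
        cur += a
        positions.append(cur)
    lo, hi = min(positions), max(positions)
    scale = min((max_pos - min_pos) / max(hi - lo, 1), 1.0)  # shrink only
    height = (hi - lo) * scale
    empty = (max_pos - min_pos) - height
    base = min_pos + empty * center_pct / 100  # empty space below the wave
    return [round(base + (p - lo) * scale) for p in positions]
```

For example, with the defaults a 50-point stroke is centered in the middle of the range, while Center Pos % = 0 pushes it down against Min Pos.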

Versions

1.2.5
  • Fixed some verbs that didn’t download ffmpeg (ex. as.cfs). If you tried that verb first after installation, it would fail with an error “cannot find ffmpeg”.
  • Fixed script “--FSTB-GenericCmd.1.1.bat” (which contained my local path instead of the path on your machine).
  • AudioSync verbs: Rewrote “audiosync.createfunscript” verb to be more flexible. You can now use multiple inputs and/or multiple outputs. The tool will “virtually” merge the inputs and output for the comparison and then unmerge them to create the final funscript/srt file. It also synchronizes all .funscript / .srt linked to the file (ex. .funscript, .roll.funscript, .pitch.funscript, .srt, .jp.srt, etc).

For example, you can use a pattern * in the filename like this:
FunscriptToolbox as.cfs -i 3DSVR-0628-*.mp4 -o 19512.mp4

This will load and merge all .funscript / .srt linked to files “3DSVR-0628-A.mp4”, “3DSVR-0628-B.mp4”, etc and create .funscript / .srt synced to the file 19512.mp4.

It also works with one input and multiple outputs, or with multiple inputs and outputs.
The files can also be listed individually, separated by “;”.
For example, FunscriptToolbox as.cfs -i 3DSVR-0628-A.mp4;3DSVR-0628-B.mp4;3DSVR-0628-C.mp4 -o 19512.mp4

1.2.4
  • OFS Plugin: Added the possibility to set the default values for the “adjust” settings.
  • OFS Plugin: Added the possibility to reset all the “adjust” settings to the default.
  • OFS Plugin: Added the possibility to automatically reset some of the “adjust” settings when creating virtual points (i.e. the checked “R” will be reset).
  • OFS Plugin: Minimum Percentage can now be lower than 1.
  • OFS Plugin: Fixed the channel locking mechanism. You should be able to use the plugin in multiple OFS opened at the same time.

Since the plugin has been updated, make sure to run --FSTB-Installation.bat to update it.

1.2.3
  • OFS Plugin: Fixed a bug where some actions were not “unvirtualized”. For example, if you generated actions, moved ahead without making a change, then clicked on increment ‘top offset’ twice: the first increment would affect all the actions, even the ones behind the current time, while only the second one affected just the future actions, as it should.
  • OFS Plugin: Added better “debug logs”.
1.2.2
  • OFS Plugin: Fixed bug in “create rule” request when no actions could be found.
  • OFS Plugin: Fixed UI when using in an older version of OFS (before 3.2, ofs.CollapsingHeader not available).
  • OFS Plugin: Added an error message if FunscriptToolbox.exe could not be found, with a tooltip on how to fix the problem.
  • OFS Plugin: Added an error message in OFS if the server returns an error message for a request.
  • OFS plugin: if the video file cannot be found or if there is an error while extracting a screenshot, show the UI anyway with a black screen.
  • AudioSync verbs: Transform chapter time when creating a synchronized script. Also, if a .srt file is found, transform the time also (forgotten change from a previous release).
  • Installation verb: Added a new use-case folder: VerifyDownloadedScripts to see if your video is in sync with the script (only if the original script contained an AudioSignature).
  • Installation verb: Added script --FSTB-GenericCmd.bat in all the use-case folders; it simply opens a command prompt with the FunscriptToolbox folder in the path.
1.2.1
  • Frames were not cached correctly on the server. The same frames had to be read on each request, eating memory that had to be garbage collected.
  • Removed the validation that height needs to be divisible by 16
1.2.0
  • First release of the plugin.

What’s the --ffmpegfilterHeight=2048 syntax for, specifically? I’m getting an error and am thinking it’s pointing towards this. The video I tried to use was 1280x720.

[libx264 @ 0000013d6bcb1780] width not divisible by 2 (3641x2048)

Edit: altering the bat file to this seems to have worked.

echo --- motionvectors.prepare ---
FunscriptToolbox.exe ^
motionvectors.prepare ^
--ffmpegfilter=2D ^
--ffmpegfilterHeight=720 ^
*.mp4

Edit #2: Tried a 1920x1080 video. Tried changing 720 to 1080 and it said that it needed to be divisible by 16. So I changed it back to 720 and it worked.


Very cool. Do you know if it’s possible to build and use on non-windows machine (for example Mac or linux)?

Humm, I’ll admit that I didn’t do a lot of tests with 2D.
The goal of the Height parameter was mostly to reduce VR video size.
But for small videos (ex. 720x404), it might also help to increase the size to have more motion ‘sensors’.

The ffmpegfilter parameters accept 3 predefined values:

  • VRLeft: keep only left eye
    -filter:v crop=in_w/2:in_h:0:0,scale=-1:{HEIGHT}

  • VRMosaic: keep left eye, and add a smaller -20 pitch and -55 pitch projections to the right

    -filter_complex 
    [0:v]crop=in_w/2:in_h:0:0,scale=-1:{HEIGHT}[A];
    [0:v]v360=input=he:in_stereo=sbs:pitch=-20:yaw=0:roll=0:output=flat:d_fov=90:w={HEIGHT}/2:h={HEIGHT}/2[B1];
    [0:v]v360=input=he:in_stereo=sbs:pitch=-55:yaw=0:roll=0:output=flat:d_fov=90:w={HEIGHT}/2:h={HEIGHT}/2[B2];
    [B1][B2]vstack=inputs=2[B];
    [A][B]hstack=inputs=2
    

    (I added newlines to make the ffmpeg ‘graph’ clearer.)

  • 2D: only scale
    -filter:v \"scale=-1:{HEIGHT}\"

But it also accepts any ffmpeg filter directly (in that case, if you don’t include {HEIGHT} in your filter, it will simply ignore the Height parameter).

I did a quick test on a 720x404 video with:
--ffmpegfilter=-filter:v scale=-1:-1 (i.e. no scaling)

It worked, and the rest of the application/plugin seems to work fine too, even if it was not divisible by 16. I will probably remove the ‘divisible by 16’ check in the application in the next release.

If you need to change the size, and one of the dimensions is not divisible by 2 (i.e. your first test 3641x2048), you could specify the width and height:
--ffmpegfilter=-filter:v scale=3640:2048
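The failing case above (1280x720 scaled to height 2048 yields an odd width of 3641) can also be avoided by computing even dimensions yourself; a small helper (not part of FunscriptToolbox) could look like this:

```python
# Scale to a target height, keep the aspect ratio, and round both
# dimensions down to even numbers, since libx264 rejects odd sizes.

def even_scale(src_w, src_h, target_h):
    w = round(src_w * target_h / src_h)
    return (w - w % 2, target_h - target_h % 2)
```

With the thread's example, even_scale(1280, 720, 2048) gives (3640, 2048), which you can then pass as an explicit scale=3640:2048 filter.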

It should be possible to compile it in .Net Core but, right now, I have no plan to do it.
I heard that it works fine on Linux using Wine (well, at least, the previous version worked). Maybe the new UI might not work, but it’s optional.


Looks awesome, hell it may even bring me out of my semi retirement and finish all the random scripts I’ve started :laughing:


Is the idea to have the green arrows only on the performers? In regards to adjusting the sliders.

Good question, @IIEleven11.

The main goal of the UI is to make sure that it has enough arrows to make predictions for future frames while minimizing the introduction of arrows that would only make the predictions worse. In that sense, the most important setting is “min %” but I can’t really say what the “best setting” is.

Another goal of the UI was to help you (and me) understand what movement the algorithm “sees” in the video, but there isn’t much that we can do about it right now. I might add features in the future. For example, I am curious to see if using only one of the 3 sections would give better results sometimes (ex. selecting the bottom right section when scripting a doggy position).

But, no, it’s not a problem if arrows are not on the performers. It just means that, in the learning frames, the performers were moving in and out of that zone, which isn’t a problem.

So far, when I’m scripting, I set the default value for the learning phase in the plugin config, and I don’t even show the UI.

With a 3D VR video, I would think that the 3D positions could be worked out pretty accurately. Is there software that uses both eye’s view to calculate the exact positions in 3D space to create multi-axis scripts?

Tried installing exactly as written, but I’m getting this error:


Do you use the latest version of OpenFunscripter?

It seems that my plugin is using features (i.e. CollapsingHeader) that have been added recently. Added in Release v3.2.0 · OpenFunscripter/OFS · GitHub.

You need to upgrade OpenFunscripter. And, if you upgrade from a version prior to 3.0, you might have to run OpenFunscripter once (so that OFS can create the extensions folder), and then re-run --FSTB-Installation.bat to install the plugin for the new version.



This is some straight up sorcery. Excited to mess around with this some more. Thanks for your efforts!

EDIT:

Good news is that it didn’t take nearly as long to prep the video as I was anticipating!
A 39 minute long 2D 4k video took about 80 minutes (2x realtime vs 7x). For reference, my CPU is a Ryzen 7 3800X.

More good news is that this is REALLY good at tracking motion. I had been using MTFG prior to this and the accuracy already seems better here. I messed with manual selection a bit, but the results were nearly identical.

Bad news is that like MTFG, it doesn’t have the best sense of how far things are traveling. Bottom points are frequently too high and top points too low.

BUT the virtual actions adjustments are SUPER handy to tweak some of this. There is still manual work to be done, but again, this seems like less work than MTFG.

I definitely like this so far! I’ll report back once I mess with it even more, but I see this saving a ton of time. Thanks again!


Thanks for the feedback @PO0000OP

Good to know that tracking is good for you.

As for the traveling distance, it will always be an approximation. For now, it’s really basic. For example, if you generate 1 minute of tracking, and the biggest “weight” for one action is 123,534 (just the addition of the weight, for all sensors, for all the frames in the action), that action will be 100%. All the other actions will be a percentage of that maximum weight (43,000 => 34%).
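That normalization can be sketched in a few lines (illustrative only; the real code lives in the FSTB server):

```python
# Express each action's summed motion-vector weight as a percentage of
# the largest weight in the generated segment. Truncating (not rounding)
# matches the 43,000 => 34% example above.

def weights_to_amplitudes(weights):
    peak = max(weights)
    return [int(100 * w / peak) for w in weights]
```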

In the future, I might try to use a fancier method. I already have a few ideas on what might give better results. But for now, I just want to use the plugin to make some scripts and see what works and what doesn’t.

It’s why it was so important for me to have the option to tweak the actions afterward.


Can’t see any step I’m missing in following the instructions, but when I click “Create” no additional window opens and I just see this timer continue to count up. I’m also on the latest release of OFS (3.2.0).

It looks like the “server” was not started.

By any chance, did you move the FunscriptToolbox folder (where FunscriptToolbox.exe is) after running the “install” batch file? Because what the install batch does is to ‘hard code’ the original path to the folder inside the plugin. If the folder is moved, the path in the plugin script is not valid anymore and the server cannot be started by the plugin (note to self: I should add a specific error message if I don’t find the executable).
If that’s the case, you can rerun the batch file from the new location, make sure to reload the plugin in OFS and it should work.

If you open the extension log (menu: Extensions\Show logs), do you see something like this or do you see some error logged:

I haven’t moved the folder since running the install batch file and if I run it again this is the result.

The log output on startup is this which looks correct to me.

I did however just notice this error in FunscriptToolbox.log, which I’m guessing is the issue, though I’m not sure how to address it.

Thanks for checking FunscriptToolbox logs, that was the next step.

It seems that your OS platform doesn’t like ffmpeg (or the file was not downloaded properly):

Can you look in the appdata\FunscriptToolbox\ffmpeg folder, do you see this, with those exact sizes?

If they don’t match, delete everything in the folder, including version.json. And retry.
If they match, can you try running ffmpeg in a command prompt? If you have an error and you have another copy of ffmpeg that works on your computer, copy it here and retry.

Also, what is your OS platform? I’m using Windows 10.

This is what my “AppData\Roaming\FunscriptToolbox\ffmpeg” folder contains.

The only difference from yours is I see an extra zip file in there. Other than that the file sizes match yours and I get no error when running ffmpeg. My OS is just Windows 10 and I’ve used ffmpeg plenty for other things and never seen anything like an OS compatibility error from it.

I will open a bug on GitHub to continue the investigation. I’m not sure why it fails for you.
In that part, I’m using ffmpeg to extract a snapshot of a frame of the video.

For now, can you try to ‘uncheck’ UI in the plugin and see if it works? You won’t see the UI but you should get the generated actions in your OFS timeline.


Also, I’m curious to see if the preparation of a video will work, since it’s also using ffmpeg. The only difference is that it’s not in an async context. Anyway, let’s continue the discussion here, if you don’t mind.

This is very interesting and brilliant. Will definitely be following the progress of this project. Can’t wait to give it a try!
