My guess is that it will take 1-2 hours for 5 min, depending on how complicated the scene is. Blowjobs take a lot of time compared to straight penetration without crazy hip movements. I usually spend ~10 hours on a 45-60 min VR video, but it varies a lot depending on the video contents.
IMHO you don’t save that much time using the motion tracker. This assumes you want the same, or almost the same, quality as frame-by-frame scripting. There are almost always a lot of manual corrections to make, so you have to step through the video or play it slowly and fix things afterwards. A lot of movement is also very hard to track because of the angles: anything that moves toward or away from the camera is hard to capture, as is movement in the background, e.g. when the guy thrusts in cowgirl position there isn’t much movement to track because it happens behind the girl and you only see a little bit between her legs. You have to divide the tracking into video segments and try to find suitable tracking points and min/max values for each. If you aren’t that picky about accuracy, or if you are scripting very simple movements, there is definitely a gain in using it.
I’m pretty sure that many won’t agree with my assessment, but as I said, it mostly depends on the standards you set for your work. I want frame-by-frame quality, and adjusting the output from motion tracking to reach that accuracy takes roughly the same time, with the difference that I can’t work as repetitively and have to focus more on what I’m doing than when I’m not using motion tracking.
Recommendation:
Your best bet is to use a combination. Use motion tracking for the easy stuff that moves up and down along one axis without rotating hip movements, obscured views, etc. Use frame-by-frame for the rest.
Depends on how high quality you’re trying to make it. If you want accurate, varying thrust lengths, it’ll take a while…
You can take advantage of repeated patterns, and hit Ctrl+RightArrow to jump ahead several frames and try to estimate the peak and valley of each thrust.
If you record your mouse and you just want a simple synchronized full thrust in and out, you can do that in 0.3 to 0.5x the length of the video.
It really depends on the scene. When you get a blowjob with hand motions and such, it starts to take more time and effort as well…
Using the AI has been good as long as one of the people is completely still, or returns to the same spot after the motion, but you usually miss a thrust or two out of every 10 or 20 thrusts when there’s any camera movement at all…
For some blowjobs I’ve been able to get complete automation for a minute or two at a time, though, and the same with reverse cowgirl scenes. If the person has a tattoo, you can lock onto that as well.
Some angles don’t work with the AI at all, like those grinding on top motions where they’re going forward and back rather than just up and down.
Hope this helps.
There’s one way you can actually do a really accurate recording, and I just need to write out my theory on it, maybe add some mathematical components. Basically, I think if you trace the scene with your mouse and change the popup slider so that its size matches the cock in the scene, you can record at 0.3 or 0.5x speed and have it really accurate by moving your mouse in sync with the scene at a 1:1 movement ratio. I think you can zoom in for more precision. The math would be figuring out how many thrusts per 10 seconds you see, then calculating an ideal video speed so that you have the easiest time tracing the scene. You want to record as fast as you can without falling too many frames behind and without being thrown off by sudden changes in thrust frequency.
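To make that speed idea concrete, here is a back-of-the-envelope sketch. The 1.5 strokes-per-second "comfortably traceable" rate is a made-up number, and I’m assuming you clamp the result to the 0.3-1.0x range the player offers:

def ideal_playback_speed(thrusts_per_10s, max_traceable_per_s=1.5,
                         min_speed=0.3, max_speed=1.0):
    # thrusts_per_10s: how many strokes you count in a 10 second window
    actual_per_s = thrusts_per_10s / 10.0
    if actual_per_s == 0:
        return max_speed
    speed = max_traceable_per_s / actual_per_s
    # clamp to the speeds the player actually supports
    return max(min_speed, min(max_speed, round(speed, 2)))

print(ideal_playback_speed(25))  # 2.5 strokes/s -> record at 0.6x
print(ideal_playback_speed(40))  # 4.0 strokes/s -> record at 0.38x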
I like the mouse recording over the AI because you can quickly add more accuracy and delays at the top and bottom of thrusts.
I’m sure it may be difficult to implement, but I would love to see the tracker boxes be able to use point splines instead of just boxes. It would allow us to better isolate edges, especially when using Two Person mode.
Can anyone explain the parameters for Supervised? The documentation really doesn’t say much about what to do under that mode.
By experimenting I found out that Supervised tracks a feature inside a specified area. It asks for pairs of parameters: first the small area containing the feature to track, then a larger bounding area that must contain the feature area at every moment. If you use more than one tracker, it goes (feature1, bounding area1), (feature2, bounding area2), etc.
You can then specify blocking/non-blocking: the former means that when the feature “escapes” the bounding box the process stops; the latter (as you may have guessed) keeps going, and if the feature gets back into the bounding box it resumes tracking its movement.
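To illustrate what I mean by blocking vs. non-blocking, here is a rough Python sketch of how I picture that logic. This is not the MTFG source; the OpenCV CSRT tracker, the file name and the box coordinates are just placeholders:

import cv2

def inside(feature, bounding):
    fx, fy, fw, fh = feature
    bx, by, bw, bh = bounding
    return fx >= bx and fy >= by and fx + fw <= bx + bw and fy + fh <= by + bh

cap = cv2.VideoCapture("scene.mp4")       # assumed input video
ok, frame = cap.read()

feature_box = (300, 200, 40, 40)          # small area containing the feature
bounding_box = (250, 150, 200, 200)       # larger area it must stay inside
blocking = False                          # True = stop when the feature escapes

tracker = cv2.TrackerCSRT_create()
tracker.init(frame, feature_box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, box = tracker.update(frame)
    if ok and inside(box, bounding_box):
        print("tracked y:", box[1])       # raw tracking value for this frame
    elif blocking:
        break                             # blocking: the process stops here
    # non-blocking: keep going; tracking resumes once the feature re-enters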
Thanks! I will have to play with that feature a bit more now that I actually have a Handy. I have done a few dozen scripts before now but haven’t posted any because all I had was a Max2 to test. This program has really made scripting so much easier!
could you write me a script for multi-axis tracking for a 5 ft 2 woman in a 180-degree VR video
it gave me this:
# Import necessary libraries
import math
import numpy as np
import tracking_device
import vr_player

# Initialize tracking device and VR player
device = tracking_device.TrackingDevice()
player = vr_player.VRPlayer()

# Set height of woman in inches
woman_height = 62

# Main loop to track movement of woman and update VR player
while True:
    # Get current position and orientation of tracking device
    device_position = device.get_position()
    device_orientation = device.get_orientation()

    # Calculate position and orientation of woman relative to device
    woman_position = device_position + [0, 0, woman_height]
    woman_orientation = device_orientation

    # Update VR player with woman's position and orientation
    player.set_position(woman_position)
    player.set_orientation(woman_orientation)

    # Check for user input to exit loop
    if device.is_exit_requested():
        break

# Stop playback and close VR player and tracking device
player.stop()
player.close()
device.close()
If you gave ChatGPT more information, could you improve your tracking tool?
Thank you for the answer. I understood what you said.
Actually, I did a lua plugin too for myself which does the post processing. The undo system worked pretty well for me. Basically I select all the raw points that I want to process, then I click the button from my plugin, then the selected points get simplified and if I’m not happy about the outcome, I just undo the process and all the raw points go back.
Basically now I just use the tracking for raw data, and then do the post processing.
How my plugin works is pretty simple: I do the post-processing with scipy in Python, then I use lua to call the Python script with the selected points as input. It gives back the output as simplified points, and I can then call the lua API to keep only the simplified points from the selected ones. It worked pretty well.
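For anyone curious, the Python side of such a plugin could look roughly like this. All names and the JSON-over-stdin/stdout handoff are my own assumptions, not the poster’s actual script; the lua extension would invoke it with the selected points and read the simplified points back:

import json
import sys

def simplify(points):
    # placeholder: the real post-processing (savgol_filter / find_peaks,
    # see the scipy discussion further down) would go here
    return points[::2]

if __name__ == "__main__":
    selected = json.load(sys.stdin)            # e.g. [{"at": ms, "pos": 0-100}, ...]
    json.dump(simplify(selected), sys.stdout)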
I think it only really works when the scene keeps a smooth rhythm with no delay on insertion/removal. At 0.3-0.5x, and with the loss of accuracy, I’ve been debating whether it’s worth it over just doing it frame-perfect. You might be able to record with the mouse and then check the points or play it back, but that may take longer than just doing it right the first time.
Idk, seems like there should be a faster way but I notice even 50ms delay and it’s kind of off-putting on scripts. You gotta be like a pro-gamer or memorize the scene to do it.
I think another way is probably to just time and map the bottom and top with 0 and 100 with keys rather than do the mouse thing. The mouse recording is more useful for certain delayed movements but that’s it
@redeus In the last few days I have also been working on an interactive preprocessing menu which is directly integrated into the MTFG. If you have a GitHub account and want to try a prerelease version, you can download the Windows binary from here (login required for download).
How my plugin works is pretty simple, I do the post processing with scipy in Python.
Which functions exactly do you use from this library and can you recommend any as particularly suitable to integrate into MTFG?
So I only used find_peaks and savgol_filter from scipy.
What I do is similar to what you are already doing in the extension. I do the following steps (a rough Python sketch follows the list):
use savgol_filter to smooth out the signal; this reduces the noise so the 1st/2nd derivatives are more “predictable”
find the local min/max points with find_peaks and add them to the result set
use np.diff on the signal to get 1st derivative, and smooth it with savgol_filter
use np.diff on the 1st derivative to get the 2nd derivative and smooth it
use find_peaks to get local min/max from the 2nd derivative and add them to the result set
do a filtering by grouping “close” points, then check the concavity of the grouped points using the 2nd derivative, e.g. if the grouped points are concave take the max, if they are convex take the min
also do a filtering on the grouped points based on the “slope” using the 1st derivative, e.g. if 2 points are close and both have a big slope, it means they are both climbing or descending at the same rate, so we can just choose a middle point between the two
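Here is a rough, self-contained sketch of those steps. The smoothing window, grouping distance and slope threshold are made-up defaults to tune, and `raw` is assumed to be a 1-D numpy array of the raw tracking positions; the function returns the indices of the points to keep:

import numpy as np
from scipy.signal import find_peaks, savgol_filter

def simplify(raw, window=15, poly=3, group_dist=5, slope_thresh=0.5):
    # 1. smooth the raw signal so the derivatives are more predictable
    smooth = savgol_filter(raw, window_length=window, polyorder=poly)

    # 2. local maxima and minima of the smoothed signal
    candidates = set(find_peaks(smooth)[0]) | set(find_peaks(-smooth)[0])

    # 3. + 4. smoothed first and second derivatives
    d1 = savgol_filter(np.diff(smooth), window_length=window, polyorder=poly)
    d2 = savgol_filter(np.diff(d1), window_length=window, polyorder=poly)

    # 5. local extrema of the second derivative
    candidates |= set(find_peaks(d2)[0]) | set(find_peaks(-d2)[0])

    # 6. + 7. group "close" candidates and keep one point per group
    idx = sorted(candidates)
    if not idx:
        return []
    groups, current = [], [idx[0]]
    for i in idx[1:]:
        if i - current[-1] <= group_dist:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return [_pick(g, smooth, d1, d2, slope_thresh) for g in groups]

def _pick(group, smooth, d1, d2, slope_thresh):
    if len(group) == 1:
        return group[0]
    mid = group[len(group) // 2]
    # d1/d2 are shorter than smooth because of np.diff, so clamp the index
    if abs(d1[min(mid, len(d1) - 1)]) > slope_thresh:
        return mid                                    # steep slope: take the middle point
    if d2[min(mid, len(d2) - 1)] < 0:                 # concave group: keep the max
        return max(group, key=lambda i: smooth[i])
    return min(group, key=lambda i: smooth[i])        # convex group: keep the min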
Here is an example of the raw data that I captured with the tracker.
Orange points are the ones before the last 2 filterings.
Green points are the final point set.
You can easily see the two filtering steps in action.
I think this can be improved, but it’s already pretty good for me; the manual fine-tuning was much easier.
The new updates have been pretty cool. I like the ability to delete the recent scenes during a scene change so that it doesn’t mess up a whole script segment, and the feature to add/remove complexity during scripts is pretty awesome.
The tracking has been a little better in my experience, I think it uses those point splines when there’s 2 actors moving?
I just have to exaggerate points and I can get a pretty solid script where I felt like I couldn’t before.
This update is revolutionary for me. I was bugged by the post-processing parameters before and now everything is crystal clear thanks to the UI. Thanks for all effort poured into this project!
I think it uses those point splines when there’s 2 actors moving?
Yes, the program uses the distance between the center points of the 2 tracking boxes (the point splines). Depending on the selected tracking metric, the program uses the x or y component of this vector. With distance you get the normalized length of the vector as the raw tracking data.
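For reference, my reading of that metric in sketch form (this is just how I understand the description, not the actual MTFG implementation; boxes are assumed to be (x, y, w, h) tuples and the normalization to 0-100 happens elsewhere):

import math

def center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def raw_tracking_value(box_a, box_b, metric="distance"):
    # vector between the two tracking box centers
    ax, ay = center(box_a)
    bx, by = center(box_b)
    dx, dy = bx - ax, by - ay
    if metric == "x":
        return dx
    if metric == "y":
        return dy
    return math.hypot(dx, dy)  # "distance": length of the vector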
Would it be possible to integrate some sort of “undo” function, so that after adjusting the processing menu of the tracking generator and applying the results, you can go back to that processing menu if you are not satisfied with the data inserted into the OFS ‘positions’ bar? Would this be possible, or would it have to be a function of OFS rather than the motion tracker plugin? Thanks in advance!
Idk your use case, but you can choose the Complex high number of data points (low number like 10 or less) and then later simplify the data points using OFS itself, and then just undo from there. There’s a setting in advanced or something called Simplify for a range of points. I’ve been doing that when I don’t like the result, usually if I find the range or speed to be insufficient, and I was going for realistic with high number of points.
Would it be possible to integrate some sort of “undo” function, so that after adjusting the processing menu of the tracking generator and applying the results, you can go back to that processing menu if you are not satisfied with the data inserted into the OFS ‘positions’ bar?
This should now be possible with v0.5.1 by using the Reprocess Data button in the MTFG OFS extension menu.