So I was wondering why a studio hasn't started using some kind of motion-tracking accelerometer to record the motion while shooting the scene. It would record every move, even those the cameras can't see, and could be built small enough to be worn on a belt, garter, panties, etc.
I've experimented with this, but it's a far, far harder problem than it first seems, for a variety of reasons at basically every level: getting good data, turning that into actual movement data, and turning that into scripts that are actually good.
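To illustrate the "turning that into actual movement data" part: position has to be recovered from acceleration by integrating twice, so even a tiny constant sensor bias grows quadratically in the position estimate. A minimal sketch with synthetic numbers (all values are made up for illustration, not real sensor data):

```python
import numpy as np

# Synthetic setup: a 1 Hz, 5 cm sinusoidal stroke, measured by an
# accelerometer with a small constant bias (typical of cheap MEMS parts).
fs = 100.0                    # sample rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples
true_acc = -0.05 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t)
bias = 0.01                   # 0.01 m/s^2, around 0.1% of gravity
measured_acc = true_acc + bias

def double_integrate(acc, fs):
    """Naive position estimate: integrate acceleration twice."""
    vel = np.cumsum(acc) / fs
    return np.cumsum(vel) / fs

# Same numerical scheme on both signals, so only the bias effect remains.
pos_true = double_integrate(true_acc, fs)
pos_meas = double_integrate(measured_acc, fs)
drift = abs(pos_meas[-1] - pos_true[-1])
print(f"position error after 10 s: {drift:.2f} m")  # ~0.50 m
```

Half a meter of drift in ten seconds, from a bias far smaller than gravity, on a 5 cm stroke. That's why raw accelerometer logs don't translate directly into usable movement data.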
I think the AI approach holds more promise because ultimately scripts aren’t designed to be a 1:1 match for movement so much as they are designed to provide a pleasurable experience that has enough synchronization to a video to fuse them in your brain. At the end of the day, the translation of on-screen movement to ‘pleasurable toy motion’ is an intuitive process done by scripters, and so is a good target for AI to replicate (rather than fixed motion-filtering algorithms from something like an accelerometer).
This would actually require both points (male and female) to be tracked for it to work even slightly.
But this isn't where it ends: once you start tracking at that level, you also want to consider the angles (if you're measuring, measure in multiple axes). Simple accelerometers don't work for that part anymore. And good sensors are still bulky and would disrupt the recording (they would be clearly visible).
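On the multi-axis point: getting angles means fusing a gyroscope (fast but drifting) with an accelerometer (slow but stable), since neither alone is usable. The classic minimal fix is a complementary filter; this is a sketch with synthetic readings, and the 0.98 weight is just a commonly used illustrative value:

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s) with an accelerometer-derived angle (deg).

    The gyro term tracks fast motion; the small accelerometer term keeps
    pulling the estimate back so gyro drift can't accumulate forever.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Synthetic stream: the true tilt is a steady 30 degrees, the gyro reports
# a constant 1 deg/s drift, and the accelerometer angle is noisy.
angle = 0.0
for step in range(2000):
    noisy_accel_angle = 30.0 + 0.5 * math.sin(step)  # +/-0.5 deg jitter
    angle = complementary_filter(angle, gyro_rate=1.0,
                                 accel_angle=noisy_accel_angle, dt=0.01)

print(f"estimated tilt: {angle:.1f} deg")  # settles close to 30 deg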
In the videos themselves they will always be visible; there is no 'hide it behind the dick'. In that position it's simply not going to be stable enough to measure properly (the sensor needs a firm grip).
It would only work in lesbian scenes with a strap-on, since in that case the entire strap-on can be built as a sensor. And even then its measurements will still require some fixing (but at least that's where AI can do a really good job, since it has both the sensor data and a good reference point for syncing).
The best thing they could try is some sort of dye that isn't moved by lube and can therefore give steady reference points, for example under a UV camera (I suspect IR isn't reliable because body heat interferes with it). Such a dye could be recorded by specialized equipment while staying invisible normally. But this still depends a lot on camera positioning, and is therefore still limited.
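If a dye like that showed up as bright blobs on a UV camera, per-frame tracking reduces to thresholding the frame and taking the centroid of the bright pixels. A toy sketch on a synthetic frame (the threshold value is an arbitrary assumption):

```python
import numpy as np

def marker_centroid(frame, threshold=200):
    """Return the (row, col) centroid of pixels brighter than `threshold`,
    or None if no marker is visible in this frame."""
    ys, xs = np.nonzero(frame > threshold)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 8-bit UV frame: dark background with one bright 4x4 dye spot.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:44, 70:74] = 255

print(marker_centroid(frame))  # (41.5, 71.5)
```

Stringing these centroids together over frames gives a motion trace, but only while the marker stays in view of the camera, which is exactly the positioning limitation above.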
I think the effort here isn't worth the gain when AI can generally reach about 90% accuracy and the remaining 10% is barely noticeable. Sure, it could be taken to 97%, which would be nice, but only if the recording methods aren't disruptive.
Technically this shouldn't be part of the script itself but of the script player, or stored as metadata above the script (an amplifier value), so the script stays pure raw data. In the current state that obviously no longer works, since a lot of scripts are hand-adjusted.
But even then, you could have a funscript v2 that is backward compatible but supports this information. Funscripts are plain JSON, so adding data isn't too hard.
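A sketch of what that backward compatibility looks like: extra metadata sits next to the existing "actions" array, so a v1 player that only reads "actions" is unaffected. The "amplifier" and "multiaxis" field names here are just illustrations, not a proposed spec:

```python
import json

# A minimal funscript: "actions" holds the raw position-over-time data
# ("at" in milliseconds, "pos" 0-100) that existing players understand.
script = {
    "version": "1.0",
    "actions": [
        {"at": 0, "pos": 0},
        {"at": 500, "pos": 100},
        {"at": 1000, "pos": 0},
    ],
}

# Hypothetical v2: additional metadata alongside the untouched raw data.
script_v2 = dict(script)
script_v2["metadata"] = {"amplifier": 0.8, "multiaxis": False}

decoded = json.loads(json.dumps(script_v2))
print(decoded["actions"] == script["actions"])  # True: raw data untouched
```

An old player ignores the unknown "metadata" key entirely; a new player can apply the amplifier on top of the raw positions at playback time.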
However… in the end, no matter how much AI is involved, many aspects can't be copied over anyway (the vaginal internals also move and can compress; AI can't guess these reliably yet, and potentially never will, since it would need a clue on the skin surface as a reference point).
So at this point, trying to mimic the actual scene isn't really needed. Maybe keeping it simple is the best solution here. Even if the AI makes mistakes or exaggerates things a bit, it's probably just fine.
Things are rarely as easy as just adding a sensor.
Not just the answer above, but remember that porn shoots are just that: shoots. There are multiple takes, cuts and all that, so you'd also have to pay someone to piece together all that data just to match your final edit.
Makes far more sense to just do it with the final edited video.
This can be handled by software relatively easily. It's analog data that is essentially the same as audio (even if the audio gets digitally recorded), and by using a keyframe check you can automate the syncing.
You remember that board they click closed, with the scene number etc. written on it? That exists for the purpose of syncing audio. Adding other detectors to it shouldn't be too difficult; if it's a light detector, you can make it flash brightly in that IR/UV band.
And since it's known that drift happens in signals and recordings, you can also always make sure that after the usual point where they say 'cut', the click is performed again, as this gives a reference point at the end too.
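That clap-based syncing can be automated with a plain cross-correlation: the clap shows up as a sharp spike in both the camera audio and the sensor log, and the lag that maximizes their correlation is the offset to apply. A sketch on synthetic spike trains:

```python
import numpy as np

def find_offset(reference, recording):
    """Return how many samples `recording` lags behind `reference`,
    using full cross-correlation of the two signals."""
    corr = np.correlate(recording, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# Synthetic streams: a clap spike at sample 100 in the camera audio, and
# the same spike at sample 130 in the sensor log (30 samples late).
audio = np.zeros(1000)
audio[100] = 1.0
sensor = np.zeros(1000)
sensor[130] = 1.0

print(find_offset(audio, sensor))  # 30
```

With a clap at both the start and the end of a take, you get two such offsets, which also lets you estimate and correct linear drift between the two recordings.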
Sure, the editing will be in the final piece, but that's irrelevant when the recording of that data is automated. And this is something the movie industry has already optimized for decades.
The real problem is still automating that tracking/recording to begin with. There are no tools available for it yet, and it suffers from strict limits on the angles at which recording is even possible (a position change can easily throw it off, and is very likely to happen).