Film Reel - preprocessing video for scripting

This is ongoing research :man_scientist:

Based on this GitHub issue, I would like to share something and see if people find it useful.
TBH I’m not convinced this is useful.

The idea is to have a few frames in the future and a few frames in the past visible at all times.
You should get the idea by looking at the screenshot in the issue.

lmao turns out my Python script was completely unnecessary and you can achieve this with a single ffmpeg command.

So just grab ffmpeg from somewhere and execute the following command:

```
ffmpeg -i input.mp4 -filter_complex "tile=5x1:overlap=4" output.mp4
```

For VR you can crop out the left eye like this:

```
ffmpeg -i input.mp4 -filter_complex "crop=in_w/2:in_h:0:0,tile=5x1:overlap=4" output.mp4
```

The 5 specifies the number of frames placed side by side. It only makes sense to pick odd numbers, so that the current frame sits in the middle with equal context on each side. The overlap is always the number of frames minus one.
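
For example, to widen the strip to seven frames (three past, three future), the same pattern should apply:

```
ffmpeg -i input.mp4 -filter_complex "tile=7x1:overlap=6" output.mp4
```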

I verified that it’s working by using it on this frame sync test video.

The only concern I have is that it creates a really wide film strip, effectively zooming the video out. At times it might be hard to notice a difference between adjacent frames if the images are too small.

Another thing: you don’t actually have to set FRAMERATE to the actual framerate of the video. By setting it to something lower you end up with a video where some frames are omitted, but the timing still matches.
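
If you use the ffmpeg approach instead, the fps filter should do the same job by dropping frames before tiling (a sketch, assuming you only want 10 frames per second):

```
ffmpeg -i input.mp4 -filter_complex "fps=10,tile=5x1:overlap=4" output.mp4
```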

Well no wonder you think it’s not useful if you believe that.
80% of my time scripting goes to making sure the dots are on the correct frame: test it, then correct the dots that are still on the wrong frame. At only 60 FPS the difference between two frames is pretty big. I guess it also matters which toy you use when it comes to accuracy.

Sometimes I even manually drag them ‘between’ two frames if neither of the two matches up correctly.

But thanks for the instructions! I’ll see how far I get; I’ve never used the Python 3 interpreter.

I already start off by making an uncompressed YUV video file and timestamping each frame for the scenes I script anyway. It eliminates the delay when going back and forth between frames or scrubbing; it’s instant.
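
Something along these lines should produce such a file (a sketch, not necessarily the exact workflow described above; it assumes an ffmpeg build with drawtext enabled, and burns the frame number and timestamp into the corner):

```
ffmpeg -i input.mp4 -vf "drawtext=text='%{n} %{pts\:hms}':x=10:y=10:fontsize=36:fontcolor=white" -c:v rawvideo -pix_fmt yuv420p uncompressed.avi
```

Beware the file size: raw yuv420p at 1080p60 is on the order of 11 GB per minute.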

I updated the thread. You actually only need a single ffmpeg command to do this. :joy:

I thought about having multiple frames visible to find the peaks and bottoms more easily and quickly, but seeing this I don’t think it will help. I believe it would require frames cropped to the relevant scripting area for this to be remotely useful. It’s simply too hard to see the difference between adjacent images when they are so small and there is so much irrelevant stuff in each frame.

What do you mean by “only 60 FPS”? Basically every VR video (and probably 2D too) is 60 FPS (sometimes just 30 FPS), which means the time between frames is as little as 16.7 ms.

I didn’t mean the time difference, I meant the difference between the images. Especially when doing PMVs with short cuts, speed-ups, overlays, etc. (I do the Mutiny VR PMVs.)

For example, I have to know what happens before and after to make the transitions (if there are any) flow into each other. Right now much of my time is ‘wasted’ playing the same handful of frames back and forth. (With an uncompressed source video you can play it backwards smoothly just by holding the left arrow.)

I always find it baffling that other people decide for me what is useful or not. I’m not demanding this feature be made, but if I ask for it, I have a use for it. I’m also surrounded by big-ass monitors and heaps of GPUs; space enough to display whatever I need.

In my experiments with OFS I’ve had the same frustration of scrolling back and forth to find the “correct” frame (sufficiently frustrating that I would rather spend my time automating funscripting), and had the same idea that things would go much faster if there were more context available, i.e. if I could see frames in a timeline instead of a video.

As for implementation, I was thinking of an app that calls FFMPEG in a background thread to extract frames, scaled down and trimmed to a range that pages as the user scrolls forwards/backwards. FFMPEG would save the frames to a RAM disk (/dev/shm on civilised OSes), the scripter app would load them from there, and each image would be deleted once loaded.
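
The FFMPEG half of that might look roughly like this (a hypothetical sketch: the seek point, page size, thumbnail width, and output directory are all made-up values):

```
mkdir -p /dev/shm/reel
# seek to 120 s, grab a 50-frame page of 320-px-wide thumbnails,
# numbering files from frame 7200 (120 s at 60 FPS)
ffmpeg -ss 120 -i input.mp4 -frames:v 50 -vf scale=320:-2 -start_number 7200 /dev/shm/reel/frame_%06d.png
```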

I suspect this is sufficiently different from OFS that it should be a separate app.

The quickest test of whether this would help is to extract frames from a video and open them in an image viewer gallery. If you can quickly see where the end of strokes should go, it would help. If not, it wouldn’t.
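
Something like this would dump ten seconds of frames for such a gallery test (a sketch; adjust the seek point, fps, and size to taste):

```
mkdir -p frames
ffmpeg -ss 60 -i input.mp4 -t 10 -vf "fps=30,scale=480:-2" frames/%04d.png
```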

You wouldn’t happen to have the videos/scripts saved somewhere, would you? These were taken off the site recently.