I run Vanilla OS. Vanilla OS supports the installation of Nix packages via its own package manager “apx”. Would it be possible to provide such a package?
A release in the official nixpkgs is not planned by me as the administrative effort is high and the additional benefit is low.
However, a quick test in a Vanilla OS 2.0 VM showed me that it is possible to install the Nix package manager. I was able to use the commands from the Ubuntu installation instructions 1:1 and launch the program in the VM.
Also, if I might add to it: is there any way we can filter out low-frequency sine waves like in the picture? It’s currently caused by camera movement.
Since version v0.5.5 there are some buttons in the post-processing area of the MTFG OFS window that can help you remove some camera movement. These functions were originally created by quickfix in post-processing-for-ofs-mtfg-lua-script.
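To give an idea of what such a filter does conceptually (the actual buttons are implemented in the Lua post-processing script; this is only a Python sketch with hypothetical names): removing camera sway is essentially high-pass filtering the tracked positions, e.g. with a Butterworth filter from SciPy.

```python
# Conceptual sketch only, not MTFG's actual implementation: remove
# low-frequency camera sway from funscript actions via a high-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

def remove_camera_sway(actions, cutoff_hz=0.25, fs=50.0):
    """actions: list of {"at": ms, "pos": 0..100} dicts (hypothetical input).
    cutoff_hz: oscillations slower than this count as camera movement.
    fs: uniform resampling rate in Hz (digital filters need a fixed rate)."""
    t = np.array([a["at"] for a in actions]) / 1000.0   # timestamps in seconds
    p = np.array([a["pos"] for a in actions], dtype=float)

    # Resample onto a uniform time grid before filtering.
    t_u = np.arange(t[0], t[-1], 1.0 / fs)
    p_u = np.interp(t_u, t, p)

    # 2nd-order Butterworth high-pass, applied zero-phase with filtfilt
    # so the filtered curve is not shifted in time.
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="highpass")
    p_f = filtfilt(b, a, p_u)

    # The high-pass output is centered on 0; shift back into 0..100.
    p_f = np.clip(p_f + 50.0, 0, 100)
    return [{"at": int(ts * 1000), "pos": int(round(ps))}
            for ts, ps in zip(t_u, p_f)]
```

The cutoff is the main knob: the slow sine waves from camera movement should sit well below the stroke frequency, so a low cutoff removes them while keeping the strokes.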
```
run workaround for ubuntu
non-network local connections being added to access control list
Warning: Force QT_QPA_PLATFORM=xcb for better user experience
playsound is relying on another python subprocess. Please use `pip install pygobject` if you want playsound to run more efficiently.
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
(process:34496): Gtk-WARNING **: 06:00:16.226: Locale not supported by C library.
Using the fallback 'C' locale.
(python3.9:34496): dbind-WARNING **: 06:00:16.289: AT-SPI: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
qt.glx: qglx_findConfig: Failed to finding matching FBConfig for QSurfaceFormat(version 2.0, options QFlags<QSurfaceFormat::FormatOption>(), depthBufferSize -1, redBufferSize 1, greenBufferSize 1, blueBufferSize 1, alphaBufferSize -1, stencilBufferSize -1, samples -1, swapBehavior QSurfaceFormat::SingleBuffer, swapInterval 1, colorSpace QSurfaceFormat::DefaultColorSpace, profile QSurfaceFormat::NoProfile)
No XVisualInfo for format QSurfaceFormat(version 2.0, options QFlags<QSurfaceFormat::FormatOption>(), depthBufferSize -1, redBufferSize 1, greenBufferSize 1, blueBufferSize 1, alphaBufferSize -1, stencilBufferSize -1, samples -1, swapBehavior QSurfaceFormat::SingleBuffer, swapInterval 1, colorSpace QSurfaceFormat::DefaultColorSpace, profile QSurfaceFormat::NoProfile)
Falling back to using screens root_visual.
```
after running:
```
nix run github:michael-mueller-git/OFS --refresh --impure
```
it shows this error message:
```
error:
… while setting up the build environment
error: bind mount from '/etc/resolv.conf' to '/nix/store/sna21qf2lixychxfh4g93xmvmnc1p5j3-OFS-f5c8f69.drv.chroot/root/etc/resolv.conf' failed: No such file or directory
```
If you have any idea, help is welcome. Otherwise I’ll maybe run OFS under Windows.
I also added the OFS stuff to my mtfg cachix cache. In theory, if you execute the command now while using that cache, you shouldn’t have to build anything.
Which exact version of Vanilla OS are you using (Settings : System : Info)? I just find it strange that it behaves completely differently for you than it does for me in my test Vanilla OS 2 VM. Do you have any special setup or settings on your Vanilla OS? What kind of hardware are you running the system on (mainly interested in your GPU)? I was able to enter all commands in the “Black Box” terminal, which uses VSO v2, and there were no problems.
I had an idea for how this tool could be much, much more useful for scripting a whole video with different scenes.
So, the main reason why you can’t auto-generate an entire script with this is scene changes.
So, it would be cool if you could prep your script by adding markers throughout the video, and having the tool let you select the tracking for each scene.
E.g., a 10-minute video:
You might have a scene starting at 0:30 to 3:00, then 4:00 to 10:00. (3:00-4:00 is filler or you don’t want it to script with the motion tracker)
You’d create “markers” by placing points at 0:30, 3:00, and 4:00.
The tool would show the video at 0:00 and ask “Script or skip this scene?” You hit skip.
Then it will show the video at 0:30 and ask “Script or skip this scene?” You hit script, and then select the scene.
Then, before processing the entire scene with the AI tracker setup you made, the tool will ask if you want to script the segment at 3:00-4:00. You say no. Then it asks for 4:00 to 10:00, and you say yes, and set up the motion tracking.
Then, it will run the motion tracker and script the parts 0:30-3:00 and 4:00-10:00.
The default behavior would be that the tool stops at each marker you place along the script, but it would be great to have the flexibility of prepping multiple scenes, then having the AI go at it for like an hour at a time across all the scenes. Then maybe it could show you the final result preview sequentially.
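To make the proposed flow concrete, here is a rough Python sketch of the marker workflow (entirely hypothetical; nothing like this exists in the tool yet):

```python
# Hypothetical sketch of the proposed marker workflow: markers split the
# video into segments, you decide per segment whether to script it, then
# the tracker runs over all selected segments in one unattended batch.
markers_s = [30, 180, 240]   # markers at 0:30, 3:00 and 4:00
video_end_s = 600            # 10:00

def fmt(s):
    return f"{s // 60}:{s % 60:02d}"

# Build segments from the markers plus the video start/end.
bounds = [0] + markers_s + [video_end_s]
segments = list(zip(bounds[:-1], bounds[1:]))

selected = []
for start, end in segments:
    answer = input(f"Script {fmt(start)}-{fmt(end)}? [y/n] ")
    if answer.strip().lower().startswith("y"):
        # The real tool would let you place the trackers here.
        selected.append((start, end))

# Batch phase: run the motion tracker over every selected segment.
for start, end in selected:
    print(f"tracking {fmt(start)}-{fmt(end)} ...")
```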
There is a tool called “pyscenedetect” that can read a video and generate a list of time codes where scene changes are found. I am working on an experimental Python script + Lua extension for OFS to run this scene detection before starting to script a new video, and import the scene changes as alternating 0/100 actions. This way you can quickly step through the scenes and decide if motion tracking will work or not.
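As a sketch of that approach (not the author’s actual script): PySceneDetect’s Python API (0.6+) can produce such a list directly, and writing the cuts out as alternating 0/100 actions is only a few lines. The file names below are placeholders.

```python
# Sketch only: detect scene cuts with PySceneDetect (>= 0.6 API) and write
# them as alternating 0/100 funscript actions for stepping through in OFS.
import json
from scenedetect import detect, ContentDetector

def scenes_to_funscript(video_path, out_path):
    # ContentDetector flags cuts based on frame-to-frame content changes.
    scene_list = detect(video_path, ContentDetector())

    actions = []
    for i, (start, _end) in enumerate(scene_list):
        actions.append({
            "at": int(start.get_seconds() * 1000),  # cut position in ms
            "pos": 0 if i % 2 == 0 else 100,        # alternate 0/100
        })

    with open(out_path, "w") as f:
        json.dump({"version": "1.0", "actions": actions}, f)

scenes_to_funscript("video.mp4", "video.funscript")  # placeholder names
```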
In my experience, you need to keep watching the trackers, as they sometimes get lost. Also, some manual post-processing is almost always needed, and it is best done immediately after tracking a scene.
Often I see sections in a video where the scene alternates between two or three camera viewpoints. It would be helpful if, on a scene change, the tracker would skip to the next scene and try to continue tracking from there before giving up.
I changed to one tracker, and because of movements the curve height and position are always wrong anyway, so I don’t need exact distance; I just need a reasonably good end of movement and direction. That’s when your minmax macro comes into play to correct this. In my opinion, not one tracker should be active but, let’s say, 5 per actor, always using the two or three that make sense for the calculation. That way some of them could leave the screen and sometimes come back.
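A rough sketch of how that combination could look (hypothetical, not how MTFG currently works): per frame, keep only the trackers that are still visible and take the median, so a tracker leaving the screen simply drops out of the calculation.

```python
# Hypothetical sketch of the "5 trackers per actor" idea: combine only the
# trackers that are currently visible, so some may leave the frame and return.
import numpy as np

def combine_trackers(positions):
    """positions: per-frame y-coordinates of all trackers for one actor,
    None where a tracker is lost or has left the frame."""
    visible = [p for p in positions if p is not None]
    if len(visible) < 2:
        return None  # too few trackers left to trust the result
    # The median ignores a single tracker that drifted off target.
    return float(np.median(visible))

# Example frame: 5 trackers, two of them currently lost.
print(combine_trackers([120.0, None, 118.5, None, 121.0]))  # -> 120.0
```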
If camera-change detection made it possible to set the new trackers automatically, then we would not have to set them manually at all, right?