I just found that it is sometimes the best choice to only use one tracker instead of two.
I followed the installation guide to use it on Ubuntu, but when I run nix run ..., I get the following error and OFS won't start:
error: unable to execute '/nix/store/3v7rhdhi2qfq3a02i8bkmpz33m4zsi7l-OpenFunscripter-4.0.1-nixgl-wrapper/bin/OpenFunscripter': Exec format error
When I use sudo nix run ..., OFS starts but MTFG doesn't.
Even as a regular (non-root) user, the command nix run github:michael-mueller-git/Python-Funscript-Editor --refresh -- --generator
displays a dialog box saying “Video file was not specified!”, so I think Python-Funscript-Editor is able to start.
I selected the multi-user install of the Nix package manager, but I get the same error with the single-user install. This is my first time using Nix, and I don't really know what's going on. What should I do?
I adjusted the code so that it is compatible with Ubuntu now. (With commit 99e9874 I had unfortunately introduced an incompatibility with some Linux distributions; this should now be fixed.) Can you please try again? If it is still not working for you, please give me the following details:
- Exact Ubuntu Version
- GPU
I have tested the adjusted code in an Ubuntu 24.04 VM. Below is my setup process:
- Create VM:
quickget ubuntu 24.04
echo "disk_size=\"60G\"" >> ubuntu-24.04.conf
quickemu --vm ubuntu-24.04.conf
- Use the GUI installer with default settings to install Ubuntu.
- Install Nix in the Ubuntu VM:
sudo apt update
sudo apt install curl
sh <(curl -L https://nixos.org/nix/install) --daemon
reboot # apply nix setup
- Enable flakes:
mkdir ~/.config/nix
echo "experimental-features = nix-command flakes" > ~/.config/nix/nix.conf
exit # close current shell
- Create the MTFG dependency cache:
nix run github:michael-mueller-git/Python-Funscript-Editor --refresh -- --generator
When the command is executed on the PC for the first time, it will take a few minutes to compile the custom OpenCV dependency. When the command succeeds, you will see the message box "Video file was not specified".
- Compile and run OFS:
nix run github:michael-mueller-git/OFS --refresh --impure
It now works without any problems. Thank you for your quick response.
I run Vanilla OS. Vanilla OS supports the installation of Nix packages via its own package manager "apx". Would it be possible to provide such a package?
A release in the official nixpkgs is not planned by me as the administrative effort is high and the additional benefit is low.
However, a quick test in a Vanilla OS 2.0 VM showed me that it is possible to install the Nix package manager there. I was able to use the commands from the Ubuntu installation instructions 1:1 and launch the program in the VM.
Got a potential improvement:
When showing this window, show a frame corresponding to the first point and the last point, to more easily pinpoint a change in scene.
Also, if I might add to it: is there any way we can filter out low-frequency sine waves like in the picture? It's currently caused by camera movement.
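To make the frame-preview suggestion more concrete, here is a rough OpenCV sketch of what I mean by showing the frames at the first and the last point (the file name and timestamps are just example values, not anything from the project):

import cv2

def grab_frame(video_path, timestamp_ms):
    # open the video, seek to the requested time and read a single frame
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, timestamp_ms)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

first = grab_frame("example.mp4", 30_000)   # frame at the first point
last = grab_frame("example.mp4", 180_000)   # frame at the last point
if first is not None and last is not None:
    # show both frames side by side so a scene change between them is obvious
    cv2.imshow("first point vs last point", cv2.hconcat([first, last]))
    cv2.waitKey(0)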
after executing:
nix run github:michael-mueller-git/Python-Funscript-Editor --refresh -- --generator
the shell gets stuck at 100% while building the OpenCV dependencies, and the whole computer becomes unresponsive.
Any ideas?
One way such behavior can occur is if your RAM is full.
Check with watch free -h in a second terminal while executing the nix command.
I can't think of any other reasons why the system becomes unresponsive.
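If the RAM really is the bottleneck, you could also try to limit how much Nix builds in parallel. I have not tested this for the OpenCV build, but these are standard Nix options, so something like this might reduce the memory pressure:
nix run github:michael-mueller-git/Python-Funscript-Editor --refresh --option max-jobs 1 --option cores 2 -- --generator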
Also, if I might add to it: is there any way we can filter out low-frequency sine waves like in the picture? It's currently caused by camera movement.
Since version v0.5.5 there are some buttons in the MTFG OFS window, in the post-processing area, that can help you remove some camera movement. These functions were originally created by quickfix in post-processing-for-ofs-mtfg-lua-script.
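The basic idea behind this kind of camera-movement compensation is to estimate the slow, low-frequency part of the tracked signal and subtract it, keeping only the fast motion. Just to illustrate the concept (this is not the actual MTFG post-processing code):

import numpy as np

def remove_slow_drift(positions, window=121):
    # positions: tracked value per frame, window: smoothing length in frames (odd number)
    positions = np.asarray(positions, dtype=float)
    kernel = np.ones(window) / window
    drift = np.convolve(positions, kernel, mode="same")  # low-frequency part (camera movement)
    fast = positions - drift                             # keep only the fast motion
    # rescale back to the 0..100 range used for funscript positions
    fast -= fast.min()
    if fast.max() > 0:
        fast = fast / fast.max() * 100
    return fast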
You could also try to use my cachix cache:
sudo /nix/var/nix/profiles/default/bin/nix-shell -p cachix
cachix use mtfg
exit # leave root shell
Then try the command again:
nix run github:michael-mueller-git/Python-Funscript-Editor --refresh -- --generator
it shows this window:
and in the terminal it gives the following output:
run workaround for ubuntu
non-network local connections being added to access control list
Warning: Force QT_QPA_PLATFORM=xcb for better user experience
playsound is relying on another python subprocess. Please use `pip install pygobject` if you want playsound to run more efficiently.
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
(process:34496): Gtk-WARNING **: 06:00:16.226: Locale not supported by C library.
Using the fallback 'C' locale.
(python3.9:34496): dbind-WARNING **: 06:00:16.289: AT-SPI: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
qt.glx: qglx_findConfig: Failed to finding matching FBConfig for QSurfaceFormat(version 2.0, options QFlags<QSurfaceFormat::FormatOption>(), depthBufferSize -1, redBufferSize 1, greenBufferSize 1, blueBufferSize 1, alphaBufferSize -1, stencilBufferSize -1, samples -1, swapBehavior QSurfaceFormat::SingleBuffer, swapInterval 1, colorSpace QSurfaceFormat::DefaultColorSpace, profile QSurfaceFormat::NoProfile)
No XVisualInfo for format QSurfaceFormat(version 2.0, options QFlags<QSurfaceFormat::FormatOption>(), depthBufferSize -1, redBufferSize 1, greenBufferSize 1, blueBufferSize 1, alphaBufferSize -1, stencilBufferSize -1, samples -1, swapBehavior QSurfaceFormat::SingleBuffer, swapInterval 1, colorSpace QSurfaceFormat::DefaultColorSpace, profile QSurfaceFormat::NoProfile)
Falling back to using screens root_visual.
after running:
nix run github:michael-mueller-git/OFS --refresh --impure
it shows this error message:
error:
… while setting up the build environment
error: bind mount from '/etc/resolv.conf' to '/nix/store/sna21qf2lixychxfh4g93xmvmnc1p5j3-OFS-f5c8f69.drv.chroot/root/etc/resolv.conf' failed: No such file or directory
If you have any ideas, help is welcome. Otherwise I'll maybe run OFS under Windows.
The output from MTFG looks good so far.
I also added the OFS stuff to my mtfg cachix cache. In theory, if you execute the command now while using my mtfg cachix cache, you shouldn't have to build anything.
Which exact version of Vanilla OS are you using (Settings : System : Info)? I just find it strange that it behaves completely differently for you than it does for me in my test Vanilla OS 2 VM. Do you have any special setup or settings on your Vanilla OS? What kind of hardware are you running the system on (I'm mainly interested in your GPU)? I was able to enter all commands in the "Black Box" terminal, which uses VSO v2, and there were no problems.
Thank you, now it works.
I had an idea for how this tool could be much, much more useful for scripting a whole video with different scenes.
The main reason why you can't auto-generate an entire script with this is scene changes.
So, it would be cool if you could prep your script by adding markers throughout the video and have the tool let you select the tracking for each scene.
For example, a 10-minute video:
You might have a scene from 0:30 to 3:00, then another from 4:00 to 10:00. (3:00-4:00 is filler, or something you don't want to script with the motion tracker.)
You’d create a “marker” by placing a point at 0:30, 3:00, 4:00.
The tool would show the video at 0:00 and ask “Script or skip this scene?” You hit skip.
Then it will show the video at 0:30 and ask “Script or skip this scene?” You hit script, and then select the scene.
Then, before processing the entire scene with the AI tracker you set up, the tool will ask if you want to script the segment at 3:00-4:00. You say no. Then it asks for 4:00 to 10:00, and you say yes and set up the motion tracking.
Then it will run the motion tracker and script the parts 0:30-3:00 and 4:00-10:00.
The default behavior is that the tool stops when you place markers along the script, but it would be great to have the flexibility of prepping multiple scenes and then letting the AI go at it for an hour or so, scene by scene. Then maybe it could show you your final result preview sequentially.
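Just to make the marker idea concrete, this is roughly how I imagine the markers being turned into the segments the tool asks about (only an illustration, not existing code):

def segments_from_markers(markers, video_length):
    # markers and video_length in seconds; returns the (start, end) segments to ask about
    bounds = [0.0] + sorted(markers) + [float(video_length)]
    return list(zip(bounds[:-1], bounds[1:]))

# 10 minute video with markers at 0:30, 3:00 and 4:00
print(segments_from_markers([30, 180, 240], 600))
# -> [(0.0, 30), (30, 180), (180, 240), (240, 600.0)]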
There is a tool called "pyscenedetect" that can read a video and generate a list of time codes where scene changes are found. I am working on an experimental Python script + Lua extension for OFS to run this scene detection before starting to script a new video, and import the scene changes as alternating 0/100 actions. This way you can quickly step through the scenes and decide if motion tracking will work or not.
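A rough sketch of that idea (assuming PySceneDetect's Python API; the file names are placeholders and this is not the final script or extension):

import json
from scenedetect import detect, ContentDetector

# detect scene changes; returns a list of (start, end) timecodes, one pair per scene
scenes = detect("video.mp4", ContentDetector())

# export the scene starts as alternating 0/100 actions so they are visible in OFS
actions = []
for i, (start, _end) in enumerate(scenes):
    actions.append({
        "at": int(start.get_seconds() * 1000),  # funscript timestamps are in milliseconds
        "pos": 0 if i % 2 == 0 else 100,
    })

with open("video.scene_markers.funscript", "w") as f:
    json.dump({"actions": actions}, f)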
In my experience, you need to keep watching the trackers, as they sometimes get lost. Also, some manual post-processing is almost always needed, and it is best done immediately after tracking a scene.
Often I see sections in a video where the scene alternates between two or three camera viewpoints. It would be helpful if on a scene change, the tracker would skip to the next scene and try to continue tracking from there before giving up.
Yeah, kinda true. Gotta find something to do in the meantime while scripting I guess.
I changed to one tracker, and because of movements the curve height and position are always wrong anyway, so I don't need an exact distance; I just need a reasonably good end of movement and direction. That's when your minmax macro comes into play to correct this. In my opinion there should not be just one active tracker but, let's say, 5 per actor, always using the two or three that make sense for the calculation. That way some of them could leave the screen and sometimes come back.
If a camera change made it possible to set the new trackers, then we would not have to set them manually at all, right?