I used OFS today for about half an hour on a script I am working on. Some things I noticed:
One thing I noticed right away is the colors. I love the colors; they have an extra punch to them. Due to colorblindness I often struggle to tell whether something is orange or red, green or yellow, but these colors are way easier for me to see because of that extra punch. I can tell if it's red or orange, or green or yellow. They have so much clarity.
I like that I can change the size of the UI elements. I can set everything up how I want and optimize it for my needs.
It is so smooth when holding the left and right arrow keys. The video plays totally normally and doesn't stutter. Really good for getting an overview of a part and seeing the movements clearly.
What I'd like to see:
How much time is left in the video, so I know how much I still have to script. It could just be shown where the current timestamp of the video is.
Problem I found:
The simulator doesn't always move in real time. When I want to check whether a part is good or not, I hold the right arrow key to step from one point to the next. I want to check if everything is precise enough and if the movement fits my script. But the simulator stays at the value I started on and doesn't move smoothly forward to the next point.
It's not always like that though. It seems to occur when the part is "freshly scripted"; on the older parts the simulator moves in real time.
Ugh, I hope you get what I mean here. I don't really know how to explain it any differently. If not, I'll try to explain it better.
All credit goes to ImGui; it's just their default dark theme. It's an amazing library which enables all these things without me having to do anything.
There's also a light mode. If anybody cares about that, I could make my custom widgets (the script position bar and the heatmap timeline) also be affected by the style and add it to the options.
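Roughly, a custom widget just has to read its colors from the active style instead of hard-coding them. A minimal sketch of the idea (simplified, not the actual widget code; the function name and layout here are made up):

```cpp
#include "imgui.h"

// Sketch: a custom-drawn horizontal bar that picks up its colors from
// whatever ImGui style (dark/light/custom) is currently active.
// "DrawPositionBar" and "fraction" are illustrative names, not OFS APIs.
void DrawPositionBar(float fraction)
{
    ImDrawList* draw = ImGui::GetWindowDrawList();
    ImVec2 pos = ImGui::GetCursorScreenPos();
    ImVec2 size(ImGui::GetContentRegionAvail().x, 10.0f);

    // Colors come from the style, so switching Dark/Light restyles the widget too.
    ImU32 bg   = ImGui::GetColorU32(ImGuiCol_FrameBg);
    ImU32 fill = ImGui::GetColorU32(ImGuiCol_PlotHistogram);

    draw->AddRectFilled(pos, ImVec2(pos.x + size.x, pos.y + size.y), bg);
    draw->AddRectFilled(pos, ImVec2(pos.x + size.x * fraction, pos.y + size.y), fill);
    ImGui::Dummy(size); // reserve the space in the layout
}
```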
I have one idea/suggestion that might be quite a bit of work if you choose to implement it, but I think multi-track support (such as in audio editing/mixing software) would be an incredible feature.
Being able to script a sequence twice and then splice or merge the two tracks would let us try out multiple approaches and compare/improve them simply without destroying work or creating many redundant files.
Having multiple "tracks" within one script file would simplify things when scripters release multiple versions, such as with/without filler strokes, or simplified scripts intended for the Launch. (Obviously this would also require a player that can select between tracks.)
I suspect multi-axis scripting is eventually on its way, so having multiple tracks that could account for x, y, z, rotation axes, etc. will likely be useful. Even if you only had a single-axis device you could switch things up for variety. (Again, we would need a player that cooperates with this. It could even be set up so you can assign any track to any axis and do live "mixing".)
There are probably other uses too, but those are the ones that come to mind for now. I don’t really think there is any urgent need to develop this since multi-axis devices are far from ubiquitous and even the simpler features would require adjustments to funscript players. Just some food for thought as a “maybe someday” project if you are ever interested.
Here’s why (so it’s not just complaining)
The archive is only 25MB, when I have 87GB of videos with funscripts to match.
I appreciate that you're concerned about saving me an additional 25MB, but it's at the cost of making me go find, download and install a secondary component, without a link included to go get it in, like, three clicks. (You know, like when you have to go fetch your own copy of the LAME encoder for MP3 because the encoder isn't licensed for redistribution; at least there, LAME is linked so you can easily get it yourself and meet the licensing limitations.)
@AcademicInside Alright I will put it back in next time. I hope I can get rid of the dependency at some point.
@Hydra Not gonna lie, I don't see myself implementing any of those things.
Don't get me wrong, if this were a trivial addition I would do it, but this is very major stuff.
It has more to do with creating the custom GUI than the programming of the functionality.
Sadly nobody seems to have made an ImGui extension for multi-track editing yet.
Hey bud, trying to open the program but I'm getting a blank screen (see image below), and a terminal opens up with the following message: "Cannot load nvcuda.dll
DEBUG: Funscript changed!" Any idea as to what's causing this issue?
@gagax123 Haha, I completely understand and I fully admit to not having enough knowledge about this to even know the difficulty level of what I am asking for. Thanks for taking the time to read my suggestion!
@poet145x libmpv (the video player) is trying to load a hardware decoder which doesn't exist on your system. What GPU do you have?
I might have to add a switch to enable software decoding.
@poet145x nvcuda.dll is definitely part of the NVIDIA driver, so no wonder it's not working.
Right now I’m forcing hardware decoding which is actually not recommended.
I’m afraid you’ll have to wait for the next update.
I won't force hardware decoding anymore and will instead add an option to force it (it's really only relevant for 5K VR footage anyway).
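Something along these lines via the libmpv client API; this is just a rough sketch, not the actual OFS code, and the force_hw_decoding flag is a made-up name for the planned option:

```cpp
#include <mpv/client.h>

// Sketch: let the user opt in to hardware decoding instead of forcing it.
// "force_hw_decoding" is a hypothetical setting, not an existing OFS option.
void ConfigureDecoding(mpv_handle* mpv, bool force_hw_decoding)
{
    if (force_hw_decoding) {
        // Explicitly request hardware decoding (useful for 5K VR footage).
        mpv_set_option_string(mpv, "hwdec", "auto");
    }
    else {
        // Default: plain software decoding, so a missing hardware decoder
        // (e.g. no nvcuda.dll on the system) can't break playback.
        mpv_set_option_string(mpv, "hwdec", "no");
    }
}
```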
This is really great, nice update, I really like the key bindings. If I could make a recommendation, I wish it had an equalization function to make multiple points space out evenly. Another thing would be the ability to move points left or right as the video changes, so if you are a little early or late you could just move the point until it lines up.
Also, where are the manual snapshots saved by default? I can't find them.
OK, here's what I'm thinking; tell me if this is what you mean.
You select the points you want to equalize and execute the equalization function; the points get spaced evenly based on the distance from the first point to the last point divided by the number of gaps between the selected points.
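Roughly like this (a simplified sketch, not the actual implementation; FunscriptAction is stripped down here):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Simplified stand-in for a script point: a timestamp plus a position.
struct FunscriptAction {
    int32_t at;  // milliseconds
    int32_t pos; // 0-100
};

// Sketch of "equalize selection": keep the first and last selected points
// where they are and space everything in between evenly in time.
void EqualizeSelection(std::vector<FunscriptAction>& selection)
{
    if (selection.size() < 3) return; // nothing to redistribute

    std::sort(selection.begin(), selection.end(),
              [](const auto& a, const auto& b) { return a.at < b.at; });

    const int32_t first = selection.front().at;
    const int32_t last  = selection.back().at;
    // One gap per interval between points, not per point.
    const double gap = double(last - first) / double(selection.size() - 1);

    for (size_t i = 1; i + 1 < selection.size(); ++i) {
        selection[i].at = first + int32_t(gap * double(i) + 0.5);
    }
}
```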
I don't understand what you mean by "as the video changes".
The manual snapshot just creates a point you can return to via the undo system.
It's pretty much useless since everything should be automatically "snapshotted" right now anyway.
Out of curiosity, what did you expect them to be?
So for equalize: if you selected, say, 5 points and then equalized them, the first and the fifth would stay the same, but the rest would adjust so that there is an even distance between them. So this:
would become this…
I think we are on the same page on that.
For "as the video changes": currently, say I am 3 frames early on when someone bottoms out. Right now I have to go to the point and shift-right for three frames, but the video pane stays on the same frame. For one point that's not too bad, and fine if I know the exact number of frames I need to move, but after a couple thousand times it gets tiring.
What I would like to be able to do is go to that point, then when I am on it press a button that advances the point and the video frame by one, so that I am moving the point and the video in sync; when the video shows them bottoming out, my point matches. Much faster when manually adjusting each point.
I thought the manual snapshot was taking a screen grab of the frame, like Save Image in MPC, but I get it now.
@gagax123
Regarding the comment from fievel45 above:
Personally I find the frame where I want the point, then I just adjust the script by moving the (closest) point until it is aligned with the current frame. This works really well given that you can move the closest point without actually standing on it (that is why I requested it earlier). Please don't remove this way of editing. If there is no good way of combining the current move action with the suggestion from fievel45, then maybe add an option where the user can decide how it should behave.
A small but annoying thing: when you keybind using SHIFT it makes a difference whether you use the left or right shift key, but you don't see that in the UI. I first thought the move action was broken. It would be preferable if SHIFT meant any shift key. If that is difficult given the current mechanism of identifying key presses, then at least write LSHIFT or RSHIFT in the keybinding UI depending on which key is bound. Also, change the default value for the move actions to right shift + arrow keys (it is the left one now).
Question: Does this have support for “digital” gamepad controls? (That is, using the buttons on a controller to advance frames, add and move points, etc.)
@sentinel I’ve changed it so that left shift/ctrl & right shift/ctrl can be used interchangeably.
Left alt & right alt are still different though.
AltGr becomes Ctrl + Alt, which is a Windows thing apparently.
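For context, SDL reports the left and right modifier keys as separate flags, so the fix is basically normalizing the modifiers before comparing them against a binding. A rough sketch of the idea (not the exact OFS code; the helper names are illustrative):

```cpp
#include <SDL.h>

// Sketch: normalize SDL modifier flags so a binding saved with "Shift" or "Ctrl"
// matches both the left and the right key. Alt is left untouched here, mirroring
// the current behavior where left and right Alt stay distinct.
static SDL_Keymod NormalizeMods(SDL_Keymod mods)
{
    int out = mods;
    if (out & KMOD_SHIFT) out |= KMOD_SHIFT; // KMOD_SHIFT == KMOD_LSHIFT | KMOD_RSHIFT
    if (out & KMOD_CTRL)  out |= KMOD_CTRL;  // KMOD_CTRL  == KMOD_LCTRL  | KMOD_RCTRL
    return static_cast<SDL_Keymod>(out);
}

// A stored binding and a key event match if the key and normalized modifiers agree.
static bool BindingMatches(SDL_Keycode boundKey, SDL_Keymod boundMods,
                           SDL_Keycode eventKey, SDL_Keymod eventMods)
{
    return boundKey == eventKey &&
           NormalizeMods(boundMods) == NormalizeMods(eventMods);
}
```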
@fievel45 I've implemented the moving; the default binding is Ctrl + Shift + Left / Right.
I called it “Move actions left / right with snapping” as it tries to snap to the closest action that’s being moved.
Equalize selection is also going to be in the next update.