I’ve tried a couple of times to script PMVs and the like, but I always have difficulty interpreting the waveform function in OFS. I’ve looked at quite a lot of funscripts from other people and tried to find a pattern for how people find the right audio “keys”.
For example, while looking at “noodledude - kawaii vs goth”, scripted by the amazing @AutomaticLove, there comes a part like this:
and while I can see some of the individual “ups”, I can’t really tell where one sound ends and the next one begins. Playing back the audio gives me a slightly better idea of how it should look, but aligning the beat to the motion then takes me an enormous amount of trial and error. (Maybe I have just answered my own question here and that’s all there is to it.)
My project was to properly script this HMV: NETOxNETO HMV | Iwara, and when it comes to a bit like this:
I made the error of linking the beginning and end of a stroke to two beats instead of one (corrected)
but correcting it this way, by reusing the previously made keypoints (which might’ve just been off) and looking at the waveform, I feel this would be the correct way
so that the motion begins prior to the visible beat and ends during it, which is what I see in most audio-based scripts.
So I guess my question is: how do people align individual beats during busy sections of a track, where the waveform is (at least in my eyes) not sufficient to pin down a single action?
I’m not an expert when it comes to scripting
But there is an option to scale the waveform (in the right-click menu) that should help when it’s as shallow as in the first picture.
Another thing that comes to mind is using tempo mode to get consistent intervals.
However, you can only use tempo mode if you know the BPM of the music.
It’s located in the “Mode” window: instead of the default “Frame”, you select “Tempo”.
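If it helps to picture what a tempo grid gives you, here’s a rough Python sketch of the idea (nothing OFS-specific, and the 128 BPM and 1.25 s offset are just made-up example numbers): a known BPM plus the time of the first beat turns into evenly spaced timestamps you can snap to.

```python
# Rough sketch of what a tempo grid gives you: evenly spaced beat
# timestamps from a known BPM and a chosen starting offset.
# The BPM, offset and duration used below are made-up example values.

def beat_grid_ms(bpm: float, first_beat_ms: float, duration_ms: float, subdivide: int = 1):
    """Return timestamps (in ms) for every beat (or subdivision) up to duration_ms."""
    step = 60000.0 / bpm / subdivide   # milliseconds per (sub)beat
    grid = []
    t = first_beat_ms
    while t <= duration_ms:
        grid.append(round(t))
        t += step
    return grid

# Example: 128 BPM track whose first beat lands at 1.25 s
print(beat_grid_ms(bpm=128, first_beat_ms=1250, duration_ms=5000))
```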
All the ones shown here are scaled to the max, with the action window greatly enlarged. The 10x scale is very helpful, but as you’ve seen with the first screenshot, even that can only go so far. I’ll give tempo mode a try, sounds promising.
In case I have to rely on audio, I generally slow down the video playback, just listen, and then simply press a single button on the actual beat. By paying attention to just one beat at a time, matching it becomes easier. The slower speed also increases accuracy and makes faster beats easier to track (note that even a slowdown of just 20% does a lot here!).
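Rough back-of-the-envelope on why the slowdown helps (the 50 ms reaction jitter below is just an assumed number): if your tap lands a fixed amount of real time off the beat, that error shrinks on the video timeline in proportion to the playback speed.

```python
# A tap that is off by `reaction_jitter_ms` of real time maps to a smaller
# error on the video timeline when playback is slower than 1.0x.
# The 50 ms jitter is an assumed, illustrative value.

def timeline_error_ms(reaction_jitter_ms: float, playback_speed: float) -> float:
    """Error on the video timeline for a given real-time tap error."""
    return reaction_jitter_ms * playback_speed

for speed in (1.0, 0.8, 0.5):
    print(f"{speed:.1f}x playback -> ~{timeline_error_ms(50, speed):.0f} ms error on the timeline")
```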
You can even mark different audio cues by doing this multiple times with a different button (bass as 1, hi-hats as 2, wave effect start by pressing 3, etc.). While these won’t be the action itself, they are perfect for alignment.
In OFS you can then just click on the cue and change it by pressing a different digit, and even use the arrow keys to offset it in a reliable way.
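If you ever want to apply that kind of reliable offset in bulk, funscripts are plain JSON (“actions” is a list of {"at": milliseconds, "pos": 0–100}), so a small script can nudge every cue you tapped in at a given position. A minimal sketch, assuming a hypothetical file name, that the cues were entered at pos 10, and a 40 ms offset; adjust all three to your own setup:

```python
import json

# Hypothetical example: shift every "cue" action (here assumed to be the
# ones tapped in at pos 10) by a fixed offset. File name, cue position and
# the 40 ms offset are made-up values for illustration.

CUE_POS = 10        # the pos value your cue button produced
OFFSET_MS = 40      # how far to nudge the cues; positive = later

with open("netoxneto_cues.funscript", "r", encoding="utf-8") as f:
    script = json.load(f)

for action in script["actions"]:
    if action["pos"] == CUE_POS:
        action["at"] += OFFSET_MS

# keep actions sorted by time after shifting
script["actions"].sort(key=lambda a: a["at"])

with open("netoxneto_cues_shifted.funscript", "w", encoding="utf-8") as f:
    json.dump(script, f)
```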
The only issue OFS has is that when actions are within a frame of each other, it often wants to remove the other one that was close. But this can be compensated for by knowing in which order you should place such actions.
Also keep in mind that the brain is not perfectly accurate at interpreting information. Sound, touch and vision can actually have a mismatch (vision especially is a bad one here) that still feels normal, because the brain auto-corrects for these things. For example, even if the up movement happens before the actual kick sound, it will still feel like a match as long as the downward movement matches. This is very noticeable in slower CH sections: it doesn’t really matter when the up stroke happens there; if the downstroke matches, it feels natural.
And this tolerance for mismatch is actually because of biology. Audio has travel time: sound covers roughly 1 meter in about 3 milliseconds, so 10 meters is around 30 ms of delay. Yet when you are talking to a person, it doesn’t matter whether they are 1 or 10 meters away; the lips still appear in sync to you. So don’t worry too much about slight mismatches, people won’t notice.