How to use Handy simple scripter (web app) motion tracking

Last update: Aug 2, 2022

Handy simple scripter is a web app scripting tool with motion tracking; all you need is the Chrome web browser. There is a post introducing the software and its features. It does have some bugs, so please report them to @handyAlexander :slight_smile:

This is a little guide on how to use it to generate a script with motion tracking. For general scripting tips, see How to get started with scripting.

Loading a video
Go to Handy simple scripter. Here is what you will see:

At the top there are a few icons:

  • Components - show or hide different features (the white boxes in the main window). Some components are shown by default, like the video preview saying “No video loaded” and the graph component, but others, such as the motion trackers, need to be turned on before use.
  • File - load files or export script
  • Settings - different display and other settings
  • Help - keyboard shortcuts and a tutorial video introducing the tool

Drag and drop a video file, or click the floppy disk icon, to load a video.

For motion tracking, the tool contains two different trackers with different features.

Neural Network Pose Detection

This tracker is simple to use but fairly limited. It automatically tracks a selected body part (for example the nose or a hand). Movement is tracked along the Y-axis (up and down) only, and only one body part is tracked at a time, so only one moving person.

This tracker can work well for a POV blowjob vid where the nose is visible and the person being blown doesn’t move much (like in the screenshot below).

To activate it, click Components and show Neural Network Pose Detection. Let’s also change the order to 2 so it’s shown right beneath the video:


Load the tracker by clicking Start pose detector inside the component. Here you can change the confidence score, decide how often points will be tracked, and pick which body part to track (default is Nose).

You can now navigate the video and see how well the tracker is tracking the body part. Use the slider above the graph or arrow keys to navigate:

The part tracked is marked with a green circle. If the tracker can’t find the body part well, you can try picking another body part (Tracking point) or consider using another vid.

Now let’s track. Navigate to the part of the vid where you want the script to start and click Track. The tracking will now begin and you can see the progress in the video preview. Graphs will be generated in the graph window.

Computer Vision Tracker

This is a more flexible tracker where you decide what to track yourself and you can track two objects moving in any direction.

To activate it, click Components and show Computer Vision Tracker. Let’s also change the order to 2 so it’s shown right beneath the video:

Load the tracker by clicking Start computer vision tracker inside the component. Here you can change the size of the tracked object, decide how often points will be tracked, and choose whether several points should be tracked. If only one point is tracked, there will still be a second point that movement is measured against; it will always be locked to the bottom center of the video (the blue box in the video window).

In the video window, two boxes are visible. The red box is the thing being tracked, and the blue box is the reference point it is tracked against. Click in the video to move the red box. If you uncheck “Use single point”, the blue box can also be moved, by right-clicking.

To track, navigate to the part of the vid where you want the script to start and click Start Tracking or press y. Tracking will begin and graphs will be generated in the graph window. Watch the red and blue boxes to check that the tracker is following the objects you want to track. If the points are lost or drift, pause, and maybe go back and correct. It takes some experimenting and trial and error to find good points to track.

Tweaking the generated graph
After tracking the entire vid (or after you push Pause), two graphs will have been generated:

raw is the tracked motion. main is a version of raw where mid points are removed and the graph is normalised (max movement becomes 100% and min movement 0%). main is usually the better starting point for working on the script, but if you are not pleased you can take raw and process it yourself using the Process raw data component.
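To picture what the normalisation step does, here is a small sketch. It assumes each tracked point is a (time in ms, position) pair; the app's actual implementation may differ, this is just the idea of rescaling so the extremes hit 0% and 100%:

```python
def normalise(points):
    """Rescale positions so the lowest point becomes 0% and the highest 100%.

    points: list of (time_ms, position) pairs (assumed shape, for illustration).
    """
    positions = [pos for _, pos in points]
    lo, hi = min(positions), max(positions)
    span = (hi - lo) or 1  # avoid division by zero on a flat track
    return [(t, round((pos - lo) / span * 100)) for t, pos in points]

raw = [(0, 20), (500, 60), (1000, 35)]
print(normalise(raw))  # [(0, 0), (500, 100), (1000, 38)]
```

Note that this rescales against the global min and max; if one section of the vid has shallower strokes than another, you will still need the Points in view adjustments described below.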

Instead, we will use the Points in view component. Hide the tracker component you used and move Points in view up to order 7 so it’s shown under the graph.

Scroll to a graph section that you want to work with using the slider above the graph (Seeker) and under it (Zoom).

Use seeker and look at how the graph matches the video action. Often, a section will look like this:

The blue graph peaks at 75%, but looking at the vid it should reach 100% (the tip). Let’s fix it. In Points in view, we click All and instead select Top to affect just the top points, then move them up with the little up arrow. The same can be done with the bottom points or with all points in view.

For this, it is handy to use the keyboard shortcuts j and k to set start and end points for the section you are working with. Then click the zoom button in the quick reference component to view just the section between the points:

When happy, move into another graph section and do the same. Check the vid and the graph, select a section and adjust the points.

Or, proceed to do some tweaking manually.

Tweaking frame by frame
After using the motion tracker, some parts will definitely have to be tweaked manually. You will probably want to delete a lot of unnecessary points to simplify the script, move some points up or down, and add some points.

Some useful shortcuts:

  • Left / Right - Move one frame back / forward
  • N / B - Select next/previous point
  • Z / X - Move selected point back / forward
  • 0-9 - Insert a point at X × 10% at the current time (e.g. 5 inserts a 50% point)
  • DEL - Delete selected point
  • ctrl + DEL - Delete all points in view

When pleased, click File and Export your funscript.
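For reference, a funscript is a small JSON file with an "actions" list of {at: time in milliseconds, pos: position 0-100} entries. A minimal sketch of writing one by hand (the app may include extra metadata fields alongside "actions"):

```python
import json

# Minimal funscript: three actions forming one full up-down stroke.
script = {
    "actions": [
        {"at": 0, "pos": 0},      # fully down at 0 ms
        {"at": 500, "pos": 100},  # fully up at 500 ms
        {"at": 1000, "pos": 0},   # back down at 1000 ms
    ]
}

with open("example.funscript", "w") as f:
    json.dump(script, f)
```

Knowing the format is handy for sanity-checking an export in a text editor, or for post-processing a script with your own tools.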

Some general tips

  • Save by exporting the script often. The web app has some bugs and it is easy to mess things up.
  • Read the tips in the getting started with scripting guide.
  • Motion track only sections that allow for it, script the rest manually. Do not try to motion track when the movement is not visible, there are no good points to track or when movement is very subtle - you will only waste time.
  • One way is to just motion track the whole vid, look at the output and delete sections that did not work well to track. Another is to get familiar with the vid and motion track the sections that allow for motion tracking.
  • Remove unnecessary mid points generated by motion tracking. If there is a mid point between two points that doesn’t change the trajectory, the Handy might, for example, do a brief stop there. Mid points might not be visible in the graph, so use n and b to step through the points and clean up unnecessary ones.
  • Be aware of too slow or too fast movements. There are hardware limitations. The Selected Point component can be used to see if the speed of a movement is too slow or too fast. The Handy has a speed limit of 400 mm/s; it can handle faster commands, but the stroke will be limited.
  • Don’t put in too many points, as the device will not be able to handle too many commands. One approach: go to Settings > Div and set FPS to 10. Now when you navigate using the left and right arrow keys you will skip 100 ms at a time, and the distance between points should not be smaller than this. Use b and n to select and delete any points that are closer together.
  • When the app seems broken, save your script and go to Settings > Data > Delete all data and refresh application to start over and then import your script again.
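The speed check from the tips above can be pictured as a simple calculation. This sketch assumes a full stroke of about 110 mm (a hypothetical figure; check your device's actual stroke length) and the 400 mm/s limit mentioned above:

```python
FULL_STROKE_MM = 110   # assumed full stroke length, for illustration only
SPEED_LIMIT_MM_S = 400  # the Handy speed limit mentioned in the tips

def segment_speed(p1, p2):
    """Speed in mm/s between two (time_ms, pos_percent) script points."""
    dt_s = (p2[0] - p1[0]) / 1000
    dist_mm = abs(p2[1] - p1[1]) / 100 * FULL_STROKE_MM
    return dist_mm / dt_s

# A full 0 -> 100% stroke in 250 ms works out to 440 mm/s, which is over
# the limit, so the device would shorten the stroke.
speed = segment_speed((0, 0), (250, 100))
print(speed, speed > SPEED_LIMIT_MM_S)  # 440.0 True
```

The same full stroke spread over one second is only 110 mm/s, which is comfortably within the limit.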

Is this better or worse than OFS motion tracking?

I haven’t used OFS motion tracking, but it seems a lot more capable. You can define which areas to track yourself and track multiple persons, not just one movement.


Hi from Norway (the screenshot names gave you away)
Oh wow. People are actually using it. I made this tracker in a real hurry, and I expect you have seen a lot of bugs and probably terrible performance? The tracker was an off-hours project that I made one weekend. If you like it, I will try to upgrade it when I find time. What about the rest of the tool? Any feedback? There is not a lot of documentation, so I am impressed that you have gotten this far.


The tracker works alright for some vids, and I’m mostly using it to speed up frame-by-frame scripting. It is faster to add points manually when I can see the motion tracking graph.

I tried generating scripts as explained in this guide, which can work sometimes, but downsampling is not really enough to get well-positioned points. I would love some more smartness built into the auto-generated points. Maybe a smart downsample that reduces the number of points but places them at the local minima and maxima, kind of.
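The kind of smart downsample I mean could look something like this (a hypothetical sketch of the idea, not something the app currently does): keep only the points where the direction of movement changes, plus the endpoints.

```python
def keep_extrema(points):
    """Keep endpoints plus local minima/maxima of a tracked motion.

    points: list of (time_ms, pos) pairs sorted by time (assumed shape).
    Plateaus (runs of equal positions) are simply dropped here.
    """
    if len(points) < 3:
        return list(points)
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        before = cur[1] - prev[1]
        after = nxt[1] - cur[1]
        if before * after < 0:  # direction flips: cur is a peak or a valley
            out.append(cur)
    out.append(points[-1])
    return out

track = [(0, 0), (100, 40), (200, 90), (300, 60), (400, 10), (500, 50)]
print(keep_extrema(track))  # [(0, 0), (200, 90), (400, 10), (500, 50)]
```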

Also, I would really like it to support horizontal movement, or let me set other directions. Even better would be being able to set a reference point for the movement (the base of the dick), so it could support any movement direction and be more accurate.

The tool itself is pretty awesome and works well for frame-by-frame work. It would be nice to be able to do more with the points inside the quick reference markers, like moving everything up/down, shifting points forwards/backwards, or increasing the distance between top and bottom points.


Let’s see if I can make some smarter output from the tracking, including setting the reference angle for tracking. I think I might be able to set that automatically as well.

As for your request about modifying the points inside the quick markers: this is possible now. You can use the “quick reference” component to set points with the j/k keys and then zoom the graph between the two points. You can also select an area on the graph itself. Then use the “adjust points in view” component to transform points on the Y-axis or move them horizontally.


Great, thank you!

Please delete this comment. I figured out what I was doing wrong

Made a few improvements. Let me know what you think: Handy simple scripter, if you like it I can make a video tutorial on how to use it.


Wow, this is looking good! A lot of UX improvements, and some additional filters in the math operations that look interesting. I will have to try them out and see if they can help turn the motion tracking graph into a script. Do you have any recommendations on which filters to use for this?

I don’t understand how the filter works. The old Process raw data had input and output graphs; Math operations only contains “source graph”. How do I select the output graph?

Thanks. Let me know if there are any UI changes you would like.

For the output: select RAW as the source graph. Then Copy this will copy all data from RAW to PROCESSED, and you can make all the changes you want there.

Or are you talking about the filter that is applied after you finish the tracking?

That looks awesome!

I was talking about the Math operations, but nevermind I figured it out I think.

The computer vision tracker though! :smiling_face_with_three_hearts: What I have always wished for. How can I move the blue box? (edit: uncheck single point, right click)

Need to play more with this but looks super promising.

Select two points then right click to set the second point.


Let me know how the trackers work out. They are pretty simple since it’s JavaScript (single-threaded) and running on a website (but a lot of fun to make). We have Python apps for our scripting business that use the GPU, but I don’t think I will be allowed to open-source those, I’m afraid.

Hi @kinetics and @handyAlexander

Wow, both of these scripters are amazing, thank you so much for sharing them with us :grinning:.

I have one question - how do we clear a video or the motion-captured action, either when wanting to start a new video or when making a mistake and needing to start over?

Thanks again!


Hi @kinetics and @handyAlexander

Alright, scrub that question - I figured out how to clear the video etc.

However, I have another question, as I ran into a hiccup - are there any limitations on video resolution or video length that disrupt moving from raw to processed?

The hiccup I experienced: I have a high-ish resolution (3840x1920) video that is only 30 seconds long, but after tracking and creating the two graphs, when I use Copy raw data to processed, the red processed line doesn’t appear. From that point on, none of the other functions work (I guess they wouldn’t, as there is no processed graph to perform functions on).

Keen to find out if there is a fix for this :grinning:.

Thanks again for this great software!


Actually, scrub that question too - I’ve sorted it - LOVE this software :heart_eyes:.


Oh wow, thanks. Ok, give me some time and I will make a video on how to use it (and clean up some messy code). I will have to do it in my after-work hours, so I’m not sure when I will find the time. Hopefully before summer, but maybe after 😬


Sounds great, thanks!

After trying some more, I think it would be really useful if there were a way to move graphs between raw, processed and main like in the older version (maybe there is?).

I think I will want to first do motion tracking (Raw), then do some editing to scrap parts of it (so move it to Main and edit, I guess), then use Math operations to normalize and simplify (using the Processed graph), and finally edit points manually (Main graph).