Using Blender as a multi-axis script editor

Preface.
After writing a few single-axis scripts, and wanting to try building my own devices (I'm still saving up for a 3D printer, since the one I got for free has only a 100x100x100 mm print area), I got interested in writing multi-axis scripts for interactive masturbators. For me, understanding how these scripts play back and drive a device is a good way to learn how to make them better. Although a great tool like OFS can already create multi-axis scripts, I decided to try an alternative approach for several reasons:

  • I really don't like how multi-axis scripts are currently stored (a pile of separate motion and rotation scripts).
  • I doubt that Euler angles are the mathematically right choice (interpolation is awkward, the result depends on the rotation order, and there is gimbal lock) - see the short mathutils example after this list.
  • I didn’t like the toolkit that OFS has (maybe I’m just clumsy).
  • There were probably other reasons too - I just can't remember them :slight_smile:
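
As a quick illustration of the rotation-order problem, here is a minimal sketch using Blender's mathutils module (run it in Blender's Python console; the angles are arbitrary and only show that the same angles applied in a different order give a different orientation):

from math import radians
from mathutils import Euler

# The same three angles applied in a different order give different rotations
e_xyz = Euler((radians(90), radians(45), 0.0), 'XYZ')
e_zyx = Euler((radians(90), radians(45), 0.0), 'ZYX')

print(e_xyz.to_quaternion())  # not equal to the quaternion printed below
print(e_zyx.to_quaternion())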

I've been using Blender for parametric drawing (similar to AutoCAD or SolidWorks) and for modeling 3D models for printing. Luckily, this program can also do a lot with video, and of course I couldn't resist trying to use it as a script editor.

Introduction.
So far I only make scripts for VR video, and I prepare a flat video for scripting with an ffmpeg command line:
ffmpeg -i video.mp4 -vf "crop=in_w/2:in_h:0:0, v360=input=hequirect:output=flat:pitch=-40:h_fov=100:v_fov=100:w=1024:h=1024, fps=30" output.mp4
Where:

  • ffmpeg - the utility itself (the full path may look like “C:\ffmpeg\bin\ffmpeg.exe”)
  • video.mp4 - incoming video file in VR format, for example the full path is “D:\VR\VRHush_From_The_Vault_Dani_Daniels_Oculus_HQ_3D_LR_180.mp4”
  • crop=in_w/2:in_h:0:0 - crop the video (width:height:x:y), keeping only the left half (the left-eye view).
  • v360=input=hequirect:output=flat:pitch=-40:h_fov=100:v_fov=100:w=1024:h=1024 - convert from half-equirectangular projection to a flat view pitched down by 40 degrees, with a 100x100 degree field of view and a 1024x1024 output resolution.
  • fps=30 - number of frames per second
  • output.mp4 - outputs the flat video to a file, for example the full path “C:\Temp\VRHush_From_The_Vault_Dani_Daniels_Flat_30.mp4”.

It's this flat file that I then work with in OFS: for complex scenes I create a per-frame script, and for simple ones I use MTFG with manual correction of each movement.
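
For batch work, the same conversion can be driven from Python; here is a minimal sketch that assumes ffmpeg is on the PATH and reuses the example paths and parameters from above:

import subprocess

# Sketch: run the same ffmpeg conversion from Python (ffmpeg assumed to be on PATH)
src = "D:\\VR\\VRHush_From_The_Vault_Dani_Daniels_Oculus_HQ_3D_LR_180.mp4"
dst = "C:\\Temp\\VRHush_From_The_Vault_Dani_Daniels_Flat_30.mp4"
vf = ("crop=in_w/2:in_h:0:0, "
      "v360=input=hequirect:output=flat:pitch=-40:h_fov=100:v_fov=100:w=1024:h=1024, "
      "fps=30")
subprocess.run(["ffmpeg", "-i", src, "-vf", vf, dst], check=True)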

Create a multi-axis script.

In Blender you can import our flat video as a sequence of frames:

  • add an Empty Image object (Add > Empty > Image)
  • in the Properties editor, go to Object Data Properties and open the image
  • in the file browser, select the flat video we made for OFS
  • check the duration of the video in its file properties and work out how many frames to import with the formula (minutes*60 + seconds)*frame rate - for example, a 42:30 video at 30 fps gives (42*60 + 30)*30 = 76,500 frames
  • in the Timeline area, set the same number of frames
  • switch to the front view (looking down the Y axis) by clicking the corresponding gizmo icon or pressing the "Numpad 1" key
  • rotate the image Empty by 90 degrees around the X axis and press the "Space" key to play the video
  • choose the units of measure for positioning (I set centimeters)
  • create our Fleshlight, a cylinder that will emulate the movements of the real masturbator in 3D
  • I used a length of 20 cm, a diameter of 5 cm (roughly a full-size device) and 5 vertices at the base (so its sides are easy to tell apart), and placed its base at the world origin (Z offset of 10 cm) - a console sketch for this step is shown right after the import script below
  • set the object's origin (the point used to calculate movements and rotations) to the world origin, where the 3D cursor is
  • to import our funscript (which is a JSON file) I wrote a small script; switch one of the areas to the Text Editor
  • paste the Python code below and run it:
import json
import bpy

funscriptName = "C:\\Temp\\Dani Daniels\\VRHush_From_The_Vault_Dani_Daniels.funscript"
obj = bpy.data.objects["Cylinder"]

with open(funscriptName, "r") as funscriptFile:
    funscriptJson = json.load(funscriptFile)
    for action in funscriptJson['actions']:
        # 'pos' (0-100) becomes 0-0.1 m (0-10 cm) of Z travel; Blender locations are in meters
        obj.location = (0.0, 0.0, float(action['pos'] / 1000))
        # 'at' is in milliseconds; the flat video runs at 30 fps
        obj.keyframe_insert(data_path="location", frame=round((action['at'] / 1000) * 30))
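
As an aside, the cylinder from the setup steps above can also be created from the Python console instead of through the UI; a minimal sketch (Blender works in meters internally, so a 20 cm length is 0.2 and a 5 cm diameter is a 0.025 radius):

import bpy

# Sketch: 5-sided cylinder, 20 cm long, 5 cm in diameter, with its base at the world origin
bpy.ops.mesh.primitive_cylinder_add(
    vertices=5,
    radius=0.025,
    depth=0.2,
    location=(0.0, 0.0, 0.1),  # center at Z = 10 cm so the base sits at Z = 0
)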

The script we just imported contains only Z-axis motion. At a minimum we need to add X and Y positions to each keyframe, plus rotations around the X, Y and Z axes. While adding motions and rotations on top of ready-made keyframes is easier than writing two motion and three rotation scripts from scratch, it is still time-consuming - roughly 2-3 times the work of scripting a single axis the usual way.
Very often the rotations around Z do not line up with the existing keyframes (they start earlier and end later), so such points have to be created separately.
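
Creating such an extra point by hand boils down to setting the rotation and inserting a keyframe on just that channel; a minimal sketch (the frame number and angle here are made up for illustration):

import math
import bpy

obj = bpy.data.objects["Cylinder"]

# Sketch: add a Z-twist keyframe that is independent of the stroke keyframes
obj.rotation_euler[2] = math.radians(20)  # 20-degree twist around Z
obj.keyframe_insert(data_path="rotation_euler", index=2, frame=850)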

Gamification.

For every Z-axis keyframe you now need to set 5 more values. After spending a few hours on a script this way, the obvious idea was to use a gamepad's analog sticks and trigger axes for the movements and rotations. No sooner said than done - here is the script:

import bpy
import XInput  # provided by the XInput-Python package
import mathutils
import time

obj = bpy.data.objects["fleshligth"]
# Edge-detection flags so a held button triggers its action only once
key_A_pressed, key_B_pressed, key_X_pressed, key_Y_pressed = False, False, False, False
key_SHOULDER_pressed = False

class ModalTimerOperator(bpy.types.Operator):
    """Operator which runs itself from a timer"""
    bl_idname = "wm.modal_timer_operator"
    bl_label = "Modal Timer Operator"

    _timer = None
    
    def modal(self, context, event):
        if event.type in {'RIGHTMOUSE', 'ESC'}:
            self.cancel(context)
            return {'CANCELLED'}
    
        global key_A_pressed, key_B_pressed, key_X_pressed, key_Y_pressed
        global key_SHOULDER_pressed
        obj_pos_x, obj_pos_y, obj_rot_euler_x, obj_rot_euler_y, obj_rot_euler_z = 0.0, 0.0, 0.0, 0.0, 0.0
        if event.type == 'TIMER':
            state = XInput.get_state(0)  # poll the first connected controller
            # A / Y jump to the previous / next keyframe; B / X step one frame forward / back
            if XInput.get_button_values(state)['A'] and not key_A_pressed:
                bpy.ops.screen.keyframe_jump(next=False)
                key_A_pressed = True
            elif not XInput.get_button_values(state)['A']: 
                key_A_pressed = False
            if XInput.get_button_values(state)['B'] and not key_B_pressed:
                bpy.ops.screen.frame_offset(delta=1)
                key_B_pressed = True
            elif not XInput.get_button_values(state)['B']:
                key_B_pressed = False
            if XInput.get_button_values(state)['X'] and not key_X_pressed:
                bpy.ops.screen.frame_offset(delta=-1)
                key_X_pressed = True
            elif not XInput.get_button_values(state)['X']:
                key_X_pressed = False
            if XInput.get_button_values(state)['Y'] and not key_Y_pressed:
                bpy.ops.screen.keyframe_jump(next=True)
                key_Y_pressed = True
            elif not XInput.get_button_values(state)['Y']:
                key_Y_pressed = False
            # Either bumper inserts location and rotation keyframes at the current frame
            if (XInput.get_button_values(state)['LEFT_SHOULDER'] or XInput.get_button_values(state)['RIGHT_SHOULDER']) and not key_SHOULDER_pressed:
                obj.keyframe_insert(data_path="location")
                obj.keyframe_insert(data_path="rotation_euler")
                key_SHOULDER_pressed = True
            elif not (XInput.get_button_values(state)['LEFT_SHOULDER'] or XInput.get_button_values(state)['RIGHT_SHOULDER']):
                key_SHOULDER_pressed = False
            if XInput.get_thumb_values(state):
                # Left stick: X/Y offset, scaled by the current insertion depth (location Z)
                obj_pos_x = (XInput.get_thumb_values(state)[0][0] / 20) * (obj.location[2] * 10)
                obj_pos_y = (XInput.get_thumb_values(state)[0][1] / 20) * (obj.location[2] * 10)
                # Right stick: pitch/roll in radians (up to 0.5 rad, about 28 degrees)
                obj_rot_euler_x = -XInput.get_thumb_values(state)[1][1] * 0.5
                obj_rot_euler_y = XInput.get_thumb_values(state)[1][0] * 0.5
                obj.location[0] = obj_pos_x
                obj.location[1] = obj_pos_y
                obj.rotation_euler[0] = obj_rot_euler_x
                obj.rotation_euler[1] = obj_rot_euler_y
            # Triggers twist around Z: left trigger negative, right trigger positive
            l_trigger, r_trigger = XInput.get_trigger_values(state)
            if l_trigger != 0:
                obj.rotation_euler[2] = -l_trigger * 0.5
            elif r_trigger != 0:
                obj.rotation_euler[2] = r_trigger * 0.5
            else:
                obj.rotation_euler[2] = 0
        return {'PASS_THROUGH'}

    def execute(self, context):
        wm = context.window_manager
        # Poll the gamepad 30 times per second (matches the 30 fps video)
        self._timer = wm.event_timer_add(1/30, window=context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def cancel(self, context):
        wm = context.window_manager
        wm.event_timer_remove(self._timer)

def get_override(area_type, region_type):  # helper, not used by the operator above
    for area in bpy.context.screen.areas: 
        if area.type == area_type:             
            for region in area.regions:                 
                if region.type == region_type:                    
                    override = {'area': area, 'region': region} 
                    return override
    raise RuntimeError(f"Wasn't able to find region {region_type} in area {area_type}. "
                       "Make sure it is open while executing the script.")

def menu_func(self, context):
    self.layout.operator(ModalTimerOperator.bl_idname, text=ModalTimerOperator.bl_label)

def register():
    bpy.utils.register_class(ModalTimerOperator)
    bpy.types.VIEW3D_MT_view.append(menu_func)

def unregister():
    bpy.utils.unregister_class(ModalTimerOperator)
    bpy.types.VIEW3D_MT_view.remove(menu_func)

if __name__ == "__main__":
    register()

    bpy.ops.wm.modal_timer_operator()
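
The XInput module is not bundled with Blender; it comes from the XInput-Python package and has to be installed into Blender's own Python. A sketch that should work in recent Blender versions, where sys.executable points at Blender's bundled interpreter (run it once from the Text Editor):

import subprocess
import sys

# Install the XInput-Python package into Blender's bundled Python interpreter
subprocess.check_call([sys.executable, "-m", "pip", "install", "XInput-Python"])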

(I’ll be finishing up)
I put movements on the left stick, rotations on the right stick, Z rotation on the triggers, keyframe navigation on the A/B/X/Y buttons, and keyframe recording on the bumpers.
It looks interesting, and script creation speed has increased.
Script export.
The code itself:

import json
import bpy
import math

funscriptName = "VRHush_From_The_Vault_Dani_Daniels"
folder = "C:\\Temp\\"

funscript_orig = folder + funscriptName + ".funscript"

fs_pitch = folder + funscriptName + ".pitch" + ".funscript"
fs_roll = folder + funscriptName + ".roll" + ".funscript"
fs_yaw = folder + funscriptName + ".yaw" + ".funscript"
fs_x = folder + funscriptName + ".x" + ".funscript"
fs_y = folder + funscriptName + ".y" + ".funscript"

obj = bpy.data.objects["fleshligth"]

with open(funscript_orig, "r") as funscriptFile:
    funscriptJson = json.load(funscriptFile)
    fs_orig_inv = funscriptJson['inverted']
    fs_orig_metadata = funscriptJson['metadata']
    fs_orig_range = funscriptJson['range']
    fs_orig_version = funscriptJson['version']

obj_pitch = {"actions":[], "inverted": fs_orig_inv, "metadata": fs_orig_metadata, "range":fs_orig_range, "version":fs_orig_version}
obj_roll = {"actions":[], "inverted": fs_orig_inv, "metadata": fs_orig_metadata, "range":fs_orig_range, "version":fs_orig_version}
obj_yaw = {"actions":[], "inverted": fs_orig_inv, "metadata": fs_orig_metadata, "range":fs_orig_range, "version":fs_orig_version}
obj_x = {"actions":[], "inverted": fs_orig_inv, "metadata": fs_orig_metadata, "range":fs_orig_range, "version":fs_orig_version}
obj_y = {"actions":[], "inverted": fs_orig_inv, "metadata": fs_orig_metadata, "range":fs_orig_range, "version":fs_orig_version}

# Collect each keyframed frame number once (the same frame usually appears on several F-curves)
key_frames = set()
for fcurve in obj.animation_data.action.fcurves:
    for kf in fcurve.keyframe_points:
        key_frames.add(int(kf.co[0]))

for frame in sorted(key_frames):
    bpy.context.scene.frame_set(frame)
    at = int((frame / 30) * 1000)  # frame number to milliseconds at 30 fps
    # Rotations: roughly +/-45 degrees mapped to 0-100 (45 * 1.11 is about 50)
    obj_pitch['actions'].append({"at": at, "pos": round(math.degrees(obj.rotation_euler[0]) * 1.11) + 50})
    obj_roll['actions'].append({"at": at, "pos": round(math.degrees(obj.rotation_euler[1]) * 1.11) + 50})
    obj_yaw['actions'].append({"at": at, "pos": round(math.degrees(obj.rotation_euler[2]) * 1.11) + 50})
    # Locations: +/-5 cm (+/-0.05 m) mapped to 0-100
    obj_x['actions'].append({"at": at, "pos": round(obj.location[0] * 1000) + 50})
    obj_y['actions'].append({"at": at, "pos": round(obj.location[1] * 1000) + 50})
        
with open(fs_pitch, 'w') as fp:
    json.dump(obj_pitch, fp, separators=(',', ':'))

with open(fs_roll, 'w') as fp:
    json.dump(obj_roll, fp, separators=(',', ':'))
    
with open(fs_yaw, 'w') as fp:
    json.dump(obj_yaw, fp, separators=(',', ':'))
    
with open(fs_x, 'w') as fp:
    json.dump(obj_x, fp, separators=(',', ':'))
    
with open(fs_y, 'w') as fp:
    json.dump(obj_y, fp, separators=(',', ':'))
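
A quick way to sanity-check the result is to load one of the exported axis files back in and make sure the positions stay within 0-100; a minimal sketch for the pitch axis:

import json

# Sketch: verify the exported pitch axis stays within the 0-100 funscript range
with open("C:\\Temp\\VRHush_From_The_Vault_Dani_Daniels.pitch.funscript", "r") as f:
    pitch = json.load(f)

positions = [action["pos"] for action in pitch["actions"]]
print("actions:", len(positions), "pos range:", min(positions), "-", max(positions))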

Later, when I have free time, I want to tidy everything up and turn it into a proper Blender addon.

To be continued…

Pretty awesome. I could see Blender being useful for 2D as well, simply because the things we track move in three dimensions while we're trying to estimate movement along one axis - that can be a bit difficult with no visual aid.

It's hard to evaluate multi-axis movements otherwise - in Blender it seems easier to me.

In theory, I’m thinking of adding the ability to generate a point cloud from 3D video in Blender. But a little later :wink:

Thanks for the info. Don't you think this fits better in the #howto category?

You may be right. Although it’s a manual now, I hope it transforms into something more.

Updated the topic.

This is really interesting for when multi-axis toys become the norm. It's been a while since I last used Blender - maybe I should download it again.

The general problem is that devices are built to be mobile rather than stationary, and a 6-axis device would be very hard to make mobile. So it ends up as a chicken-and-egg situation: no devices, no scripts, and vice versa.

So, I did everything.
The link has the video file, the Blender file, the Python scripts I used, and the scripts that came out in the end.
I don't know what to do with the resulting data yet - I don't have anything to test it on.
