Created a web-based audio sensitivity component but need help

I’ve built a set of Angular components (a web app tool) that takes one of your computer’s audio devices (system audio / video) and, during playback, displays a 16-band equalizer showing the decibel level of each frequency. Each “band” has a vertical range slider so the user can adjust sensitivity PER band.

This way you can trigger a value when that frequency’s dB peaks into your set range. My thought was that this is the ultimate way to automate funscripting. Even if you fine-tuned it afterwards, you’d have an audio-based script to start with.

Here’s how it’s working:

Also, there seems to be a need for web-based tools, since macOS is basically left out of the mix entirely.
Question: are there any web-based tools out there to connect the “peak” score to a funscripter?


So how should this work to create a funscript if you have 16 different triggers?

Maybe the user can select a single band and use the current value as position. The louder the faster the strokes. You could record the values over time to create a speed graph and then use this to build a script.

Or you select one band that fits your sound best (knocking on wood or tapping the microphone) to create a beat in the script with intensity depending on the decibels.

I don’t know if there is an app that can do it.

Maybe you can just record the values and create a csv file to download. Then someone can build an importer to create a funscript with it.

In the case of an OSR, maybe 3 bands could be used so each axis gets its own value.


Yes, I’ve thought pretty deeply about this. First, the bands are important because you may only want low-end frequencies to trigger, or vice versa. That’s why 16 bands matter. Fewer bands are problematic because the frequencies begin bleeding together and you can’t target low vs. high sounds.

Also, once I get the scripting part done, there may need to be a way to set time-based sensitivity, so that one part of an audio source can be sensitive to lows and other parts sensitive to highs, or whatever you want.

For music, a user would almost surely want low-end triggers. For voice it becomes more complicated, potentially involving all frequencies depending on the voice/sound.

I should address your point about 16 bands. The band with the highest peak score would always have to win if you’re targeting a single haptic value. In the future we could also map a single frequency to one haptic value and another frequency to a different haptic if the device has two. Depending on the device of course but the code could easily do this.
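The “highest peak wins” rule is simple to express in code. A tiny sketch (the function name is hypothetical, assuming each band already produces a 0.0 - 1.0 score):

```typescript
// Hypothetical: given 16 per-band scores (0.0 - 1.0), the strongest band
// wins when only a single haptic value can be driven.
function winningScore(bandScores: number[]): number {
  return bandScores.reduce((max, score) => Math.max(max, score), 0);
}
```

A multi-motor device could instead read a separate score per assigned band rather than taking the maximum.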

Good idea about the CSV. That’s a great basic way to map this and potentially re-import it into a funscript maker. Oh, or I could simply learn the Funscript standard at that point and create it directly. I think it’s JSON anyway… But now we’re talking about building the scripting part from scratch…

Either way, this project needs to add scripting to proceed to the next steps.

Check out buttplug.js

Sample here:

Yes, I’m already there. So far no one seems to know anything about creating funscripts programmatically.

@raser1 I see that your app is PC-only, but perhaps you can help us out here with guidance on how to create a web-based Funscript?

After some research it seems that it would be too much work to build a scripter. Do code projects exist that could map arbitrary values to a working script? Since it’s JSON, this should be easy…

TypeScript would be a huge win for the community web builders.

It’s taken a lot of work, but this project has taken a positive turn.
When an audio device’s dB level peaks into a frequency’s set range by X%, it triggers 0.0 - 1.0 vibrations. Wow!
I’d like to get more help on this project though. Perhaps help to create automated scripting with a web-based timeline and keyframe editor for creating funscripts.


Maybe start with something ‘easier’ and just create a record and a stop button. While recording, you constantly collect one of the frequency values every time it triggers. You get an array of timestamps and position values. After the stop button is pressed, you can create the JSON script from your collected values.

Each trigger will create an up-and-down stroke.

Trigger after 1.0 s with 50% overshoot = 1 medium-speed stroke at 50% speed of the target device
Trigger after 1.8 s with 10% overshoot = 1 slow stroke at 10% speed of the target device
Trigger after 2.4 s with 100% overshoot = 1 fast stroke at 100% speed of the target device

Overshoot means that your trigger level is exceeded by X%.

If the triggers are faster than the stroke time, this can get difficult…
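The record-and-convert idea above could be sketched like this (all names are hypothetical; the 250 ms return stroke and the depth-scales-with-overshoot mapping are my assumptions, as one way to express “stronger trigger = faster stroke” when the timestamps are fixed by the triggers):

```typescript
// Hypothetical sketch: turn recorded triggers into funscript actions.
// A funscript action is { at: milliseconds, pos: 0-100 }.
interface Trigger {
  at: number;        // timestamp in ms when the level was overshot
  overshoot: number; // 0.0 - 1.0, how far past the threshold
}

function toActions(triggers: Trigger[]): { at: number; pos: number }[] {
  const actions: { at: number; pos: number }[] = [];
  for (const t of triggers) {
    // One down-and-up stroke per trigger; scale the stroke depth with the
    // overshoot so stronger triggers produce longer (hence faster) strokes.
    const depth = Math.round(t.overshoot * 100);
    actions.push({ at: t.at, pos: 100 - depth }); // bottom of the stroke
    actions.push({ at: t.at + 250, pos: 100 });   // back up 250 ms later
  }
  return actions;
}
```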

I don’t understand how your values work, so my idea might not fit.


Thanks for the ideas and reply. I like your idea a lot. :slight_smile:
Could you show me an example of what the JSON would look like?

Does Funscript only do strokes, or can it do vibrations on devices with one or several vibration motors?

You could pull and run the repo very easily if you want to see the values.

I’ll update the readme with details so people can pitch in (hopefully)

Short version:

  1. Volume peaks into a frequency range you set
  2. How far into the range it goes is calculated as a 0.1 - 1.0 vib command

Longer version:

  1. The user sets range levels on each frequency band
  2. When audio levels rise the dB level peaks into that set range by a certain percentage
  3. The percentage is always between 0-100% or 0 - 1.0
  4. This value is what is used for vib messages and is working quite well

One TODO is assigning frequencies to other vib haptics that the device may have.

The audio updates and triggers this math 12 times per second.
(not sure yet what the exact number should be)

```typescript
const ratio = this.bands[i].dB - this.bands[i].sensitivity;
if (ratio >= 0) {
  const intensity = Math.min(ratio / 100, 1); // hypothetical name for the resulting score
}
```

Then the value is divided by 100 to get a 0.1 - 1.0 score for the device.

The original funscript can only handle a single axis with a timestamp and stroke position, but it could be expanded with custom tags. New tags must then also be supported by the other apps, though.

ScriptPlayer adds RawData and HandyControl and others add MetaData to the script.
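For anyone following along, a minimal funscript of the kind described above (single axis, timestamp plus position) can be sketched as a plain object; the version/inverted/range fields are the commonly seen optional metadata:

```typescript
// A minimal funscript: a JSON object with an "actions" array of
// { at: milliseconds, pos: 0-100 } points. Other fields are optional.
const script = {
  version: "1.0",
  inverted: false,
  range: 90,
  actions: [
    { at: 0, pos: 0 },      // start at the bottom
    { at: 1000, pos: 100 }, // stroke up over 1.0 s
    { at: 1800, pos: 10 },  // slower stroke down
  ],
};

const json = JSON.stringify(script); // this string is the .funscript file
```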

> You could pull and run the repo very easily if you want to see the values.

Never done this before. I am also not into web programming…

This is a short funscript example with metadata, which you can skip if you want. The rest is really simple JSON.
Example.funscript (2.3 KB)

I have not used this package but it explains how the funscript works. You can ignore the math for now.

You only have to add your data here.

HandyControl also supports the TwinCharger. This toy is vibration-only, and I use the calculated speed from a funscript as the vibration level. If there were a separate vibration tag with data in a script, I could use that, provided we define a common standard tag.
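The “calculated speed as vibration level” idea can be sketched roughly like this (my own guess at the math, not HandyControl’s actual code; the maxSpeed normalization constant is an assumption, chosen so a full 0-100 stroke in 250 ms counts as full speed):

```typescript
// Hypothetical: derive a 0.0 - 1.0 vibration level from the stroke speed
// between two consecutive funscript actions.
interface Action { at: number; pos: number } // ms, 0-100

function vibrationLevel(a: Action, b: Action, maxSpeed = 0.4): number {
  const speed = Math.abs(b.pos - a.pos) / (b.at - a.at); // pos-units per ms
  return Math.min(speed / maxSpeed, 1); // clamp to 1.0 at "full speed"
}
```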


Just wanted to update.
After getting this working: I thought the Web Audio API would give a LOT better frequency isolation. Instead, it seems it’s basically one huge visualized waveform. Therefore audio is not a good way to do this natively on the web. You just get volume levels, which is kind of lame.

It COULD be a great place to begin for scripting to automate a “first-pass” key placement at obvious highlights.

About scripting
It seems like we’re working at the very beginning of haptic scripting software.
Are you proposing that whatever key/value is used for vibration should be uniform across all apps?
Yes! Awesome.
Also, it should certainly default to the one and only vibration engine unless an optional key/value assigns it to a second or third vib engine (should one exist).
I’ll follow your lead on what the keys should be.

I believe Lovense equipment has multiple engines but I could be wrong. Game Controllers too.

Still have no idea how to create keys to actually script, other than the obvious D3/Canvas mess, which would be an epic project. qdot called it “unmaintainable”. So I’m a little frustrated at this point.

The vibration tag was just an idea. I have no plans to create another tag for now, so feel free to create one. Tags are optional, so you can still open the script with the other apps even if they don’t support your tag.

You could also make vibrations with the default actions tag by setting the vibration intensity as a position value of 0-100%. But then the app must support this and forward the value to a toy. ScriptPlayer supports many toys, but I don’t know what scripts for vibrating toys look like. Maybe the app just translates the action to an intensity value.

I think we still need a working example to get a feeling for what’s possible and what’s not, and to see where this will lead us.

Hi @highfiiv

Do you think it’s possible to get something like an average intensity graph from an audio file?
Then you could move the frequency sliders and change the outcome of the shape.


With Photoshop I can get something that comes close to my idea, but it takes some time to set up the right filters.
Marsh - Gjipe

With HandyControl and the Sequencer I can now create a script from the image where speed and stroke range follow the intensity of the audio file. It does not follow the beats, as this is not possible with the Sequencer. The script is only made of one channel for now. Maybe it’s possible to use both channels: one for stroke and one for speed.

Marsh - Gjipe.funscript (60.5 KB)
Audio File: Marsh - Gjipe

Maybe this is a nice use case for your new app.

Hi, I wanted to reply real quick to your question.
(Good concept, I believe; average intensity is probably a nice/usable option.)
Yes, what I built could likely easily get the average intensity. Why?
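For what it’s worth, a windowed-RMS envelope is one way to compute that average intensity in the browser: decode the file with the Web Audio API’s decodeAudioData, then run something like this hypothetical helper over the raw samples:

```typescript
// Hypothetical helper: compute an RMS intensity envelope from raw PCM samples.
// `samples` would come from AudioBuffer.getChannelData(0) after decodeAudioData.
function intensityEnvelope(samples: Float32Array, windowSize: number): number[] {
  const envelope: number[] = [];
  for (let start = 0; start < samples.length; start += windowSize) {
    const end = Math.min(start + windowSize, samples.length);
    let sumSquares = 0;
    for (let i = start; i < end; i++) sumSquares += samples[i] * samples[i];
    envelope.push(Math.sqrt(sumSquares / (end - start))); // RMS of this window
  }
  return envelope;
}
```

Each envelope value is one point of the “shape”; the window size controls how smooth the graph looks.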

Your image exemplifies my issue with audio triggers. You’re almost always going to get really strong low end frequencies unless you’re explicitly separating the frequencies…

which is actually another project, an audio engineering project in itself.

> But this way the app must support this and a toy to forward the value.

Aren’t you using Buttplug for this? It translates on the server, so you can send vib messages to any supported toy.

Have I answered anything?

No, HandyControl is not using Buttplug. I have written my own class to talk to the Handy.

Any movement?

On what? I have not continued here. That was just some idea.

dB-based movement - set a minimum dB to be the highest position. The maximum can also be set, and if the audio dB passes that maximum, the up or down travel speed is increased, or it can be anything else really.

and/or take some notes from Audiosurf, no pun intended.

The tracks are created through bass = bumps. No pun intended again; Track as in racing track.
The intensity, speed, and lights of the race track look like they’re influenced by how many dB the song has: full-on dB = fireworks, low dB = calm. You can take that and translate it to the maximum range of up and down movements.

The box pick-ups on the three lanes are notes like on a piano, e.g.
G5 E5 C5 E5 G5 C6
which translate into these frequencies:
G5 783.99 Hz - Middle lane
E5 659.25 Hz - Left lane
C5 523.25 Hz - Right lane
E5 659.25 Hz - Left lane
G5 783.99 Hz - Middle lane
C6 1046.50 Hz - Right lane
I guess that can be translated to Up and down rather than Left to Right. Or leave it like that for those fancy 3-axis toys.
For Faphero/Cockhero videos, take the specific kick frequency to be used as a keyframe.
Picking a range of low bass frequencies gives more interesting results on the go.
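Those note frequencies follow the standard equal-temperament formula (A4 = 440 Hz), which is easy to compute if anyone wants to map notes to lanes or positions programmatically:

```typescript
// Equal-temperament pitch: frequency doubles every 12 semitones from A4 (MIDI 69).
// MIDI note numbers for the example above: C5 = 72, E5 = 76, G5 = 79.
function noteFrequency(midiNote: number): number {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

const g5 = noteFrequency(79); // ≈ 783.99 Hz (Middle lane above)
const c5 = noteFrequency(72); // ≈ 523.25 Hz (Right lane above)
```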


@Filt yeah, exactly, but that’s an audio engineering project. On the web we can’t separate sound frequencies well enough to do what you’re saying. If the sound is channeled, then of course… But where would you get audio sources like that?

How about syncing a generated funscript with a MIDI track using Ableton Link? HandyControl can generate funscripts, and MIDI is super versatile.

This would be the first NSFW use of Ableton Link :grinning: