What if your AI chat partner could drive your Handy while it talks to you? I made that. Have fun!

Hey everyone,
I’ve been working on this and figured it was time to share. It’s called ChatStroker v1.4 — a single-page app that connects a local LLM (via LM Studio) directly to The Handy, so the AI you’re chatting with also drives the motion in real time.
No cloud. No API keys. No subscriptions. Nothing leaves your machine.
What it actually does
You open the HTML file in your browser, point it at your LM Studio server, enter your Handy's connection key, set up your scenario/persona in the PreConfig panel, and start chatting. As the AI generates its response, ChatStroker parses it into HDSP commands and streams them to your Handy live. The story and the motion move together.
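To make that flow concrete, here's a rough sketch of what the parse-and-stream step could look like. The command schema (`{"pos": ..., "dur": ...}`), the function names, and the HDSP endpoint URL are my illustrative assumptions, not the app's actual code — check the source for the real format.

```javascript
// Sketch: pull motion commands out of streamed model text, assuming the model
// has been prompted to interleave small JSON objects like
// {"pos": 80, "dur": 400} (position %, duration ms) with its prose.
function extractCommands(chunk) {
  const commands = [];
  // Match simple one-level JSON objects embedded in the text.
  const matches = chunk.match(/\{[^{}]*\}/g) || [];
  for (const m of matches) {
    try {
      const obj = JSON.parse(m);
      if (typeof obj.pos === "number" && typeof obj.dur === "number") {
        commands.push(obj);
      }
    } catch {
      // Ignore fragments that aren't valid JSON (e.g. cut off mid-stream).
    }
  }
  return commands;
}

// Each command would then be sent to the Handy's HDSP interface; the exact
// endpoint path and body shape here are assumptions for illustration.
async function sendCommand(connectionKey, cmd) {
  await fetch("https://www.handyfeeling.com/api/handy/v2/hdsp/xpt", {
    method: "PUT",
    headers: {
      "X-Connection-Key": connectionKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ position: cmd.pos, duration: cmd.dur }),
  });
}
```

The nice part of an inline-JSON scheme like this is that partial or malformed fragments during streaming just get skipped and picked up on the next chunk.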

Features

Runs entirely in your browser — just one HTML file, no install
Works with any model loaded in LM Studio
PreConfig system for locking in persona, scenario, intensity before the session
Live funscript log you can watch or export

What you need

LM Studio running locally with a model loaded and the server enabled (preferably a model that handles JSON-formatted output reasonably well)
The Handy with your connection key
Any modern browser

Download / source

What I’d love from you
This is v1.4 and it works, but it’s absolutely not perfect. If you try it, I’d really appreciate:

Bug reports (especially weird HDSP behavior or LM Studio edge cases)
Model recommendations that work well for this kind of narrative-driven output. Larger models perform best, but it's not realistic for most people to run a 120B-or-larger model. I've tested it thoroughly with a cracked MiniMax M2.7 and it seems great. I'm interested in how a cracked version of the new Gemma might perform.
Feature ideas — what’s missing?

Happy to answer questions in the thread. Have fun.


I had a look at the code. Looks fine. The only issue is that this would probably work better as something like a buttplug.io plugin so it can support multiple devices. I also like that you used a local LLM setup and not a cloud-based one.


Sounds great. Would love to try it, but I don't use a Handy, unfortunately. As Minty said, intiface/buttplug support might bring more interest? It would certainly pique mine.

I’ll have to look into intiface/buttplug support. This is a side project and I haven’t been a developer in quite a few years.

Can you point to it? Huggingface or someplace else?

MiniMax-M2.7-Abliterated-Heretic-GGUF on Hugging Face. Abliterated Gemma 4 seems to work well too.


Interesting. Why use LM Studio instead of llama.cpp?

Ease of use. It could be adapted to llama.cpp easily.
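For anyone curious why the adaptation is easy: both LM Studio's server and llama.cpp's llama-server expose an OpenAI-compatible chat completions endpoint, so in principle only the base URL changes. A minimal sketch (the ports are common defaults, not ChatStroker's actual config):

```javascript
// Common default base URLs for the two backends; adjust to your setup.
const BACKENDS = {
  lmstudio: "http://localhost:1234/v1",
  llamacpp: "http://localhost:8080/v1",
};

// Build a streaming chat completions request against either backend.
// Same request shape works for both, since both follow the OpenAI API.
function buildChatRequest(baseUrl, messages) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages, stream: true }),
    },
  };
}
```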