Hey everyone,
I’ve been working on this and figured it was time to share. It’s called ChatStroker v1.4 — a single-page app that connects a local LLM (via LM Studio) directly to The Handy, so the AI you’re chatting with also drives the motion in real time.
No cloud. No API keys. No subscriptions. Nothing leaves your machine.
What it actually does
You open the HTML file in your browser, point it at your LM Studio server and your Handy’s connection key, set up your scenario/persona in the PreConfig panel, and start chatting. As the AI generates its response, ChatStroker parses it into HDSP commands and streams them to your Handy live. The story and the motion move together.
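The parsing step above can be sketched roughly like this. The inline tag format (`[move pos=… ms=…]`) and the command shape here are hypothetical stand-ins, since ChatStroker's actual HDSP encoding may differ, but the idea is the same: scan the streamed text for motion markup and turn it into timed position commands.

```javascript
// Sketch: pull motion commands out of streamed LLM text.
// The "[move pos=<0-100> ms=<duration>]" tag syntax is illustrative,
// not necessarily ChatStroker's real format.
function parseMotionTags(text) {
  const commands = [];
  const re = /\[move pos=(\d{1,3}) ms=(\d+)\]/g;
  let m;
  while ((m = re.exec(text)) !== null) {
    commands.push({
      position: Math.min(100, Number(m[1])), // percent of stroke range, clamped
      durationMs: Number(m[2]),              // time to reach that position
    });
  }
  return commands;
}
```

Each command object would then be handed to whatever sends HDSP requests to the device, so motion can start while the rest of the reply is still generating.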
Features
Runs entirely in your browser — just one HTML file, no install
Works with any model loaded in LM Studio
PreConfig system for locking in persona, scenario, and intensity before the session
Live funscript log you can watch or export
What you need
LM Studio running locally with a model loaded and the local server enabled (preferably a model that handles JSON-formatted output reasonably well)
The Handy with your connection key
Any modern browser
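If you want to sanity-check the LM Studio side before wiring everything up, a minimal request looks something like this. LM Studio's local server speaks the OpenAI-compatible chat API (default port 1234); the model name and system prompt here are just placeholders.

```javascript
// Sketch: build a chat request for LM Studio's OpenAI-compatible server.
// "local-model" is a placeholder -- LM Studio serves whichever model is loaded.
function buildChatRequest(userMessage, persona) {
  return {
    model: "local-model",
    messages: [
      { role: "system", content: persona },
      { role: "user", content: userMessage },
    ],
    stream: true, // stream tokens so motion commands can start early
  };
}

// Usage from the browser (assumes the server is on the default port):
// fetch("http://localhost:1234/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest("Hello", "You are ...")),
// });
```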
Download / source
What I’d love from you
This is v1.4 and it works, but it’s absolutely not perfect. If you try it, I’d really appreciate:
Bug reports (especially weird HDSP behavior or LM Studio edge cases)
Model recommendations that work well for this kind of narrative-driven output. Larger models perform best, but running a 120B+ model isn't realistic for most people. I've tested it thoroughly with a cracked Minimax M2.7 and it seems great; I'm curious how a cracked version of the new Gemma performs.
Feature ideas — what’s missing?
Happy to answer questions in the thread. Have fun.