I wanted to share a project I’ve been working on for a while. I’ve always been interested in interactive stroker scripts, but felt they were missing a key element: genuine unpredictability. My goal was to create an AI partner that could make every session feel unique and organic, rather than just playing back a pre-made file. Not that scripts aren’t freakin’ amazing!
The result is a Python script that uses a 7B local LLM (via Ollama) to drive the Handy in real-time. The AI’s personality, mood (fake obviously), and even its “kinks” all change and adapt during the session.
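For the technically curious: talking to the model is just a local REST call to Ollama. Something like this minimal sketch (not the actual project code, and the prompt handling is vastly simplified):

```python
# Minimal sketch of a local Ollama call -- not the real project code.
# Assumes Ollama is running on its default port with "mistral" pulled.
import requests

def ask_model(prompt: str, model: str = "mistral") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```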
## Here’s What It Can Do
Custom Personas: Instead of picking from a fixed list, you just describe who you want your partner to be. For example, you can type “shy goth college student” or “confident southern milf,” and the AI will adapt its personality and the way it talks to match what you wrote.
Fluid Moods: I built a “mood engine” to make the AI feel more alive. It has a range of emotions like Playful, Curious, Loving, and Passionate. The mood changes based on the flow of the session—both in reaction to your words and based on the actions the AI itself decides to take. You can see its current mood in the UI, so you always know how it’s “feeling.”
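If you’re wondering how that works under the hood, the core idea is just a weighted distribution over moods that gets nudged as the session flows. A stripped-down sketch (the mood names are real; the weighting in the actual script is more involved):

```python
# Stripped-down sketch of the mood engine idea: a weighted mood
# distribution that gets nudged by the flow of the session.
import random

MOODS = ["Playful", "Curious", "Loving", "Passionate"]

class MoodEngine:
    def __init__(self):
        self.weights = {m: 1.0 for m in MOODS}  # start roughly neutral

    def nudge(self, mood: str, amount: float = 0.5) -> None:
        # Called when the user's words or the AI's own actions
        # push the session toward a particular mood.
        self.weights[mood] += amount

    def current(self) -> str:
        # Sample rather than take the max, so mood shifts feel fluid
        # instead of snapping between hard states.
        moods, weights = zip(*self.weights.items())
        return random.choices(moods, weights=weights, k=1)[0]
```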
It Learns Your Preferences: This was the most important part for me. There’s a “Like” button (and a keyboard shortcut for it) that you can press when the AI does a move you enjoy during auto mode. The AI keeps score of your favorite patterns. Over time, it learns what you like and will use those moves more often, even mentioning that it remembers you enjoyed them.
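The scoring itself doesn’t need anything fancy; conceptually it’s close to this sketch (simplified, and the pattern names are made up):

```python
# Simplified sketch of the "Like" scoring idea: a counter that biases
# which patterns get picked in auto mode. Pattern names are made up.
import random
from collections import Counter

class PreferenceMemory:
    def __init__(self):
        self.likes = Counter()  # pattern name -> times you hit "Like"

    def record_like(self, pattern: str) -> None:
        self.likes[pattern] += 1

    def pick_pattern(self, candidates: list[str]) -> str:
        # Base weight of 1 keeps unliked patterns in rotation;
        # liked ones come up proportionally more often.
        weights = [1 + self.likes[p] for p in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

# e.g. pick_pattern(["slow_tease", "long_deep", "fast_shallow"])
```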
Organic Auto Mode: All of these features come together in the auto mode. It’s not a script. It’s a real-time performance generated by the AI’s current mood and what it has learned about your preferences. It can get carried away with a pattern it “discovers” it likes, getting more intense on its own, which feels incredibly natural. And of course, it still stops immediately when you tell it to.
Here’s a video of the app itself running on Windows with the AI toned down. It covers most features. I’ll also upload a video tomorrow of the Handy running in real time with the app.
IT IS NOT READY YET.
## What You Need to Run It
Because this all runs on your own computer, it’s completely private. The setup requires:
- Python
- Ollama with a model downloaded (e.g., Mistral 7B)
- The script files
- An OK-ish rig
It’s been a long project getting these systems to work together, and I’d still love some help with a few aspects of it, but I’m excited to share the result. Thanks for checking it out. Love this community.
I’m not looking to make money from this, by the way. I just want to make something cool that others can use and enjoy.
P.S. I can’t edit my posts, so I hope I didn’t mess anything up lol.
As long as it’s okay with the Mods, I’ll post updates here as I progress with development.
The good news is that after a ton of frustrating work, I feel like I’m in the home stretch.
First, the AI’s “brain” got a major upgrade. I got sick of the “purple prose” crap it kept falling back into, so I put it on a much stricter leash (pun possibly intended). Now its replies are to the point and actually react to what you just said, instead of going off on some philosophical tangent when you just wanna have fun.
The biggest feature is the new “Dynamic Persona” system. I threw out the old, iffy menu, which didn’t always fully change the AI’s persona. Now you just type whatever you want into the text box (“goth guy,” “sentient cucumber,” whatever) and the AI becomes that person and never breaks character. Its personality, pronouns, and the way it strokes you all adapt on the fly. This means, for example, that a female wrestler would naturally be a little rougher with you than, say, a sexually frustrated nun. Probably not as fun, though.
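Mechanically, the free-text persona just gets folded into the system prompt. Something in this shape (a simplified sketch, not the exact wording I use):

```python
# Simplified sketch of the dynamic-persona idea: fold the user's
# free-text description straight into the system prompt.
def build_system_prompt(persona: str) -> str:
    return (
        f"You are {persona}. Stay fully in character at all times. "
        "Adapt your personality, pronouns, and physical style to this "
        "persona. Keep replies short and reactive to the user's last "
        "message; no flowery tangents."
    )
```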
Finally, I added some “Safety Guardrails” to the motion control. It can’t just jump from super slow to 100% speed instantly anymore, and it won’t go jackhammer-fast on just the tip. It has to build up to the intensity, which feels way more natural and won’t whittle your dick into a very unhappy nub.
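The guardrail logic boils down to rate-limiting and clamping. A rough sketch (the thresholds here are illustrative, not the script’s real values):

```python
# Rough sketch of the guardrail idea: limit how fast speed can ramp
# per update, and cap speed when the stroke zone is short/shallow.
# All thresholds below are illustrative, not the script's real values.
def clamp_speed(requested: float, current: float,
                depth_min: float, depth_max: float,
                max_step: float = 10.0) -> float:
    # No instant jumps: speed may only move by max_step per tick.
    step = max(-max_step, min(max_step, requested - current))
    new_speed = current + step
    # A short stroke zone (e.g. "just the tip") gets a lower ceiling.
    if depth_max - depth_min < 20:      # percent of full travel
        new_speed = min(new_speed, 50.0)
    return new_speed
```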
So yeah, the AI is smarter, safer, and its personality is now always 100% whatever you want it to be.
Quick update: I’ve added a video so you can see this thing work. It shows the UI and the Handy running side by side, demonstrating real-time AI control for both regular commands and the full, organic Automode.
This is pretty amazing. I recently discovered janitor-ai and proxy over to deepseek for the character, and the output is really impressive. I know local LLMs aren’t as great, but there are a lot of NSFW LLMs I haven’t tried. As I was playing, I was thinking how amazing it would be if the AI could add stroker commands and even build a history of what you like, and bam, look at this post.
I’m not sure how to get LLMs to learn things on the fly short of a growing prompt, so I’m not sure if that’s how this works. My best guess is a prompt that makes every reply include some stroke vars (speed/depth/stroke length) based on mood; you’d then parse and strip that part out of the reply and translate it into funscript-style instructions for the Handy. If that’s what’s happening, it’s really clever.
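Something like this is what I’m picturing; the tag format is totally made up, just to show the parse-and-strip flow:

```python
# What I'm imagining (tag format is made up): the model embeds stroke
# vars in its reply, which get parsed out before the text is shown.
import re

STROKE_TAG = re.compile(r"\[stroke speed=(\d+) depth=(\d+)-(\d+)\]")

def split_reply(raw: str):
    """Return the clean chat text plus any motion command found."""
    command = None
    match = STROKE_TAG.search(raw)
    if match:
        speed, d_min, d_max = map(int, match.groups())
        command = {"speed": speed, "depth_min": d_min, "depth_max": d_max}
    text = STROKE_TAG.sub("", raw).strip()
    return text, command
```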
I think it would be worth looking into adding some more buttons, for sequences where the user can mostly lean back and only talk if they want to, but can tap a button to report their arousal level. That way the AI could decide whether to edge the user, or do other things like that, based on the bot’s personality.
Anyway, really epic project. I was wondering when we’d start seeing chatbots get hooked up to stroker toys. Great stuff!
That’s pretty much how it works right now, yeah. Proper memory is a no-go on a lot of systems for what I want, so it’s been fun finding lazy-but-working solutions lol. My goal is to put as little strain on complex system prompts as possible, since I’m already working with tons of them.
I’m actually working on something just like that, but since the last public release was a bit of a mess, I’m currently stripping things down and rebuilding on a much better foundation, so a lot is on the back burner for now lol.
It’s 100% doable now. I just can’t run more than one model on my hardware, as image-gen models are super resource-intensive (if you want half-decent images), especially alongside an LLM. If somebody wants to try incorporating one themselves, I’d be curious what the result is. Sadly, it’s something I don’t feel comfortable tackling, since I’m just one hobbyist and won’t risk people creating illegal content on a system I made.
So, I gave this a try. I installed it on an Ubuntu container in Proxmox, as well as on a Windows desktop. Behavior was the same on both, though performance was much worse on the low-powered Windows PC.
I’m also not sure how to do the tip/base length thing. When I move the Handy manually, it doesn’t seem to register, so I have to click In or Out over and over to get it where I want, and it takes a long time.
Yeah, it definitely won’t work as intended on a low-end Windows PC, as the model will likely time out. It’s Windows-only.
If your Handy is on the latest firmware, the min/max guide will nudge the Handy on its own when you press In and Out. Moving it by hand won’t register, but you can simply edit your min/max depths in your my_settings file and the model will adjust going forward.
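For example, something along these lines (the exact key names depend on your version of the file):

```python
# Illustrative values only -- the key names may differ in your
# copy of my_settings. Depths are percent of full travel.
MIN_DEPTH = 10   # bottom of the stroke zone
MAX_DEPTH = 85   # top of the stroke zone
```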