[WIP] StrokeGPT - A Free, Self-Learning AI Partner for The Handy

Very inspiring idea! It would be great to connect MultiFunPlayer to StrokeGPT

I’ve added it to my list for you :slight_smile:

I did want to add, for those curious. Once I’m set on something I DO NOT stop lol. So this project will not stop being updated and worked on until it is 100% perfect. It’s okay now after a month of work, but imagine it after a few more. :grin:

The vast majority of issues right now are simple backend fixes, mostly there because I was tired as hell when I packaged the 1.0 release haha.

2 Likes

No need to rush, but I actually can’t wait for him to learn how to drive an MFP or SR6.
Even if the Multiscript version will be available through patreon :wink:

As much as I want to try this, my computer won't let me open the download file because it says it's a virus.

I promise it’s 100% safe, but obviously always trust your own gut. The only times the app connects to the internet are when downloading the model and when pinging the Handy servers during use, and that’s only because the Handy devs made Bluetooth a bitch to code for lol.

I understand your worries. I’d be the same. I can create a means of install that simply involves you downloading the Python app file and HTML index if that’d be better? It’ll take longer as you’d have to download the Python libraries and pull the model yourself, but other than that it’d be the identical app.

No problems at all. I would be wary of downloading a .exe.

I’m trying to finish the first big patch right now, which should be up in about an hour or two. Then I’ll get that tutorial sorted for y’all. :heart:

UPDATE: v1.1 is here and packed with crucial fixes and intelligence boosts to make your sessions even better. Thanks for all the feedback and patience.

Here’s what’s new:

  • Intelligent Depth Control: The AI now truly understands what you mean by “tip,” “mid,” and “deep” strokes (mostly), even when you have a Max Depth set. It dynamically adjusts its understanding to your personal limit, so “going deep” means “as deep as you allow,” not full device stroke.
  • Perceptive Speed Adjustments: I have fine-tuned the AI’s understanding of “slow,” “medium,” and “fast” to better match actual device feel. So when you ask for slow, you’ll get slow.
  • Flexible Teasing: Removed a hidden restriction that prevented fast, shallow movements. Now, the AI can really explore the full spectrum of teasing.
  • Robust Max Depth Limiter: A new Max Depth slider in the UI.
    • It now persists across sessions, so your preference is always remembered.
    • Every single stroke from the AI is hard-capped by your set limit, ensuring maximum comfort and safety throughout the entire session.
  • Smoother Experience: Squashed a few pesky bugs that were causing UI elements to become unresponsive and fixed issues with preference saving on exit. Everything should feel much more solid now.
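The depth behavior described above can be sketched roughly like this. This is purely an illustration of the idea, not StrokeGPT’s actual code: the zone names and the 0–100 stroke range are assumptions.

```python
# Hypothetical sketch of the "Intelligent Depth Control" idea:
# named zones are fractions of the stroke range, then rescaled so
# "deep" means "as deep as your Max Depth allows", not full device travel.

DEPTH_ZONES = {          # zone names and fractions are invented here
    "tip": (0.0, 0.25),
    "mid": (0.35, 0.65),
    "deep": (0.7, 1.0),
}

def resolve_depth(zone: str, max_depth: int = 100) -> tuple[int, int]:
    """Map a named zone to device units, hard-capped by max_depth."""
    lo_frac, hi_frac = DEPTH_ZONES[zone]
    lo = round(lo_frac * max_depth)
    hi = round(hi_frac * max_depth)
    # Every stroke is clamped to the user's limit, never the raw device range.
    return min(lo, max_depth), min(hi, max_depth)
```

So with Max Depth at 60, a “deep” stroke stays between 42 and 60 rather than running the full range.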

With all of the above said, this project is very complex and is far from perfected. There will still be issues for a while. But with your help it’s getting there.

Download the latest version here.

Obviously follow the current installation instructions in the OP.

I’m also working on an install that does not involve any .exe files, .bat files, or .zip files. I get it, I’d be wary, too.

Thanks for the support over the last few days. :heart:

3 Likes

UPDATE: I know, a lot of these, eh?

Easier Source Code Setup Available!

For those who prefer to run StrokeGPT directly from source (no .exe files) or who like to see what’s under the hood, I’ve put together a simplified source code package.

This option includes a super easy Launch_StrokeGPT.bat file that automates most of the setup (installing the Python dependencies, etc.), making it much more user-friendly than a manual command-line process. You still get full transparency, but with less worry about me sneaking into your house at night via WiFi osmosis.

Download the Source Code + Launcher version here: Download.

Detailed install instructions can be found in the README file!

Now I’m off to bed lol.

@cakmcsak @almostolen

Any chance the source could be pushed to a git forge?

2 Likes

This is super fun! I am having some problems with the bot recognizing my commands but not actually following through with them. Telling them to Take Over works great generally, but a couple of times I’ve seen them just stop completely for no reason. It would be nice if you could configure the delay between new messages and pattern changes in the Take Over mode, they’re a bit too quick. A linux version would also be cool. Keep up the good work!

1 Like

Thanks for the feedback. I really appreciate it! :grin:

The bot not listening very well is partly down to the model I’ve been limited to testing with and how I currently throttle it with my core system prompt/ruleset. As soon as my new GPU comes on Wednesday I’ll have the new model integrated in no time and it’ll be like it jumped up 20 IQ points lol.

Stopping during dancer/milking mode is on me. I gave the AI a prompt to “take a breath” every so often and it is currently mistaking that as “Just become a plank of wood and give up” lol. It’ll be fixed in the next update.

The changes actually move so fast because my crappy GPU means I have to wait (until my new GPU arrives) up to ten seconds just for a reply :joy:. It makes testing soooooo slow, and sadly means the AI’s default timing windows are skewed. This will be fixed when I get my new GPU.

Thanks again! :heart:

A Linux version is on my list of things to add after iOS.

1 Like

Are you psychic? :flushed_face:

It’ll be set up on Wednesday. Fully open source is my goal. I mean, it already is now.

I’d do it sooner but I have a list of things to do and to deviate would break my brain lol.

2 Likes

Awesome!

I am mostly curious because I have tried on and off for about 6 months now to build something that was actually good. Curious to see how this project performs and if there is anything that can improve or I can learn from it.

Looking forward to seeing where this goes!

1 Like

Thank you!

All of the source code is available in the Source Code + Launcher version. Plus, I’ll be getting it all up on Github in a few days. What I’ve made is basically just a UI with a lot of intertwined systems for directing and interacting with language models, which has been my passion for quite some time. If anything, interaction with the Handy Device is the smallest part lol.

It’s been a bitch to work on, honestly. I consider myself to be OK-ish at what I love doing but this messy app is the result of a lot of nonstop work. I think having obsessive traits helps a lot lol. Seriously, though, I think a great experience comes from multiple core systems working together to create something greater. Like, here, I tried to focus on the AI’s pseudo-mood-engine, stroke generator, and pseudo-memory, in the hope of all three combining to create a sort-of organic experience. Instead of the typical “my AI has good memory, hur, dur” stuff you see around. I hope that makes sense. I tend to ramble.

The hardest aspect has been Bluetooth of all things. Mostly because I’ve never worked with it and the Handy is super obtuse in this manner. I haven’t implemented it yet, though.

If you have any questions, I’m here. :smiling_face:

1 Like

Some long term things I’d like (want vs need) are:

  1. Ability to add a photo for your partner. An additional (and more complicated) layer to this would be for the picture to change dynamically based on what’s happening in the text. If the partner is giving a blowjob, perhaps you have a picture for blowjobs that the AI can display. When the AI is doing the act of cowgirl, the photo you chose for a cowgirl position would show, etc.

  2. Ability to train your partner on existing funscripts. Say, for example, you are creating an AI partner that’s based on a person (e.g. Sasha Foxxx). You would be able to feed the partner existing funscripts of Sasha Foxxx performing a blowjob. Somehow the AI works it out so that whenever you ask for a blowjob through text, the partner performs a blowjob similar to the ones you fed it during the training phase. With this, you can create something truly realistic to your target.

1 Like

For pictures: I already have that planned for the near term, where you can simply add a pic. I’m also implementing a system (almost done) where the AI can pick by itself from a HUGE library of pics I’ve generated. Meaning, you type “Shy, slim, introverted Korean Boyfriend” and it’ll choose the picture that best suits that persona. Sadly, anything further would require image generation, which I won’t work with yet, at least, due to potential legal issues. But the system you mentioned could work if the “blowjob” pics, for example, are also from a vast library of curated content.
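The persona-to-picture matching could be as simple as tag overlap against the library. A toy sketch of the idea (the tag names, filenames, and matching strategy are all invented for illustration, not the actual system):

```python
# Toy sketch: pick the library picture whose tags best overlap the
# user's persona description. Library contents are made up.

LIBRARY = {
    "pic_001.png": {"shy", "slim", "introverted", "korean", "male"},
    "pic_002.png": {"confident", "curvy", "female"},
}

def pick_picture(persona: str) -> str:
    """Return the filename with the largest tag overlap with the persona."""
    words = set(persona.lower().replace(",", " ").split())
    return max(LIBRARY, key=lambda pic: len(LIBRARY[pic] & words))
```

A real version would likely use embeddings rather than literal word overlap, but the shape of the problem is the same: score every candidate against the description and take the best match.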

I’m actually about 2/3rds through creating a tool that does exactly this. Once finished you’ll feed a script to the tool and it’ll create a file you’ll then feed to the AI. This, in turn, will influence your user preferences file. :smiling_face: Just to clarify, this is a tool I’ve been working on for a lot longer than StrokeGPT, and 2/3rds doesn’t mean nearly ready lol.

Thank you for the feedback. You’re awesome.

1 Like

I appreciate the insight. I’ll try to take a peek at the source this week if I have some time.

1 Like

The second item is one I have been thinking about but haven’t had the time or equipment to actually act on. What you want is to create a new model built purely for outputting funscripts (or even just sequential actions). I am fairly certain that if you trained from all the scripts hosted here that you would get a decent output. You could probably go a step farther and include video or even just transcript information along with the funscript input for training. Someone has probably already been doing this, but it takes some know-how, beefy equipment, and a large dataset.
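For the training idea above, the first practical step is turning funscript keyframes into a discrete sequence a model can learn from. A minimal sketch of one possible representation (the token scheme and bucket size are entirely made up):

```python
# Sketch: quantize funscript {"at": ms, "pos": 0-100} keyframes into
# coarse (time-delta, position) tokens suitable as sequence-model input.

def actions_to_tokens(actions: list[dict], bucket_ms: int = 250) -> list[str]:
    """Encode each keyframe as 'T<dt-buckets>_P<pos-decile>'."""
    tokens = []
    prev_t = 0
    for a in actions:
        dt = (a["at"] - prev_t) // bucket_ms   # coarse time since last keyframe
        tokens.append(f"T{dt}_P{a['pos'] // 10}")
        prev_t = a["at"]
    return tokens
```

Paired with transcript text as conditioning, sequences like these are the kind of input/output a funscript-generating model would train on, though as noted it takes a large dataset and serious hardware.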

1 Like

Great ideas!