Honestly, I have no idea. I’m on version 4, so I can’t really tell if it works with earlier firmware — sorry about that.
Also (just to stress this again), this version of StrokeGPT has been heavily reworked. It runs with LM Studio (before launching it I have to open cmd and run `lms server start`), and StrokeGPT pulls the last loaded model from LM Studio.
It should work with the older Ollama setup too, but given how much the code has been modified… I can’t say that with confidence anymore.
Plus, I expanded the bot’s verbal recognition by installing spaCy dependencies — it should still work without them, but again… no promises.
TL;DR: I’ll post it when it’s ready, but y’all better cross your fingers and hope it runs on your end too, lol.
I’m not a dev, just a guy with a stupid chatgpt.
I’ve been toying around with ChatGPT to develop a few control programs for TheHandy, feeding it all the documentation it needs, but I can’t seem to get any results to work. My brain is fried at this point, so I can only imagine how hard it has been to get something working, and working well.
Is the AI capable of remembering certain cases, such as what type of strokes to use when the user is close to orgasm, or is that too complex a thought for the AI to parse and implement? I’m guessing this is limited by how good the LLM we’re using is at interpreting context; is that correct?
Jokes aside, I’m basically trying to make the text and the script match up so I can have the right script… at the right time wink wink.
That’s why I wanted to create the three macro phases (warmup / active / recovery).
I think I’m getting there, but as you said, it’s more complicated than it might seem, even with ChatGPT.
Hahaha, yeah, no kidding, but that does sound really intriguing, and I’ll be eagerly anticipating what you have in store. I’ll definitely need to learn more about LM Studio to get the most out of the available LLMs.
LM Studio isn’t really that important — I don’t think that’s the core issue.
When I first switched from Ollama to LM Studio, I was convinced that the slightly underwhelming message quality was Ollama’s fault, but in reality, I could’ve just handled it better by tweaking the "persona_desc" in my_settings.
Even that part was hell though — trying to get the right type of responses, the ideal text length, tone, etc.
At some point, I got totally obsessed with making sure the model passed from LM Studio to StrokeGPT 1:1, like, exactly how it was behaving inside LM Studio.
But something was always getting “filtered” — even if just a little — and I couldn’t figure out why.
I tried everything for days, literally, and nothing worked.
In the end, I kinda “solved” it by adding new rules to give more weight and priority to the "persona_desc", and that finally gave me the results I wanted.
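The “more weight and priority” trick is essentially prompt ordering. A minimal sketch of what that kind of composition can look like (the function name and the restate-the-priority-at-the-end trick are illustrative, not the actual StrokeGPT code):

```python
def build_system_prompt(persona_desc: str, base_rules: list[str]) -> str:
    """Compose the system prompt so persona_desc outranks the generic rules.

    Putting the persona first and restating its priority at the end is a
    simple way to keep the model from "filtering" it back toward defaults.
    """
    parts = [
        "PERSONA (highest priority, never override):",
        persona_desc.strip(),
        "",
        "General rules (apply only where they don't conflict with the persona):",
    ]
    parts += [f"- {rule}" for rule in base_rules]
    parts.append("Reminder: when persona and rules conflict, the persona wins.")
    return "\n".join(parts)
```

Whether this helps depends on the model, but smaller 7B models in particular tend to obey whatever comes first and last in the system prompt.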
Now I’ve been stuck on the script engine for like two weeks — I’ve probably rewritten it from scratch 2 or 3 times.
I’m getting closer to the result I had in mind, but damn… vibe coding only gets you so far.
Honestly, I totally get why @anon24848587 threw in the towel at some point.
Great points. Yeah, it’s quite a bitch. Don’t get me wrong, I’ve used AI to tidy up my code plenty of times (should be obvious to anyone), but getting scripts to work in this case was mostly all on me. Well, getting them working was the easy part. But getting a model/app to work in tandem with script creation on the fly is fucking nuts lol. Especially with all the different factors/caveats/user calibrations involved. It’d be easy if it were just for one dude lol.
I wish I could have done more with it, but at the end of the day, projects of this scale take teams to build and get up to snuff. It’s why I’m only working on small projects from now on.
I’m super proud of the work Y’all are putting in, though. It’s fantastic to see.
Thanks for the work on this! I am very intrigued, but I’ll have to wait until we can get other toys involved, as the Funsr is all I’ve got. Still excited to see where it goes.
Hello everyone. Quick update:
Between Saturday and Sunday, I’ll be publishing my version of StrokeGPT on GitHub.
This week I haven’t been able to work on it much—partly due to lack of time, partly because the frustration is starting to kick in.
I ran out of ChatGPT’s agent function uses last week, and since then we’ve taken more steps backward than forward. Right now, the advanced phase settings no longer work, and ChatGPT doesn’t seem able to fix them anymore.
This is what I’ll try to restore between Saturday and Sunday; if I can’t manage to, I’ll still release the version as it is, leaving the settings framework in the index file so I can maybe fix it later over time.
As the original author said, this is really a job for someone who actually knows what they’re doing. This project was born solely to provide an alternative to the growing number of subscription-based sites popping up, so don’t expect perfection.
Hi, this may be a repeat question, but do you have any plans to make it script the video you’re currently watching? I don’t mind if it takes its time to script the video first, as long as it scripts it well.
I had already thought about it, actually. But I can’t say if I’m capable of doing it. Maybe in the future, if I feel like it, I’ll give it a try—but I can’t promise anything.
On the other hand, if I managed to make all the previous changes armed with nothing but ChatGPT, I think anyone else could do it too.
Finally, we’re here. I’m leaving the repo with version “1.5” of StrokeGPT for anyone interested.
Let me clarify upfront: the README was generated with ChatGPT, so there may be inaccuracies since even it can’t remember every single corner of the code. But to summarize how I did it:
Download LM Studio and the model Nous-Hermes-2-Mistral-7B-DPO (you can download any model you want; I’m using this one), and install the dependencies as in the original StrokeGPT.
Every time you turn off or restart your PC, you’ll need to open cmd and run `lms server start` (to start the language model server; you don’t need to open LM Studio every time).
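Once `lms server start` is running, LM Studio exposes an OpenAI-compatible API (by default on port 1234) serving whichever model is loaded. A minimal sketch of talking to it from Python (the generic `"local-model"` name is a placeholder; LM Studio ignores it and uses the loaded model):

```python
import json
import urllib.request

LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default port

def build_chat_request(user_text: str, system_prompt: str = "", temperature: float = 0.8) -> dict:
    """Build an OpenAI-style chat payload. LM Studio serves whichever model
    is currently loaded, so the 'model' field can stay generic."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_text})
    return {"model": "local-model", "messages": messages, "temperature": temperature}

def ask_lm_studio(user_text: str, system_prompt: str = "") -> str:
    """POST the payload and pull the reply text out of the response."""
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(build_chat_request(user_text, system_prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If the server isn’t running you’ll get a connection error here, which is usually the hint that you forgot `lms server start`.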
Botselfie contains the images/videos for the bot’s profile picture (videos without audio).
Images/GIFs/audio go into their respective folders inside static/updates (videos can be uploaded manually via the file browser).
The “heart” of the model (its character/responses) runs through `persona_desc` in my_settings. Be careful when editing it: right now it’s free of constraints, so modify it at your own risk.
There are 3 main phases: Warmup / Active / Recovery, each with precise rules and speed/range limits. Switch phases by typing “next phase” in chat. It stays in Recovery until you explicitly tell it otherwise.
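The phase logic above can be sketched as a tiny state machine. The numeric limits and the “start over” phrase below are made-up examples (the real values and triggers live in the app’s settings); the sticky-Recovery behavior matches what’s described:

```python
# Example limits only; the real values live in the app's settings.
PHASES = {
    "Warmup":   {"speed_max": 30, "depth_range": (60, 100)},
    "Active":   {"speed_max": 80, "depth_range": (0, 100)},
    "Recovery": {"speed_max": 15, "depth_range": (70, 100)},
}
ORDER = ["Warmup", "Active", "Recovery"]

class PhaseTracker:
    def __init__(self):
        self.phase = "Warmup"

    def handle_message(self, text: str) -> str:
        """Advance on 'next phase'; 'i came' jumps straight to Recovery.
        Recovery is sticky: it only ends when the user says so explicitly."""
        t = text.lower()
        if "i came" in t:
            self.phase = "Recovery"
        elif "next phase" in t and self.phase != "Recovery":
            self.phase = ORDER[ORDER.index(self.phase) + 1]
        elif self.phase == "Recovery" and "start over" in t:
            self.phase = "Warmup"
        return self.phase

    def clamp_speed(self, speed: int) -> int:
        """Cap whatever speed the model asked for to the phase limit."""
        return min(speed, PHASES[self.phase]["speed_max"])
```

Keeping the hard caps outside the LLM like this matters: the model can suggest whatever it wants, but the device never receives anything outside the current phase’s limits.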
Known issues (for anyone who wants to try fixing them):
Reply length setting doesn’t work correctly.
Control Actions section is currently disconnected / non-functional (I didn’t use it and it often got in the way).
Advanced Stroke Settings: the skeleton is there (see the code in the index file) but it stopped working after ChatGPT broke it and couldn’t fix it again.
In the Danger Zone commands, only “I’m Coming” and “stop” currently work. I could have reconnected post-orgasm but honestly didn’t feel like it, plus typing “I came” activates Recovery anyway (which basically is the post-orgasm button).
Sometimes the model doesn’t clearly understand instructions like “only the tip/base”, but if you write “tip is 100 / base is 0” it should move to the right section.
Script generation still has room for improvement, but I can’t do much more alone.
Future ideas (for anyone who wants to take on the challenge):
Funscript player
Free TTS without needing an ElevenLabs API key
TTS button under each chat bubble to read only that message
Image generator integration
Voice commands
A mobile version would be interesting
That said, my job here is done — passing the torch to people more skilled than me.
I can’t be bothered with a repo, much less a properly integrated multi-service UI or anything like that. But for anyone who really wants a local voice option, I have a drop-in replacement for audio_service.py in StrokeGPT that uses Chatterbox as provided by TTS-WebUI. Set that up by following https://www.youtube.com/watch?v=_0rftbXPJLI up to 2:43, then run StrokeGPT with audio_service.py’s contents replaced with the code from the paste on Pastebin.com. This will need about 4 GB of VRAM on an Nvidia GPU.
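For anyone who wants to roll their own version of that drop-in rather than use the paste, the core of it is just POSTing text to the local TTS server and saving the audio. The endpoint path, port, and payload shape below are assumptions (recent TTS-WebUI builds expose an OpenAI-style speech API; verify against your install):

```python
import json
import urllib.request

# Assumed endpoint: check the port and path your TTS-WebUI install actually uses.
TTS_URL = "http://localhost:7778/v1/audio/speech"

def build_tts_request(text: str, voice: str = "chatterbox") -> dict:
    """Shape a request body for an OpenAI-compatible /v1/audio/speech
    endpoint; the 'model' and 'voice' values here are placeholders."""
    return {"model": "chatterbox", "input": text, "voice": voice}

def synthesize(text: str, out_path: str = "reply.wav") -> str:
    """Send the text to the local TTS server and write the audio to disk."""
    req = urllib.request.Request(
        TTS_URL,
        data=json.dumps(build_tts_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        with open(out_path, "wb") as f:
            f.write(resp.read())
    return out_path
```

The actual drop-in also keeps a queue of pending lines so playback doesn’t block the chat loop, but the request/response part is this simple.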
I wanted to try a few things today with the code, that local llm thing got my curiosity, and I got kind of carried away.
So …
Intiface support (stroke/vibrate)
You can use other IPs for the LLM, so you’re covered if your gaming PC is somewhere else and you’re on your laptop or something like that. Claude/OpenAI could also work, but I didn’t test that. It also lists all the models in Ollama, and you can switch between them freely.
Personas can be created, switched, changed, and developed over time. They’re stored in a JSON file in personas, along with their experiences and so on.
Every device can be given a body part; the AI knows that and can control them separately.
The AI decides how long an auto-prompt stays before a new one comes.
I kind of messed up the prompt, so new responses take a while... erm, I mean, there are improvements.
If something hangs: Intiface may need to be running.
A lot of other stuff that came to mind.
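The remote-Ollama and model-listing parts of the list above boil down to one call against Ollama’s real `/api/tags` endpoint on whatever host is running it (the function names here are mine):

```python
import json
import urllib.request

def parse_model_names(tags_response: dict) -> list[str]:
    """/api/tags returns {"models": [{"name": "...", ...}, ...]};
    pull out just the names for a dropdown."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_ollama_models(host: str = "127.0.0.1", port: int = 11434) -> list[str]:
    """Ask an Ollama instance (local or on another machine on the LAN)
    which models it has pulled. 11434 is Ollama's default port."""
    with urllib.request.urlopen(f"http://{host}:{port}/api/tags") as resp:
        return parse_model_names(json.load(resp))
```

Point `host` at the gaming PC’s LAN IP and the laptop gets the same model list the local setup would; note that Ollama must be configured to listen on more than localhost (e.g. via `OLLAMA_HOST`) for that to work.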
I’ll just put the source here. I think I’m done with it. Anyone feel free to expand, integrate into git, whatever.