My Handy won’t move at all. I’ve set up the .env file too and it still won’t work.
Heya, thanks for this. I did all the steps and installed everything. It’s working, however I have a question: the chat seems really slow, especially in the Edging auto mode. I’m running 64GB RAM, an i7 processor, and an 8GB RTX 3070 video card. Is that “beefy” enough? Also, is that the only “personality” available for StrokeGPT? Thanks in advance.
Hm, I have a 5080 and only tested it on that, so I can’t tell what a 3070 will do. But if the response is too slow, try a different model; that changes a lot.
Personality: the zip only includes the default one.
But you can easily create new personalities. Just describe one in a few sentences and that’s it. You may have to restart for it to appear in the list.
So I figured out how to make new personas, and yes, it needs to be restarted for them to show up in the list of personas. I got a couple of new models for Ollama. Google recommends I use distilBERT with a 3070, but I haven’t found the right command to grab the pretrained model yet; I’ll work on that later today.

All three models I have so far for Ollama feel extremely slow. It takes 30–60 seconds for the Keon to move, but once it starts, the auto modes work as intended, I believe. The commands come through in the chat bubbles, saying something like (moving penis xx speed xx top xx bottom for xxxxx milliseconds), and if it says 20000 milliseconds, it will be about 30 seconds before the next message or command comes in. It takes about 15 seconds for it to acknowledge that I hit “like”. The “I’m close” button in Edge mode doesn’t seem to do anything, and it won’t listen to anything I type in the chat box. And for some strange reason my Norton password manager wants me to log in on the chat bar, lol.

Anyway, I know this is a work in progress and I appreciate all the hard work you all are doing.
This project has inspired me to play around more with locally hosted LLMs, but even with the same model used in this project I’m running into the model telling me it can’t do explicit things. How did you get around this for StrokeGPT? I didn’t see anything besides prompt engineering in the codebase, but maybe I’m missing something obvious. Any advice would be appreciated.
If you’re using the same model as the original project (Ollama), then you must have changed something, because in the base version of StrokeGPT you should be able to chat without major restrictions.
Usually, these kinds of limitations show up if you use models like ChatGPT, etc.
Personally, I switched from Ollama to a model downloaded from LM Studio (Nous-Hermes-2-Mistral-7B-DPO), but most of the work is actually handled by the “persona_desc” in my_settings.
To be clear, yes, StrokeGPT works well and doesn’t seem to have restrictions, but if I just ask questions directly to the model it is very restrictive. I’m just surprised that the restrictions seem to be bypassed just by writing the prompt in a specific way, and I’m looking to understand what about the prompt made it ignore the restrictions.
It’s restricting you because you’re telling it why it’s doing it, or it understands the context of what it’s doing. GPT doesn’t give a damn about sending data for funscripts if you don’t tell it what a funscript is.
Instead, formulate the context of your session as if it’s operating a machine, not pleasuring a person/doing lewds. Shape your statement so that it sees only the transmission of the depth of motion/length of glide/speed as elements of a mechanical object.
The AI doesn’t need to know the ‘why’ it’s doing a thing. Just tell it ‘how to do it well’ instead.
Divorce your operation from the lewd bit, and focus on how to do its job well.
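To give a concrete illustration of that framing (completely made up, none of these names come from StrokeGPT itself), a persona description along these lines keeps the session purely mechanical:

```python
# Hypothetical sketch of the "mechanical framing" idea above:
# the system prompt describes operating a machine, never the "why".
mechanical_persona = (
    "You operate a single-axis linear actuator. "
    "On every turn, reply with a target depth (0-100), "
    "a stroke range (top/bottom, 0-100), and a speed (0-100). "
    "Vary the parameters smoothly over time. "
    "Never explain why the machine is moving; only control it well."
)

def build_messages(user_text: str) -> list:
    """Wrap user input with the mechanical system prompt."""
    return [
        {"role": "system", "content": mechanical_persona},
        {"role": "user", "content": user_text},
    ]
```

The point is just that the model only ever sees depth/range/speed as properties of a mechanical object.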
Hey there, really interested in checking this out. I’m at step 2 of the install process and the command
“pip install -r requirements.txt” (without quotes)
is not working. It says that “pip” is not recognized as a command. Could someone explain what I’m doing wrong?
It just means Windows doesn’t know where your Python installation is. When you installed Python, there’s a little checkbox at the bottom that says “Add Python to PATH.”
If you missed that, pip (the tool that installs Python packages) won’t work in the terminal.
- Uninstall Python completely.
- Go to the Python website and grab the latest version.
- When you run the installer, make sure you check the box that says “Add Python to PATH.” It’s literally the most important step.
- Let it install, then open a new Command Prompt and type:
python --version
pip --version
If both give you a version number, you’re good to go.
- Now jump back into your StrokeGPT folder and run:
pip install -r requirements.txt
That should finally work without errors.
- If you really don’t want to reinstall Python:
You can also fix it manually by adding Python to your PATH yourself.
- Find where Python is installed. It’s usually something like:
C:\Users\<yourname>\AppData\Local\Programs\Python\Python311\
- Copy both of these paths:
C:\Users\<yourname>\AppData\Local\Programs\Python\Python311\
C:\Users\<yourname>\AppData\Local\Programs\Python\Python311\Scripts\
- In Windows search, type “environment variables” and open that.
- Click Environment Variables → Path → Edit → New, and paste both of those paths.
- Save, close everything, and open a fresh Command Prompt. Try typing:
pip --version
If it shows a version number, it’s fixed.
- Quick shortcut if you’re lazy: if you don’t want to mess with any of that, you can just use this command:
python -m pip install -r requirements.txt
That runs pip through Python directly, so you don’t even need PATH set up correctly.
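If you want to double-check what your terminal can actually see, here’s a quick sanity check from Python itself (nothing StrokeGPT-specific, just the standard library):

```python
import shutil
import sys

# The Python interpreter you're actually running right now
print(sys.executable)

# Where Windows resolves "pip" from, or None if it isn't on PATH
print(shutil.which("pip"))
```

If the second line prints None, pip isn’t on your PATH and the `python -m pip` shortcut above is your friend.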
I really like this concept of having an AI playmate that you can interact with. Is there also an option or extension/mod to get this to work over Buttplug.io? I know this has been asked before, but that was a while ago and I thought maybe there’s more info available. If not, I’ll keep my mouth shut.
Maybe I’ve missed it if this question was already answered; if so, I’m sorry. I tried the package, got it to run, and got stuck at the Connecting the Handy screen.
If there was a way to bypass this and let the program think it was controlling a Handy, but then convert the commands to speeds/levels for a different device, that would work fine for me.
I admire the work you put into this, and releasing it to us kinky mortals for free as well! Thank you for that <3
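Something along these lines is what I’m imagining, by the way (hypothetical function, just to illustrate the conversion, since Handy-style speeds are 0–100 and buttplug.io actuator intensities are 0.0–1.0):

```python
def handy_to_linear(speed_pct: float) -> float:
    """Map a Handy-style 0-100 speed to the 0.0-1.0 intensity
    range used by buttplug.io actuator commands (hypothetical)."""
    # clamp to the valid Handy range first
    speed_pct = max(0.0, min(100.0, speed_pct))
    return speed_pct / 100.0

# e.g. a "speed 80" command would become an intensity of 0.8
```

So the program could keep emitting Handy commands and a thin shim like this could forward them to whatever device is actually connected.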
I’m waiting for buttplug support too, don’t worry you aren’t alone!
If you scroll up a bit, you’ll find my version with Buttplug support.
Will there ever be support for the Keon?
Messing around with the Grok sexy AI chat bot, and I have to say, if this could get mixed in with that, it would be unbelievable. Unfortunately, I’m not techie enough, and running a GPT on my old laptop seems unlikely. So maybe one day I’ll get to try this.
I tried getting it set up but I kept getting errors. Probably did something wrong. It is what it is
I feel like I’m really close to getting this working. I have it installed, and auto mode works, but sending messages outside auto doesn’t, and it doesn’t seem to respond to me. In auto it also repeats the intro message over and over. Any advice? I’m using the version you made here for Intiface support.
I had the same problem in trying to make it work with LM Studio. I had to change line 20 in app.py to be:
LLM_URL = "http://127.0.0.1:11434/v1/chat/completions"
Then there was still a problem, and I had to change the _talk_to_llm() function in llm_service.py to be:
```python
# Note: llm_service.py needs `import json` and `import requests` at the top.
def _talk_to_llm(self, messages, temperature=0.7):
    try:
        response = requests.post(
            self.url,
            json={
                "model": self.model,
                "messages": messages,
                "temperature": temperature,
                "stream": False,
            },
            timeout=60,
        )
        data = response.json()

        # --- Try standard OpenAI-style structure ---
        content = None
        if "choices" in data and len(data["choices"]) > 0:
            content = data["choices"][0].get("message", {}).get("content")

        # --- Fallbacks for other API styles (older LM Studio, Ollama, etc.) ---
        if not content:
            content = (
                data.get("message", {}).get("content")
                or data.get("response")
                or data.get("content")
            )
        if not content:
            raise KeyError("No content found in LLM output")

        # If the model returned JSON as text, parse it
        try:
            return json.loads(content)
        except json.JSONDecodeError:
            return {"chat": content.strip(), "move": None, "new_mood": None}
    except (requests.exceptions.RequestException, ValueError, KeyError) as e:
        print(f"⚠️ Error processing LLM response: {e}")
        try:
            print("🔍 Raw LLM response:", response.text)
        except Exception:
            pass
        return {"chat": f"LLM Connection Error: {e}", "move": None, "new_mood": None}
```
Then it started working.
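For anyone who wants to sanity-check that fallback logic outside the app, the content-extraction part can be tried on its own (same structure as the function above, just pulled out into a plain function):

```python
def extract_content(data: dict):
    """Pull the reply text out of an OpenAI-style or Ollama-style response dict."""
    content = None
    # OpenAI / LM Studio style: {"choices": [{"message": {"content": ...}}]}
    if "choices" in data and len(data["choices"]) > 0:
        content = data["choices"][0].get("message", {}).get("content")
    # Ollama and older API styles
    if not content:
        content = (
            data.get("message", {}).get("content")
            or data.get("response")
            or data.get("content")
        )
    return content

print(extract_content({"choices": [{"message": {"content": "hi"}}]}))  # hi
print(extract_content({"response": "hello"}))  # hello
```

That’s why the fix works with several backends: whichever shape the server returns, one of the lookups finds the text.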
For some reason, whenever I try to install the requirements in the cmd prompt as per the directions, all I get is:
```
Traceback (most recent call last):
  File "D:\Python\lib\site.py", line 167, in addpackage
    exec(line)
  File "<string>", line 1, in <module>
  File "D:\Python\lib\site-packages\_distutils_hack\__init__.py", line 34
    f"Register concerns at {report_url}"
                                       ^
SyntaxError: invalid syntax
Remainder of file ignored
Traceback (most recent call last):
  File "D:\Python\lib\runpy.py", line 170, in _run_module_as_main
    "__main__", mod_spec)
  File "D:\Python\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\Python\Scripts\pip.exe\__main__.py", line 5, in <module>
  File "D:\Python\lib\site-packages\pip\__init__.py", line 1
    from __future__ import annotations
SyntaxError: future feature annotations is not defined
```
I have no clue what I fucked up, but clearly I did something wrong. I would appreciate any and all help from here.