StrokeGPT - A Free Customisable Chatbot for The Handy that Invents Funscripts and Fucks You in Real Time

Hi! I am trying to connect a Kiiroo Keon. It shows up in the device list and even reacts to the "Test" button, but when I press the "Select" button it answers "Error selecting device", and this is in the cmd:
127.0.0.1 - - [15/Sep/2025 17:13:35] "GET /robotics/status HTTP/1.1" 200 -
127.0.0.1 - - [15/Sep/2025 17:13:37] "POST /robotics/select HTTP/1.1" 400 -
[DEBUG] get_available_devices() called
[DEBUG] TheHandy not explicitly connected - skipping automatic check
[DEBUG] Checking Buttplug devices - service running: True
[DEBUG] Buttplug service ready: True
[DEBUG] get_available_devices() returning 1 devices: ['Kiiroo Keon']
[DEBUG] get_available_devices() called
[DEBUG] TheHandy not explicitly connected - skipping automatic check
[DEBUG] Checking Buttplug devices - service running: True
[DEBUG] Buttplug service ready: True
[DEBUG] get_available_devices() returning 1 devices: ['Kiiroo Keon']
127.0.0.1 - - [15/Sep/2025 17:13:40] "GET /robotics/status HTTP/1.1" 200 -
[DEBUG] get_available_devices() called
[DEBUG] TheHandy not explicitly connected - skipping automatic check
[DEBUG] Checking Buttplug devices - service running: True
[DEBUG] Buttplug service ready: True
[DEBUG] get_available_devices() returning 1 devices: ['Kiiroo Keon']
[DEBUG] get_available_devices() called
[DEBUG] TheHandy not explicitly connected - skipping automatic check
[DEBUG] Checking Buttplug devices - service running: True
[DEBUG] Buttplug service ready: True
[DEBUG] get_available_devices() returning 1 devices: ['Kiiroo Keon']
127.0.0.1 - - [15/Sep/2025 17:22:20] "GET /robotics/status HTTP/1.1" 200 -
[BUTTPLUG] Sent ping: 14
[BUTTPLUG] Received message: [{'Error': {'Id': 14, 'ErrorCode': 2, 'ErrorMessage': '{"ButtplugPingError":"PingTimerNotRunning"}'}}]
[BUTTPLUG ERROR] Server error: {'Id': 14, 'ErrorCode': 2, 'ErrorMessage': '{"ButtplugPingError":"PingTimerNotRunning"}'}
[DEBUG] get_available_devices() called

Maybe you know a probable solution for this?

Hello, is it possible to add the Autoblow AI Ultra?

I wanted to check out your contribution, but I keep getting a “could not download, virus detected” error :confused:

hi

  • I tested Intiface with the Handy and a few Lovense devices, no idea about anything else, sorry :X. There is a select button? Did you press the save devices button? It creates a required json file. @makivalex ?
  • I definitely did not add a virus. Maybe it dislikes the exe files in .venv → they're not required anyway, try a zip without them: 3.89 MB file on MEGA
Tested it with Intiface and the Keon; auto modes work fine, the AI writes messages and performs moves correctly. But when I type anything in the chat, I don't get any response, as if the model isn't receiving my messages.
I am using an Ollama server (somehow I couldn't connect to LM Studio). Not sure how to fix it.

Weird. It still keeps getting flagged as Trojan:Script/Wacatac.B!ml. Thanks anyway for the additional download!

ah ok, that has nothing to do with the devices.

in the my_settings.config there is
"sequential_messages_enabled": false,
"max_sequential_messages": 5,
"min_sequential_message_duration": 10000,
"max_sequential_total_time_normal": 60000,
"max_sequential_total_time_auto": 180000

→ try setting sequential_messages_enabled to true

I was trying to generate multiple messages at once and reverted it, because the AI is too weak for that atm to be an improvement. But I have hopes that with it enabled it should work again.
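If you'd rather flip the flag programmatically than edit by hand, here is a minimal sketch, assuming my_settings.config is plain JSON as the snippet above suggests (best run from the StrokeGPT folder while the app is closed, so the change isn't overwritten):

```python
import json

def enable_sequential_messages(path="my_settings.config"):
    """Load the settings file, set the flag to true, and write it back."""
    with open(path, "r", encoding="utf-8") as f:
        settings = json.load(f)
    settings["sequential_messages_enabled"] = True
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=2)
    return settings
```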

I'm a different user, but I'm running into the same issue with my messages being ignored while auto works perfectly fine.
This config change didn’t really do anything, sadly.


I pressed the "Save Body Part Assignments" button, but there is the message "Saved 0 assignments, 2 errors", and no new json files were created btw.
cmd shows this:
127.0.0.1 - - [16/Sep/2025 16:44:30] "GET /robotics/status HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2025 16:44:33] "POST /robotics/assign-body-part HTTP/1.1" 400 -
[DEBUG] get_available_devices() called
[DEBUG] TheHandy not explicitly connected - skipping automatic check
[DEBUG] Checking Buttplug devices - service running: True
[DEBUG] Buttplug service ready: True
[DEBUG] get_available_devices() returning 1 devices: ['Kiiroo Keon']
[DEBUG] get_available_devices() called
[DEBUG] TheHandy not explicitly connected - skipping automatic check
[DEBUG] Checking Buttplug devices - service running: True
[DEBUG] Buttplug service ready: True
[DEBUG] get_available_devices() returning 1 devices: ['Kiiroo Keon']
127.0.0.1 - - [16/Sep/2025 16:45:10] "GET /robotics/status HTTP/1.1" 200 -

Is it possible to upload that required json file so I can copy it to the folder? Could that maybe help?

Didn't change much; the AI just started to stutter. Still no reaction to my messages.

ok, I fixed the things that did not work:
chat messages, the Develop Character button (creates new traits based on the old ones), auto mode on/off, and hopefully also the device problems.

Hopefully even the false positive, because it seems I included the older zip in the new zip :X.

I'm so lost because I'm looking between this and the GitHub trying to get this working, and I think I've hit a roadblock. I keep receiving this error.

Maybe you could make a video showing the setup process? To me this is way more confusing than the original StrokeGPT. That would be greatly appreciated. Edit: Now I have this issue, and also my site doesn't look like everyone else's does. Mine just looks like the old one.


You’ve been pretty vague about your setup. Are you using Ollama? Are you using LM Studio like I do?
If you’re using LM Studio, do you run lms server start in the command prompt before launching StrokeGPT?

Main error:

LLM Connection Error: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded...
Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it

This means the app is trying to connect to 127.0.0.1:11434 (your local machine), but nothing is actually listening on that port. In other words, the LLM backend (Ollama, LM Studio, etc.) isn't started, or it's running on a different port.
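A quick way to confirm that diagnosis is to try opening a TCP connection to the port (11434, taken from the error above). A minimal Python sketch:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, unreachable, ...
        return False

# False here means: start Ollama / LM Studio first, or fix the port.
print(port_open("127.0.0.1", 11434))
```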


Other error:

Unexpected endpoint or method. (GET /v1/chat/completions). Returning 200 anyway

This usually happens when the app calls an endpoint the backend doesn’t recognize, or the backend says “OK” but doesn’t actually know what to do with the request.


Likely causes

  1. The LLM server isn’t started → start Ollama, LM Studio, or whichever backend first before running StrokeGPT.
  2. Wrong port or endpoint → maybe your code calls /api/chat, but the backend expects /v1/chat/completions (or the other way around).
  3. Firewall or antivirus blocking port 11434.
  4. Wrong config in your settings → the API URL might be incorrect or pointing to the wrong port.
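On cause 2, it can help to see the request shape /v1/chat/completions expects. A sketch of the OpenAI-style call that both LM Studio and Ollama's OpenAI-compatible layer accept (the model name below is a placeholder, not something from this thread):

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str):
    """Build url, headers, and body for an OpenAI-style
    POST /v1/chat/completions call."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "model": model,  # placeholder; use whatever model your backend lists
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request("http://127.0.0.1:11434", "some-local-model", "hello")
print(url)  # http://127.0.0.1:11434/v1/chat/completions
```

Once the backend is actually running, you can send this with requests.post(url, headers=headers, data=body) or plain urllib and see whether the 400/connection errors go away.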

How to quickly check if the backend is running

1. Test the port
Open Command Prompt (Windows) or Terminal (Mac/Linux) and run:

curl http://127.0.0.1:11434
  • Connection refused → nothing is running on that port.
  • Any response → the port is open.

2. Test the exact endpoint

curl http://127.0.0.1:11434/v1/chat/completions
  • 405 Method Not Allowed → endpoint exists but needs a POST request with data.
  • 404 Not Found → endpoint doesn’t exist, wrong URL.
  • Connection refused → backend not running.
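If curl isn't available, the same three-way check can be done from Python. This sketch classifies a GET the same way the curl bullets above do:

```python
import urllib.error
import urllib.request

def check_endpoint(url: str, timeout: float = 2.0) -> str:
    """Classify a GET: reachable (with status) vs. nothing listening."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"reachable (HTTP {resp.status})"
    except urllib.error.HTTPError as e:
        # The server answered, just not with 2xx:
        # 404 = wrong URL, 405 = endpoint exists but wants POST.
        return f"reachable (HTTP {e.code})"
    except urllib.error.URLError:
        return "connection refused or backend not running"
```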

3. Make sure the backend itself is running

  • LM Studio → start the local server and check the port in settings.
  • Ollama → run:
ollama serve

By default, this starts the API on port 11434.

Here are some cmd commands to check the port with LM Studio

Hello, I have an issue where it won't connect to LM Studio, the error message being:

2025-09-18 19:49:08 [ERROR] Unexpected endpoint or method. (GET /api/tags). Returning 200 anyway

Meanwhile the LMS window shows me this:


Apparently it keeps using the wrong endpoint. How do I make it use the right one?
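For context: GET /api/tags is Ollama's model-list endpoint, while LM Studio's OpenAI-compatible server lists models at GET /v1/models, so an app hard-coded for the Ollama API will log exactly this against LM Studio. A hypothetical remap table, illustrative only since the app's real route handling may differ:

```python
# Ollama's native endpoints vs. the OpenAI-compatible paths
# that LM Studio's local server exposes. Illustrative sketch only.
OLLAMA_TO_OPENAI = {
    "/api/tags": "/v1/models",            # list available models
    "/api/chat": "/v1/chat/completions",  # chat (payload shapes differ too!)
}

def translate_path(path: str) -> str:
    """Map an Ollama-style path to its OpenAI-compatible twin, if known."""
    return OLLAMA_TO_OPENAI.get(path, path)

print(translate_path("/api/tags"))  # /v1/models
```

Note that swapping the path alone is not enough for /api/chat, since the request and response JSON shapes also differ between the two APIs.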

Tried both and couldn't connect to either. At this point just give me a detailed guide, like a video just walking through the setup. I'm tired of trying; I feel like a chicken with its head cut off trying to do this.

There’s no video tutorial. I’ve written in almost all my posts that the entire build was done through vibe coding and ChatGPT.
A problem like this? I’d fix it with trial and error using ChatGPT (and that’s exactly how I built the whole thing, as I said).
Unfortunately, I don’t have a magic solution — I can only suggest you do what I did until you fix the endpoint error.

There are just too many variables that can cause unexpected errors, I’m afraid.
As a final tip, I can suggest a better alternative to ChatGPT that I discovered just last night: Cursor.
It’s a bit of a hassle, but if you upload your build to GitHub (desktop version), you can open it with Cursor (it works like Claude/Anthropic). Then you let Cursor read your entire codebase and ask it to fix the error for you.
Right now, it seems to be far superior to ChatGPT — I’m already making interesting progress on a new build using it.

P.S. There are no shortcuts. Everything I did in the build was completely through trial and error. Ten, sometimes twenty code changes in a row that didn’t work, then finally one that did. Hours — WAY too many hours — feeling exactly like you said, “a headless chicken.”

Got it working, but half of the words for the settings are in a different language.

Those are inside the file “Index”. You can easily translate it.