I tested Intiface with the Handy and a few Lovense devices, no idea about anything else, sorry :X. Is there a select button? Did you press the save devices button? It creates a required JSON file. @makivalex?
I definitely did not add a virus. Maybe the scanner dislikes the .exe files in .venv → they aren't required anyway, so try a zip without them: 3.89 MB file on MEGA
Tested it with Intiface and the Keon; auto modes work fine, the AI writes messages and does the moves correctly. But when I type anything in the chat, I get no response, as if the model never receives my messages.
I am using an Ollama server (somehow I couldn't connect to LM Studio). Not sure how to fix this.
In my_settings.config there is
"sequential_messages_enabled": false,
"max_sequential_messages": 5,
"min_sequential_message_duration": 10000,
"max_sequential_total_time_normal": 60000,
"max_sequential_total_time_auto": 180000
→ try setting sequential_messages_enabled to true
I was trying to generate multiple messages at once and reverted it, because the AI is too weak atm for that to be an improvement. But I have hopes that with it enabled it should work again.
I'm a different user, but I'm running into the same issue: my messages are ignored while auto mode works perfectly fine.
This config change didn't really do anything, sadly.
OK, I fixed the things that did not work:
chat messages, the Develop Character button (creates new traits based on the old ones), auto mode on/off, and hopefully also the device problems.
Hopefully even the false positive, because it seems I included the older zip in the new zip :X.
Maybe you could make a video showing the setup process? To me this is way more confusing than the original StrokeGPT. That would be greatly appreciated. Edit: Now I have this issue, and also my site doesn't look like everyone else's. Mine just looks like the old one.
You’ve been pretty vague about your setup. Are you using Ollama? Are you using LM Studio like I do?
If you’re using LM Studio, do you run lms server start in the command prompt before launching StrokeGPT?
Main error:
LLM Connection Error: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded...
Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it
This means the app is trying to connect to 127.0.0.1:11434 (your local machine), but nothing is actually running on that port. In other words, the LLM backend (Ollama, LM Studio, etc.) isn’t started, or it’s running on a different port.
Other error:
Unexpected endpoint or method. (GET /v1/chat/completions). Returning 200 anyway
This usually happens when the app calls an endpoint the backend doesn't recognize, or calls it with the wrong HTTP method: here a GET was sent to /v1/chat/completions, which expects a POST. The backend says "OK" (200) but doesn't actually know what to do with the request.
Likely causes
The LLM server isn’t started → start Ollama, LM Studio, or whichever backend first before running StrokeGPT.
Wrong port or endpoint → maybe your code calls /api/chat, but the backend expects /v1/chat/completions (or the other way around).
Firewall or antivirus blocking port 11434.
Wrong config in your settings → the API URL might be incorrect or pointing to the wrong port.
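To make the endpoint mismatch concrete: Ollama's native chat API lives at /api/chat, while LM Studio (and Ollama's OpenAI-compatible layer) serve /v1/chat/completions, and both expect a POST with a JSON body. Here is a hedged sketch of building the right request for each style; the function name and the "ollama"/"openai" labels are mine, not from StrokeGPT:

```python
import json

# Default ports: Ollama listens on 11434, LM Studio's local server on 1234.
ENDPOINTS = {
    "ollama": "/api/chat",             # Ollama's native chat endpoint
    "openai": "/v1/chat/completions",  # OpenAI-compatible (LM Studio, Ollama compat layer)
}

def build_chat_request(base_url: str, backend: str, model: str, user_message: str):
    """Return (url, json_body) for a chat POST against the chosen backend style."""
    url = base_url.rstrip("/") + ENDPOINTS[backend]
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, json.dumps(body)
```

If the app builds one style of URL and the backend serves the other, you get exactly the "Unexpected endpoint or method" behavior above.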
How to quickly check if the backend is running
1. Test the port
Open Command Prompt (Windows) or Terminal (Mac/Linux) and run:
curl http://127.0.0.1:11434
Connection refused → nothing is running on that port.
Any response → the port is open.
2. Test the exact endpoint
curl http://127.0.0.1:11434/v1/chat/completions
405 Method Not Allowed → endpoint exists but needs a POST request with data.
404 Not Found → endpoint doesn’t exist, wrong URL.
Connection refused → backend not running.
3. Make sure the backend itself is running
LM Studio → start the local server and check the port in settings.
Ollama → run:
ollama serve
By default, this starts the API on port 11434.
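If curl isn't available, step 1 can also be done with a few lines of Python. A minimal sketch using only the standard library; note it only tells you whether something accepts TCP connections on the port, not whether the right backend is behind it:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default Ollama port.
# if not port_open("127.0.0.1", 11434):
#     print("Nothing listening on 11434, start the backend first.")
```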
Here are some Command Prompt commands to check the port with LM Studio.
Tried both and couldn't connect to either. At this point, just give me a detailed guide, like a video walking through the setup. I'm tired of trying; I feel like a chicken with its head cut off trying to do this.
There’s no video tutorial. I’ve written in almost all my posts that the entire build was done through vibe coding and ChatGPT.
A problem like this? I’d fix it with trial and error using ChatGPT (and that’s exactly how I built the whole thing, as I said).
Unfortunately, I don’t have a magic solution — I can only suggest you do what I did until you fix the endpoint error.
There are just too many variables that can cause unexpected errors, I’m afraid.
As a final tip, I can suggest a better alternative to ChatGPT that I discovered just last night: Cursor.
It’s a bit of a hassle, but if you upload your build to GitHub (desktop version), you can open it with Cursor (it works like Claude/Anthropic). Then you let Cursor read your entire codebase and ask it to fix the error for you.
Right now, it seems to be far superior to ChatGPT — I’m already making interesting progress on a new build using it.
P.S. There are no shortcuts. Everything I did in the build was completely through trial and error. Ten, sometimes twenty code changes in a row that didn’t work, then finally one that did. Hours — WAY too many hours — feeling exactly like you said, “a headless chicken.”