[WIP] StrokeGPT - A Free, Self-Learning AI Partner for The Handy

I like the idea, but I keep getting a “read timeout” error. The exe is running, ollama is running, the window says it sets a personality, but when I type something it just kicks out that error after a minute. Any ideas where I’m going wrong?


Hey, thanks for your feedback!

The “read timeout” error occurs because the StrokeGPT application has a built-in timer to stop things going wacky. When you send a message, it gives the local AI model (running in Ollama) 60 seconds to process your request and send back a reply. If the AI takes longer than 60 seconds, the application “gives up” waiting and produces the “read timeout” error.
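For the curious, the pattern behind that error looks roughly like this. This is only a sketch using Python's standard library; the endpoint and model name are Ollama's defaults, not necessarily StrokeGPT's exact code:

```python
import json
import urllib.error
import urllib.request

def ask_ollama(prompt, url="http://127.0.0.1:11434/api/generate", timeout_s=60):
    """Send a prompt to the local Ollama server and give up after timeout_s seconds."""
    payload = json.dumps({
        "model": "llama3:8b-instruct-q4_K_M",  # the recommended model
        "prompt": prompt,
        "stream": False,
    }).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return json.loads(resp.read())["response"]
    except (urllib.error.URLError, TimeoutError):
        # Server unreachable, or the model took longer than the limit:
        # this is the point where the app surfaces a "read timeout".
        return None
```

If the model can't answer within the limit, the caller gets nothing back and reports the timeout instead of hanging forever.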

The most likely reason for this error is that your computer’s graphics card (GPU) is struggling to keep up with the demands of the AI model.

What’s your hardware? The minimum requirement (still quite slow and prone to errors) is 4GB of VRAM.
The app will not run on integrated graphics (e.g., Intel HD Graphics).

If it is a hardware issue, one solution you could try is reducing how much information the AI has to process with each message.

This would mean:

  • Turn Off Long-Term Memory: The simplest way to do this is by using the UI. Click the “Memories: ON” button to toggle it to “Memories: OFF”.

  • Close Other GPU-Intensive Programs: Make sure that no other programs that use the GPU (such as games, video editing software, or other AI tools) are running at the same time.

Again, your hardware is the most important thing here. Sadly, local models are very hardware intensive right now. We’ll go from there :grin:

I have a GTX 1080 with 8GB of VRAM. I turned memory off and it seems to be better, though I still get the errors a little. Task Manager says about 5GB of memory is being used, and a whole lot of CPU.

Thanks, I appreciate it.

Sadly, right now, this is an Ollama issue that I’m not 100% sure how to tackle. But I am working on it and will try to have a fix in place for the 1.3 patch.

Thanks for your patience. :heart:


Hey everyone, a quick update on what’s new in the v1.2 release!

This is a big one focused on user experience and new features.

  • Project Now on GitHub: StrokeGPT has a new home! The project is now hosted on GitHub for easier downloads and tracking updates.
  • Complete UI Overhaul: The interface has been redesigned with a two-column layout, featuring a much larger chat window and a dedicated sidebar for all settings.
  • ElevenLabs Audio Support: You can now enable a fully voiced experience. The sidebar has a new panel to set up your ElevenLabs API key and select a voice.
  • Smarter In-Session Memory: The AI’s memory has been significantly upgraded. It can now recall context from the very beginning of a long session without slowing down.
  • New Recommended AI Model: The recommended model is now llama3:8b-instruct-q4_K_M, which is much faster and better at following instructions.

Check out the main post for the full details and the new GitHub link!

Thanks for all the support. I appreciate y’all.

@bilbyion hopefully this update will run better for you, especially the new model :heart:

Oh! I will be pushing a small update tomorrow (I’ll have the new GPU! :heart_eyes:) that’ll add an easy-to-use UI element where you can view all the AI’s invented patterns, play them on demand, and delete any you don’t like.


Really cool, any plans for making it work on Linux?


Yup! :grin:

It’ll come after the iOS version.

Unless people want to mess with my code (now on GitHub) and try it themselves, which I’d be super happy about.


Damn, you are going all out. :fire:


My blood is made of code and cheap coffee. Being a disabled shut-in has its benefits lol.


Here’s what happens when Skynet tries to break my dick off because it so much as sniffed that I’d removed its guardrails :rofl:

oops

Quite the example of how to break a dick.

Last update for the night: v1.2.5, added to the OP. It’s an easy update; just replace the app file and read the changelog if you want.

The AI now understands max depth on a much deeper level and should adhere to it and work all kinds of dicks with the skill of a 6,000-year-old porn star.
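As a tiny illustration of the idea (a hypothetical helper, not the app’s actual code), “adhering to max depth” ultimately comes down to clamping whatever depth the AI asks for:

```python
def clamp_depth(requested, max_depth):
    """Keep an AI-generated stroke depth within the user's limit (0-100 scale assumed)."""
    return max(0, min(requested, max_depth))
```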

That’s it for today.

Updating your app will be super easy from now on.

Thanks for the support. :heart:

@sushimaster

Install Ollama however you like, then:

# create and activate an isolated Python environment
conda create -n strokeGPT python=3.10
conda activate strokeGPT
# install the app's dependencies
python -m pip install -r requirements.txt --upgrade
# pull the recommended model, then start the app
ollama pull llama3:8b-instruct-q4_K_M
python app.py

This project is pretty cool. I created something some months back using NomiAI since I had a potato gpu.
It sucked but technically worked. This is a little better. Glad you finally threw it on git.

An idea I’d like to push is to separate the toy activity from the conversation. Because of all the technical stuff, you can’t really have a true conversation with the bot; it ends up being kind of clinical, with the bot narrating motions and whatnot. What I attempted was to have a separate AI (AI B) parse my conversation with AI A and decide whether, and how, to move the device.

Food for thought.
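A toy sketch of that two-stage idea (all names and keywords here are hypothetical; a real “AI B” would be a second LLM call rather than a keyword scan):

```python
import re

def extract_device_command(chat_reply):
    """Second-stage parser: scan AI A's reply for movement hints and turn
    them into a (speed, depth) command for the device, or None to leave
    the device alone.  Purely illustrative keyword matching."""
    hints = {"faster": (90, None), "slower": (30, None),
             "deeper": (None, 100), "shallow": (None, 40)}
    for word, (speed, depth) in hints.items():
        if re.search(rf"\b{word}\b", chat_reply, re.IGNORECASE):
            return {"speed": speed, "depth": depth}
    return None
```

The chat model never has to narrate motion at all; the parser (or a second model) quietly drives the device from whatever the conversation produces.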


Hey, thanks!

Well done with Nomi! :grin:

StrokeGPT actually handles messaging and device commands as separate processes in the backend, though it might sometimes blend the two to keep the vibe going. The goal’s definitely a natural flow while still controlling the Handy. In auto mode and milking mode, the AI also decides the movements on the fly, using a base set of patterns for inspiration, all while still chatting as normal. Sadly, it’s not 100% there yet lol.

Though, I am looking into having two systems working in tandem to surmount this problem. The issue then becomes running two capable models locally at once, though I’ve experimented with a few that do OK-ish.

I wanted to share this as I’ve been noodling it over for a while.

The next major update will (hopefully) change how the AI treats patterns and movements as a whole. It’s not fully planned out yet, but it’ll go like this:

  • The model will understand the Handy device itself on a much deeper level.
  • The AI will simulate a “pressure” level (that feeling you get when you know something’s going well). This will be based on depth and speed over time. When combined with “mood”, it (should) help the AI produce much, much more realistic movements. My tests so far have been friggin’ awesome.
  • Working in tandem with the above, the AI will be able to generate “Mini-scripts” when certain requirements are met (with the two variables mentioned above) which will also improve the overall feel of device movements.
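To make the “pressure” idea concrete, here’s one way such an accumulator could work. The numbers and formula are invented for illustration, not StrokeGPT’s actual model:

```python
def update_pressure(pressure, speed, depth, dt, decay=0.05):
    """Build simulated 'pressure' from speed and depth over time, with slow decay.

    speed and depth are on a 0-100 scale; dt is elapsed seconds.  The
    result is clamped to [0, 1], where 1 would trigger the most intense
    movements.  (Hypothetical model, purely a sketch of the concept.)
    """
    buildup = (speed / 100) * (depth / 100) * dt
    pressure = max(0.0, pressure + buildup - decay * dt)
    return min(pressure, 1.0)
```

Fast, deep strokes push the value up quickly; slow or shallow ones let it drift back down, which is what lets “mood” plus “pressure” shape the movement curve over a whole session.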

Thanks for the kind words, feedback, and support. This project is coming along so well! :grin: :heart:


You’re putting out updates faster than I can test them! A couple of thoughts:

  • I would like to see an option to run the voice synth locally as well.
  • It would be cool if there were a way to give simple instructions without having to type full messages, just something quick you can click while it’s in auto mode. Something like ‘faster’, ‘deeper’, or ‘steady’.

Are there any plans to support the Autoblow AI Ultra?


Ah, very nice! I’ve been experimenting with a fork of a nice real-time voice chat repo to do JOI – it drives the Handy, with voice input – and it works surprisingly well; the LLMs really help.

Hi @StrokeGPT , I came across this discussion and wanted to ask something specific.

Is this AI able to recognize the moment of orgasm and switch to a dedicated pattern created just for that purpose, one that doesn’t just follow the main script but is designed to support only that phase with specially tailored movements?

One of the major issues with regular automatic scripts is that they can’t detect that moment and often end up ruining it with timing or motion that’s completely off.

Also, if someone is using only an iPad and The Handy, can this AI still be used or is a computer strictly required? I need to stay in a room without access to a PC.

Thanks in advance!


If it has an accessible API, I could likely work on it in the future, yeah. The main issue is actually having any specific device to develop with.

I’ll definitely keep it in mind, though.

Currently, the AI infers the approach of orgasm from subtext or explicit cues (e.g., “make me cum”, or a flurry of exclamation marks), then switches to a dedicated, high-intensity “milking mode” for climaxing movements. The system is still quite new, but all movements are invented on the fly rather than pre-determined scripts.
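In spirit, that cue check is something like the following (the phrases and thresholds are hypothetical examples, not the app’s actual heuristics):

```python
def detect_climax_cue(message):
    """Switch into 'milking mode' when the user sends an explicit phrase
    or a burst of exclamation marks.  Illustrative heuristics only."""
    explicit = ("make me cum", "i'm close", "almost there")
    text = message.lower()
    return any(phrase in text for phrase in explicit) or text.count("!") >= 3
```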

Yeah, a computer is required to run StrokeGPT, since it hosts the local AI (Ollama) and the Flask server. The iPad can still act as the remote control, though: if the server is set to listen on your local network, you can open the StrokeGPT interface in the iPad’s browser by going to your computer’s local IP address (e.g., http://192.168.1.x:5000). Note that http://127.0.0.1:5000 only works on the computer itself, since 127.0.0.1 always means “this device”.
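The loopback-vs-LAN distinction, sketched with Python’s standard library (port 0 just means “any free port”; StrokeGPT’s own Flask startup code may differ):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(lan_visible):
    """Bind a server either to loopback (PC-only) or to all interfaces.

    127.0.0.1 is reachable only from the hosting computer itself, while
    0.0.0.0 also accepts connections from other devices on the network,
    such as an iPad's browser.  (Illustration only.)
    """
    host = "0.0.0.0" if lan_visible else "127.0.0.1"
    return HTTPServer((host, 0), SimpleHTTPRequestHandler)
```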

Hope this helps! :grin: :heart:
