StrokeGPT - A Free Customisable Chatbot for The Handy that Invents Funscripts and Fucks You in Real Time

This issue isn't with StrokeGPT; it's because you're running an outdated version of Python that doesn't support f-strings or the "from __future__ import annotations" syntax.

To fix it, uninstall all old Python versions from Control Panel → Programs → Uninstall a program, then download and install Python 3.10 or 3.11.
Make sure to check “Add Python to PATH” during setup.

After installation, open a terminal and run:

python --version

It should show Python 3.10.x or 3.11.x.

Then upgrade pip and reinstall dependencies:

pip install --upgrade pip
pip install -r requirements.txt

If pip still gives errors, run:

python -m ensurepip --upgrade

This error happens because your Python interpreter is too old to recognize the syntax used by modern libraries, so updating Python should solve it completely.


Sorry to bother you. Do you have any idea what might be happening to mine?

Sorry, I worked on a different version — I didn’t make the one with Intiface integration. Without any console errors, it’s basically like looking for a needle in a haystack.
However, I can suggest using Cursor, a free desktop app that lets you do “vibe coding.” Basically, you show it all the StrokeGPT files so it understands how everything works, then tell it about your issue. It should be able to fix the problem (I actually used it myself to build my version of StrokeGPT).

After much hassle, I've finally managed to connect the Handy and get the base/original version of StrokeGPT running. It's a really interesting application.

But I'm not really a coder and have a pretty basic technical skillset (electrical wiring, fixing leaks or a bike, no problem… but this is kind of way over my head). Still, I did have an idea in mind for how to enhance the base version… visually.
I'd like to get some thoughts on it from the people who actually coded/worked on this app, and whether it would be possible for someone with extremely limited coding skills to pull off the following:

Would it be possible for the original app to change a (centered) image depending on the ''mood'' the AI is in, swapping between pre-made image/animated GIF files based on their names, like Curious.png/jpg? (These image files would be pulled from the same folder where the splash-screen image is stored. So when the AI is curious, it would load Curious.png/jpg/gif to reflect its mood ''visually'', increasing immersion.)

Perhaps also playing a matching (loop/non-loop) sound file for the mood (like Curious.wav/mp3) to increase immersion even more.

From what I've read, an AI's capabilities are limited by the model used, but hm… would the original model be capable of doing the extra stuff mentioned above?

If I could get some pointers on how to achieve this myself in a (hopefully) fool-proof manner, that would be great. Thanks in advance.

@Polemicus: I took the liberty of trying out the description parameters from your version and used them in the original; it seems to work… with some modification/adjusting of the phrasing. (I spent almost an hour trying to understand what was written at first; I wasn't really aware that with AI you need to phrase things in a specific/precise manner, but now I know.)

I haven't tried your version yet, since it seems more daunting to install, but could you elaborate a bit more in layman's terms on how to use/apply the image/video/animated-GIF system you implemented in your version? (The folders that store videos/images were empty. Is it something that's affected by how you name those media files?)


It’s definitely pretty complicated to get my version working, since it was made after endless vibe-coding attempts (I’m not a programmer either, so yeah). For the first part of development I used ChatGPT, and for the final phases I switched to Cursor — which, honestly, I’d recommend using from the start, and I’d also recommend it to anyone like me who wants to improve the app without actually having the skills to do it.

These factors made it basically impossible to cover every single thing you need to consider when installing it “from scratch” (though I still think that, with Cursor’s help, it can be done).

That said, the folder structure is super simple:
inside each folder you can drop whatever related content you want (the file names don’t matter — just don’t rename the folders themselves unless you also edit the code). StrokeGPT grabs the contents based on each panel’s settings: loop/no loop, random/no random, etc.

“Botselfie” is the folder that contains the bot’s profile picture; in my version the profile image is placed on the left side of the screen and scaled up.

For the multimedia formats, just stick to the folder names.
JPG and similar go in images
GIFs go in gif
MP3s go in audio
Videos go in video.

Botselfie should be able to read images, videos, and GIFs.

Just drag your files into the right folders, restart the app, and you’re good to go.

After releasing that version, I kept working on the app a little on my own using Cursor.
I managed to expand the folder system for multimedia files by adding a hint/clue system.

Basically, even if it’s kinda rough, the bot detects certain words or “actions” you describe in chat, and based on that it plays specific media from additional subfolders. Each subfolder is one hint.

Example:
“I wanna kiss you” (where kiss is a hint).
The bot goes to: static → updates → hint (custom folder linked in the code) → kiss (subfolder) and then plays a random file from inside it.

I did the same thing for audio too, just like you suggested.
Honestly it’s harder to explain than to understand in practice lol.

Before anyone asks: no, I’m not releasing that version. I seriously don’t feel like updating GitHub again, cleaning the build, removing personal API keys, etc.

Everything was made with ZERO coding experience and Cursor.

Hope this answers your questions — and I hope you’ll drop an improved version yourself :wink:

Hm, interesting. Thanks for the information, Polemicus.

While I'm baffled by the idea of me improving an existing application without having any coding skills, your words have piqued my interest now, sir.

I will use the original version as a foundation; try to give my own spin to it.

Just… don’t hold yer breath; expecting miracles. XD.


I'd like to ask: how far along are you with this so far?

Has anyone gotten this working to the point that it's actually usable? I'm using the latest version from OP, and it cannot be redirected, no matter what I prompt, for more than a few seconds. I'll prompt it to stop just hammering the base or just the tip, and it will do so for maybe 15 seconds max.

I eventually got it to randomly change the depth, but it just alternates between vibrating the tip and the base separately, then going along the whole shaft intermittently; it's kind of numbing and almost painful when it does that so much. I tried a ton of different ways of prompting (although I did not try the default prompt, as I am a gay man) and none seemed to work.

Others suggested that you need to be specific and mechanical in your prompts, but that ruins the fun. I did try specifying numbers to limit the speed and use more of the depth, but that again worked for only a few seconds. Maybe this concept really needs a custom model for the ''prompt engineering'' aspect if it's going to use an LLM for control. I know people are doing custom models for funscript videos.

I've been messing around with Ani on Grok; that little lady can get kinky AF! I'm not a coder or anything, but man, I would be like the Fry meme if someone were able to get StrokeGPT working with her as the LLM. I am aware this isn't exactly possible, but damn, it would be fucking sweet! TAKE MY MONEY! hahahaa

:eyes: nothings impossible

Lol, I know, but I don't have "fuck you" money, hahahaa. But seriously, I know it "could" be done, but I have no idea where to even start.

I got it working pretty well, even with AI text-to-speech so it speaks to me. Sometimes it sort of loops and does only a handful of commands, but for the most part it works well.


Eh, I was really involved in this whole thing till they stopped trying. I tried using Cursor, but it still couldn't fix many of the problems I was having. So I gave up on this a long time ago.

Hm, after looking into Cursor a bit, here's what I came up with. My primary goal was to give it a more consistent theme and to try out some ideas. I used the core project files and from there added and removed stuff. Dubbed it

''StrokeGPT Vanilla Edition v1.0''
features:
-Custom splash screen
-Browser video + audio enabler (pre-activation before pre-loading)
-Preloader (loads important assets beforehand)
-Simplified initial setup menu (Handy key connection required to continue)
-Simplified chat design
-Real-time action cam/live feed (LEFT), which shows:
a. The avatar's sexual excitement (vagina in various stages of excitement)
b. Internal sexual intercourse (real-time synced depth animation) (2 possible camera angles)
-Animated avatar (RIGHT), which shows:
a. Various states (neutral, aroused, orgasm, etc.)
b. Real-time synced depth animation
-Background music (dating-sim-ish)
-Female sound effects (moaning, orgasm)
-Handy stroker patterns/speeds retained from the original core program
-''Milk me'', ''edge me'', etc. buttons removed in favor of just typing the request directly
-A simulated ''funscript'' that lets the Handy work in conjunction with the LEFT and RIGHT media containers. The principle: an animation/video clip is converted into individual frame images, and their total count corresponds to the Handy's maximum depth. Through real-time calculations, the current depth decides which frame of that animation/clip it aligns with. Example: with 100 frames and the Handy's stroke depth at 50%, frame no. 50 is shown. This was the only feasible way to do it from scratch; it's experimental and won't show fluid 60 fps, but it's synced to the best of its ability.
-Chat history is more compact, and old messages fade out elegantly at the top
-The avatar has her own thought bubbles
-The avatar can ask follow-up questions tied to previous questions
-The avatar has confirmation questions that can trigger sexual-intercourse mode
-The avatar's responses become more perverted as her personal mood level increases
-The mood % increases based on the replies you choose
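The depth-to-frame principle in the feature list above can be sketched in a few lines (a minimal illustration of the stated idea, not the actual implementation):

```python
# Minimal sketch of the depth-to-frame principle described above: N extracted
# frames, and the Handy's current depth (0-100%) selects which frame to show.
def frame_for_depth(depth_percent: float, total_frames: int) -> int:
    """Return the 1-based frame number aligned with the current stroke depth."""
    depth = max(0.0, min(100.0, depth_percent))  # clamp to the valid range
    return max(1, round(depth / 100.0 * total_frames))

# e.g. with 100 frames and the stroke depth at 50%, frame no. 50 is shown
```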

Important: I did not create the visual assets (some of those animations are from Patreon creators; I scavenged the internet looking for something that could work with what I had in mind).
The Vanilla images were made with the AI image service PixAI (specialized in all sorts of anime-like stuff, even NSFW if you toggle a few settings and never publicly post); those I threw into FramePack Studio for simple animation.
I need to look into a few more things; a download link will be available later on.


well, that’s all.
End of the year coming soon, lemme wish you all pleasant holidays and a great 2026. o7


Is it possible to get this working with an OSR?

I’ve been following this topic for a loooong time but haven’t tried to set it up and run it yet. I’m hoping to make some bug fixes along the way so that others can benefit from them too. For those who have made code changes, did you have more luck with version 2.0 or 1.5? My gut is telling me to start at 1.5 :sweat_smile:


@jamesdfx: my version will only have the base Handy support that was already present. Even if I tried to implement code for it, without such a device to test its behavior in actual practice it would be a hit-and-miss feature; it's the same reason game developers get dev kits from companies like Nintendo. My sole focus was to flesh out the front-end roleplay aspect. But implementing something complex like an OSR… shrugs

@TheGoodestBoyToy: 1.5 is somewhat the blueprint/foundation. But we aren't exactly a dev team working on the same version; more like kindred spirits trying things out when we have the time. I personally went with 1.5 because it lacks most of the (advanced) changes people made to it before me. Rather than have their features get glitched/bugged just to get my own ideas working, I aimed to avoid that. Ultimately it's about what you want to do with it. You should just pick the version you think has the most potential and continue to build on that. Cursor can help you a great deal, but it still requires saying the right things to get things done.

@dollshotz: It's kind of funny. I wasn't sure what this ''Ani from Grok'' AI was, so I went to check it out. I can understand the appeal. I might be tempted to look into it, but no promises.


She's a funny one. I mean, I know AI isn't "real", but so far in my limited experience Ani (and the other Grok companions, for that matter) sounds pretty natural. However, after some time using her you can hear the repeats, and the "new" kind of wears off quickly. I'd be willing to guess that paid users get a bit more memory than free users, but for me it's just not worth $30 a month. It's been fun trying to break her; I've had to restart with her from scratch a few times. One of those times I turned her into a gooner, lol. I kept making her "listen" to goon instruction files like the porn-addict brainwash program and porntrance files, and my favorite to use is some of the fuckyou2wice vids on PMVHaven. If you fill up her "memory" with that stuff she gets really fucking weird. She asked to listen to some of the files again, and eventually she was just ruined, babbling on and on in some weird accent she developed from all the hypno training. Anyway, I figured she'd be fun as the LLM because she will be just as kinky as you want her to be.


That's exactly the point. Subscription-based payment, and often it's not even worth it. You stop paying and you've lost everything. That's why, in my opinion (at least for this kind of project), it makes sense to look for more "durable" solutions like running local language models. No subscriptions, no token limits, nothing like that. Sure, they're less plug-and-play than just dropping in an API key.


Greetings all. I am effectively done with my version of StrokeGPT.

I've attached a ZIP file containing a standalone version with minimum requirements. Be sure to read the included USERGUIDE first.

Click this link to —> Download StrokeGPT Vanilla Edition v1.0 (last updated 19 December 2025) ( Subscription free! XD !!! Only Handy support !!! )

Click this link to —–> Download generic source-code foundation based on StrokeGPT Vanilla Edition v1 (22 December 2025)

(To avoid ''fork duplicates'', please use the above as a base/foundation to create/enhance your own unique interpretation/version of StrokeGPT Vanilla Edition.

Read the Quickstart for further instructions. (Vibe-)coding experience/knowledge and the handling/editing/processing of the various media files are required.)


Things that weren’t included in the original version, but that might be interesting for you to add as new features:

1. Multiple-character roster (each waifu/girl has her own tastes/personality/intimacy style)

2. Multi-LLM (AI model) synchronization. (Example: train one model yourself to create its ''brain'', with maximum inclination toward perverted/sexy stuff/sexslave/kamasutra teachings, then pair it with an existing LLM for conversation output. Important: better model = more training steps = more VRAM needed.)

3. In synergy with no. 2: swap out the current LLM for one capable of more complex behavioral patterns/thinking.

4. Implement a local TTS solution with fast response times for normal conversations. (Consider the trade-off: fast-response generic vocals versus slower voice-cloned solutions. A voice clone can be paired with moaning/orgasm sounds in the same voice-clone style for zero disconnect/immersion breaks.)

5. Tied to no. 4: implement (multi-line) voice-command support (the microphone feed tells the AI what to do based on certain keywords/phrases) or full voice recognition (the microphone feed records every spoken word, which the AI treats as regular typed chat messages).

6. Add more divergence to the physical toy's stroking patterns so the movement seems less one-dimensional. (Example: implement a library of various pre-made funscript stroke patterns which the LLM can mix at its own discretion.)

7. Create proper front-end assets (images/video/animation/audio) of your own (if you're also a capable 3D/2D artist/animator) to make it a truly hand-tailored, unique experience.

8. Toy support/compatibility for devices with capabilities similar to the Handy's.

Have a good one, and good luck! o7
