StrokeGPT - A Free Customisable Chatbot for The Handy that Invents Funscripts and Fucks You in Real Time

For some reason whenever I try to install the requirements in the cmd prompt as per the directions all I get is:


  Traceback (most recent call last):
    File "D:\Python\lib\site.py", line 167, in addpackage
      exec(line)
    File "<string>", line 1, in <module>
    File "D:\Python\lib\site-packages\_distutils_hack\__init__.py", line 34
      f"Register concerns at {report_url}"
      ^
  SyntaxError: invalid syntax

Remainder of file ignored
Traceback (most recent call last):
  File "D:\Python\lib\runpy.py", line 170, in _run_module_as_main
    "__main__", mod_spec)
  File "D:\Python\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\Python\Scripts\pip.exe\__main__.py", line 5, in <module>
  File "D:\Python\lib\site-packages\pip\__init__.py", line 1
    from __future__ import annotations
SyntaxError: future feature annotations is not defined

I have no clue what I fucked up, but clearly I did something wrong. I would appreciate any and all help from here.

This issue isn’t with StrokeGPT — it’s because you’re running an outdated version of Python that doesn’t support f-strings or the “from __future__ import annotations” syntax.

To fix it, uninstall all old Python versions from Control Panel → Programs → Uninstall a program, then download and install Python 3.10 or 3.11.
Make sure to check “Add Python to PATH” during setup.

After installation, open a terminal and run:

python --version

It should show Python 3.10.x or 3.11.x.

Then upgrade pip and reinstall dependencies:

pip install --upgrade pip
pip install -r requirements.txt

If pip still gives errors, run:

python -m ensurepip --upgrade

This error happens because your Python interpreter is too old to recognize the syntax used by modern libraries, so updating Python should solve it completely.
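If you want to confirm that’s the problem before reinstalling, a quick check like this (plain Python, nothing StrokeGPT-specific) tells you whether the interpreter is new enough. The 3.8 cutoff here is an assumption on my part — f-strings need 3.6+ and postponed annotations need 3.7+, but current pip/setuptools wheels generally expect 3.8 or later:

```python
# Sanity check: is this interpreter new enough for modern libraries?
import sys

if sys.version_info < (3, 8):
    # An interpreter this old will choke on f-strings in site-packages,
    # which is exactly the SyntaxError shown in the traceback above.
    raise SystemExit(f"Python {sys.version.split()[0]} is too old -- install 3.10 or 3.11")
print(f"OK: running Python {sys.version.split()[0]}")
```

Run it with `python` from the same terminal you use for pip, so you’re testing the interpreter that’s actually on your PATH.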

Sorry to bother you. Do you have any idea what might be happening to mine?

Sorry, I worked on a different version — I didn’t make the one with Intiface integration. Without any console errors, it’s basically like looking for a needle in a haystack.
However, I can suggest using Cursor, a free desktop app that lets you do “vibe coding.” Basically, you show it all the StrokeGPT files so it understands how everything works, then tell it about your issue. It should be able to fix the problem (I actually used it myself to build my version of StrokeGPT).

After much hassle, I’ve finally managed to connect the Handy and get the base/original version of StrokeGPT running. It’s a really interesting application.

But I’m not really a coder, and I have a pretty basic technical skillset (electric wiring, fixing leaks or a bike, no problem… but this is kinda way over my head). Still, I did have an idea for how to enhance the base version… visually.
I’d like to get some thoughts on it from the people who actually coded/worked on this app, and to know whether someone with extremely limited coding skills could pull off the following:

Would it be possible for the original app to change a (centered) image depending on the “mood” the AI is in, swapping between pre-made image/animated GIF files based on their names, like Curious.png/jpg? These image files would be pulled from the same folder the splash-screen image is stored in. So when the AI is curious, it would load Curious.png/jpg/gif to reflect its mood visually, increasing immersion.

Perhaps it could also play a matching (loop/non-loop) sound file for the mood (like Curious.wav/mp3) to increase immersion even more.

From what I’ve read, an AI’s capabilities are limited by the model being used, but hm… would the original model be capable of the extra stuff mentioned above?

If I could get some pointers on how to achieve this myself in a (hopefully) fool-proof manner, that would be great. Thanks in advance.

@Polemicus: I took the liberty of trying out the description parameters of your version and used them in the original. It seems to work… with some modification/adjustment in phrasing (I spent almost an hour trying to understand what was written at first; I wasn’t really aware that with AI you need to phrase things in a specific/precise manner, but now I know).

I haven’t tried out your version yet, since it seems more daunting to install, but could you elaborate a bit more, in layman’s terms, on how to use/apply the image/video/animated GIF system you implemented in your version? (The folders that store videos/images were empty. Is it something that’s affected by how you name those media files?)

It’s definitely pretty complicated to get my version working, since it was made after endless vibe-coding attempts (I’m not a programmer either, so yeah). For the first part of development I used ChatGPT, and for the final phases I switched to Cursor — which, honestly, I’d recommend using from the start, and I’d also recommend it to anyone like me who wants to improve the app without actually having the skills to do it.

These factors made it basically impossible to cover every single thing you need to consider when installing it “from scratch” (though I still think that, with Cursor’s help, it can be done).

That said, the folder structure is super simple:
inside each folder you can drop whatever related content you want (the file names don’t matter — just don’t rename the folders themselves unless you also edit the code). StrokeGPT grabs the contents based on each panel’s settings: loop/no loop, random/no random, etc.

“Botselfie” is the folder that contains the bot’s profile picture; in my version the profile image is placed on the left side of the screen and scaled up.

For the multimedia formats, just stick to the folder names.
JPG and similar go in images
GIFs go in gif
MP3s go in audio
Videos go in video.

Botselfie should be able to read images, videos, and GIFs.

Just drag your files into the right folders, restart the app, and you’re good to go.

After releasing that version, I kept working on the app a little on my own using Cursor.
I managed to expand the folder system for multimedia files by adding a hint/clue system.

Basically, even if it’s kinda rough, the bot detects certain words or “actions” you describe in chat, and based on that it plays specific media from additional subfolders. Each subfolder is one hint.

Example:
“I wanna kiss you” (where kiss is a hint).
The bot goes to: static → updates → hint (custom folder linked in the code) → kiss (subfolder) and then plays a random file from inside it.

I did the same thing for audio too, just like you suggested.
Honestly it’s harder to explain than to understand in practice lol.

Before anyone asks: no, I’m not releasing that version. I seriously don’t feel like updating GitHub again, cleaning the build, removing personal API keys, etc.

Everything was made with ZERO coding experience and Cursor.

Hope this answers your questions — and I hope you’ll drop an improved version yourself :wink:

Hm, interesting. Thanks for the information, Polemicus.

While I am baffled by the idea of me improving on an existing application without having any coding skills, your words have piqued my interest now, sir.

I will use the original version as a foundation; try to give my own spin to it.

Just… don’t hold yer breath expecting miracles. XD
