[WIP] StrokeGPT - A Free, Self-Learning AI Partner for The Handy

I’m well aware, just limited by my hardware right now. Thank you! :grin:

Applaud the efforts here, and thank you @StrokeGPT. Personally, I’ve decided to wait until there’s a browser-only version of this or something like it, because I found the installation process difficult. I’m using a MacBook, and it was all too complicated for me and ultimately didn’t work. But thanks for the hard work, and best of luck.

1 Like

Hey, thanks. I appreciate the kind words.

Are you using iOS? It’s only designed for Windows right now. I’ll make sure to update the documentation to reflect this! :grin: There will be an iOS version eventually, though (I don’t own any Apple products, so I can’t dev well for them).

And, yeah, I’m hoping to package it all into a singular program once it’s stable enough.

1 Like

I tried it today and I think it has great potential :slight_smile:

Setup
Setup was quite easy; the readme file described everything well.
What was a little sad is that the personality and Handy key need to be created again each time and are not saved (or at least I could not find out how to save them).

In-Play Configuration
The Min and Max Depth settings were a bit confusing for me.
I think they mean the position of the Handy (0 being the lowest position and 100 the highest possible position), but I am not sure.
With values of 30% minimum and 80% maximum, I only got very short strokes at varied speeds, so my assumption might be wrong.
Yet even when I asked for long, slow strokes, I got 10% speed but only short strokes.
I would also like a setting for maximum Handy speed, since the Handy can get really, really fast and that is not to everyone’s liking :slight_smile:
=> Maybe a setup with two sliders (one for min + max position, one for min + max speed) would be easier for users to understand.

Chat
Chatting was a bit… strange. The AI very often responded with very short answers. Including things like “elaborate, immersive, long answers” or similar in the personality did not work.
But maybe that is how it is supposed to be; I am not sure.
Overall, I could not get the AI to describe things immersively.
It really did tease the tip when it said it was teasing the tip, so this worked great!
When I had the word “stop” somewhere in a line, it triggered a full stop of everything, even when that was not the intention. Not really bad, but maybe worth a look.

Idea: During setup, ask for a safeword and have the “Stop everything” button send that safeword? Yes, it might be a playful approach, but it could work :smiley:
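Since “stop” triggering a full halt mid-sentence came up above, here is a minimal sketch of how a configurable safeword matched as a whole word could reduce false triggers. The function name, default, and behaviour are illustrative assumptions, not the app’s actual code:

```python
import re

def should_stop(message: str, safeword: str = "stop") -> bool:
    """Return True only when the safeword appears as its own word.

    Hypothetical sketch: matching on word boundaries means "ready"
    won't trigger a safeword of "red", though hyphenated forms like
    "non-stop" would still match a safeword of "stop" (boundaries sit
    at any non-word character), which is why a custom safeword helps.
    """
    pattern = r"\b" + re.escape(safeword.lower()) + r"\b"
    return re.search(pattern, message.lower()) is not None
```

A unique, user-chosen safeword sidesteps the substring problem entirely, since ordinary chat is unlikely to contain it by accident.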

Auto Mode
Aborted, since it kept doing varied high-speed but short strokes, which was not very pleasurable for me.
The chat kept rolling in very fast, with maybe 3 seconds before the action changed again. That was too fast for ElevenLabs to keep up. Personally, I would like more time before the action changes, or not a new speech line with every action (either would keep the chat from rolling so fast).

Overall, this idea is right up my alley, since I have been looking for some kind of AI that can stroke and/or edge for a long time and provide that little tickle of the unknown.
The idea of a learning mode is also awesome, so it can adjust to different people.

All in all, a very nice surprise.
Hope that feedback helps, and I’m looking forward to following this project :slight_smile:

1 Like

What was a little sad is that the personality and Handy key need to be created again each time and are not saved (or at least I could not find out how to save them).

Same here; the file gets created after exit, but on a new start it gets overwritten with a new blank one.

I think it means the minimum position of the Handy (0 being the lowest position and 100 the highest possible position), but I am not sure

That’s what I also thought, but I think it is reversed here: 0 is the tip and 100 is at the bottom.
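If the reversed convention described here is right (0 = tip, 100 = bottom), converting between the two readings is just a flip. A minimal sketch, assuming depth is a 0–100 percentage; the function and parameter names are made up for illustration:

```python
def to_handy_depth(user_depth: int) -> int:
    """Convert a 'height' percentage (0 = bottom, 100 = top)
    to the app's depth convention (0 = tip, 100 = bottom).

    Hypothetical helper, not from the actual app.
    """
    if not 0 <= user_depth <= 100:
        raise ValueError("depth must be between 0 and 100")
    return 100 - user_depth
```

So a user entering “30% minimum, 80% maximum” under one convention would effectively be asking for 70/20 under the other, which could explain the confusing results.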

With values of 30% minimum and 80% maximum, I only got very short strokes at varied speeds, so my assumption might be wrong.
Yet even when I asked for long, slow strokes, I got 10% speed but only short strokes.

Same here: just a “jackhammer” move at different positions, but no full strokes.
Movement was a lot better with 1.25 for me; I can’t get 1.3 to work “normally”.

Also, a lot of default patterns seem to be missing since 1.3; I don’t know if that’s on purpose or by accident.

Maybe a setup with two sliders (one for min + max position, one for min + max speed) would be easier for users to understand.

That’s what I think would be the best solution.

I was going to complain that audio hasn’t worked since 1.3, but then I found out I blew through all my credits :smiley: The audio feature is really something.

I have been playing with the app since version 1.0 and I love what OP has created so far; it just needs a bit of polishing here and there.

1 Like

I got everything up and running, but it just stays in one position and moves really fast? I really like what you are doing here; it’s truly unique! Any help would be awesome.

So where should the depth be if I don’t want short strokes?

Seems like v1.3 broke something; I’m getting the same short strokes as several others.
Settings are saved in a JSON file, but not loaded for a new session.

Suggestions:

  • Move the depth slider back into the interface; it’s easier to make adjustments on the fly.
  • Allow changing the personality during a session.
  • The wording of min/max depth is a bit confusing. Maybe add some wording relative to the user’s body (“near/far”, “closest/furthest”).
  • Setting the depth during setup could use a slider, or the ability to input numbers directly.

I realise this is early days and development takes time, but I’m looking forward to seeing how this develops.
Keep up the good work, brother!

Just a quick fix in the meantime for the short-stroke/jackhammer problem.

in app.py

find

span_abs = (calibrated_range_width * 0.20) / 2.0

and replace it with

span_abs = (calibrated_range_width * 1.0) / 2.0

Hmm, even with this change the jackhammering is still an issue.
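For anyone wondering why that multiplier matters: my reading of the quoted line (just an interpretation of the snippet, not confirmed against the rest of `app.py`) is that `span_abs` is half the stroke span as a fraction of the calibrated range, so a `0.20` multiplier caps every stroke at 20% of the calibrated width, which matches the “jackhammer” symptom. A rough sketch of that reading, with every name except `span_abs` and `calibrated_range_width` invented for illustration:

```python
def stroke_bounds(center: float, calibrated_range_width: float,
                  span_fraction: float = 0.20) -> tuple[float, float]:
    """Return hypothetical (low, high) stroke endpoints around `center`.

    With span_fraction = 0.20 the stroke only ever spans 20% of the
    calibrated range; raising it to 1.0 allows full-range strokes,
    clamped to the 0-100 position scale.
    """
    span_abs = (calibrated_range_width * span_fraction) / 2.0
    low = max(0.0, center - span_abs)
    high = min(100.0, center + span_abs)
    return low, high
```

If the jackhammering persists after raising the multiplier, the stroke `center` (or the speed logic) presumably has its own constraint elsewhere, which would explain why this patch alone isn’t enough.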

Thank you for the kind words!

On setup: having to enter the Handy key each time is an issue on my part, caused by user-pref files being constantly overwritten instead of added to and updated. This will be fully resolved in the next update, as I already have most of it in place.

With regards to chat, sadly it’s down to my shitty hardware lol. I can’t test anything better than what’s currently in these builds, aside from the models being far quicker on better GPUs. We are making great progress from my original builds, though. :grin:

I’m in the process of adding a min/max speed to the onboarding process.

Yeah, I broke automode lol. It shall be fixed in the next update.

Again, thank you so much for the kind words. I’m just one dude and feedback like this makes me super happy with the project.

User pref files will be fixed in the next update. Automode will also be fixed, and the min/max settings will be more easily explained during setup. :grin:

Not gonna lie, this feature is like crack lol. It’ll get much better once I move on to fully utilising v3 voices.

The AI should do as you ask (mostly) during regular chat, but Automode/milking mode is broken right now. It will be fixed in the next update.

Thank you so much for the kind words!

I’ll be moving persona choices back into the UI, same with depth (later, as on-the-fly changes to it currently cause issues). The wording will also be much clearer about what exactly everything means in the next update. This was my bad (I mean, it’s my app, so it’s all my bad lol) for rushing to get a release out.

I will be looking into how to make inputting min/max etc. easier (the jog feature used right now is great for mm measurement, but I fully get your point).

I appreciate you for this! :eyes: :heart:

Automode(s) will be fixed in the next update :smiley:

Okay. I think that’s everything answered. Now a few things.

-From now on, no new features will be implemented until ALL current features are fully stable and complete. I work in an obscure part of game dev, so this is all quite new to me lol.
-The next update will fix automodes, the min/max depth confusion, and a rare issue with ElevenLabs support.

Once the next update is out, I will be working on getting all current features up to scratch. This may take a while, because I don’t always have much time to work on this project. Sadly, the new GPU that arrived was not only pre-used but DOA, so I’m waiting for the replacement, which will hopefully arrive on Tuesday.

I truly appreciate your kind words and feedback. This project is going to become a beast in time!

Finally, while I do suggest sending me feedback and bug reports via email (included in the project documents), remember I’m just one dude and can’t actually provide full tech support for why something unrelated to my app is wrong with your personal computer. This is not my job. I do it for fun and because I love what this community does.

I’m not available on weekends.

Also, your support on ko-fi has made my day. :heart:

5 Likes

Sorry for the late reply, I’ve been away!

Processor: Intel Core i7-8750H
16 GB RAM
NVIDIA GeForce GTX 1060

Figured as much. No big deal; it runs great on a new GPU. I just didn’t think it would time out at 60 seconds, so I was a little confused about why it wasn’t working. I expected it to be slow, but not to fail entirely, only giving errors I didn’t expect. Took me a while to figure it out.

Maybe you can add a small line in the timeout message making it more clear? Just a thought.

1 Like

Yeah, that’s rough. A 1060 running AI workloads is not a great combo.

1 Like

Wow, this truly seems like a next-gen project, kudos! I would like to ask if there’s a future possibility of supporting toys that can connect via Intiface Central, or natively over Bluetooth. I don’t have a Handy, not even a stroker toy, but I would be very interested to see what a GPT could do with my vibrating toys..

Thank you!

It’s still early days but, yes, definitely. As soon as I’ve finished the Handy version (feature complete and stable) I’ll start saving for a broader toy to develop with as I only own a Handy right now.

I’m going to add the ability to increase the timeout window in a future update. It’s mostly there to stop users queuing responses and slowing their system to a crawl lol. I will also add a clear notification when the AI takes too long to respond. Thanks! :grin:

1 Like

A few updates on v1.4 for anyone following (I’ll keep editing this as I go):

  • Added min/max speed to the onboarding guide & clarified min/max depth: The model will now take your min/max speed and calculate a final, relative min/max speed based on your set min/max depth. :white_check_mark:
  • Automode: Fixed the issue where it got stuck on quick, repetitive moves :white_check_mark:

Automode wasn’t really inventing moves on the fly— (yes, a human that uses em-dashes) it was falling back on its built-in examples. That’s fixed. It now actually creates new moves dynamically and only uses examples as a backup.

  • Persona choices moved back into the UI :white_check_mark:
  • User-prefs “memory” file still WIP :cross_mark:
    This one’s more complex than it looks. When I say the AI has “memory,” I don’t mean it legit stores info between sessions, and I DO NOT want to add to current LLM misinfo/anthropomorphism. We’ve got enough people already thinking computers contain brains just because they can mimic sycophancy lol.
    Anyway! I use a local JSON file as a kind of diary. After each session, the AI writes stuff like your Handy key, min/max depth (tip ↔ base), max speed, your favorite moves, and all past personas into that file. When the next session starts, the system just loads parts of that file back in and feeds them to the AI as context. That’s what I mean by “memory”: a structured way of remembering your settings, moves, and history, deciding which ones are worth keeping, which ones to overwrite, and how to merge near-duplicates.
    The hard part is teaching it to manage this file properly:
    • Two moves that are 95% the same: keep both or merge?
    • If you give it new info that slightly contradicts old info, which one wins?
    • How do we avoid the file bloating with trash data?
    It’s basically a living, self-cleaning notes file that updates as the AI learns what works for you.

These are rhetorical questions, by the way. I know the solutions, but it’ll just take a bit to get there. :grin:
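The overwrite-on-start bug reported earlier in the thread comes down to the same file handling. A minimal sketch of the load-then-merge approach described above: read the existing JSON instead of starting blank, fold new session data in, and write it back. The file name, keys, and merge policy here are all illustrative assumptions, not the app’s actual code:

```python
import json
from pathlib import Path

# Hypothetical prefs file name, for illustration only.
PREFS_FILE = Path("user_prefs.json")

def load_prefs() -> dict:
    """Load existing prefs if present; otherwise start from defaults."""
    if PREFS_FILE.exists():
        return json.loads(PREFS_FILE.read_text())
    return {"handy_key": "", "min_depth": 0, "max_depth": 100,
            "favorite_moves": [], "personas": []}

def save_prefs(prefs: dict, session: dict) -> dict:
    """Merge session data into prefs instead of overwriting the file.

    Lists are de-duplicated on append; for scalars, the newest value
    wins. Real near-duplicate merging (the 95%-similar-moves problem)
    would need a similarity check on top of this.
    """
    for key, value in session.items():
        if isinstance(value, list):
            existing = prefs.get(key, [])
            prefs[key] = existing + [v for v in value if v not in existing]
        else:
            prefs[key] = value
    PREFS_FILE.write_text(json.dumps(prefs, indent=2))
    return prefs
```

The interesting part the post describes (deciding which entries to keep, overwrite, or merge) sits on top of this plumbing; the sketch only shows the load/merge/write cycle that prevents a blank file from clobbering saved settings.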

My new GPU just arrived… :smiling_face_with_sunglasses:

FINALLY. While hardware caps always choke bigger language models, I’m not about to knife half the user base here by demanding a monster GPU. The default stays llama3:8b‑q4_K_M. It’s good enough and runs on a potato.
Once StrokeGPT is rock‑solid (v2.0), I’ll chase the heavy‑duty stuff I want with fatter checkpoints. It’s the best of both worlds: everyone can run the app, power rigs can crank it higher, and I get to see how far the project can be pushed. Obviously I want a version for T-code etc., but that isn’t going to happen until I can afford a device or until somebody decides to translate the project, which is a MAMMOTH task.

Thanks for all the feedback, bug reports, and kind words. :heart:

2 Likes

Dang, I might have to sit this one out for now then.

It’ll get better in time. LLMs are still in their infancy on home rigs, and there’s a ton of misinformation around that skews people’s views on just how complex and resource-intensive they are.

1 Like

Great update!
Looking forward to v1.4 :slightly_smiling_face:

1 Like