StrokeGPT - A Free Customisable Chatbot for The Handy that Invents Funscripts and Fucks You in Real Time

Hey, thanks! :grin: :heart:

The voice issue has been fixed since then, as the app has changed quite a lot over the past few days. It was down to remnants left over from my testing with the ElevenLabs API v3.

I don’t instruct the AI to do anything first. It’s based on the mood/last movement it remembers, and your min/max speed. The way the move_handy function works means the movements the AI invents (a simple up/down at varying speed and depth) loop until another is added via a new reply. The model can’t currently send a single movement that then stops, or a series of chained movements, due to how the Handy servers handle requests.
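For anyone curious how a looping movement like that can be modelled, here’s a minimal, hypothetical sketch. The names (`Movement`, `MovementEngine`, `tick`) are my own illustration, not StrokeGPT’s actual internals: the last invented movement simply stays active until a new reply replaces it.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch, not StrokeGPT's real internals.
@dataclass
class Movement:
    min_depth: int  # 0-100: lower bound of the stroke
    max_depth: int  # 0-100: upper bound of the stroke
    speed: int      # 0-100: stroke velocity

class MovementEngine:
    """Keeps the last AI-invented movement active until a new one arrives."""

    def __init__(self) -> None:
        self.current: Optional[Movement] = None

    def set_movement(self, move: Movement) -> None:
        # Each new AI reply replaces the looping movement.
        self.current = move

    def tick(self) -> Optional[Movement]:
        # Called on a timer: with no new reply, the same movement is
        # re-sent, so it loops until the next reply (or a stop command).
        return self.current

engine = MovementEngine()
engine.set_movement(Movement(min_depth=20, max_depth=80, speed=50))
```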

Yes, the stop thing has to stay as it is for now, due to safety reasons. It definitely makes saying “don’t stop” ironic :joy:.

@Dmillin1990 Thanks. I appreciate the info.

3 Likes

I’ve got invented patterns working… :eyes:
Took all night but I fucking did it!

To put this into perspective: any movement the AI currently invents consists of a single up/down jig at a set speed/depth.

Once I get this properly implemented, everything the AI designs will be the equivalent of a personalised, fully loopable funscript. Basically, if you say “lick the tip” it’ll actually try to simulate the motions of licking the tip. :eyes:
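As a rough illustration of what “fully loopable funscript” means here: a pattern can be stored as a short list of funscript-style actions (`at` in milliseconds, `pos` as 0–100 percent) and tiled end-to-end. This is a sketch under my own assumptions, not the actual engine:

```python
def loop_pattern(actions, repeats):
    """Tile a short pattern end-to-end so it plays as a seamless loop."""
    period = actions[-1]["at"]  # pattern length in ms
    out = []
    for i in range(repeats):
        for a in actions:
            if i > 0 and a["at"] == 0:
                continue  # the previous repeat already ends at this point
            out.append({"at": a["at"] + i * period, "pos": a["pos"]})
    return out

# A tiny "lick the tip" style motion: quick, shallow flicks near the top.
tip_flick = [
    {"at": 0,   "pos": 95},
    {"at": 150, "pos": 80},
    {"at": 300, "pos": 95},
]
looped = loop_pattern(tip_flick, repeats=2)
```

Because the pattern starts and ends at the same position, the tiled copy plays back-to-back without a visible seam.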

Anyway, the next update will take a bit as I get stuck in implementing this.

11 Likes

hi there.

i do the same but with voxta and the logic is done in node-red (Noxy-RED.core)

the difference is that mine is an ecosystem that’s a bit more open, let’s say. i use a lot more toys than just a stroker.

i just released the implementation for Multifunplayer yesterday, which accepts MQTT so you can drive MFP: Release MultiFunPlayer v1.31.3.networked.v0.1 · eglische/MultiFunPlayer · GitHub
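For readers unfamiliar with the MQTT route: driving MFP over MQTT boils down to publishing small messages to agreed topics. The topic layout and payload below are purely illustrative assumptions, not MFP’s documented schema; check the release notes for the real one.

```python
import json

# Illustrative only: this topic layout and payload shape are assumptions,
# NOT MultiFunPlayer's actual MQTT schema.
def mfp_message(axis, position):
    """Build a (topic, payload) pair for one axis position update."""
    topic = "mfp/axes/%s/position" % axis  # hypothetical topic layout
    payload = json.dumps({"value": round(position, 3)})
    return topic, payload

# With any MQTT client you would then publish it, e.g.:
#   client.publish(*mfp_message("L0", 0.5))
topic, payload = mfp_message("L0", 0.5)
```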

you were saying you are not looking for coop - so i did not bother to contact you, but i need to say: before you go all in further, you might want to think about my approach: have a program like multifunplayer instead of your python script. the reason is:
it’s so much easier to let mfp drive your toys. not only can you use almost all commercial toys with it, including estim - you can also use pre-made patterns and custom looping funscripts for different scenarios and load/unload them on the fly.

i don’t think an llm is able, at the moment, to do ad-hoc pattern creation. it needs rails to choose from to make a good experience.

it needs tight constraints and a path to follow. decisions, yes, but in stages, and the outcome needs to be clear for each of them.

full control is not going to be a pleasant outcome for everyone every time.

just my two cents :wink:

1 Like

How did you modify the background?

I modified the index.html to include a rotating gallery for the background. I’m trying to figure out if it’s possible to change pics by mood; at the moment it just randomly picks an image out of a folder.
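Server-side, that idea could be sketched in Python (the app’s own language), including one possible answer to the mood question: give each mood its own subfolder and fall back to the shared pool. The folder layout and names here are my assumptions, not the actual project structure:

```python
import os
import random

# Assumption: backgrounds live in a base folder, with optional per-mood
# subfolders, e.g. backgrounds/teasing/. Not the actual StrokeGPT layout.
IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".webp", ".gif")

def pick_background(base_dir, mood=None):
    """Return a random image filename for the given mood, if any."""
    folder = os.path.join(base_dir, mood) if mood else base_dir
    if not os.path.isdir(folder):
        folder = base_dir  # unknown mood: fall back to the shared pool
    images = [f for f in os.listdir(folder)
              if f.lower().endswith(IMAGE_EXTS)]
    return random.choice(images) if images else None
```

The front end would then just request a new background whenever the mood changes, instead of on a fixed timer.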

Loving the progress, brother!

The invented patterns sound like a fucking game-changer (literally :grin:).

Thanks! MFP’s great for coverage and scripts. My goal’s different: this is hobby R&D into ad‑hoc, LLM‑made patterns. With light rails (bands, zones, target Hz, A→B→A phrasing, novelty) it invents coherent motions live. I did experience collapse early on but got past it. I may add MFP/MQTT export later, but I’m keeping the core experimental. Nice release! :grinning_face_with_smiling_eyes:
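To make “light rails” concrete, here’s a hypothetical sketch of the kind of post-processing I mean: clamp whatever the LLM invents into a depth band and cap how fast direction changes can come. Parameter names and thresholds are illustrative, not the real implementation:

```python
# Illustrative "rails" over LLM-invented actions; names and thresholds
# are assumptions, not the actual StrokeGPT code.
def apply_rails(actions, depth_band=(10, 90), max_hz=3.0):
    """Clamp positions to the band and drop moves arriving faster than max_hz."""
    min_gap_ms = 1000 / (2 * max_hz)  # one half-stroke per direction change
    railed, last_at = [], None
    for a in actions:
        pos = min(max(a["pos"], depth_band[0]), depth_band[1])
        if last_at is None or a["at"] - last_at >= min_gap_ms:
            railed.append({"at": a["at"], "pos": pos})
            last_at = a["at"]
    return railed
```

Whatever the model hallucinates, the output stays inside the band and under the speed cap, which is what keeps live invention coherent.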

Hells, yes, it’s gonna! Can’t wait to share it later this week.

1 Like

Small update for those interested.

Today, I’ll be working on fully replacing the current movement engine with the new pattern-based one. It’s not an easy task, but it’ll get there. If I had to guess, I’d say I might have a build ready by Friday, but since it’s such a large change, it’ll be a separate build you can try at your own risk lol.

I know some still struggle to understand exactly what the minimum and maximum depths are for during the start guide, so I’m also making them as easy to understand as possible. If that doesn’t work, I can always have the birds and bees talk with you lol.

Thanks for all the support and feedback. :heart:

4 Likes

Good stuff my man, eagerly awaiting.

Saw on Reddit you had posted about a fix for V1.5 for speeds/lengths. Is this fix going to come out, or is it scrapped in favor of the new pattern generation? If not, are you able to point me in the direction of what needs to be updated, so I can fix it on my own while waiting for the new major update?

Thanks for the kind words! :heart:

The fix will come included with this next update. I’ll never scrap any fixes, don’t worry. I keep track of everything. I’m surprisingly tidy for an amateur lol. :grin:

2 Likes

Very interesting work you did here! I’d like to use AI for my OSR2 stroker. Is there a chance you’ll create a function to integrate with Intiface Central (buttplug) soon? :slight_smile:

1 Like

Yes, for me it’s good, but sometimes she goes into short strokes, which I don’t like. I tell her, but after a while she returns to that mode. I’d adjusted the problem of it being only short strokes, but now she does what she wants. I really don’t like the short frenzy one.

A small update.

Obviously I’m still working on this project but I’m taking my time with updates from now on. There’s no point in rushing a passion project I’m working on for fun.

At the end of the day, I’m just one dude and don’t want to burn out treating one app like it’s some sort of job lol.

5 Likes

That’s the attitude, brother!

Don’t forget to enjoy (:wink:) the fruits of your labour.

1 Like

I haven’t even used any of my apps yet lol.

1 Like

I’ve got patterns fully working and have implemented the new engine. It’s pretty awesome, if I do say so myself lol. There’s some other stuff I need to fix/implement, but hopefully the new update will drop sometime in early August.

Secondly, since there’s likely a wave of similar apps incoming (which will probably try to make money off you), here’s a super brief guide to help you navigate what we currently call AI, in the hope that you won’t be lied to or tricked with bullshit.

What we call “AI” right now isn’t alive, self-aware, or feeling anything. It predicts what to say next based on patterns in the huge piles of text it’s been trained on. It’s impressive, and the scale at which it does this is nuts, but that’s all there is to it. It doesn’t “think” or “remember” like you do, and any claims about one service or another being “smarter,” “emotional,” or “learning by itself” are usually just marketing spin or prompt tweaks. Treat it as a clever tool, not a mind, and you’ll avoid falling for the hype. LLMs are awesome, but services keep users subscribed by promising/touting upgrades over time that don’t technologically exist yet, especially using terms like “improved emotional intelligence” or “improved overall intelligence”, which are utter bollocks. It’s like saying your new iPhone can better taste the atmosphere on Venus.

Anyway, that’s all. From the contact I’ve received from various companies, I can see this becoming a trend, and y’all deserve to know how to navigate techno-hype. :heart:

7 Likes

You are truly the hero we need, brother :heart:

It’s great seeing someone trying to educate people in this “AI is magic” age.
You seem like an awesome dude :smiling_face_with_sunglasses:

2 Likes

Thanks! I appreciate you.

On a small side note, I’ve almost got the entire StrokeGPT project converted to a single portable JS desktop app. No more python comfort zone for me lol. :sob::sweat_smile:

4 Likes

uuuh, promising :smiley:

if you need any beta testers give me a shout :joy:

2 Likes

Thanks! It’s still got a bit to go as I broke a ton of shit translating stuff lol, and I’m learning a lot as I go.

1 Like