The voice issue has been fixed since then, as the app has changed quite a lot over the past few days. It was down to remnants left over from my testing with the ElevenLabs v3 API.
I don't instruct the AI to do anything first. It's based on the mood/last movement it remembers, and your min/max speed. The way the move_handy function works means movements the AI invents (a simple up/down at varying speed and depth) loop until another is added via a new reply. The model can't currently send one movement that then stops, or a series of movements, due to how the Handy servers handle requests.
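For anyone curious, that looping behaviour works roughly like the sketch below (a simplified illustration with made-up names, not the app's actual internals): the latest move just keeps being re-sent until a new reply swaps it out.

```python
# Hedged sketch of the looping behaviour described above; names like
# movement_loop and send_to_handy are illustrative, not real StrokeGPT code.
import threading
import time

current_move = {"min_pos": 20, "max_pos": 80, "speed": 50}  # latest AI-invented move
lock = threading.Lock()

def move_handy(min_pos, max_pos, speed):
    """Replace the active movement; the loop below picks it up."""
    with lock:
        current_move.update(min_pos=min_pos, max_pos=max_pos, speed=speed)

def movement_loop(send_to_handy):
    # The same up/down jig gets re-issued over and over; the server never
    # receives a one-shot "do this once then stop" command.
    while True:
        with lock:
            move = dict(current_move)
        send_to_handy(move)  # e.g. an HTTP call to the Handy API
        time.sleep(1.0)      # re-issue cadence
```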
Yes, the stop thing has to stay as it is for now, due to safety reasons. It definitely makes saying "don't stop" ironic.
I've got invented patterns working…
Took all night but I fucking did it!
To put this into perspective: any movement the AI currently invents consists of a single up/down jig at a set speed/depth.
Once I get this properly implemented, everything the AI designs will be the equivalent of a personalised, fully loopable funscript. Basically, if you say "lick the tip", it'll actually try to simulate the motions of licking the tip.
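To make "fully loopable funscript" concrete, here's what such a pattern could look like as data. This is a hand-written illustration, not output from the app; funscript files are essentially just timed position points (milliseconds, position 0-100).

```python
# Illustrative only: an invented "lick the tip" pattern as funscript-style
# data. Short, shallow moves near the top of the stroke range that end
# where they began, so the whole phrase loops cleanly.
lick_the_tip = {
    "actions": [
        {"at": 0,   "pos": 95},  # hover near the tip
        {"at": 120, "pos": 85},  # quick shallow flick down...
        {"at": 240, "pos": 97},  # ...and back up
        {"at": 420, "pos": 88},
        {"at": 600, "pos": 95},  # ends at the start position: loopable
    ]
}
```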
Anyway, the next update will take a bit as I get stuck into implementing this.
You were saying you're not looking for co-op, so I didn't bother to contact you, but I need to say: before you go all-in further, you might want to consider my approach, which is to use a program like MultiFunPlayer instead of your Python script. The reason is:
It's so much easier to let MFP drive your toys. Not only can you use almost all commercial toys with it, including e-stim; you can also use pre-made patterns and custom looping funscripts for different scenarios and load/unload them on the fly.
I don't think an LLM can, at the moment, do ad-hoc pattern creation. It needs rails to choose from to make a good experience.
It needs tight constraints and a path to follow. Decisions, yes, but in stages, and the outcome of each stage needs to be clear (roughly as sketched below).
Full control is not going to be a pleasant outcome for everyone every time.
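A minimal sketch of that rails idea (all names, stages, and filenames here are hypothetical): the model's only decision is which stage it's in, and each stage maps to pre-made scripts with known outcomes.

```python
# Hypothetical "rails" approach: the LLM picks a stage, not a motion.
# Every possible outcome is a curated, pre-made looping script.
import random

CATALOGUE = {
    "tease":   ["slow_wave.funscript", "edge_hold.funscript"],
    "buildup": ["steady_mid.funscript", "ramp_up.funscript"],
    "finish":  ["fast_full.funscript"],
}

def choose_script(llm_stage: str) -> str:
    # Unknown stages fall back to the gentlest option, so the model
    # can never produce an unvetted motion.
    return random.choice(CATALOGUE.get(llm_stage, CATALOGUE["tease"]))
```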
I modified index.html to include a rotating gallery for the background, and I'm trying to figure out if it's possible to change pics by mood. At the moment it just randomly picks an image out of a folder.
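One possible way to do the mood switch, sketched server-side in Python (the folder layout and function name are made up, not the app's real code): keep one subfolder of images per mood and pick randomly inside it, falling back to the current fully-random behaviour.

```python
# Hypothetical sketch: mood-keyed background folders, e.g.
# static/backgrounds/happy/*.jpg, with a random fallback.
import os
import random

GALLERY = "static/backgrounds"  # assumed folder layout

def pick_background(mood=None):
    mood_dir = os.path.join(GALLERY, mood) if mood else GALLERY
    if not os.path.isdir(mood_dir):
        mood_dir = GALLERY  # unknown mood: keep the random behaviour
    images = [f for f in os.listdir(mood_dir)
              if f.lower().endswith((".jpg", ".jpeg", ".png", ".gif"))]
    return os.path.join(mood_dir, random.choice(images))
```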
Thanks! MFP's great for coverage and scripts. My goal's different, as this is hobby R&D into ad-hoc, LLM-made patterns. With light rails (bands, zones, target Hz, A-B-A phrasing, novelty) it invents coherent motions live. I did experience collapse early on but got past it. I may add MFP/MQTT export later, but I'm keeping the core experimental. Nice release!
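For the curious, here's a minimal sketch of what "light rails" could mean in practice (the parameter names and numbers are mine, purely illustrative): the model proposes points freely, and they're then clamped into a depth band and a target-Hz window.

```python
# Illustrative rails: clamp an LLM-proposed pattern into safe, coherent
# bounds rather than letting it pick from a fixed catalogue.
def apply_rails(points, depth_min=10, depth_max=90, min_hz=0.3, max_hz=3.0):
    """points: list of (ms, pos) pairs proposed by the model."""
    railed = []
    prev_ms = None
    for ms, pos in points:
        pos = max(depth_min, min(depth_max, pos))  # depth band/zone clamp
        if prev_ms is not None:
            gap = ms - prev_ms
            # Clamp the move rate into the target Hz window:
            # fastest allowed gap is 1000/max_hz ms, slowest is 1000/min_hz ms.
            gap = max(1000 / max_hz, min(1000 / min_hz, gap))
            ms = prev_ms + gap
        railed.append((int(ms), pos))
        prev_ms = ms
    return railed
```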
Hell yes, it's gonna! Can't wait to share it later this week.
Today, I'll be working on fully replacing the current movement engine with the new pattern-based one. It's not an easy task, but it'll get there. If I had to guess, I'd say I might have a build ready by Friday, but since it's such a large change, it'll be a separate build you can try at your own risk lol.
I know some still struggle to understand exactly what the minimum and maximum depths are for during the start guide, so I'm also making them as easy to understand as possible. If that doesn't work, I can always have the birds and bees talk with you lol.
Saw on Reddit that you'd posted about a fix for V1.5 speeds/lengths. Is this fix going to come out, or is it scrapped in favor of the new pattern generation? If not, are you able to point me in the direction of what needs to be updated so I can fix it on my own while waiting for the new major update?
The fix will come included with this next update. I'll never scrap any fixes, don't worry. I keep track of everything. I'm surprisingly tidy for an amateur lol.
Very interesting work you've done here! I'd like to use AI for my OSR2 stroker. Is there a chance you'll create a function to integrate with Intiface Central (Buttplug) soon?
Yes, for me it's good, but sometimes she goes into short strokes, which I don't like. I tell her, but after a while she returns to that mode. I had adjusted the problem of it being only short strokes, but now she does what she wants. I really don't like the short frenzy one.
Obviously I'm still working on this project, but I'm taking my time with updates from now on. There's no point in rushing a passion project I'm working on for fun.
At the end of the day, I'm just one dude and don't want to burn out treating one app like it's some sort of job lol.
I've got patterns fully working and have implemented the new engine. It's pretty awesome, if I do say so myself lol. There's some other stuff I need to fix/implement, but hopefully the new update will drop sometime in early August.
Secondly, since there's likely a wave of similar apps incoming (which will probably try to make money from you), here's a super brief guide to help you navigate what we currently call AI, in the hope that you won't be lied to or tricked with bullshit.
What we call "AI" right now isn't alive, self-aware, or feeling anything. It predicts what to say next based on patterns in the huge piles of text it's been trained on. It's impressive, and the scale at which it does this is nuts, but that's all there is to it. It doesn't "think" or "remember" like you do, and any claims about one service or another being "smarter", "emotional", or "learning by itself" are usually just marketing spin or prompt tweaks. Treat it as a clever tool, not a mind, and you'll avoid falling for the hype. LLMs are awesome, but services keep users subscribed by promising/touting upgrades over time that don't technologically exist yet, especially with terms like "improved emotional intelligence" or "improved overall intelligence", which are utter bollocks. It's like saying your new iPhone can better taste the atmosphere on Venus.
Anyway, that's all. From the contact I've received from various companies, I can see this becoming a trend, and y'all deserve to know how to navigate techno-hype.
On a small side note, I've almost got the entire StrokeGPT project converted to a single portable JS desktop app. No more Python comfort zone for me lol.