PLEASE READ: I’m trying to add more options so both you and Linkle can be set up the way that fits you best. You’d choose a gender and (if you want) select a body. Nonbinary people could be trans too. Basically, you’d pick whatever applies or skip anything you want. Representation matters, and I want everyone to feel like they can use these apps.
Looking forward to the release. While I liked StrokeGPT, I missed some visuals. I thought about how great it would be if the AI could generate pictures (and, in the future, short videos) on the fly while chatting, based on the topic. This seems to be the first step toward that. Very interesting, and I'm curious how this will turn out.
The pictures are not generated by the local LLM. The model chooses a picture relevant to what's happening from a pre-built local library and sends it when it feels appropriate to.
Local image generation can definitely be done alongside the local model (with a beefy enough GPU to run both at once, as it's still very taxing on hardware), but it won't be included with any of my apps.
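For anyone curious, the library lookup described above could be sketched roughly like this. This is a minimal illustration only, assuming each pre-rendered image carries a set of tags; the function, filenames, and tags are all made up and not from the actual app.

```python
# Hypothetical sketch: pick the library image whose tags best overlap
# the current chat context. Names and data here are illustrative only.

def pick_image(context_words, library):
    """Return the filename whose tags overlap the context most, or None."""
    best, best_score = None, 0
    for filename, tags in library.items():
        score = len(set(tags) & set(context_words))
        if score > best_score:
            best, best_score = filename, score
    return best  # None means "don't send a picture this turn"

# Example tag-per-image library (purely illustrative).
library = {
    "beach.png": ["beach", "sun", "swim"],
    "coffee.png": ["coffee", "morning", "cafe"],
}

print(pick_image(["morning", "coffee", "chat"], library))  # coffee.png
```

Returning None when nothing overlaps matches the "only send one when it feels appropriate" behavior: no match, no picture.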
I see, cool system. Understandable that there are limitations (for now) when it comes to the image generation. But it’s cool to have some sort of images in there. Would it also be possible to include videos?
What I liked about StrokeGPT is the ability to roleplay with it. I bet a lot of users tried this, and it works quite well. It's not as good as other LLMs like ChatGPT, but for what you want to do with it, it works really well. I like the whole concept, and I'm looking forward to this one and whatever you may bring out in the future.
Yeah, sadly, capable local LLMs require a lot of compute, and my apps aim for a wider spectrum of users who may not have great hardware. Plus none can compare to massive, billion-dollar services like ChatGPT right now. Somebody could probably get videos working, yes, as it's just a matter of getting them to display in the UI (pre-rendered, unless they have their own servers or access to an API to handle the immense processing), but the app isn't built to do this, so they would have to add the functionality themselves. Hope this makes sense.
Having a local AI running is already a cool thing. I only recently found out this is possible; until then I always thought it would all be online-based. And if you take into consideration that OpenAI used ~200k GPUs to train GPT-5, you see how demanding this is.
I’ll try and have the app finished as soon as I can. Due to health issues and my time being very limited, I’m removing the ability to play funscripts and instead relying on a context aware (shaft aware?) system instead.
This means Twinkle will stroke you wherever you/they want, same as normal, but it won't be via complex movements.
Honestly, I’m feeling very happy with changes like this. Mostly because creating apps exactly how I want them has streamlined things quite a bit.
At the end of the day, I’m creating apps I want to create first, and releasing them is simply an extra.
I just wanted to clarify that. I am only sharing hobby projects. I am not a content creator.
Small update. Things are doing well. Just ironing out some issues with movement guardrails.
Twinkle can now send (pre-rendered) selfies whenever you ask, as well as the pics they'll send during chat when they think it'd make sense to send one. Should probably change their name in the UI too lol.