Hello, I’m aware of the other AI and ML tools for funscript that exist, but I have not seen one that addresses my primary pain points:
SUCH AS:
I do not want to send detailed logs and metrics about every granular detail of my porn, my sucksleeve10,000, and my moment of ejaculation to unnecessary websites, companies, and APIs that exist for the pure purpose of nefariously milking more data out of me than cum.
I do not want to deal with the latency that comes with #1, which makes it difficult or impossible to sync one of a finite set of preprocessed videos to my device, rendering it a bit useless when you consider that syncing is the only reason to even use this interface.
I don’t want to code just to get off, nor do I want to preselect and curate a menu of scripts to store and maintain, adding yet another clearly neurotic and overly logical aspect to my life. That used to be one of the fortunately normal elements of my psyche, until I got this silicon slime slurper and really considered what a truly optimized generative pipeline could do if wholeheartedly employed as your personal cum calculator.
That being said, I’ve trained some draft models already:
Running well under 50,000 params
In a continuous manner (i.e., not a discrete ‘chunk’ of outputs but an array of overlapping windows, or both upper and lower triangles for those of you who know the secrets of torch); a rough sketch of the windowing idea follows this list
Approximates the quality of some lazy hand-done examples well enough that I considered it worthwhile to orchestrate a more organized effort. So I’ve now constructed a pipeline I’m proud of, one that appears to go some distance toward deciding which movements to mirror from the videos, and when, in uncommon or complex scenes.
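To make the overlapping-windows point concrete, here is a minimal sketch of what I mean. The window/hop sizes are made up and `predict_window` just stands in for the tiny model; the real thing runs the CNN over each window and blends the overlaps the same way so the output is a continuous per-frame track rather than discrete chunks.

```python
import torch

WINDOW, HOP = 32, 8  # frames per window and frames between window starts (made-up numbers)

def predict_window(features: torch.Tensor) -> torch.Tensor:
    """Stand-in for the tiny CNN: (WINDOW, feat_dim) -> (WINDOW,) stroke positions in [0, 1]."""
    return torch.sigmoid(features.mean(dim=-1))

def continuous_stream(features: torch.Tensor) -> torch.Tensor:
    """Average overlapping window predictions into one continuous per-frame track."""
    n = features.shape[0]
    acc, hits = torch.zeros(n), torch.zeros(n)
    for start in range(0, n - WINDOW + 1, HOP):
        acc[start:start + WINDOW] += predict_window(features[start:start + WINDOW])
        hits[start:start + WINDOW] += 1
    return acc / hits.clamp(min=1)

positions = continuous_stream(torch.randn(300, 64))  # e.g. 300 frames of per-frame features
print(positions.shape)  # torch.Size([300])
```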
Device and API
I’m not incredible with JavaScript, but fortunately/unfortunately, my personal probabilistic penis pleasurer is the Autoblow AI Ultra.
The API (and the historical caches of its initial release and dependencies) is incredibly straightforward, and it demonstrates how a local server can be run underneath the API to trick the thing into thinking it’s still cataloguing the generations of lost children it consumes for its corporate AI fucksleeve masters, while a few manually sent signals let it actually work with the device in a privacy-secure manner, solving our latency issue.
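For illustration only, here’s a minimal sketch of that “local stand-in for the cloud” idea. The route name and payload fields are placeholders I made up, not the real Autoblow endpoints; the point is just that a few dozen lines of stdlib Python can answer whatever the client expects and feed it locally generated commands.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LocalStandIn(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder route: the client polls for the next stroke command.
        if self.path == "/next-move":
            body = json.dumps({"position": 0.5, "speed": 0.8}).encode()  # dummy command
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the client at http://<this-machine>:8080 instead of the vendor's servers.
    HTTPServer(("0.0.0.0", 8080), LocalStandIn).serve_forever()
```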
The model we need for this should run thousands of times faster than real-time on half of a toaster and genuinely requires no server overhead.
Whether that implies party X or Y is guilty of a lack of efficiency, or a lack of honesty about why they need to collect so much data (I may as well send a sperm sample), I couldn’t say.
I’ve been aware of this branch of the technology for roughly 48 hours. This puts me in the position of needing to solve this problem before I can actually use the technology, and needless to say, this has put a bit of haste under my feet.
So, to get to my point, the TL;DR is this:
For myself personally, I can build the pipeline to run on my local network and get the seamless on-the-fly experience I want, where I don’t even have to hit a button to tell the thing to go, and call this problem solved.
I’m building an AI to do extremely fast, very high quality streaming of instructions (fun streams?) based on a completely local and offline stream of your monitor output, inferring when and how to Fourier transform your nads (a rough capture-and-infer loop is sketched after the list below). If anyone else is interested in leveraging the fruits of my labors here, I’d like to know whether there is anyone among you who can:
Compile the model and tooling into something for Android if I build and train it in my native language and frameworks.
Produce a UI, saving the end user from the same problem we “cum coding goblins” seek to address.
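Here’s the rough shape of the capture-and-infer loop mentioned above, as a sketch under assumptions: mss for local screen grabs, a throwaway Sequential standing in for the actual tiny CNN, and send_to_device as a placeholder for whatever call the local device server ends up exposing.

```python
import time
import numpy as np
import torch
import mss

def send_to_device(position: float) -> None:
    print(f"stroke position: {position:.2f}")  # stand-in for the local device/API call

model = torch.nn.Sequential(  # placeholder for the real tiny CNN
    torch.nn.Conv2d(3, 8, 5, stride=4), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 1), torch.nn.Sigmoid(),
).eval()

with mss.mss() as sct, torch.no_grad():
    monitor = sct.monitors[1]  # primary monitor
    while True:
        frame = np.array(sct.grab(monitor))[:, :, :3]            # BGRA -> BGR
        x = torch.from_numpy(frame.copy()).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        send_to_device(model(x).item())                          # one position per frame
        time.sleep(1 / 30)                                       # ~30 fps budget
```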
Whether there’s a financial incentive or it ends up a widely deployed open-source tool is something I leave to the general community’s discourse and to the decisions of the individuals who help me out with this project.
Thank you for your time
And yes, of course I designed it so that we can use it to teach LLMs to give us hand jobs; I’d be crazy not to.
— Tbb
I work in that industry. Locally run LLMs require hardware that a lot of people do not have, and if you then want to use one on a phone, the constraints become even tighter. The only way to get the speeds you’re thinking of on a phone is a hosted model and its API. I know people say you can run a model on a phone, but they mean tiny models that produce incoherent responses. They are useless.
No, you’re not going to train your own model, and no, 50,000 parameters won’t be enough. You could maybe fine-tune a 3-billion-parameter model that could potentially work, but you will have some limits.
You mention torch; I’m assuming you mean PyTorch, but what follows that doesn’t make any sense. PyTorch is also an extremely complex library. If you’re claiming you somehow solved something within it… OK, show me…
Beyond all of that, what you’re suggesting would be a lot of work. I also don’t think making it work on a phone should be an initial target. I would make it run on a PC and only then port it to Android if you want.
Please make a list, because half the people here aren’t aware of what already exists, and some baselines would be great.
Me, who spent more time rewriting EDI in MFP than actually using it: thatfeelsman
Eeeerm is that even possible?
There’s a thread where OP is trying YOLO and that’s heavy AF
Well, he’s already scaled the vid down to 1080p, so it’s less heavy AF, but still.
If you’re going to think big, you might think about a proper chatbot à la Virtual Succubus, but with a heavier backend?
YOLO is one of the cheapest, most efficient model families in the ecosystem; the nano variant’s weights are only a few megabytes and it can run at around 30 fps on a CPU.
If I get a chance I’ll find the thread and see if I can’t polish the implementation a bit, if that would be useful; there’s a lot of optimization in general that just doesn’t have much good documentation.
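As a rough sanity check of how cheap it is to run, here’s a minimal sketch using the ultralytics package and the nano checkpoint; the video filename is a placeholder.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # nano weights, only a few MB
# stream=True yields per-frame results instead of loading the whole video into memory
for result in model.predict(source="clip_1080p.mp4", stream=True, verbose=False):
    print(len(result.boxes), "objects in this frame")
```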
Sorry for the double response here, but I’m not sure what you’re implying re: the torch/PyTorch thing. I didn’t claim to fix the PyTorch API; I was just indicating that’s the framework I chose to build the neural network in. It’s just a small CNN. I used perplexity to find sparsely represented features in the training data and then augmented those examples until there was a good balance of representation overall (a rough sketch of that rebalancing is below).
I’m not here only seeking collaborators: the data is literally already tables of small tuples, and it’s not like it’s going to be incredibly hard to find enough pornography to train on. The model is very small because the task (input image, output label) isn’t resource-intensive. I mean, this isn’t very far from MNIST in general; if anything, MNIST has way less preprocessed data available.
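Here’s roughly the rebalancing idea mentioned above, as a sketch: score how rare each example’s label is and sample the rare ones more often so the small model sees a balanced mix. Plain inverse label frequency stands in here for whatever perplexity-based score was actually used, and the tensors are dummies.

```python
from collections import Counter
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

labels = torch.randint(0, 5, (1000,))    # dummy per-example labels
features = torch.randn(1000, 64)         # dummy per-example features
counts = Counter(labels.tolist())
# Rare label -> large weight, so sparsely represented examples are drawn more often.
weights = torch.tensor([1.0 / counts[int(l)] for l in labels], dtype=torch.float)

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(features, labels), batch_size=32, sampler=sampler)
```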
You can disagree all you want; that doesn’t make you correct. The best evidence we have against your current claim is simply the fact that we do not see LLMs of any size running locally on any phone. Sure, you can dive through GitHub and find some potential applications that are trying to do this, but GitHub is where these things are developed, so everything you find will not be production-ready in one way or another.
Yes, some models are smaller and easier to run, but the issue is that decreasing a model’s size significantly impacts its ability to produce useful, competent, and coherent output. Couple this with the fact that you want to analyze videos. Let’s say you want to look at a 30 fps video. That’s thirty frames PER SECOND; this becomes a big task very, very quickly.
This is why all of the video processing models you see today only output around 10 seconds or less. It’s too computationally heavy; the GPU’s VRAM will OOM and it will just crash. Analyzing porn on a phone in ten-second intervals is not going to be fun, and it will be very, very time consuming.
Currently, with the fastest models, my RTX 3090 will generate roughly 10 seconds of video in about a minute; with the slowest models it can take upwards of an hour. A 3090 has 24 GB of VRAM, the most of any consumer-level GPU on the market (tied with the 4090, and excluding datacenter GPUs, which are just too expensive). A phone does not have an NVIDIA GPU, and a phone will not have such large amounts of VRAM. So while we most certainly can cut a ton of corners and get a tiny LLM to run on a phone, it is going to be incompetent and mostly useless. It will not be able to do what you’re asking it to do. Hence, my suggestion is to do it on a computer first, then scale down.
Right,
Did I mention I’m a computer scientist?
I don’t find these interpretations compelling, or entirely thorough as an analysis.
Though your dismissive attitude, and your inability to consider the possibility that you haven’t solved this most esoteric and complex of fields, suggest there’s nothing I can say that won’t make you upset unless I validate your opinions.
But there’s always more to learn, and this technology is improving faster than anything else I’ve ever seen. Performance and scalability, broadly speaking, are still more than doubling every couple of months, and as a society we haven’t even begun to scratch the surface of how much these models will compress. They’re still getting smaller, and the performance-to-size ratio is still improving by the day. Once the rate of improvement stops accelerating, we can start to work out our relative distance from optimal; but even then, the software will just catch up to the hardware and we’ll be back at square one.
You personally hit a wall and chose to accept it as a limitation. That’s fine, but it’s a choice, and not much more than that.
Also, Mistral 7B will run on a phone just fine if you have something like a Pixel 6; it’s roughly 4 GB of RAM at a 4- or 5-bit quant.
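For anyone who wants to check that claim themselves, here’s a minimal sketch using llama-cpp-python (the same backend most on-device/Termux setups wrap); the GGUF filename is a placeholder for whichever quant you download.

```python
from llama_cpp import Llama

# A 4-/5-bit quantized GGUF keeps the 7B weights in roughly the 4-5 GB range.
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048, n_threads=4)
out = llm("Say hello in five words.", max_tokens=16)
print(out["choices"][0]["text"])
```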
No one is upset; disagreement does not equal anger. What you said was word soup. No one is denying the progress of technology.
You directly disagreed with me, and I gave examples of how and why that was wrong. You keep following up with more disagreement, but you’re also unable to show me any examples of how I am wrong, or why.
Like I said, there are edge cases or very new instances of LLMs running on a phone, but they are going to be small and mostly incompetent. Like Mistral 7B. That’s a text model, though; it will be easier to run than an object detection model for VIDEO. Sure, you could in theory be advancing that field of technology. That’s highly unlikely, though, given what information you’ve provided and the simple fact that you’re trying to do this on a phone before it’s even really viable on a computer. Makes no sense.