Heya! First of all, I should start by apologizing for the lack of activity and communication on this. Both Sirius and I are still here, and we still plan to keep working on the upgraded version of the converter. 2024 was a huge leap in my personal life: I moved from a student room to a proper apartment and completed my first year as a career embedded dev after graduation. (I also spent most of my free time after the move soft-renovating the apartment myself, even though I'm renting it, because landlords/housing agencies are garbage, plus getting my partner moved in with me, and other stuff.) So it was as packed a year as it gets, and I believe Sirius also had a ton of personal stuff to deal with this year.
All of which means the current state of the v2 transition is right about where we left it at the start of last year… I know, disappointing. However, I'm feeling much more comfortable with the amount of time I can dedicate to hobbies in 2025 (in fact, I plan to focus on them even more), and this project is on the list of things I want to get done.
Now, given that you're even offering to help out (thank you so much, btw!), it's only fair to give at least a broad idea of what we were planning to do:
- First of all, obviously: finalize the v2 converter engine. Sirius can explain more about it, but in short, it's currently written in Octave, still split across many script pieces, and full of debugging overhead. That needs to be cleaned up and documented a little.
- We're then faced with a decision: do we keep supporting what is admittedly a useful and widely used but slowly dying funscripting platform (Lua scripting for OFS, which has been abandonware for a while now), or do we pivot and make this a whole new standalone Python GUI app? (That's not to say the converter code could never be backported to OFS once it's fleshed out, but the focus would be on the Python code until it matures.)
Assuming the transition to Python, we'd have much more freedom to experiment and iterate, which I find harder to do within the constraints of Lua and the OFS API.
The new converter would introduce what we like to call "difficulty modes". The challenge of physically simulating the machine's oscillation behavior, motor response, and communication limitations without real-time feedback has led us to different ways of optimizing the script generation: we tailor the user experience to be more gentle or rough by adjusting how far we allow the simulation to deviate positionally from the intended action. In short, trying to control the uncontrollable (and the unmeasurable, because let's be real, literally nobody wants to build embedded mods and contraptions to stream rotary-encoder feedback in real time from chinesium fuck machines and PID-control the power delivery just to get that thing marginally more accurate than this simulation would be) xD
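To make that "deviation budget" idea a bit more concrete, here's a minimal toy sketch. The first-order lag model, the numbers, and all the names are just illustrative assumptions on my part, not the actual v2 engine:

```python
# Toy sketch of "difficulty modes" as a positional deviation budget.
# Everything here is hypothetical and simplified for illustration.

def simulate_response(commands, tau=0.08, dt=0.01):
    """Crude first-order lag model: the machine chases the commanded position."""
    pos = commands[0]
    trace = []
    for target in commands:
        pos += (target - pos) * (dt / tau)  # exponential approach toward target
        trace.append(pos)
    return trace

def max_deviation(commands, trace):
    """Worst-case gap between intended action and simulated position."""
    return max(abs(c - p) for c, p in zip(commands, trace))

# Difficulty mode = how much positional deviation we tolerate (0..1 range):
MODES = {"gentle": 0.05, "normal": 0.10, "rough": 0.20}

def fits_mode(commands, mode):
    """In the real thing the generator would reshape the script until it fits
    the budget; here we only check whether a candidate script fits."""
    trace = simulate_response(commands)
    return max_deviation(commands, trace) <= MODES[mode]

# Example: check a toy run of commanded positions against the "rough" budget.
actions = [0.0, 0.2, 0.5, 0.8, 1.0, 0.8, 0.5, 0.2, 0.0]
print(fits_mode(actions, "rough"))
```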
And if this is done, why stop there? I have a more moonshot idea: build an entire funscripting IDE, in Python/PyQt as much as possible, with some C++ if really needed where Python would bottleneck. I want to analyze and take the best parts of OFS and other funscripting tools, make it super easy to extend with community Python plugins, and hopefully easy enough to contribute to, with good documentation and code architecture. And by extensions I mean I'd love to see someone build a DL funscript-copiloting plugin right into this bitch, as well as this extension here and more, and hopefully make it easy for the other Python projects that have been popping up on this forum lately to integrate with it too. Again, very moonshot, and I don't know if a project already attempts this, but I personally dream of a future where I can open a scripting tool and script a huge video in no time with user-friendly, or maybe even fully automatic, motion extraction and post-processing. This whole ecosystem of pre-computed sex tech synced to content is, in my view, just a deep-learning video motion/context tracking problem, and it shouldn't require hours of manual slavery per minute of video depending on action complexity.
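As for the plugin part, this is roughly the kind of mechanism I have in mind. Purely illustrative, nothing here is a committed API (the names `register`, `REGISTRY`, and the `plugins/` folder are all made up):

```python
# Toy plugin loader: drop a .py file into plugins/ and it registers itself
# with the host app on import. Hypothetical sketch, not a real interface.
import importlib.util
import pathlib

REGISTRY = {}  # plugin name -> callable exposed to the host app

def register(name):
    """Decorator a plugin uses to expose a function to the host."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

def load_plugins(folder="plugins"):
    """Import every .py file in the folder; importing triggers register()."""
    for path in pathlib.Path(folder).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

# A plugin file (e.g. plugins/smooth.py) would then just look like:
#
#     from host import register  # hypothetical import path
#
#     @register("smooth")
#     def smooth(actions):
#         ...  # post-process a list of funscript actions
```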
Anyway, moonshot aside, the main objective remains clear, so what do you think?