[F8Studio MAJOR UPDATE] Real-Time Node-Based Signal Processing Framework

Hi everyone,

Major update here. The project has grown a lot since the first announcement, and this post now contains the full current overview.


MAJOR UPDATE (What Changed)

  • Distributed multi-process, multi-language runtime (Python + C++).

  • Real-time process scheduling, cross-service communication, and runtime monitoring.

  • Cross-platform support: Windows and Linux.

  • Fully open source under AGPL-3.0-only.

  • End-to-end real-time graph processing.

  • High-performance C++ nodes for performance-sensitive workloads.

  • C++ 8K player node with VR-to-2D mode and free zoom/pan ROI viewing.

  • PyQt-based visual graph editor for editing all node/service properties.

  • Monaco editor for code fields (LSP integration is not ready yet).

  • Very strong extensibility with existing infrastructure + Python script nodes.

  • Multi-axis signal routing and TCode generation.

  • Intiface / Buttplug integration.

  • Lovense local/mock API workflow integration.

  • theHandy integration is in progress.

  • Active game-MOD integration testing with promising early results.

  • Experimental Lovense traffic interception for some embedded-SDK games, converting packets into stroker-oriented signals (for devices such as theHandy, SR6, etc.).

  • Planned next features: funscript playback and funscript recording.

  • Headless Runner mode: run saved blueprints without opening full Studio UI.
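
Since several bullets above mention TCode generation, here is a rough sketch of what emitting a TCode axis command involves. This is an illustration of the general TCode command flavor, not F8Studio's actual implementation; the function name and defaults are made up.

```python
from typing import Optional

def tcode_command(channel: str, position: float,
                  interval_ms: Optional[int] = None) -> str:
    """Format a single TCode-style axis command.

    channel:     axis name, e.g. "L0" (linear axis 0) or "R1" (rotary axis 1)
    position:    target position in [0.0, 1.0]
    interval_ms: optional time to reach the target, in milliseconds
    """
    position = min(max(position, 0.0), 1.0)       # clamp to the valid range
    magnitude = f"{round(position * 9999):04d}"   # 4-digit magnitude, 0000-9999
    cmd = f"{channel}{magnitude}"
    if interval_ms is not None:
        cmd += f"I{interval_ms}"                  # "I" suffix = interval in ms
    return cmd

print(tcode_command("L0", 0.0))        # L00000
print(tcode_command("L0", 1.0, 250))   # L09999I250
```

Multi-axis routing then amounts to producing one such command per configured channel each tick.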


What F8Studio Is

F8Studio is a visual + programmable pipeline framework for real-time interactive signal processing.

You can build node graphs that ingest different media/sensor/game inputs, process them in real time, and map outputs to control protocols/devices.

It supports both:

  • Visual graph workflows for fast iteration.

  • Scripted workflows (Python nodes) for advanced custom logic.
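
As a toy sketch of the scripted-workflow idea, a Python node is essentially a unit that consumes input dicts and emits output dicts. All class and field names below are hypothetical, not F8Studio's real node API:

```python
class ScriptNode:
    """Toy node interface: dict in, dict out (hypothetical, for illustration)."""
    def process(self, inputs: dict) -> dict:
        raise NotImplementedError

class GainNode(ScriptNode):
    """Scales an incoming scalar signal."""
    def __init__(self, gain: float):
        self.gain = gain
    def process(self, inputs: dict) -> dict:
        return {"signal": inputs["signal"] * self.gain}

class ClampNode(ScriptNode):
    """Clamps the signal into [0, 1] so it is safe to map to a device axis."""
    def process(self, inputs: dict) -> dict:
        return {"signal": min(max(inputs["signal"], 0.0), 1.0)}

def run_chain(nodes, inputs):
    """Evaluate a linear chain of nodes (a real graph would be a DAG)."""
    for node in nodes:
        inputs = node.process(inputs)
    return inputs

result = run_chain([GainNode(2.5), ClampNode()], {"signal": 0.6})
print(result)  # {'signal': 1.0}
```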


Real-Time Input Coverage

The runtime is designed to work with many content sources in real time, including:

  • Local video files.

  • Live streams.

  • Running games.

  • Screen capture pipelines.

This also includes non-video use cases such as functional/algorithmic pattern generation.


Architecture (Core Design)

F8Studio uses a hybrid communication model:

  • Shared memory for high-throughput frame sharing between services.

  • NATS pub/sub messaging for decoupled signal and control routing.

Benefits:

  • Modular service boundaries.

  • Better fault isolation.

  • Mixed-language service composition.

  • Easier integration of new community algorithms/tools.
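
For the shared-memory half of the model, Python's standard library already provides the basic primitive. A minimal sketch (not F8Studio's actual frame-buffer code; the block name and frame size are made up) of one service publishing a frame and another attaching to it:

```python
from multiprocessing import shared_memory

FRAME_SIZE = 1920 * 1080 * 3  # one RGB full-HD frame, for illustration

# Producer service: allocate a named block and write a frame into it.
shm = shared_memory.SharedMemory(create=True, size=FRAME_SIZE,
                                 name="f8_frame_demo")
frame = bytes([0x7F]) * FRAME_SIZE          # stand-in for a decoded video frame
shm.buf[:FRAME_SIZE] = frame

# Consumer service (normally a separate process): attach by name, read without copying.
view = shared_memory.SharedMemory(name="f8_frame_demo")
first_pixel = bytes(view.buf[:3])
print(first_pixel)  # b'\x7f\x7f\x7f'

view.close()
shm.close()
shm.unlink()  # producer owns the lifetime of the block
```

In a real pipeline the NATS message would carry only the block name and frame metadata, so the heavy pixel data never travels over the message bus.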


Positioning (Important)

To keep expectations realistic, the current stage is best suited for:

  • Developers.

  • Users with coding experience.

  • DIY builders who want custom control pipelines.

For beginners, more packaging/productization is still needed.

But with well-automated blueprints + headless mode, practical one-click style usage is already possible for some workflows.


Showcase Videos

You can get the blueprints shown in the videos from HERE

  • [F8Studio] OpenCV Tracker → TCode
  • [F8Studio] Game (Skeleton and Spine animation) → TCode
  • [F8Studio] Audio Driven TCode
  • [F8Studio] Functional TCode Generation

Links


Collaboration Request

We are expanding game-MOD integration and would love help from people with MOD development experience, especially:

  • Unreal Engine

  • Ren’Py

  • RPGMaker

If you are interested in building reusable blueprints (including headless-ready workflows), please join us.

26 Likes

This looks sick, but my first thoughts after just reading all that and watching the video (both from a developer and end user perspective):

  • On first impression, this looks similar to how shader editors in 3D engines work, where you can choose to either download existing shaders made by others with pre-exposed options such that you can use them directly without dev experience, or to make them yourself from scratch if you know your shit. Except instead of manipulating 3D object surfaces it’s analyzing media and turning it into motion outputs. I really like the idea, and i’m very glad to see the immediate openness for community contribution and desire to have this work with existing community projects.

  • As a developer, this will require a hell of a lot of documentation for every exposed API, processing node and interaction type to become even remotely usable. Also, it needs a lot more demos or explanations of what the logical blueprint steps/workflows are for various use cases, because you’re basically developing the first tool of this kind (and with it, you’re establishing the foundations of the intended ways of working). This all takes time, i agree, but it’s very important to do this early and emphasize it as you progress with development for it to have any chance of sticking and becoming more used in the community.

  • As a user, i think you really need to narrow down your scope for this, especially while you’re still early on. It shows promise so far, however there’s bound to be tons of roadblocks when you start going into every use case individually and building blueprints for them. Don’t start over-promising from the get-go, instead focus on use cases that you know will work well and that you have already tested and benchmarked, and expand that list with every feature update that unlocks more opportunities. Also, don’t advertise this as a beginner friendly tool until you’ve reached far into the beta stages of development. Until you get to the point where someone can download your tool, run it, load a community made blueprint, point it to ANY source you advertise as compatible, and have it seamlessly connect to any supported device and start working with minimal or no settings tweaking, it should not be considered a beginner-friendly tool. That’s a loooooooong journey ahead. If you ignore this, people will complain about how difficult it is to use and you’ll end up with half your thread comments being pleas for more usability.

Also i’m personally curious to know how much of this is vibecoded and how much is actually designed properly, even in this early stage. It’s just a pet peeve of mine to ask for this to be transparent whenever i see large-scale apps like this being developed in months. Although i’m glad to see that you’re already openly discussing the actual architecture, and it does confirm that you at least know what you’re doing moreso than your average cursor vibecoder bro.

2 Likes

If the development environment becomes more refined, I’d be interested in participating in the integration work.

However, in the case of DeepFunGen, it was designed more for generating scripts rather than real-time inference due to its heavy processing load.
I’m curious whether your system also supports generating funscripts by running a pipeline — not just real-time inference.

Also, I’d like to know which body pose estimation model you’re using or considering.
From what I can see on the website, it seems to lose focus easily when there are intense movements or multiple people in the frame.

2 Likes

Haha, I have tried video-to-script for months but couldn’t get satisfying results. Your work is impressive. I have tried many heavy networks like SlowFast, Hiera, etc. Maybe my test scenes are too varied: most of them are casually shot short videos, live videos, etc.

My tool is more like a developer-oriented debugging tool: visualize the motion signal and build complex patterns & logic for verification. Once a good blueprint has been worked out, we can implement it in a more efficient way, like using an FPGA for verification and then making a real product.

I do think that in the future, models will run fast and robustly enough to be usable in real time.

For pose estimation, I am using MediaPipe by Google.

1 Like

I’m really excited to connect with someone who can provide a different point of view, especially one focused on a production mindset.

You’re right that the project is in a very early stage, and I admit my long-term plan is massive. My core motivation is to build a “universal blueprint”—something inspired by Blender’s compositor—that allows me to reuse code and quickly build new pipelines.

The idea is to have a flexible system that could eventually handle everything from live video and regular videos to 2D/3D games. Every time I see a sexy character in a video or game, I want to share the feeling with my buddy.

I completely agree with your advice about over-advertising and scope. For now, the current work is primarily designed for developers and engineers who are comfortable with these concepts.

Developer Experience (APIs): To your point about documentation, I’m trying to build a good developer experience from the start to cover my own usage, which I hope is a good first step. I’m trying to make the APIs as simple as possible (e.g., passing all messages as stringifiable dicts). I’m also integrating the Monaco editor (the core of VSCode) with LSP support, so developers can get auto-completion and live documentation directly in the editor.
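
The "stringifiable dicts" convention can be illustrated with a trivial JSON round trip. The field names below are invented for the example, not the actual F8Studio message schema:

```python
import json

# A control message as a plain dict: every value is JSON-serializable.
msg = {"type": "axis_update", "channel": "L0", "value": 0.42, "ts_ms": 1712345678901}

wire = json.dumps(msg)        # what travels over the message bus
received = json.loads(wire)   # what the receiving service sees

assert received == msg        # the round trip is lossless
print(received["channel"])    # L0
```

Keeping messages this plain means any language with a JSON library can join the bus, which fits the mixed Python/C++ service model.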

Future Work: You’re 100% correct that formal documentation and workflow demos will be critical for adoption, not only for human developers but also for agents.

Vibe coding is great! Without help from VS Code Copilot + Claude + GPT-5-Codex, this project would have taken me a whole year, and I would eventually have lost interest. I’d say most of the UI and frontend work is generated by AI (I started with Claude 4.5 and am now using GPT-5-Codex). It has saved me a massive amount of time. My philosophy for this demo project is to use AI for quick trials: rapidly building UIs, investigating the best tech stack, and verifying the architecture’s efficiency.

For all the “core stuff,” I still do it by myself or have to intensively revise any AI-generated code. This includes the shared frame buffer, video decoders, the 3D visualizer, highly customized UI components, and the core CV algorithms.

Thanks again for the insightful comments. It’s really helpful to get this kind of reality check.

2 Likes

Sounds good! I can agree with vibecoding for more front end stuff, because that’s always a lot more boring for us and somewhat of a good use case for AI work anyway because of the unimaginable amount of front ends crunched while training it. By contrast, it is not capable of creating large scope architectures by itself and have them be optimized properly and without many bugs. So the way you work is good enough for me. I just don’t want to see vibecoded slop in backends or internals at all, especially as an embedded software dev. It can be helpful to ask it about how it would think about certain designs and concepts but never actually write them down itself.

But yeah this is looking good so far, and good to see you’re receiving feedback well. Looking forward to this!

2 Likes

Any chance of having theHandy as a supported device and of exporting scripts?

The thing that really stood out for me is the ability to track skeletons in Unreal 4 games. If you are able to do the same for Unity and other engines, it would make it possible to integrate toys into pretty much any game.

For adult games, you could set up the logic to track the distance between joints to determine stroke position. For example, during a handjob, it can use the distance between the root joint in the penis and the hand joint.
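
The joint-distance idea can be sketched in a few lines. The joint positions and the calibrated distance range below are hypothetical; in practice the min/max would come from observing the animation:

```python
import math

def stroke_position(joint_a, joint_b, dist_min, dist_max):
    """Map the distance between two joints to a stroke position in [0, 1].

    joint_a, joint_b:   (x, y, z) world positions, e.g. a root joint and a hand joint
    dist_min, dist_max: calibrated distance range observed during the animation
    """
    dist = math.dist(joint_a, joint_b)
    t = (dist - dist_min) / (dist_max - dist_min)
    return min(max(t, 0.0), 1.0)

# Hand 5 cm from the root, calibrated range 2-12 cm: about 30% stroke position.
print(round(stroke_position((0, 0, 0), (0.05, 0, 0), 0.02, 0.12), 3))  # 0.3
```

Sampling this every frame and feeding it to a TCode output is essentially the whole pipeline for this use case.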

Thank you for the advice. Yeah, I am thinking about exactly the same thing. But unfortunately, it is not a universal solution: mod makers need to take some time to figure out how to grab the correct data and send it out to my program. But yeah, it is possible!

Regarding the idea of tracking distances, I strongly suggest you take a look at LoveMachine, which is a great project doing exactly what you describe, although it only supports a few games.

Other devices like theHandy will definitely be supported in the future. We are planning to integrate Intiface Central, which has a very broad device support list.

I tried LoveMachine a few years ago, though I wish there were a solution that allowed you to integrate a game yourself.

With your current solution for tracking the skeleton in Stellar Blade, did you mod the game to pass the joint positions to your tool?

I recently started thinking about a universal solution for any game or video, using pose estimation. However, the models I tried were not accurate enough and struggled with most angles. So I came to the conclusion that it most likely would require tracking joints in the games. But then it seems like you have to create mods for each individual game, which makes it rather inaccessible.

Using AI, I vibecoded a plugin that seems to work for most 3D Unity games. It tracks the distance between two bones you specify (similar to LoveMachine’s prototyper idea) and sends the data to the Python plugin (someone uploaded a Python script on here that sends data to funsr1 2.0, so I used that as a base).
I wasn’t planning on releasing it because it was the LoveMachine dev’s idea first, and it’s kinda janky with slower movements, but when the movements are moderate to fast, boy it is accurate with the timing at least.

Yeah, vision models are not strong enough to provide poses as accurate as streaming them directly from the game. My Stellar Blade video is a demo written via vibe coding in a few hours. Luckily it worked.

Vision would be a fallback option, if you can tolerate the noise level.

That sounds promising. Thank you for sharing the experience.

It works as long as the game author follows common naming conventions with their assets. Oh, but yeah, in Unity, if you can use RuntimeUnityEditor, you can figure out the names by human intelligence. That would be a great help, like the prototyper.

I didn’t find any tool that can do that in Unreal, i.e., visualize a skeleton.

May I ask which games you have tried? I don’t know many 3D ero games in Unity.

I am curious about Skyrim SexLab and Baldur’s Gate mods.

Yeah, I use RuntimeUnityEditor; it made things easier. It works on mostly every game I tried: Koikatsu Studio, which I use all the time, VAM, (G.O.A.T, Yiffalicious; I’m not a furry but…). I was planning on seeing if I can make the GUI work with VR next.

I made versions that track 2D Unity (tested with random 2D games like NTRMAN AOTG and Mad Island) and RPGMaker Live2D too (this one is tedious to use though). I’m still improving these to make them less of a pain in the ass to use.

Unreal Engine was my next target. I had the idea of using UEVR (which I think can see in-game objects, maybe even bones? Can’t remember) to inject a plugin, but since you found a way and got that down already, I’ll skip it. I was planning on seeing if I could get Skyrim to work in the near future.

And I don’t mind releasing the code to anyone who can actually code and make these work even better; I’m tired of yelling at AI every night anyway, and imagine what someone who can actually code could do. (I didn’t use anyone’s code, as AI made it from scratch, except for the Python.)

2 Likes

Wow, 2D. I remember they use a different engine, like Spine? I tried to check Mad Island but didn’t find a solution.

Would you mind sharing something about 2D?

Not really, but I did manage to track Spine skeletons in one RPGMaker game; not sure if it will work in others.

Sure, it’s not really a secret, as I plan to release it eventually anyway.

For the 2D Unity games I noticed that the MeshRenderer component, with moving values such as X and Y, was very common in many 2D games, so I told AI to target the X and Y transform values and send them. This isn’t stroking-tip accurate or super complex, but it follows the general movement; my cope and slogan for this plugin is “Better than nothing” lmao.
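
Turning a raw transform value like that into a usable 0-1 signal usually needs some kind of running min/max, since every game's coordinates live on an arbitrary scale. A rough sketch of that normalization step (my own illustration, not the plugin's actual code):

```python
class RangeNormalizer:
    """Track the observed min/max of a raw value and map it into [0, 1].

    Useful when each game's transform values use an arbitrary scale.
    """
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def update(self, raw: float) -> float:
        self.lo = min(self.lo, raw)
        self.hi = max(self.hi, raw)
        if self.hi == self.lo:          # not enough spread observed yet
            return 0.5
        return (raw - self.lo) / (self.hi - self.lo)

norm = RangeNormalizer()
for y in (10.0, 14.0, 12.0):
    out = norm.update(y)
print(out)  # 0.5  (12 sits midway between the observed 10 and 14)
```

The range adapts as new extremes arrive, which matches the "follows the general movement" behavior described above.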

I’m probably gonna make a thread and post this stuff soon and be done with it. Trying to use the 3D version of this plugin in VR is a little obnoxious; I’ve been working on it for 2-3 months now, making it user-friendly for myself. Someone who knows what they are doing could optimize it further and remove most of the unnecessary “vibe” coding roughness.

Right now I’m trying, malding and failing to get the 3D version GUI to work in VR.

1 Like

I think this is a great project. I have tried many AI script-making tools, but from the perspective of a user who is not very familiar with coding, their interfaces are usually quite complex. I have watched your demonstration video, and I think a graphical interface will make it easier to use, making script creation and generation faster and more accurate. I am very much looking forward to your work and the progress of your project!

1 Like

Yeah, that is my initial motivation: make motion generation accessible to the public (including me). I just want to generate some motion for fun.

2 Likes

LoveMachine was banned? I haven’t been able to open the GitHub page since 10 Sept.