Introducing Feel8.Fun - A Real-Time, Node-Based Signal Processing Framework 🚀

Hey everyone! :waving_hand:

I’m excited to share a project I’ve been working on called Feel8.Fun (Feel it) - a flexible, real-time signal processing framework designed to bring visual scripting capabilities to our community, similar to how Unreal’s Blueprints or ComfyUI work, but specifically tailored for interactive content generation.

NOTE: The https://Feel8.Fun website is another toy project; I haven’t updated the project page yet.


:bullseye: What Makes Feel8.Fun Different?

Real-Time Processing for ANY Visual Content

The system can process virtually any video source in real-time:

  • Local video files - your entire library
  • Live streams - process streaming content on the fly
  • Running games - capture and process gameplay directly
  • Screen capture - literally anything on your screen

Thanks to a shared-memory architecture and optimized pipelines, the Blueprint Canvas achieves <5 ms processing latency for signal routing and transformation.
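To give a rough idea of the zero-copy technique (a generic illustration, not Feel8.Fun’s actual API), a consumer process can map a shared frame buffer and read it without copying; the segment name and frame geometry below are assumptions:

```python
# Generic sketch: attach to a named shared-memory segment and view it as a
# video frame without copying pixels. Segment name and shape are assumptions.
from multiprocessing import shared_memory
import numpy as np

FRAME_SHAPE = (1080, 1920, 3)  # height, width, BGR channels (assumed layout)

# Attach to a segment created elsewhere by a producer (e.g. a video decoder).
shm = shared_memory.SharedMemory(name="feel8_frame_demo")

# Wrap the raw buffer as a NumPy array; no pixel data is copied.
frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
print("mean brightness:", frame.mean())

shm.close()  # detach without destroying the producer's segment
```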

Community-Friendly & Extensible

Feel8.Fun is designed to embrace the amazing work already happening in this community:

  • Easily integrate algorithms from FunGenAI, DeepFunGen, and other community projects
  • Language-agnostic node services (C/C++/C#/Go/JavaScript/Python) - use whatever works best
  • NATS messaging backbone ensures seamless communication between components
  • Modular architecture means one service crash won’t bring down the entire system

For Users of All Levels

Beginners: Once a blueprint is designed, it can be packaged and distributed to other users who can simply execute it in the background - no technical knowledge required.

Advanced Users: Full visual programming environment with:

  • Signal processing nodes
  • Python scripting nodes
  • Real-time debugging tools (inspector, line plots, visual plots)
  • Tools to compare and analyze intermediate signals at any stage

Beyond Video - Generative Scripting

Feel8.Fun isn’t limited to video analysis! Think of it as an alternative to projects like ayva-stroker:

  • Generate motion patterns algorithmically based on parameters
  • Create custom rhythms and patterns with node-based logic or Python code
  • Combine multiple signal sources for complex, responsive behaviors
  • Perfect for creative pattern design and experimental scripting
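For example, the kind of parameterized wave generation described above can be sketched in a few lines of Python; the parameters and the funscript-style output are only an illustration, not the framework’s actual node API:

```python
# Sketch of algorithmic pattern generation: a sine wave rendered as
# funscript-style actions ({"at": ms, "pos": 0-100}). Parameters are arbitrary.
import json
import math

def sine_pattern(duration_s=10.0, freq_hz=1.5, depth=80, sample_rate_hz=20):
    actions = []
    for i in range(int(duration_s * sample_rate_hz)):
        t = i / sample_rate_hz
        # Map sin() from [-1, 1] to a stroke position centered at 50.
        pos = 50 + (depth / 2) * math.sin(2 * math.pi * freq_hz * t)
        actions.append({"at": int(t * 1000), "pos": int(round(pos))})
    return {"actions": actions}

print(json.dumps(sine_pattern(duration_s=2.0)))
```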

:building_construction: Technical Architecture Highlights

The system uses a hybrid communication model:

  • Shared Memory (zero-copy) for video frame distribution → multiple algorithms can access the same video with zero latency overhead
  • NATS PUB/SUB for signal routing → loosely-coupled, distributed, highly scalable

This architecture provides:
:white_check_mark: Hot-swappable modules
:white_check_mark: Fault isolation (service failures don’t crash the system)
:white_check_mark: Language flexibility (pick the right tool for each job)
:white_check_mark: Easy integration of community algorithms and tools
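To make the NATS side concrete, here is a minimal sketch of a node service in Python using the nats-py client; the subject names and message fields are assumptions for illustration, not Feel8.Fun’s actual topics:

```python
# Minimal sketch of a language-agnostic node service over NATS (pip install nats-py).
# Subject names and message fields are assumptions for this example.
import asyncio
import json

import nats

async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")

    async def on_signal(msg):
        data = json.loads(msg.data)             # e.g. {"t": 1234, "value": 0.42}
        pos = max(0, min(100, int(data["value"] * 100)))
        await nc.publish("signal.position", json.dumps({"t": data["t"], "pos": pos}).encode())

    await nc.subscribe("signal.raw", cb=on_signal)
    await asyncio.Event().wait()                # keep the service running

if __name__ == "__main__":
    asyncio.run(main())
```

Because a node only speaks JSON over a subject, the same service could just as easily be written in Go or C# without changing anything else in the pipeline.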


:artist_palette: Why This Matters

I believe this framework can stimulate creativity in our community and empower creators to:

  • Experiment with new scripting approaches
  • Rapidly prototype and test ideas
  • Share reusable blueprints with the community
  • Lower the barrier to entry for script automation
  • Build on each other’s work more easily

:construction: Current Status

The system is still under active development, but the core architecture is stable and functional. I’ve prepared a preview video demonstrating its capabilities (see below).

I’m sharing this early because I’d love to hear your thoughts, feedback, and ideas. What features would be most valuable to you? What use cases should I prioritize? Are there specific community tools you’d like to see integrated first?


:video_camera: Showcases

  1. Dancing Video + MediaPipe + Wave Generation
  2. Streaming a 3D Skeleton from a UE4 Game (Stellar Blade)
  3. Embrace webUI

:speech_balloon: Looking Forward

I’m hoping Feel8.Fun can become a collaborative platform that brings together the best ideas and algorithms from our community. Whether you’re a developer looking to contribute nodes, an algorithm creator wanting integration support, or a user with creative scripting ideas - I’d love to hear from you!

What would you like to see in a tool like this? What workflows would you want to support?

Drop your thoughts, suggestions, or questions below! :backhand_index_pointing_down:


Feel8.Fun - Making real-time interactive scripting accessible, flexible, and fun! :tada:


19 Likes

This looks sick, but my first thoughts after just reading all that and watching the video (both from a developer and end user perspective):

  • On first impression, this looks similar to how shader editors in 3D engines work, where you can choose to either download existing shaders made by others with pre-exposed options such that you can use them directly without dev experience, or to make them yourself from scratch if you know your shit. Except instead of manipulating 3D object surfaces it’s analyzing media and turning it into motion outputs. I really like the idea, and i’m very glad to see the immediate openness for community contribution and desire to have this work with existing community projects.

  • As a developer, this will require a hell of a lot of documentation for every exposed API, processing node and interaction type to become even remotely usable. Also, it needs a lot more demos or explanations as to what are the logical blueprint steps/workflows for various use cases, because you’re basically developing the first tool of this kind (and with it, you’re establishing the foundations of the intended ways of working). This all takes time, i agree, but it’s very important to do this early and emphasize on it as you progress with development for it to have any chance of sticking and becoming more used in the community.

  • As a user, i think you really need to narrow down your scope for this, especially while you’re still early on. It shows promise so far, however there’s bound to be tons of roadblocks when you start going into every use case individually and building blueprints for them. Don’t start over-promising from the get-go; instead focus on use cases that you know will work well and that you have already tested and benchmarked, and expand that list with every feature update that unlocks more opportunities. Also, don’t advertise this as a beginner-friendly tool until you’ve reached far into the beta stages of development. Until you get to the point where someone can download your tool, run it, load a community-made blueprint, point it to ANY source you advertise as compatible, and have it seamlessly connect to any supported device and start working with minimal or no settings tweaking, it should not be considered a beginner-friendly tool. That’s a loooooooong journey ahead. If you ignore this, people will complain about how difficult it is to use and you’ll end up with half your thread comments being pleas for more usability.

Also i’m personally curious to know how much of this is vibecoded and how much is actually designed properly, even in this early stage. It’s just a pet peeve of mine to ask for this to be transparent whenever i see large-scale apps like this being developed in months. Although i’m glad to see that you’re already openly discussing the actual architecture, and it does confirm that you at least know what you’re doing moreso than your average cursor vibecoder bro.

2 Likes

If the development environment becomes more refined, I’d be interested in participating in the integration work.

However, in the case of DeepFunGen, it was designed more for generating scripts rather than real-time inference due to its heavy processing load.
I’m curious whether your system also supports generating funscripts by running a pipeline — not just real-time inference.

Also, I’d like to know which body pose estimation model you’re using or considering.
From what I can see on the website, it seems to lose focus easily when there are intense movements or multiple people in the frame.

2 Likes

Haha, I have tried video-to-script for months but couldn’t get satisfying results. Your work is impressive. I have tried many heavy networks like SlowFast, Hiera, etc. Maybe my test scenes are too varied; most of them are casually shot short videos, live videos, etc.

My tool is more of a developer-oriented debugging tool: visualize the motion signal and build complex patterns & logic for verification. Once a good blueprint is validated, we can implement it in a more efficient way - like using an FPGA for verification and then building a real product.

I do think that in the future, models will become fast and strong enough to be usable in real time.

For pose estimation, I am using MediaPipe by Google.
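For reference, a minimal MediaPipe Pose loop looks roughly like this (classic mp.solutions API; the video path and the chosen landmark are placeholders):

```python
# Minimal MediaPipe Pose example (classic solutions API). The video path and
# the landmark choice are placeholders for illustration.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture("input.mp4")
with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            hip = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_HIP]
            print(f"left hip: x={hip.x:.3f} y={hip.y:.3f} vis={hip.visibility:.2f}")
cap.release()
```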

1 Like

I’m really excited to connect with someone who can provide a different point of view, especially one focused on a production mindset.

You’re right that the project is in a very early stage, and I admit my long-term plan is massive. My core motivation is to build a “universal blueprint”—something inspired by Blender’s compositor—that allows me to reuse code and quickly build new pipelines.

The idea is to have a flexible system that could eventually handle everything from live video and regular videos to 2D/3D games. Every time I see a sexy character in a video or game, I want to share the feeling with my buddy.

I completely agree with your advice about over-advertising and scope. For now, the current work is primarily designed for developers and engineers who are comfortable with these concepts.

Developer Experience (APIs): To your point about documentation, I’m trying to build a good developer experience from the start to cover my own usage, which I hope is a good first step. I’m trying to make the APIs as simple as possible (e.g., passing all messages as stringifiable dicts). I’m also integrating the Monaco editor (the core of VS Code) with LSP support, so developers can get auto-completion and live documentation directly in the editor.

Future Work: You’re 100% correct that formal documentation and workflow demos will be critical for adoption - not only for human developers, but also for agents.

Vibe coding is great! Without help from VS Code Copilot + Claude + GPT-5-Codex, this project would have taken me a whole year and I would have eventually lost interest. I’d say most of the UI and frontend work is generated by AI (I started with Claude 4.5 and am now using GPT-5-Codex). It has saved me a massive amount of time. My philosophy for this demo project is to use AI for quick trials: rapidly building UIs, investigating the best tech stack, and verifying the architecture’s efficiency.

For all the “core stuff,” I still do it by myself or have to intensively revise any AI-generated code. This includes the shared frame buffer, video decoders, the 3D visualizer, highly customized UI components, and the core CV algorithms.

Thanks again for the insightful comments. It’s really helpful to get this kind of reality check.

2 Likes

Sounds good! I can agree with vibecoding for more front end stuff, because that’s always a lot more boring for us and somewhat of a good use case for AI work anyway because of the unimaginable amount of front ends crunched while training it. By contrast, it is not capable of creating large-scope architectures by itself and having them be optimized properly and without many bugs. So the way you work is good enough for me. I just don’t want to see vibecoded slop in backends or internals at all, especially as an embedded software dev. It can be helpful to ask it about how it would think about certain designs and concepts but never actually write them down itself.

But yeah this is looking good so far, and good to see you’re receiving feedback well. Looking forward to this!

2 Likes

Any chance of having the Handy as a supported device and exporting scripts?

The thing that really stood out for me is the ability to track skeletons in Unreal 4 games. If you are able to do the same for Unity and other engines, it would make it possible to integrate toys into pretty much any game.

For adult games, you could set up the logic to track the distance between joints to determine stroke position. For example, during a handjob, it can use the distance between the root joint in the penis and the hand joint.
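A minimal sketch of that mapping in Python (the joint coordinates and the calibration range are made-up values for illustration):

```python
# Sketch: map the distance between two tracked joints to a 0-100 stroke
# position. Joint positions and the calibration range are illustrative.
import numpy as np

def stroke_position(joint_a, joint_b, min_dist, max_dist):
    """Normalize the distance between two 3D joints into a 0-100 position."""
    dist = float(np.linalg.norm(np.asarray(joint_a) - np.asarray(joint_b)))
    t = (dist - min_dist) / (max_dist - min_dist)
    return int(round(100 * min(max(t, 0.0), 1.0)))

# Example: a hand joint moving relative to a base joint.
print(stroke_position((0.0, 0.0, 0.0), (0.0, 0.12, 0.0), min_dist=0.05, max_dist=0.25))
```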

Thank you for the advice. Yeah, I am thinking about exactly the same thing. But unfortunately, it is not a universal solution. Mod makers need to take some time to figure out how to grab the correct data and send it out to my program. But yeah, it is possible!

As for the idea about tracking distances, I strongly suggest you take a look at LoveMachine, which is a great project doing exactly what you described, although it only supports a few games.

Other devices like the Handy will definitely be supported in the future. We are planning to integrate Intiface Central, which has a very broad device support list.

I tried LoveMachine a few years ago, though I wish there were a solution that allowed you to integrate a game yourself.

With your current solution for tracking the skeleton in Stellar Blade, did you mod the game to pass the joint positions to your tool?

I recently started thinking about a universal solution for any game or video, using pose estimation. However, the models I tried were not accurate enough and struggled with most angles. So I came to the conclusion that it most likely would require tracking joints in the games. But then it seems like you have to create mods for each individual game, which makes it rather inaccessible.

Using AI, I vibecoded a plugin that seems to mostly work for most 3D Unity games: it tracks the distance between two bones you can specify (similar to LoveMachine’s prototyper idea) and sends the data to the Python plugin (someone uploaded a Python script on here that sends data to funsr1 2.0, so I used that as a base).
I wasn’t planning on releasing it because it was the LoveMachine dev’s idea first, and it’s kinda janky with slower movements, but when the movements are moderate to fast, boy, it is accurate with the timing at least.

Yeah, vision models are not strong enough to provide poses as accurate as directly streaming from the game. My Stellar Blade video is a demo written by vibe coding in a few hours. Luckily it worked.

Vision would be a fallback model, if you can tolerate the noise level.

That sounds promising. Thank you for sharing the experience.

As long as game authors follow the common naming conventions with their assets. Oh, but yeah, in Unity, if you can use RuntimeUnityEditor, you can figure out the names with human intelligence. That would be a great help, like the prototyper.

I didn’t find any tool that can do that in Unreal, to visualize a skeleton.

May I know what games you have tried? I don’t know many 3D ero games in Unity.

I am curious about Skyrim SexLab and the Baldur’s Gate mod.

Yeah, I use RuntimeUnityEditor, which made it easier. It works on mostly every game I tried: Koikatsu Studio, which I use all the time, VAM (G.O.A.T.), Yiffalicious (I’m not a furry but…). I was planning on seeing if I can make the GUI work with VR next.

I made versions that track 2D Unity (tested with random 2D games like NTRMAN AOTG and Mad Island) and RPG Maker Live2D too (this one is tedious to use though). I’m still improving these to make them less of a pain in the ass to use.

Unreal Engine was my next target. I had the idea of using UEVR (which I think can see in-game objects, maybe even bones? Can’t remember) to inject a plugin, but since you found a way and you’ve got that down already, I’ll skip it. I was planning on seeing if I could get Skyrim to work in the near future.

And I don’t mind releasing the code to anyone who can actually code and make these work even better. I’m tired of yelling at AI every night anyway, and imagine what someone who can actually code could do. (I didn’t use anyone’s code, as AI made it from scratch, except for the Python part.)

2 Likes

Wow, 2D. I remember they use a different engine, like Spine? I tried to check Mad Island but didn’t find a solution.

Would you mind sharing something about 2D?

Not really, but I did manage to track Spine in one RPG Maker game; not sure if it will work in others.

Sure, not really a secret as I plan to release it eventually anyway.

For the 2D Unity games, I noticed that the “MeshRenderer” component was very common, with moving values such as X and Y in many 2D games, so I told AI to target the X and Y transform values and send ’em. This isn’t super complex or stroke-tip accurate, but it follows the general movement, and my cope and slogan for this plugin is “Better than nothing” lmao.

I’m probably gonna make a thread and post this stuff soon and be done with it. Trying to use the 3D version of this plugin in VR is a little obnoxious, I’ve been working on it for 2-3 months now making it user friendly for myself. Someone who knows what they are doing can make it more optimized and remove most of the unnecessary “vibe” coding roughness.

Right now I’m trying, malding and failing to get the 3D version GUI to work in VR.

1 Like

I think this is a great project. I have tried many AI script-making tools, but from the perspective of a user who is not very familiar with coding, their interfaces are usually quite complex. I have watched your demonstration video, and I think a graphical interface will make it easier to use, which will make script creation and generation faster and more accurate. I am very much looking forward to your work and the progress of your project!

1 Like

Yeah, that is my initial motivation: make motion generation accessible to the public (including me). I just want to generate some motion for fun.

1 Like