Hey, I'm not really into how SLR is handling AI passthrough: no downloads or local processing, only streaming. There are numerous benefits to local playback that I'd like to have with AI PT. I don't get it, why is this technology only on SLR? Is there some kind of program where you can simply insert a video and it processes it to AI PT, doing at least as good a job as SLR, so we can do it on our own? I'd happily pay for a program that processes as many vids as I want instead of being limited to streaming only.
As far as I know, there's no off-the-shelf program that does what SLR's streaming "AI Passthrough" does. I imagine SLR will want to hold onto their method as long as they can.
You’ll need to tinker and figure out what AI background removal techniques are best for which videos. No one size fits all at the moment, but I’m no expert. Maybe @blewClue215 can give you some insight from this post.
Your local options for passthrough at the moment are:
- Videos that have the alpha channel
- Videos that are intended for chroma-key and have recommended settings
- Videos that you can tinker with chroma-key and get as close as you can
- DIY
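At its core, chroma key is just distance-to-a-color math. A minimal sketch of the idea (illustrative only; real keyers like Keylight work in better color spaces and add spill suppression, and the key color and thresholds here are made-up numbers):

```python
# Illustrative chroma-key: derive alpha from a pixel's distance to a key
# color, with a soft ramp between "remove" and "keep".

def chroma_alpha(pixel, key=(0, 255, 0), tolerance=90.0, softness=40.0):
    """Return alpha in [0, 1]: 0 = fully keyed out, 1 = fully kept."""
    dist = sum((p - k) ** 2 for p, k in zip(pixel, key)) ** 0.5
    if dist <= tolerance:
        return 0.0                        # close to the key color: remove
    if dist >= tolerance + softness:
        return 1.0                        # far from the key color: keep
    return (dist - tolerance) / softness  # soft edge in between

print(chroma_alpha((0, 250, 5)))    # near-green background -> 0.0
print(chroma_alpha((200, 60, 60)))  # skin-ish foreground   -> 1.0
```

The "tinker" option above is basically hand-tuning those tolerance/softness numbers per video.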
Yeah, you can use something like SEGM with Stable Diffusion. Nothing is perfect yet. The issue is that the resolution of VR videos is very high, and you most likely do not possess a machine capable of doing segmentation/background removal in a reasonable amount of time.
If you're willing to peg your GPU for a day or two, depending on what GPU you have, then sure, you can do it.
You'd probably need an RTX 3090 or an RTX 4090; anything else will be insufficient unless you move into data-center-class GPUs. It will also require a good amount of developer knowledge, working with GitHub and Python.
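To give a rough idea of the shape of such a DIY job (the model call here is a deliberate stub; plugging in a real segmentation model is exactly the slow, GPU-hungry part):

```python
# Sketch of a DIY offline "AI passthrough" pass: walk every frame,
# ask a segmentation model for a person mask, zero out everything else.
# `segment_person` is a placeholder -- in practice you'd call an actual
# segmentation model here, which is where all the GPU time goes.

def segment_person(frame):
    # Placeholder: pretend any pixel brighter than 128 is the performer.
    return [[1 if px > 128 else 0 for px in row] for row in frame]

def apply_mask(frame, mask):
    # Keep masked pixels; black out (or make transparent) the rest.
    return [[px if m else 0 for px, m in zip(row, mrow)]
            for row, mrow in zip(frame, mask)]

def process_video(frames):
    return [apply_mask(f, segment_person(f)) for f in frames]

# Tiny one-frame, 2x3 grayscale "video" for illustration:
video = [[[200, 30, 150],
          [10, 255, 90]]]
print(process_video(video))  # -> [[[200, 0, 150], [0, 255, 0]]]
```

At 8K VR resolutions you'd be running that model call roughly a hundred thousand times per half-hour video, which is why the GPU class matters so much.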
I guess to second @IIEleven11's post: yes, you could do it. But it's not as easy as you put it:
simply insert video and it processes it to ai pt
I see, that's unfortunate. I don't have the knowledge or computing power for that (didn't know it was so GPU-intensive to process), so I guess I'm stuck with just SLR streaming for now. Thanks anyway.
Thinking a little more about it, processing a whole video to bake in the AI PT probably won't be practical in the near future, given the computing requirements. What would be feasible, I think, is a player (or AI integration into an existing player) that processes and applies the mask on the fly, similar to how SLR is doing it, but without SLR's limitations: a player that can do this locally, rather than needing videos uploaded to cloud storage. Hopefully someone creates it.
thanks
Well, for example, I can do it with my 3090. A 30-minute video would probably take 6 hours or so; a 4090 would halve that.
SLR is almost certainly doing the same thing, most likely using cloud compute on something like AWS or Runpod, where you can rent powerful GPUs for such jobs.
So the mask you're seeing is the AI detecting the correct character to track; it then removes the rest of the video, leaving you with the alpha and enabling passthrough. This is the GPU-intensive part.
This means that to do it locally, one MUST have the necessary compute. The only other option is to rent that compute from a cloud service.
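The back-of-envelope math, with assumed numbers (a 60 fps source and roughly 5 frames/second of segmentation throughput on a 3090-class card), lands in the same ballpark:

```python
# Why a 30-minute VR video ties up a GPU for hours.
# The fps figures are assumptions for illustration, not benchmarks.

video_minutes = 30
source_fps = 60
frames = video_minutes * 60 * source_fps        # 108,000 frames total
seg_fps = 5                                     # assumed model throughput
hours = frames / seg_fps / 3600
print(f"{frames} frames -> {hours:.1f} hours")  # -> 108000 frames -> 6.0 hours
```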
Hey guys, I just own a Quest 2 for now and don't want to purchase the 3; I'm waiting for the next version, then I'll dive into PT. But I've been following your conversation, and with After Effects / Keylight it's pretty easy to remove a background.
With Keylight you can set up the chroma key really well in terms of softness and so on. I know that H.264 and H.265 don't embed an alpha channel, but maybe you could export a matte like this:
There may be players that can process both videos without being too GPU-heavy; I'd guess the resolution of the matte would have a major impact on such processing. Hope I helped a bit.
OK, I think I've spoken too fast; you're talking about videos that weren't shot for the purpose of chroma-key removal, which complicates the process quite a bit indeed.
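Conceptually, a matte-aware player would just do this per frame, pairing the main video with the separate matte video (offline, ffmpeg's alphamerge filter does the same job). A toy sketch of the per-pixel operation:

```python
# Use a grayscale matte frame as the alpha channel of the RGB frame.
# This step is cheap; it's generating the matte that's expensive.

def merge_matte(rgb_frame, matte_frame):
    # rgb_frame: rows of (r, g, b); matte_frame: rows of 0-255 grayscale.
    return [[(r, g, b, a) for (r, g, b), a in zip(row, arow)]
            for row, arow in zip(rgb_frame, matte_frame)]

rgb = [[(10, 20, 30), (40, 50, 60)]]
matte = [[255, 0]]  # first pixel opaque, second fully transparent
print(merge_matte(rgb, matte))  # -> [[(10, 20, 30, 255), (40, 50, 60, 0)]]
```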
Thanks for the mention!
Funnily enough I’ve tried a lot of the methods discussed here and ran into a few problems:
- Not enough compute power to do this quickly and painlessly
- AI segmentation models I've tried can't seem to produce high-definition masks that handle hair or fine details, and the models that could would take too long to produce just one frame. The masks generally look blobby
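One cheap, non-AI trick for blobby mask edges is to feather them with a small blur, so hard jaggies become a soft alpha ramp. A 1-D sketch of the idea (real tools blur in 2-D, and this doesn't recover fine detail like hair):

```python
# Feather a hard 0/1 mask edge with a small box blur.

def feather(mask_row, radius=1):
    out = []
    for i in range(len(mask_row)):
        lo, hi = max(0, i - radius), min(len(mask_row), i + radius + 1)
        window = mask_row[lo:hi]
        out.append(sum(window) / len(window))  # average over the window
    return out

hard = [0, 0, 1, 1, 1, 0]
print(feather(hard))  # hard 0/1 edges become 0..1 ramps
```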
So I started on a journey to DIY my own offline passthroughs in a somewhat cheap way, without requiring AI/ML or crazy expensive GPUs:
https://discuss.eroscripts.com/t/how-to-make-your-own-passthrough-videos
The post above illustrates how to set it up yourself and get started making your own passthrough videos, even at 8K VR resolution.
(I’ve not considered using stable diffusion to create masks though, that’d be interesting!)