Does (Walkable) Room Scale VR porn videos exist?

So today I had a sudden realization…there’s VR porn videos that’s POV and Non-POV but there is no room scale VR Porn which as a result allows you to move around the scene yourself. (and so isn’t POV either) Or that’s the impression I gained from a quick search around on here and a few other places. I’m excluding games since there are many of them although mostly if not all of them that do so are sandbox like I believe.

Anyway I wondered a bit as to why this was till I donk’ed my head and remembered that file size is probably the issue after cost/production time but I felt that since I know nothing about what goes into it I would make a thread since I would be wrong. Though is there any video players/formats to do it off of?
Also I feel that Animation porn would be easier to do compared to real life porn since then you’d need more than just one VR camera. Although I also have no idea if VR animation porn is the made by completely redoing the video at a slightly different angle or something else.

So what are your thoughts on this like: does it exist, would the file size be ridiculous, which is most likely to appear first, do you think that it would be something you’d watch, is there any way to watch it?

yeah its done in 3d world all the time. You can also scan your room or area and then import it as a scene. its not the same thing though. I havent seen anyone do it in real life videos though. I think its because so many different parts of the scene would need to be tracked, requiring a lot of compute.

By chance do you have a link or two or a site of a few examples?

i have my own 3d model i do this with. its pretty cool Virt-A-Mate Hub

2 Likes

Ah I understand now…I had honestly forgotten to check over VaM since I found it to be a bit more of a collection of assets to make a video from along with not having much time to look at getting to grips with the program nor the site. Plus that any scripts on here that I’ve noticed which are VaM aren’t linking to the scene or seemingly usable in the scene either while also being short video wise.

I’ll have to make the time to do so and cough up the cash though I do still wonder if there other sorts too than VaM. (Edit just before hitting reply I found the plugin VaMlaunch that allows sex toys like the Handy to connect to VaM although only found two free scenes I think that use the tag)

It can only be done in games/CGI because in real life you would need to be filming from every spot at the same time, while simultaneously not blocking the view of each camera with all of the other cameras. In other words it’s just physically impossible to record something from every single angle at the same time.

I actually made a 3d model with a custom voice model that i also powered through a custom text model and i think i got movement down too with classification. But yeah i connect my handy and talk and fuck her all the time. its pretty nuts. its quite intense the first time you experience too. i wasnt ready for it lol

3 Likes

Care to elaborate on how you did it? I actually wanted to do something similiar but im overwhelmed with the complex vam world.

Uhm i have the beginning of the teaser here if you want to take a look. Eleven.mp4 ~ pixeldrain. Its taken me like a year to do it all. What I am doing is different to VAM though. I am creating autonomy using three large language models for voice, thought, and movement.

You can use VAM and kind of get something similar. its buggy and slow though. While i do not open source the code for this as it will turn into a paid service I can tell you where I started and the following process.

First I set off to make the voice model. This was probably the most difficult part so far. While it is easy to create a model to a mediocre standard, slightly robotic, cracks in the voice, etc. It is a whole different story to make the model sound as good as mine. It was ultimately a 5 month process of failure after failure. A lot of it has to do with dataset preparation and curation.

Next was the brain, the thoughts. This is another LLM where I took an uncensored open source base model and fine tuned on top of that, things like sentience, saying “i love you”, shit like that. Also took some trial and error but wasnt as hard as the voice. Follow that up with a memory. This is essentially a few tricks using vector databases and “Memory fragment” injection.

Finally the movement. This is where I am at now. It is a whole host of new problems. Mostly with the control of the extreme edges of body language and facial expression. How to mirror a natural ability within humans where we know exactly how to control our body as to not look extreme or exaggerated. You can see in the video i am struggling a bit with this. She should be showing a mixture of 5 expressions but I have them all set very low and in turn this kinda makes her look emotionless.

Using VAM is another hurdle. It is something that will take hours and hours of dedication, it is tedious, and painful it is also extremely heavy. VAM can bring any system to its knees without a sweat. You cannot begin to tackle such a project without having high levels of compute. I have to use two machines to run all of that at once. But I do do it all on my own systems. No API’s or paid services.

Sorry this turned into a wall of text. Ultimately i would say that without significant developer experience you cannot accomplish this. You can however just play around in VAM and use the community assets as theyre provided. You will be more limited and have to pay but it;s an option. If you do have some coding experience then I would suggest the voice model or text model first. Just be prepared to fail a lot, its going to suck at first. But it can be overcome.

2 Likes

I was thinking about this problem before; and I think it is doable to use 4DGaussians to create something that’s room-scale walkable ( and not just like the faked 6DOF we currently have)

It’s almost like a Braindance in Cyberpunk now that i think about it

And to apply that:
You’d need to build a VR App/Game that could scan your room (like some of the new Quest 3 Apps) and then build scenarios automatically based on some form of AI segmentation to identify areas of sexual interest:
like on a chair you can get a blowjob, or near a counter, you could fuck her in doggy style
and the game/app will position the girl for you to get started; just get into position and trigger the action

Or you can position the girl however you like in whatever pose or action you want to do

All doable with a little bit of work in Unreal Engine but the hard part is to actually generate the Gaussians, you’d probably need a multi-camera rig recording each “action” and a beefy rig to output all these Gaussian splats

I was planning to use VAM to generate training data to output these splats and see if it’s viable and if it is; maybe hire a real professional to record these actions and create the first true XR porn :joy:

But so far I’ve been distracted by scripting AR Porn scenes

2 Likes

Oh that looks amazing btw!

Are you using Unreal Engine as a base for this? ( I cant imagine VAM looking that good and still be performant :joy: )

What about sexual actions? Or would that also be handled by the movement LLM? ( such as having some kinda procedural blowjob simulator that’s then controlled by the output from the LLM?)

You can do that with a few different phone apps actually. Scanning shouldnt be a problem assuming you have a capable camera

1 Like

Uhm the sex stuff is animated, but the talking and idle movement and a couple other things like walking around are all setup within a state machine. I mostly use VAM and Unity. I dont like dealing with unreal its a pain. If you think that looks good you should see it in VR. Its intense lol. I’ll try and get a recording of her, ive never tried to record in VR though. I think its kind of tricky. I dont know how to handle an LLM that can do actual autonomy. Mines basically just a few tricks. It would need to be aware of a lot more than its currently capable of.

1 Like

That’s awesome!

I wonder if it’s possible to apply AI agents to take control of certain functions and feedback towards each other to get a more sophisticated semblance of autonomy?

so instead of simulating multiple agents of a company; it’s simulating multiple functions of a human consciousness and maybe it will appear to be autonomous :exploding_head:

Just a crazy idea :smiley:

EDIT: It’s also hilarious to me right now how we’re discussing developing near-artificial intelligence because hornyy

oh yeah for sure, scanning the room is the easy part; hard part is getting the 4D Gaussians working; I’m put off by the VRAM requirements to train something right now, and having to build a dataset to train with :sweat_smile:

Lol, so I actually I am a freelance dev and most of my business is making chatbots for companies wanting to integrate AI. This stuff is right in my wheelhouse.

AI agents aren’t actually effective. They sound fancy and are a great concept to create hype. But ultimately they do not perform better than a single agent. The issue, in my opinion, is the unintentional obfuscation of embeddings while the query is passed between the agents. Each agent will perform a semantic search with its own set of embeddings and then pass on a new set. Assuming the original vectors aren’t retained they end up with incorrect results or dont understand how to properly retain the right context including it with the generated response.

She does appear autonomous. This comes from the text classification model. The brain model spits out a response in text. In this case she said this:


Then another model designed to classify that text ranks the emotions expressed within in it. That model gave me these emotions:
image.
This then goes to the game engine where the 3d morphs can be manipulated in specific ways to give what was once only text an ability to expressing emotions and body language. This same concept can be linked to movement on a bigger or broader scale as well its just a bit more open ended as to what she does or doesn’t do.

So yeah multi agent frameworks aren’t the key in my opinion. The biggest evidence I have for this is to look towards nature, as we always do. If we do this then our reference is “us”. The human brain, the apex or pinnacle of intelligence as we know it. It is not a multi agent system. It is a single, efficient, and most importantly elegant solution to a very complex “thing”. Multi agents could always be reduced down or iterated upon until only a single agent remains. It was a good experiment but a failed one. It is only a stepping stone in our progress towards actually creating intelligence. Synthetic intelligence.

1 Like

That’s actually super cool to know! Thanks for the explanation! :smiley:

The emotion scoring system directly affecting blend weights is legitimately cool too!

This is inspiring me towards another, more SFW tangent though; this multi-llm system could perhaps be used to build a so-called “Game Director” in an action-rpg that could send outputs to a procedural quest system, modify the game world and quest line in real-time as the player progresses; to ultimately create a unique experience for each player ( much like what a DM in DnD would do ) so thanks!

Now the only problem is; how to make it infer fast enough in real-time on a consumer PC? :thinking:

Well its all just a trick at this point. The term “artificial intelligence” was probably applied pre maturely. They are not actually intelligent. They cannot actually “think” or there is no cognitive traversal through time. Think of it like how we can think into the past and the future. This type of “time travel” is essentially for an intelligence because it allows for real time problem solving, logic, and most importantly creativity. There have been emergent properties that surprise us of course and we keep finding them but as of right now, hoping it can make actual decisions through logic is kind of a stretch.

It is just the appearance of logic. So, like with my project it comes down to being a complex state machine or several of them that work amongst each other to give the appearance of autonomy. If you were to make the game you wanted to make you would at some point have to code in some degree of decision making for the AI model/s. But this type of thing has been going on in games forever. Where the player makes decisions that effect the outcome of the game and therefore unique to that player

1 Like

Ah that makes sense; can’t be autonomous without also having a past that influences an intrinsic, future motivation and so lead to all the good/bad but human decisions we make

:exploding_head:
Wow describing it as a complex state machine blew my mind; now it all makes sense