Hello everyone,
I’m creating this post because I’d like to get feedback on my upcoming attempts at VR creation. Those who follow me know that I previously tried to get into traditional animation, but today I want to explore animation in virtual reality.
After several hours of experimenting with different techniques, I finally managed to achieve what I was aiming for: a 360-degree video with one image per eye to create depth (each eye's image is 1920x2160, and the two are combined side by side into a 3840x2160 frame).
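For anyone curious how the two per-eye renders end up in one frame, here's a minimal sketch of the side-by-side packing, using Pillow. The function name and the idea of packing pre-rendered eye images this way are my own illustration, not necessarily how the render pipeline actually outputs them:

```python
from PIL import Image  # Pillow

EYE_W, EYE_H = 1920, 2160  # per-eye resolution from the post

def pack_side_by_side(left: Image.Image, right: Image.Image) -> Image.Image:
    """Join the left- and right-eye renders into one 3840x2160 frame."""
    frame = Image.new("RGB", (EYE_W * 2, EYE_H))
    frame.paste(left, (0, 0))        # left eye fills the left half
    frame.paste(right, (EYE_W, 0))   # right eye fills the right half
    return frame
```

The headset player then splits the frame back down the middle and shows each half to the matching eye, which is where the depth effect comes from.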
In the examples I’m sharing, I think the depth effect works quite well. I’m using an HTC Vive (the first version), and although I’m not a VR headset expert, I assume image quality has improved over time.
That’s why I’d like to share two examples that differ only in the number of samples rendered per frame: one with 1024 samples, the other with 32. On paper this should make a big difference (4 minutes 30 seconds to render a 1024-sample frame versus 15 seconds for a 32-sample one), but on my HTC Vive I don’t really see much of a difference.
For those who test it, here are my questions:
- Is there a noticeable difference between the two examples?
- If so, are we talking about a significant drop in quality?
- Does my video work properly on your headsets?
Video with 32 samples (MP4 h.264)
Video with 1024 samples (MP4 h.264)
I forgot to mention that the video is 3 seconds long (72 frames). It’s intended to loop, although it’s not perfect yet. For the 1024-sample version, my PC took around 6 hours to render it, compared to just 17 minutes for the 32-sample version.
That’s why I’m curious to know if the quality at 32 samples is good enough — so I can consider making longer and more technical videos.
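To put that trade-off in perspective, here's a quick back-of-the-envelope estimate. It assumes the per-frame times above stay constant as the clip gets longer, and uses the frame rate implied by the post (72 frames over 3 seconds, i.e. 24 fps):

```python
FPS = 72 / 3  # 72 frames over 3 seconds -> 24 fps

def render_time_hours(clip_seconds: float, seconds_per_frame: float) -> float:
    """Estimate total render time, assuming a constant per-frame cost."""
    return clip_seconds * FPS * seconds_per_frame / 3600

# The 3-second clip from the post:
#   1024 samples at 270 s/frame -> 5.4 h (the post reports ~6 h)
#   32 samples   at  15 s/frame -> 0.3 h (~18 min; the post reports 17 min)

# Extrapolating to a one-minute clip at the same settings:
one_min_1024 = render_time_hours(60, 270)  # 108.0 hours
one_min_32 = render_time_hours(60, 15)     # 6.0 hours
```

So at 1024 samples, a one-minute video would take roughly four and a half days of rendering, versus about six hours at 32 samples, which is why the perceived quality difference matters so much here.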
As for the video itself, there are still adjustments to make: the scale of the body representing the viewer, the bed, and probably the camera placement as well. I used what I had available, since my main goal was simply to produce a VR video and find out, with help from people who have better headsets than mine, whether the quality difference between the two versions is really noticeable.
I’m also sharing a third clip, which doesn’t show the video in front of you — just the room — to better appreciate the depth effect. It’s 5 seconds long, and the idea is to pause and observe your surroundings.
Video with 1024 samples (MP4 h.264)
I’m looking forward to feedback from anyone willing to test it. I know these are just short clips, but I’m working on something more serious (Tifa taking care of you with her hand), which should be around one minute or longer — if I have time to put everything together by this weekend.