3DOF, 6DOF, RoomScale VR, 360 Video and Everything In Between
Published on February 25, 2018
3DOF vs 6DOF
DOF stands for Degrees Of Freedom – the number of different "directions" an object can move in 3D space. A 3DOF VR headset tracks only your head's orientation, i.e., it knows where you are looking. The three rotational axes are roll, pitch and yaw. A 6DOF headset tracks orientation and position: it knows where you are looking and also where you are in space. This is sometimes referred to as roomscale or positional tracking.
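To make the distinction concrete, here's a minimal sketch (not tied to any particular SDK – the field names are just my own) of the data a 3DOF headset reports versus a 6DOF one:

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """Orientation only: the headset knows where you are looking."""
    pitch: float  # nod up / down (degrees)
    yaw: float    # turn left / right (degrees)
    roll: float   # tilt your head sideways (degrees)

@dataclass
class Pose6DOF(Pose3DOF):
    """Orientation plus position: it also knows where you are in space."""
    x: float  # meters
    y: float
    z: float
```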
Rotation is relatively easy to detect and track – most cellphones already have the hardware required, an accelerometer and a gyroscope. Positional tracking, on the other hand, is very difficult, especially at the fidelity and sensitivity required for VR. Doing it inside-out, without external cameras (like the Rift) or base stations (like the Vive), is harder still. Hence, the vast majority of standalone / mobile VR headsets on the market are 3DOF. In fact, we are only now (Feb 2018) seeing 6DOF standalone headsets coming to market (like the Vive Focus, Oculus Santa Cruz and Pico Neo).
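As a side note, orientation tracking at this level is simple enough that a basic complementary filter fusing the gyroscope and accelerometer gets you most of the way there. A rough, illustrative sketch (the axis conventions and the 0.98 blend factor are assumptions, not values from any shipping headset):

```python
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One step of a basic complementary filter.

    gyro  : (gx, gy, gz) angular rates in rad/s
    accel : (ax, ay, az) acceleration in m/s^2 (gravity included)
    dt    : time step in seconds
    """
    gx, gy, gz = gyro
    ax, ay, az = accel

    # Integrating the gyroscope gives a responsive but slowly drifting estimate.
    pitch_gyro = pitch + gx * dt
    roll_gyro = roll + gy * dt

    # The direction of gravity from the accelerometer is noisy but drift-free.
    pitch_accel = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_accel = math.atan2(-ax, az)

    # Blend: mostly gyro for responsiveness, a little accel to cancel drift.
    # Yaw can't be corrected this way - gravity says nothing about which way
    # you're facing, which is why phones also carry a magnetometer.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_accel
    roll = alpha * roll_gyro + (1 - alpha) * roll_accel
    return pitch, roll
```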
The problem with 3DOF
3DOF headsets are… problematic. OK, I'll just say it: 3DOF sucks. Big time.
Humans (and a few other animals) use a combination of different visual cues to infer depth and distance. Stereo vision is just one of them; we also rely heavily on occlusion and parallax. Let's compare a 3DOF and a 6DOF VR experience. 6DOF is actually easier to visualize, so we'll start with that.
A person with a 6DOF VR headset can move freely and naturally in the virtual environment. Everything behaves the same way it does in the real world: you can look at objects from different angles, lean over or under them, or walk around to the other side. It's just… natural.
3DOF is a different story. You can look around, but the position of your virtual viewpoint is fixed. Imagine that the entire universe is glued to your head and moves with you – everything is always the same distance from your eyes.
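In rendering terms, a 3DOF app simply ignores any translation of your head: the orientation feeds the view matrix, but the camera position is pinned to a constant. A minimal sketch (the numbers are made up, and real engines build this matrix for you):

```python
import numpy as np

def view_matrix(rotation_3x3, position_xyz):
    """Standard world-to-camera view matrix from a camera pose (R, t)."""
    view = np.eye(4)
    view[:3, :3] = rotation_3x3.T
    view[:3, 3] = -rotation_3x3.T @ np.asarray(position_xyz, dtype=float)
    return view

head_rotation = np.eye(3)          # whatever orientation the IMU reports this frame
head_position = (0.1, 1.6, -0.3)   # what a 6DOF tracker would report, in meters

view_6dof = view_matrix(head_rotation, head_position)    # world reacts to your movement
view_3dof = view_matrix(head_rotation, (0.0, 0.0, 0.0))   # translation is thrown away
```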
I know what you're about to say: many apps and games are seated experiences where the user doesn't move around the virtual environment, so surely 3DOF is good enough for that kind of application? Well, no. It's not. Even when seated, we use small head movements to create parallax and infer depth. We do this unknowingly, all the time. In addition, when we turn our head sideways or look up or down, the center of rotation is somewhere inside our head – we don't pivot around our eyes.
This means our eyes' position moves in space. The shift is small, but it matters even when it's unnoticeable. 3DOF denies us this sensory input, and the result is a very unnatural way of viewing the world. The effect this has on people varies greatly: some won't notice and won't be affected at all, some will get simulation sickness (sometimes mistakenly referred to as motion sickness, which is similar but not the same thing), and others might get dizzy, experience eye fatigue or just general discomfort. The length of exposure also plays a part. Most mobile VR experiences are on the short side, measured in minutes – not long enough to trigger symptoms for most people. However, as the VR industry grows, so will the games and apps, and this problem will become more acute.
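To put a number on that "small shift", here's a back-of-envelope calculation of how far the eyes translate during an in-place head turn. The ~10 cm pivot-to-eye distance is an assumed round number, not a measured anatomical constant:

```python
import math

PIVOT_TO_EYE_M = 0.10  # eyes sit roughly this far in front of the head's pivot (assumption)

def eye_shift(yaw_degrees, offset=PIVOT_TO_EYE_M):
    """Lateral displacement of the eyes when the head yaws around its pivot."""
    return offset * math.sin(math.radians(yaw_degrees))

for angle in (10, 30, 60):
    print(f"{angle:>2} degree head turn -> eyes move ~{eye_shift(angle) * 100:.1f} cm sideways")
```

Even a casual 30-degree glance to the side moves your eyes about 5 cm – a translation a 3DOF headset simply cannot reproduce.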
360 video is not VR
I know some of you will disagree strongly with that statement, and that's fine. Allow me to explain. VR stands for "Virtual Reality" – an environment that is not real but feels like it is. True VR gives us a sense of presence; we feel that we are there, in that unreal place. 360 video cannot provide that, not even stereoscopic video (note that I didn't use the term 3D, I'll get to that a bit later), primarily because 360 video is 3DOF. You can't move around in a video (or can you? I'll get to that as well). So why is 360 video so popular with content creators? Mainly because it's easier than the alternative. Much easier. Let's compare the pros and cons of 360 video and the alternative – roomscale VR using 3D models.
| | **360 Video** | **VR with 3D models** |
| --- | --- | --- |
| **Pros** | Cheap and easy to create – all you need is a camera | Sharp and detailed; runs at 90fps or higher; easy to modify; correct scale and IPD; 6DOF; interactive |
| **Cons** | Limited resolution; 30/60fps; hard to modify after shooting; 3DOF only | Slow and expensive to produce; requires highly skilled artists |
Let's talk about these in more detail:
3D Modeled environment
3D modeling, texturing and animation are hard work. It's a slow and expensive process that requires highly skilled artists. I can take a 360 camera, shoot a scene in my backyard, and an hour later have the processed video. Modeling the same scene in 3D can take days if I want it "cartoonish", or weeks to months if I want it photorealistic.
The advantages of 3D models over video are many: they're sharp and detailed, they can run at 90fps or higher, and they're easy to modify and adjust as requirements change or new features are added. They can be built at exactly the right scale, and you can render them with the correct IPD. They're 6DOF (with the proper headset) and, of course, easy to make interactive. These last two are the most important and, in my opinion, outweigh everything else.
360 Video
The main (and pretty much only) advantage of 360 video is that it's very easy to create. As I already mentioned, all you need is a camera. Some are cheap (I like the Vuze) and some are crazy expensive (Google Jump, $17,000 USD).
There are many disadvantages to 360 video. Pixel count is one of them: 4K is not enough pixels to produce a sharp image when it has to stretch across 360 degrees. 8K or even 16K is required, and that is really hard to shoot, edit and play back. In addition, video is usually 30fps or 60fps, while VR requires a minimum of 90fps – any less and people will get eye strain, dizziness or nausea.
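A quick calculation shows why. The 60 pixels-per-degree "retina" target and the 100-degree field of view below are my own assumed round numbers, but the conclusion doesn't change much if you tweak them:

```python
# Angular resolution of an equirectangular 360 video vs. a rough "retina" target.
RETINA_PPD = 60        # ~pixels per degree where extra detail stops being visible (assumption)
HEADSET_FOV_DEG = 100  # typical horizontal field of view of current headsets (assumption)

for name, width in (("4K", 3840), ("8K", 7680), ("16K", 15360)):
    ppd = width / 360                # the video has to cover the full 360 degrees
    visible = ppd * HEADSET_FOV_DEG  # pixels that actually land inside your field of view
    print(f"{name}: {ppd:.1f} px/deg, ~{visible:.0f} px across the view, "
          f"{ppd / RETINA_PPD:.0%} of the retina target")
```

4K works out to roughly 11 pixels per degree – less than a fifth of what would look truly sharp.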
Video is really hard to modify after you shoot it. You can change the color or texture of a 3D model in a few clicks, but patching video and doing fill-ins is complicated and expensive. Video is cheap to shoot but can be expensive in the long run.
There are a few problems with stereoscopic 360 videos as well:
- The effect feels natural only when the distance between the two cameras is the same as the actual distance between your eyes – the interpupillary distance (IPD).
- Even if you're lucky enough to have an IPD that matches the stereoscopic camera setup, 360 videos still feel weird, and the scale is often wrong (see the quick calculation after this list). This is because the stereoscopic effect is only correct at very specific viewing angles – when you look in the same direction as the physical lenses on the camera rig. If you look between these few sweet spots, things get weird.
- Stitching together a stereoscopic 360 image from multiple lenses / cameras is a problem that can't be fully solved. It's mathematically impossible to do without introducing stitching errors or making certain sacrifices. This blog post does a great job explaining why: elevr.com
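On the scale issue, a simple small-angle model illustrates the point: the rig captures an angular disparity of roughly baseline/distance, and you interpret that disparity with your own IPD, so the world appears rescaled by IPD/baseline. This is illustrative only – real depth perception uses many more cues than disparity:

```python
# Small-angle model of how a camera-baseline / IPD mismatch rescales the world.
def perceived_distance(true_distance_m, camera_baseline_m, viewer_ipd_m):
    # Captured disparity ~ baseline / distance; the viewer reads it with their own IPD.
    return true_distance_m * (viewer_ipd_m / camera_baseline_m)

# A rig with a 75 mm baseline, viewed by someone with a 63 mm IPD (assumed numbers):
for d in (0.5, 2.0, 10.0):
    print(f"object at {d:>4} m appears at ~{perceived_distance(d, 0.075, 0.063):.2f} m")
```

With that mismatch, everything appears about 16% closer (and correspondingly smaller) than it really is.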
One last thought on this topic: 360 video is not VR in the same way that a GoPro video is not a first-person shooter game. The fact that you can use a VR headset to watch a 360 video doesn't mean that 360 video is VR. I can also watch a DVD on my PlayStation; that doesn't make it a game.
Stereoscopic vs 3D
The term 3D gets abused a lot: 3D TVs, 3D movies, 3D displays. None of these are actually 3D – they are stereoscopic.
Stereoscopic means the image or video was shot from two angles, mimicking our eyes.
3D means that the object contains enough information to render it from any viewing angle.
Orange Head by ShacharWeis on Sketchfab
“Stereoscopic” is a long and ungainly word and up until now there wasn’t a good reason to distinguish between stereoscopic and 3D, so everyone just used 3D. Now there is, because true 3D video exists. It’s called volumetric video, and it’s amazing. Volumetric video will disrupt the entertainment industry in the same way that High-Definition did years ago. And just like the SD to HD transition, it’s going to take many years to get there.
Here is a demo by Intel, from CES 2017.
The technology exists today, but the cameras are insanely expensive ($250k and upwards) and the resulting file sizes are unreal (terabytes per minute). It’s going to take a massive cost reduction, bandwidth increase and advances in compression technology before producers can make volumetric video and consumers can watch it. I predict 8-10 years before it becomes mainstream.
While true volumetric video is still mostly in the future, there are some hybrid solutions available right now. Here are a few notable mentions:
- Pseudoscience Pictures – These guys can take a stereo 360 video and convert it into a point cloud and a displacement map. Pretty cool and it’s free, so check it out.
- PresenZ from Nozon: 360° CG movies with interactive parallax
- Disney research: Real-time Rendering with Compressed Animated Light Fields
- Google Seurat: Introducing Google Seurat – ILMxLAB – WorldSense
- OTOY light fields
Update (March 2018): Google released Welcome to Light Fields, free on Steam. As far as I know, this is the first readily available demo of light field technology. Go try it out!
Conclusion
What does this all mean? Well, if you are a consumer, I highly recommend you get a Vive, Rift or PSVR, or wait for the mobile 6DOF headsets that will soon hit the market. 3DOF is just so 2017.
Content creators should be mindful about using 360 video and learn how to work with and around its limitations. It's not inherently bad; it has its uses if handled carefully. Avoid close-up objects, avoid moving the camera, and try to create hybrid apps that combine 360 video and 3D models (not easy – I'll address this in a future post).
I want to thank Sarah Legault for letting me 3D scan one of her super cool hand-made dolls.