Dynamic occlusion on Quest 3 has recently seen significant improvements: higher quality, lower CPU and GPU usage, and simpler integration for developers, and it's now supported in a growing number of apps.
Occlusion is a critical capability for mixed reality headsets, letting virtual objects appear behind real-world ones for a convincing, immersive experience. Static occlusion handles only the fixed parts of the environment, such as walls and furniture captured in a scene scan. Dynamic occlusion also handles moving objects, such as people, pets, and your own hands and arms, which requires sensing the scene in real time.
The Quest 3 launched with support for static occlusion but not dynamic occlusion. Days after launch, dynamic occlusion was released to developers as an "experimental" feature, meaning apps using it couldn't ship on the Quest Store or App Lab. In December, that restriction was lifted.
Developers implement dynamic occlusion using Meta's Depth API, which provides a coarse per-frame depth map of the environment, generated in real time by the headset. Integration is tedious, though: developers have to swap out the shader of each virtual object that should be occluded, so there is no one-click solution. As a result, only a handful of Quest 3 mixed reality apps currently support dynamic occlusion.
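To make the integration burden concrete, here is a minimal Unity sketch of the typical setup. The `EnvironmentDepthManager` component and `OcclusionShadersMode` enum are assumed from the Depth API in recent Meta XR Core SDK versions; the occlusion material and field names are placeholders, and exact names may differ in your SDK version.

```csharp
using UnityEngine;
using Meta.XR.EnvironmentDepth; // Depth API runtime (assumed namespace, Meta XR Core SDK)

// Minimal sketch: enable depth-based occlusion for a scene, then opt each
// virtual object in by swapping it to an occlusion-capable material.
public class OcclusionSetup : MonoBehaviour
{
    // A material whose shader samples the environment depth map
    // (e.g. one of the SDK's occlusion shader variants -- placeholder asset).
    [SerializeField] private Material occlusionMaterial;

    // Every renderer listed here must be swapped by hand; there is no
    // scene-wide toggle, which is the tedious part described above.
    [SerializeField] private Renderer[] virtualObjects;

    private void Start()
    {
        // Hardware check: the Depth API requires Quest 3-class depth sensing.
        if (!EnvironmentDepthManager.IsSupported)
            return;

        // One manager per scene drives per-frame depth map acquisition.
        var depthManager = gameObject.AddComponent<EnvironmentDepthManager>();
        depthManager.OcclusionShadersMode = OcclusionShadersMode.SoftOcclusion;

        // Swap each object's shader so it tests against the depth map.
        foreach (var r in virtualObjects)
            r.material = occlusionMaterial;
    }
}
```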
The main limitation of dynamic occlusion on Quest 3 is that the depth map is relatively low resolution, which produces an eerie empty gap along the edges of objects and misses fine details, such as the spaces between individual fingers.
With v67 of the Meta XR Core SDK, the company has slightly improved the visible quality of the Depth API while also making it significantly more efficient. Meta says it now uses 80% less GPU and 50% less CPU, freeing up resources for developers to spend elsewhere.
For Unity, v67 also simplifies shader integration, adding support for occlusion in shaders built with Unity's Shader Graph tool.
Testing the Depth API with v67, I found slightly improved occlusion quality, though it remains far from perfect. But v67 also brings another change that matters more than the raw quality improvement.
The Depth API now offers the option to exclude tracked hands from the depth map, so hands can instead be masked out using the hand tracking mesh. Some developers have used hand tracking meshes for hands-only occlusion for years, even on Quest Pro. With v67, Meta provides a sample demonstrating this technique alongside the Depth API, which handles occlusion for everything else.
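Here is a sketch of how that option might be toggled in Unity, assuming a `RemoveHands` property exposed on `EnvironmentDepthManager` in v67; treat the property name as an assumption and check it against the SDK version you're on.

```csharp
using UnityEngine;
using Meta.XR.EnvironmentDepth; // assumed namespace, as in the earlier sketch

// Sketch: exclude tracked hands from the depth map so the hand tracking
// mesh can provide crisp per-finger occlusion instead.
public class HandOcclusionSetup : MonoBehaviour
{
    private void Start()
    {
        var depthManager = FindObjectOfType<EnvironmentDepthManager>();
        if (depthManager == null)
            return;

        // RemoveHands (assumed v67 property) masks the hand regions out of
        // the per-frame depth map, avoiding the low-res halo around fingers.
        depthManager.RemoveHands = true;

        // The hands themselves are then occluded by rendering the hand
        // tracking mesh with a depth-only material, mirroring the
        // hands-only technique developers used before Quest 3.
    }
}
```

The visible seam this produces is the handoff point between the two systems: the hand tracking mesh covers the fingers and palm, while the depth map takes over from the wrist up.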
In my testing, this results in significantly higher quality occlusion for your fingers, though there are some visible discrepancies at the wrist, where the system transitions back to depth-map-based occlusion.
By comparison, Apple Vision Pro has occlusion only for your hands and arms, which it segments out, much as Zoom separates a person from a virtual background, rather than generating a depth map. While Apple's headset delivers higher quality occlusion for fingers and arms, it has its own quirk: objects you're holding appear behind virtual content, effectively vanishing from view in VR.
Quest developers can find the Depth API documentation for Unity here and for Unreal here.