Spatial computing is an emerging technology that blends the digital and physical worlds in real time, allowing seamless interaction between users and digital content in three-dimensional (3D) space.
Unlike traditional two-dimensional computing interfaces, spatial computing lets users interact with digital content in a more natural and intuitive way, leveraging spatial awareness and depth perception.
Currently, spatial computing experiences are developed and experienced through three distinct technologies and device families:

- Augmented Reality (AR): Overlays digital information, such as virtual objects or data, onto the user's view of the real world through a smartphone, tablet, or AR glasses.
- Virtual Reality (VR): Creates a fully immersive digital environment that users can interact with, using VR headsets and controllers.
- Mixed Reality (MR): Combines elements of both AR and VR, allowing digital objects to interact with and respond to the real-world environment via head-mounted devices (HMDs).
Apple's Vision Pro is a revolutionary spatial computing platform that delivers all three of these spatial experiences in a single device.
What Makes Vision Pro Special?
With compatible apps, Vision Pro can function as AR glasses, projecting windows that present 2D or 3D content within containers. Apps can open a single window or several, and each window can be resized to any desired scale.
Vision Pro can also act as a mixed reality head-mounted device, enabling real-time manipulation of 3D objects in physical space using gaze and hand gestures.
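To give a feel for how this works in native code, here is a minimal RealityKit/SwiftUI sketch of moving an entity with the system gaze-and-pinch drag gesture. This is an illustrative sketch, not Apple's sample code; the view name is hypothetical, and note that entities must carry input-target and collision components to receive gestures:

```swift
import SwiftUI
import RealityKit

// Sketch: dragging a 3D object with gaze and pinch on visionOS.
struct ManipulationView: View {
    var body: some View {
        RealityView { content in
            // A simple sphere; entities need an InputTargetComponent
            // and collision shapes to be targetable by gestures.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the 3D gesture location into the entity's
                    // parent space and move the entity to follow it.
                    if let parent = value.entity.parent {
                        value.entity.position = value.convert(
                            value.location3D, from: .local, to: parent)
                    }
                }
        )
    }
}
```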
Vision Pro also supports fully immersive virtual reality apps, transporting users to a virtual world by eliminating the see-through view. In this mode, custom content replaces all visual elements, letting developers create transitional experiences, distraction-free spaces, VR games, and captivating virtual worlds to explore. Vision Pro hides the pass-through video, displaying only the user's hands when they come into view, and gives the app full control over the on-screen content.
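As a hedged sketch of how these modes map onto app structure, a visionOS app in SwiftUI might declare both a window and a fully immersive space as below; ContentView and ImmersiveView are placeholder names, not part of any official template:

```swift
import SwiftUI
import RealityKit

// Sketch: a visionOS app offering a window plus a fully immersive space.
@main
struct SpatialApp: App {
    // .full hides passthrough entirely; .mixed and .progressive
    // keep some of the real world visible.
    @State private var immersionStyle: ImmersionStyle = .full

    var body: some Scene {
        // A resizable window showing 2D or 3D content.
        WindowGroup {
            ContentView()
        }

        // A fully immersive space that replaces the see-through view.
        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
        .immersionStyle(selection: $immersionStyle, in: .full)
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersion") {
            Task { await openImmersiveSpace(id: "ImmersiveSpace") }
        }
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add custom entities for the fully immersive scene here.
        }
    }
}
```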
Developing Immersive Experiences in visionOS
Tools to Create Spatial Experiences
Spatial experiences for Vision Pro can be built with tools tailored to two distinct developer groups: those doing native Apple development and those creating apps with the Unity game engine.
- Xcode 15.2 Beta or later
- Reality Composer Pro
- Unity 2022.3.4f1 LTS or later
Xcode for visionOS
The development process for visionOS starts with Xcode, which integrates the visionOS SDK and works together with Reality Composer Pro to make adding and managing 3D assets straightforward.
visionOS is designed specifically for building spatial computing apps and supports a wide range of frameworks available on other Apple platforms, such as SwiftUI, UIKit, RealityKit, and ARKit. Developers with an existing iPadOS or iOS app can bring it to Vision Pro by simply adding the visionOS destination to their project.
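As a rough illustration of this workflow, displaying a scene authored in Reality Composer Pro typically looks like the following; "RealityKitContent" and "Scene" are the default names generated by the Xcode visionOS template and may differ in your project:

```swift
import SwiftUI
import RealityKit
import RealityKitContent  // Swift package generated by Reality Composer Pro

// Sketch: loading a Reality Composer Pro asset into a RealityView.
struct AssetView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is the template's default scene name.
            if let scene = try? await Entity(named: "Scene",
                                             in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}
```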
Unity Engine for visionOS
Unity is also collaborating closely with Apple to support visionOS through its PolySpatial SDK.
All content in the Shared Space is rendered by RealityKit, so Unity materials and shaders must be translated, and PolySpatial handles this translation. The PolySpatial SDK supports Unity features by managing the translation of physically based and custom materials authored with Unity Shader Graph. Shader Graphs are converted to MaterialX, a standard interchange format for complex materials.
The following are some of the key components which are translated into RealityKit by PolySpatial:
- Unity’s materials
- Mesh renderers
- Skinned mesh renderers
- Particle effects
- Sprites
For example, sprites in Unity are converted into 3D meshes in RealityKit.
Note that RealityKit does not support rendering hand-written shaders; custom shaders must be authored in Shader Graph so they can be translated.
Unity's existing simulation features, including physics, animation, Timeline, pathfinding, NavMesh, custom MonoBehaviours, and other non-rendering features, work without change.
PolySpatial also introduces the "Play to Device" feature, which gives you an instant preview of your scene and lets you make changes in real time, providing a seamless development experience.
Interaction via Unity
With Unity’s XR Interaction Toolkit, integrating hand tracking into your visionOS projects becomes effortless. Utilize built-in system gestures with the Unity Input System and leverage the XR Hands package to access raw hand joint data for custom interactions.
Porting or Creating a New App
Porting an existing application or creating a new one is a simple process with Unity, which provides full support for the visionOS platform and lets you run your projects on Vision Pro with ease.
To run an app developed in Unity on the Vision Pro device:
- Select the build target for the platform
- Enable the XR plug-in
- Generate an Xcode project
- In Xcode, build and run the application on either a Vision Pro device or the simulator
Making an existing Unity project that uses AR Foundation and targets iOS devices run on Vision Pro takes just a few simple steps:
- Upgrade the Unity version to 2022 or later
- Convert shaders to Shader Graph
- Move from the Built-in Render Pipeline to the Universal Render Pipeline
Rendering Technology
Unity recommends using the Universal Render Pipeline for visionOS projects because it enables a special feature called foveated rendering for higher-fidelity visuals.
In XR, single-pass instanced rendering draws the views for both eyes in a single pass, sharing most of the rendering work between them, whereas rendering each eye's view separately incurs significantly higher processing costs. Choosing single-pass instanced rendering can therefore significantly improve scene performance.
Foveated rendering involves focusing computational resources on the area of the user's vision where they are looking, while reducing the level of detail in peripheral regions. By leveraging eye tracking technology in Vision Pro, foveated rendering optimizes performance, reduces rendering workloads, and enhances the overall experience of the device, providing realistic visuals with higher frame rates.
To get your project ready for visionOS, follow these steps:
- Utilize (or upgrade to) the Universal Render Pipeline for performance optimizations and to access visionOS platform features like foveated rendering
- Transform controller-based interactions into hand-based interactions
- Use the Unity Input System for streamlined input handling
- Convert shaders to Shader Graph or use standard shaders as needed
Hardware Specifications for Development
CPU (Processor): For Unity development, opt for a modern CPU with at least 6 cores / 12 threads from either AMD or Intel; recommended AMD choices include the Ryzen 5 5600X or Ryzen 7 5800X. For native development, macOS 13.4 running Xcode 15.2 on an M2 chip is recommended.
RAM (Memory): While 8 GB of RAM is sufficient for most Unity tasks, consider a minimum of 16 GB to accommodate the OS, a browser, Unity, and other applications.
GPU (Graphics Card): For optimal performance, consider NVIDIA GPUs like the RTX 3060 or AMD GPUs like the RX 6600.
On-Device Experience
At this stage of development, Apple offers developers three ways to get their apps running on Vision Pro.
- Compatibility Evaluation
Apple can assist you in ensuring that your visionOS, iPadOS, and iOS apps behave as expected on Apple Vision Pro. Align your app with the newly published compatibility checklist, and then directly request to have your app evaluated on Apple Vision Pro. Apple will send you the evaluation results, along with any relevant screen captures or crash logs.
- Developer Labs
Apple offers support to test and optimize your apps for the infinite spatial canvas.
In these self-directed coding and design labs, you’ll be able to test and optimize your apps on visionOS. Bring your Mac, code, and everything you need to modify, build, run, and test your app on Apple Vision Pro. Experience the labs in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.
- Developer Kit
If you have a fantastic idea for a visionOS app that requires building and testing on Apple Vision Pro, you can apply for an Apple Vision Pro developer kit. By gaining continuous and direct access to Apple Vision Pro, you'll have the opportunity to swiftly build, test, and refine your app, ensuring it delivers incredible spatial experiences in visionOS. Apple will loan you an Apple Vision Pro developer kit to prepare your app for the launch of the new App Store on Apple Vision Pro.
Conclusion
Unity's seamless integration with the visionOS platform enables developers to create engaging, immersive apps for the Vision Pro device using familiar tools and workflows. The engine handles many of the translation challenges through the PolySpatial SDK, making it easier to bring Unity-based experiences to the visionOS app ecosystem. Whether you're upgrading an existing app or building a new one, Unity acts as a bridge between your creativity and Vision Pro's capabilities, letting you build apps that bring the digital and real worlds together in exciting new ways and turn your ideas into incredible spatial experiences.
Note: Images are taken from Apple's documentation and newsroom.
Acknowledgement
This piece was written by Iniyan E from Encora.
About Encora
Fast-growing tech companies partner with Encora to outsource product development and drive growth. Contact us to learn more about our software engineering capabilities.