An AR experience is more convincing when virtual objects not only appear to stay in place in the real world, but also appear to interact with it. For this purpose, the device needs to understand the structure of the space around it. By detecting geometry such as planes and feature points, we can place virtual content relative to real surfaces, for example a character on top of a table. Our tracking API uses ARCore or ARKit to provide this spatial understanding of the environment.
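To illustrate how a detected plane can be used to place content, here is a minimal sketch (not part of the tracking API) that builds a world transform resting a virtual object on a plane. It assumes the plane is reported as a center point and a normal vector, and uses NumPy; the function name is hypothetical.

```python
import numpy as np

def placement_transform(plane_center, plane_normal):
    """Build a 4x4 world transform that rests a virtual object on a plane.

    The object's up axis (+Y) is aligned with the plane normal and its
    origin is placed at the plane center. (Illustrative sketch only.)
    """
    up = plane_normal / np.linalg.norm(plane_normal)
    # Pick any vector not parallel to the normal to derive a full basis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, up)) > 0.9:
        helper = np.array([0.0, 0.0, 1.0])
    right = np.cross(helper, up)
    right /= np.linalg.norm(right)
    forward = np.cross(right, up)
    transform = np.eye(4)
    transform[:3, 0] = right    # X axis of the object
    transform[:3, 1] = up       # Y axis aligned with the plane normal
    transform[:3, 2] = forward  # Z axis completing the basis
    transform[:3, 3] = plane_center
    return transform

# Place a character on a table surface detected one meter in front of the user.
T = placement_transform(np.array([0.0, -0.4, -1.0]), np.array([0.0, 1.0, 0.0]))
```

In practice the plane's extent and orientation come from the underlying ARCore or ARKit plane detection; the math above only shows how such data translates into an object placement.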
Camera tracking lets a device estimate its motion through space from the camera image. It does this by following individual feature points across multiple frames. The resulting motion is applied to a virtual camera in a 3D scene, and virtual objects added to that scene are overlaid onto the camera image. Because the virtual camera matches the movement of the real device, the virtual objects appear to stay in place.
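The core idea can be sketched with a simple pinhole camera model: the tracked device pose (camera-to-world) is inverted into a view matrix, and virtual content is projected through it. This is an illustrative sketch with hypothetical function names, not the actual tracking implementation.

```python
import numpy as np

def view_matrix(camera_pose):
    """Invert the tracked camera-to-world pose to obtain the
    world-to-camera view matrix used to render virtual content."""
    R = camera_pose[:3, :3]
    t = camera_pose[:3, 3]
    view = np.eye(4)
    view[:3, :3] = R.T
    view[:3, 3] = -R.T @ t
    return view

def project(point_world, camera_pose, f=500.0):
    """Pinhole projection (camera looking down +Z) of a world-space point
    through the virtual camera that mirrors the real device."""
    p = view_matrix(camera_pose) @ np.append(point_world, 1.0)
    return f * p[0] / p[2], f * p[1] / p[2]

# A virtual point 2 m in front of the starting pose projects to the image center.
start_pose = np.eye(4)
u0, v0 = project(np.array([0.0, 0.0, 2.0]), start_pose)

# Move the device 10 cm to the right: the projection shifts left,
# exactly as a real object at that position would.
moved_pose = np.eye(4)
moved_pose[0, 3] = 0.1
u1, v1 = project(np.array([0.0, 0.0, 2.0]), moved_pose)
```

Because the virtual point moves across the image exactly as a fixed real object would, it appears anchored in the world.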
Currently, the camera tracking process is provided by ARCore or ARKit.
We also provide a feature called relocalization, which recognizes an image or a previously scanned environment to enable persistent and shared AR experiences.
To achieve this, we provide a 3D Scanner Application that lets developers scan environments and create maps of the locations where they want to build AR experiences. The tracking API can then be used to perform relocalization against these scanned environments.
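At a high level, relocalization works by matching features observed in the current camera frame against features stored in the scanned map. The sketch below, with hypothetical names and thresholds, shows the idea using nearest-neighbor matching of feature descriptors; a real system additionally verifies the matches geometrically to recover the device pose.

```python
import numpy as np

def relocalize(frame_descriptors, map_descriptors,
               max_distance=0.3, min_matches=10):
    """Count nearest-neighbor descriptor matches between the current
    frame and a stored map; report success when enough features agree.
    (Conceptual sketch; thresholds are illustrative.)"""
    matches = 0
    for d in frame_descriptors:
        distances = np.linalg.norm(map_descriptors - d, axis=1)
        if distances.min() < max_distance:
            matches += 1
    return matches >= min_matches

# Toy example: a "map" of 12 distinct descriptors.
map_desc = np.eye(12)

# A frame observing the same features is recognized...
seen_again = relocalize(np.eye(12), map_desc)

# ...while a frame with unrelated features is not.
elsewhere = relocalize(np.eye(12) + 5.0, map_desc)
```

Once the environment is recognized, content anchored during the scan can be restored in the same physical location, which is what makes experiences persistent and shareable across devices.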
For more details on how to use and enable relocalization, see Concept of Relocalization.