Module 4

Spatial triggers activate AR content when a user enters a specific area of your location. When Custom Location AR tracking successfully starts, the camera’s Transform updates within the coordinate system of your AR objects. This means we have access to the Snapchatter’s position and rotation with respect to the rest of the AR objects in the Lens: we know how far a Snapchatter is from an AR object and when they are looking at one. This information enables you to create in-Lens interactivity through spatial triggers, which let us build interactive experiences composed of graphics, animations, VFX, SFX, or arbitrary code that runs when we know the Snapchatter is in a certain position or looking in a certain direction.
When creating a Custom Location Lens, begin by thinking about what you want your AR experience to do based on Snapchatters’ position and the direction from which people generally approach the location. Once that’s designed, you can bring simple pieces of geometry into the Lens Studio scene, parent those objects under the location mesh, and position them where you want to set up spatial triggers, interactions, or navigation tips. To create a successful interactive experience, you will want to use code to check when triggers should fire. We will cover distance checks and camera position checks.

Distance Check

For distance checks, you can use vector functions. Using the update event, you can write a function that takes the camera’s world position and the AR object’s world position, and uses vec3.distance() to calculate the distance between the two positions in centimeters. With an if statement, you can then check whether the distance is within a threshold and, if so, execute a block of code.
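The logic above can be sketched in plain JavaScript. The math mirrors what vec3.distance() computes in Lens Studio; the positions and the trigger radius here are illustrative assumptions.

```javascript
// Plain-JS sketch of the distance check described above. In Lens Studio you
// would read these positions inside an UpdateEvent via
// getTransform().getWorldPosition() and call vec3.distance() instead.

// Euclidean distance between two 3D points (Lens Studio units are centimeters).
function distance(a, b) {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Hypothetical world positions for the camera and an AR object, in cm.
const cameraPos = { x: 0, y: 150, z: 0 };
const objectPos = { x: 300, y: 150, z: 400 };

const TRIGGER_RADIUS = 600; // 6 m, an illustrative threshold

if (distance(cameraPos, objectPos) < TRIGGER_RADIUS) {
  // Run the triggered block: play a sound, start an animation, etc.
  console.log("Snapchatter is close enough - fire the spatial trigger");
}
```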

Camera Position Check

For a camera position check, you do something similar to a distance check, but use the vec3.angleTo() or camera.isSphereVisible() functions to determine whether the camera is looking at an object.
For vec3.angleTo(), you need the camera’s forward-facing vector and the vector from the camera to the object of interest. The transform’s forward vector is a read-only property, and you can create the vector from the camera to the AR object by subtracting the camera’s position from the AR object’s position. Then use angleTo() to find the angle between the two vectors in radians (radians = degrees × (π / 180)), and experiment to find how large the angle is when the AR object sits right at the edge of the phone screen, using that as the threshold for your if statement.
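Written out in plain JavaScript, the angle test looks like the sketch below. In Lens Studio you would use transform.forward and vec3.angleTo() instead of these hand-rolled helpers; the vectors and the 30-degree threshold are illustrative assumptions.

```javascript
// Plain-JS sketch of the camera-direction check described above.

function subtract(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function length(v) { return Math.sqrt(dot(v, v)); }

// Angle between two vectors in radians, like vec3.angleTo().
function angleBetween(a, b) {
  return Math.acos(dot(a, b) / (length(a) * length(b)));
}

// Hypothetical camera position, camera forward vector, and AR object position.
const cameraPos = { x: 0, y: 0, z: 0 };
const cameraForward = { x: 0, y: 0, z: -1 };
const objectPos = { x: 100, y: 0, z: -500 };

// Vector from the camera to the object: object position minus camera position.
const toObject = subtract(objectPos, cameraPos);

const angleRad = angleBetween(cameraForward, toObject);
const angleDeg = angleRad * 180 / Math.PI;

// Trigger when the object is within ~30 degrees of screen center (illustrative).
if (angleDeg < 30) {
  console.log("Camera is looking toward the object");
}
```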
For camera.isSphereVisible(), you can grab the cameraComponent and call this method with two parameters: the AR object’s world position and the size of the sphere to test at that location. Usually, 1 is used for a 1 cm sphere. This function abstracts a lot of the math away, so it can be easier to use without having to understand 3D math or how screen and world positions relate.
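To give a feel for what such a visibility test involves under the hood, here is a simplified plain-JS approximation that treats the camera’s view as a cone: the sphere counts as visible when the angle to its center, minus the sphere’s angular radius, falls within the half field of view. A real frustum test like camera.isSphereVisible() also accounts for the rectangular screen and near/far planes; all values here are assumptions.

```javascript
// Simplified stand-in for a sphere-visibility test: treat the camera's view
// as a cone of half-angle halfFovRad and check whether any part of a sphere
// of radius sphereRadius at the given distance falls inside it.
function sphereInViewCone(angleToCenterRad, dist, sphereRadius, halfFovRad) {
  // Angular radius of the sphere as seen from the camera.
  const angularRadius = Math.asin(Math.min(1, sphereRadius / dist));
  // Visible if the cone to the sphere's edge overlaps the view cone.
  return angleToCenterRad - angularRadius <= halfFovRad;
}

// A 1 cm sphere, 100 cm away, 0.5 rad off-center, with a ~0.6 rad half-FOV.
console.log(sphereInViewCone(0.5, 100, 1, 0.6)); // true (visible)
```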

Physics System for User Interactions

For user interactions, you can use the physics system to see if Snapchatters are tapping on an AR object. This involves casting rays from the camera’s screen space and checking whether they hit any physics colliders out in world space. The image below of Lens Studio’s Inspector panel shows a Physics Collider component added to a mesh object; in this example, it’s added to a sphere in the scene. Alternatively, you can turn off “Fit Visual” and size the collider to any shape you want. Remember to turn on “Show Collider” if the collider has no accompanying render mesh. You can also combine several primitive colliders to build up a more complex collider volume.
Note that you have to connect the physics collider to the physics root by assigning the World Settings asset to the collider’s World Settings property.
When using physics to create Lens interactivity, you will need to listen for touch events and create event responses. When you cast a ray into world space to search for physics colliders, the ray starts at the camera, and you must decide how far to cast it. In the code below, we go out to the far plane of the camera; you are not limited to this method, and you can instead hardcode a set distance, say 1000 cm (10 m). We cast the ray from the screen position where the user tapped, then use the physics system to send it out into world space. If there is a hit, you get access to the collider that was hit and can call functions on a component script, or run any other code. To keep in-Lens interactions accurate, identify which collider was hit, and then decide what to do based on that information.
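The geometry behind “cast a ray from the tap position” can be sketched in plain JavaScript. In Lens Studio, converting a screen position to world space and performing the physics ray cast are handled by the engine; the function below only shows how a normalized screen position maps to a ray direction for a simple camera at the origin looking down −z, with all values illustrative.

```javascript
// Illustrative math behind casting a ray from a tap: map a normalized screen
// position (u, v in [0,1], origin at top-left) to a unit ray direction for a
// camera at the origin looking down -z. In a Lens, the engine converts the
// tap's screen position to world space and the physics system casts the ray
// for you; this sketch just makes the geometry visible.
function tapToRayDirection(u, v, verticalFovRad, aspect) {
  const halfH = Math.tan(verticalFovRad / 2);
  const halfW = halfH * aspect;
  // Map [0,1] screen coords to [-1,1] camera-plane coords (y flipped,
  // because screen y grows downward).
  const x = (2 * u - 1) * halfW;
  const y = (1 - 2 * v) * halfH;
  const z = -1; // camera looks down -z in this sketch
  const len = Math.sqrt(x * x + y * y + z * z);
  return { x: x / len, y: y / len, z: z / len };
}

// A tap at screen center produces a ray straight ahead. Scale this direction
// by a chosen distance (the camera's far plane, or a hardcoded 1000 cm) to
// get the ray's end point for the physics ray cast.
const dir = tapToRayDirection(0.5, 0.5, Math.PI / 3, 16 / 9);
console.log(dir); // { x: 0, y: 0, z: -1 }
```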
Before integrating spatial triggers into a project, here are a few tips to keep in mind while designing spatial navigation for your experience:
  • Don’t position spatial triggers behind the user. Moving backwards raises the risk of bumping into furniture, small animals, or other objects.
  • If you have multiple spatial triggers, use visual cues (off-screen locators) to encourage users to explore the AR world around them.

What's Next?