
Module 5

STEP-BY-STEP GUIDE
In this step-by-step guide, we’ll recreate the Lens you see below. To follow along, you will need the Snapchat app to scan your location and Lens Studio to create your Lens. For those who want to unpack the process further, a copy of the finished project will also be available. Let’s get started! 
Note: This course provides a Lens Studio project file so you can follow along. Replace the Location ID with your own to test the Lens against your scanned location.

Scanning your Custom Location

*The Custom Location Creator requires LiDAR (available on iPhone 12 Pro and later Pro models).

Snapchat’s Custom Location Creator

When you arrive at your location, open the Snapchat app and scan the Snapcode of the Custom Location Creator Lens to begin scanning your chosen location.

Scanning

  • Once the Lens launches, tap ‘Create New’ to begin the creation process. For first-time users, keep settings on default.
  • Begin scanning the location. Try to scan the entire object/space (or as much of it as possible).
  • To finish scanning, tap ‘Done.’ Otherwise, once the mesh reaches maximum capacity, the app will automatically move to the next step.

Capturing Perspective

  • Determine the location from which Snapchatters are most likely to start engaging with your Lens. Go to those locations and wave your LiDAR-enabled phone to capture as many perspectives of the location as possible.
  • Once you get the minimum required captures, you can move to the next step.

Verify the Results

Now that you’ve captured the mesh and perspectives, you will need to verify the results. In this phase, you can see the scanned mesh overlaid on the camera view of the location. If you are satisfied with the overall scan, tap ‘Done,’ assign a title to the scan, and start uploading it.

Note: If the scanned mesh didn’t overlay well on the camera view, repeat the process until you get better results.

[OPTIONAL] Supporting Capture using Third-Party Software

Keep in mind, Snapchat’s Custom Location Creator Lens does have some limitations. The scanned mesh is stored in the cloud and cannot be exported as a .fbx or .obj file for use as a reference in 3D-modeling software like Maya, Cinema 4D, or Blender. And because the Lens only scans up to three meters in height, scanning larger structures requires a workaround.
To overcome these limitations, you can use third-party LiDAR-scanning applications like Polycam to export a mesh that covers larger areas and can be used in third-party modeling software. Let’s take a look at how it works:
  1. While at your chosen location, open the Polycam app. To start a new scan, tap the ‘+’ icon (next to the gear icon) to initiate scan mode. Select ‘LiDAR Mode’ and tap the ‘Record’ button to start scanning. Just as in the scanning process with Snapchat’s Creator Lens, scan your location from multiple perspectives by moving to a number of different angles and waving your phone.
  2. Blue mesh indicates areas that do not have full coverage. For a high-quality capture, try to get rid of as much blue as you can by capturing as many details as possible. Note that Polycam automatically merges any remaining blue areas at the end.
  3. To finish scanning, tap ‘Stop Recording,’ then ‘Process.’ You will be asked to select between ‘Space’ or ‘Object.’ If your location is a statue or monument, select ‘Object.’ If your location is a space like a storefront, select ‘Space.’

Creating a New Project and Setting Up Custom Location AR

  1. Open the latest version of Lens Studio on your computer. 
  2. Select ‘New Project.’ 
  3. In the Objects panel, click the ‘+’ sign and search “custom location.”
    Tip: Make sure to use the Objects panel — DO NOT use the Resource panel, as the Custom Location Object is different from the Custom Location Resource.
  4. In the pop-up box, enter your custom location ID and hit ‘Save.’ If this is the first time the ID is being used, it may take a couple of minutes before you will be able to see the mesh.
    Tip: You can find the ID within the Custom Location Creator Lens.
  5. Download Complete: It may take a few minutes the first time, but once the Location ID loads, the scanned mesh will automatically be rendered in the Scene panel and its associated resources will be auto-generated: Location Asset, Location Mesh, Location Material, and Camera Script.
  6. Assign Marker Reference: For review purposes, Snapchat requires that Custom Landmarker Lenses include an Image Marker set to an image of the location itself. In the Objects panel, click the ‘+’ sign and search “image tracking.” Select ‘Image Tracking’ from the drop-down menu and hit ‘Enter.’
  7. Choose an image of the actual location from the files on your computer. You should now see the image marker object appear in the hierarchy within the Objects panel.

Designing UI for Custom Hints in Lenses

Tip: For the animations we use in our hints, download and import HintResources.lso.
Our custom hints use Lens Studio’s ScreenTransform-based UI. To get started, add a new Screen Transform in the Objects panel. This will automatically add an Orthographic Camera and Frame Region to render the UI objects. Rename the Frame Region to “Hints Frame Region.” We will use this first Screen Transform for the Loading Message.

Loading Message

You can always design your own loading message. In this course, we are animating a ring around an image of our location. To accomplish this setup, add two Screen Images and one Screen Text as children to your new Loading Message object. We will name them “Loading Ring,” “Location Image,” and “Loading Text.”
  1. Loading Ring: Set the Image Component Texture to “LoadingRingTexture” and add a TweenScreenTransform Script Component. Ensure the following are set: Play Automatically, Loop, Rotation Offset -150, Additive, 1s Time, 0s Delay, Linear, Out.
  2. Location Image: Set the Image Component texture to “LocationIcon [REPLACE_ME].” You should replace this with a circular image of your own. The position of this image should match the loading ring, so that the ring appears to rotate around the Location Image.
  3. Loading Text: Fill in the Text Component text to “Loading [Lens Title]” or a message of your choice. In our Lens, we added a reminder to “Please be aware of your surroundings.”
    Tip: For additional guidance about positioning Screen Transform and styling Screen Text, check out the linked Docs articles.

Loading Failed

To indicate loading failed, we designed an ‘X’ icon with an error message. To create it:
  1. Duplicate the Loading Message ScreenTransform and rename it to “Loading Failed”.
  2. Delete the “Loading Ring” Screen Image and rename the children to “Failed Icon” and “Failed Text.”

    Failed Icon: Set the Image Component texture to “FailedTexture” or one of your own.
    Failed Text: Fill in the Text Component text to something like “Location data failure. Try again later.”

Go to Location

For this step, we designed footsteps walking toward our Location Image. 
  1. Duplicate the Loading Message Object and rename the copy “Go to Location.” Delete the Loading Ring child.
  2. Keep the Location Image and add a new ScreenImage for the footstep animation named “Go to Image” and rename the text to “Go to Text.”
Go to Image: Set the Image Component texture to “GoToAnimation,” or choose one of your own. Position it so that the footsteps lead toward your location image.
Go to Text: Set the Text Component text to something like “Go to [your custom location] to experience this Lens.”

Tracking Setup

For the tracking setup design, we chose a phone icon animating from side to side in front of the Location Image.
  1. Duplicate the Go to Location Object and rename the copy “Tracking Instruction.” Keep the Location Image and rename the other children “Point at Image” and “Point at Text.”
Point At Image: Set the Image Component texture to “PointAtAnimation,” or choose one of your own. Position it so that the phone is pointing at your location image.
Point At Text: Set the Text Component text to something like “Point at [location name] to start tracking.”

Tracking Troubleshooting

We recommend displaying some additional text to offer the Snapchatter some tips if they haven’t been able to establish tracking. 
  1. Duplicate the Tracking Instruction Scene Object and rename the copy “Tracking Troubleshooting.” Duplicate the Screen Text inside so there are two Screen Texts.
  2. Rename the texts “Troubleshooting Header” and “Troubleshooting List.”
Troubleshooting Header: Set the Text Component text to something like “Having trouble?”
Troubleshooting List: Set the Text Component text to helpful tips like “Try another angle,” “Stand 3 meters back,” “Move slowly,” or “Get a clear view.”

AR Interaction Hint

Finally, if your Lens includes interactions, you can add your own custom hints to be displayed whenever location tracking starts. This will help guide Snapchatters through your AR experience!
  1. Duplicate the “Tracking Instruction” Scene Object and rename the copy “AR Interaction Hint.” Delete all of the children except a single Screen Text.
  2. Rename the text and write a hint that is applicable to your Lens. In this example, the creative includes interactive heart assets, so we’ll write “FIND HEARTS.”

Fade in With Tween

To allow each of the hints to fade in nicely, we’ll use Lens Studio’s Tween Helper Scripts. 
  1. Add “Tween” from the Objects panel. 
  2. Move the Tween Manager to the top of the Objects panel.
  3. Now we can add the animations. Add a Script Component with “TweenAlpha.js” to each Screen Image and Screen Text object. Change the Tween Alpha drop-down menu to “On Start” and configure the input time to 0.50. The rest of the settings can remain on their default.
  4. To automatically fade out the “AR Interaction Hint” object after a few seconds, drag a second Tween Alpha script from the Resources panel onto that object’s Inspector. Configure this one to start at 1.00 and end at 0.00, and change the delay to a few seconds to give Snapchatters a chance to read the instructions.
Next, we’ll create a script that turns each Screen Transform hint on and off according to the Lens progression. The tweens you set up will play automatically, but if you don’t set the images and text colors to be transparent from the outset, you will be able to see a flicker of the hint at full opacity before the Fade In starts. To fix this, you can either manually set all of the image and text colors to transparent white when you are finished positioning them, or you can initialize each of these objects to transparent via script. We’ll show you how in the next section!

Lens Hint Functionality

To connect hint panels to the tracking process, we can create a custom script with functions for each state, then assign our custom functions to the corresponding property of the Device Location Tracking Component.   
Important note: The Device Location Tracking Component property expects a single function. If you need multiple functions to be called, refer to the Wrap Function code snippet in the Designing UI for Custom Hints module above. 
Create script: To get started, let’s create a script with functions for each hint state. Click the ‘+’ button in the Resources panel and enter “Script”; then, select your newly created script and rename it. For this example, we will name our script “HintController.js.”
Add script to the scene: Use the ‘+’ button in the Objects panel to add an Empty Scene Object to the scene. Rename the new Scene Object “Hint Controller” and select it. Next, drag the “HintController.js” script from the Resources panel to the Inspector panel. This will automatically attach your script to a Script Component on the Hint Controller Scene Object.
Open the script: Now it’s time to start writing the actual script! To open the Script Editor, locate HintController.js in the Resources panel, then click the drop-down menu that says ‘Open In’ from the top-right corner of the Inspector panel and choose ‘Open Built-in Editor.’
Set up script inputs: The script needs references to the Device Location Tracking Component in the scene as well as to each of the UI hints you plan to show and hide. First, add these inputs to your script and save. Then, select the ‘Hint Controller’ object in the Objects panel and assign your new script inputs in the Inspector panel.
Initialize UI hints: Before we show the loading message, we want to make sure all of the hints are turned off. We also want to make sure they start transparent, so they can fade in nicely when the time comes. Disabling all of the hints on Awake works because we changed the Tween Alpha scripts to play at Start in the previous module.
“disableAllHints” will disable all of the inputs we created.
“initAlpha” will search the children of all the inputs we created for any image and text components and set their associated Image.mainPass.baseColor and Text.textFill.color to a transparent version of their original color. That way, Tween Alpha can kick in nicely.
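As a sketch of the two helpers, the logic might look like this in plain JavaScript. The hint, image, and text objects are simple stand-ins for the Lens Studio components (property names like images and texts are assumptions for illustration); the real Lens Studio calls are noted in comments.

```javascript
// Disable every hint Scene Object so nothing shows before loading starts.
function disableAllHints(hints) {
  hints.forEach(function (hint) { hint.enabled = false; });
}

// Return a copy of a color with alpha set to 0, keeping the original RGB
// so Tween Alpha can fade it back to full opacity later.
function makeTransparent(color) {
  return { r: color.r, g: color.g, b: color.b, a: 0 };
}

// Make every image and text in each hint start transparent.
function initAlpha(hints) {
  hints.forEach(function (hint) {
    hint.images.forEach(function (img) {
      // in Lens Studio: img.mainPass.baseColor = transparent copy
      img.baseColor = makeTransparent(img.baseColor);
    });
    hint.texts.forEach(function (txt) {
      // in Lens Studio: txt.textFill.color = transparent copy
      txt.color = makeTransparent(txt.color);
    });
  });
}
```

In the actual script, both functions would run on Awake, before the Start-time Tween Alpha scripts kick in.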

Loading Message Functionality

To enable and disable the corresponding UI hint, write two custom functions, “onLoadingSuccess” and “onLoadingFail.” Then, set the Device Location Tracking Component’s “onLocationDataDownloaded” and “onLocationDataDownloadFailed” properties to our custom functions. Track whether loading is complete by declaring a variable “loaded” as false on Awake and setting it to true in onLoadingSuccess.
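A minimal sketch of this loading-state logic, with the hint objects as plain stand-ins for the Screen Transform hints built earlier:

```javascript
// Track loading state and toggle the Loading Message / Loading Failed hints.
function makeLoadingState(loadingMessage, loadingFailed) {
  var state = { loaded: false };
  state.onLoadingSuccess = function () {
    state.loaded = true;
    loadingMessage.enabled = false;
  };
  state.onLoadingFail = function () {
    loadingMessage.enabled = false;
    loadingFailed.enabled = true;
  };
  return state;
}
// In Lens Studio, these get assigned to the component's callback properties:
//   script.deviceLocationTracking.onLocationDataDownloaded = state.onLoadingSuccess;
//   script.deviceLocationTracking.onLocationDataDownloadFailed = state.onLoadingFail;
```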

Location and Tracking Setup

When loading is complete but tracking still has not been established, we will give the Snapchatter a “Go to” location hint if they’re not within range to start tracking. If they are within range, the hint will tell them to “Point at” the location. 
  • Write an UpdateTrackingMessage() function that checks if the DeviceLocationTrackingComponent.locationProximityStatus is within range and enables the associated UI hint. Then, bind this function to the Update event.
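A sketch of that update check. The "WithinProximity" string mirrors the component's proximity status values (an assumption for illustration), and the hint objects are plain stand-ins:

```javascript
// Choose between the "Go to" and "Point at" hints based on proximity.
function updateTrackingMessage(proximityStatus, goToHint, pointAtHint) {
  var withinRange = proximityStatus === "WithinProximity";
  goToHint.enabled = !withinRange;
  pointAtHint.enabled = withinRange;
}
// In Lens Studio, this would read
// script.deviceLocationTracking.locationProximityStatus and be bound to
// an Update event: script.createEvent("UpdateEvent").bind(...)
```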

Troubleshooting Tips if It’s Been Too Long

To improve our instructions even more, let’s show some tips if the Snapchatter is unable to find the AR tracking location after a few seconds. Declare variables timeSpent and maxTimeExpected. Write a new function pickTrackingInstruction() that adds to timeSpent and compares it to maxTimeExpected to decide whether to enable either the “Point at” hint or the troubleshooting tips. Then, write a hideTrackingInstruction() function that resets timeSpent to zero and hides both the “Point at” and troubleshooting tips.
For the timer to work, edit the previous UpdateTrackingMessage() function to replace enabling and disabling script.trackingIntroMessage with calls to pickTrackingInstruction() and hideTrackingInstruction() instead.
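The timer logic above can be sketched like this, with deltaTime passed in explicitly (in Lens Studio you would read getDeltaTime() inside an Update event callback) and hints as plain stand-in objects:

```javascript
// Track how long the Snapchatter has been trying to establish tracking.
function makeTrackingTimer(maxTimeExpected) {
  var timeSpent = 0;
  return {
    // Accumulate time each frame and decide which hint to show.
    pickTrackingInstruction: function (deltaTime) {
      timeSpent += deltaTime;
      return timeSpent > maxTimeExpected ? "troubleshooting" : "pointAt";
    },
    // Reset the timer and hide both hints (e.g. once tracking starts).
    hideTrackingInstruction: function (pointAtHint, troubleshootingHint) {
      timeSpent = 0;
      pointAtHint.enabled = false;
      troubleshootingHint.enabled = false;
    }
  };
}
```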

Rear Camera and Safety Hint

Using the rear camera is the easiest way to set up tracking. If a Snapchatter isn’t using the rear camera, prompt them to swap cameras by adding a callback to the CameraFrontEvent and using the Hints Component’s built-in lens_hint_swap_camera.

Interaction Hint When Location Is Found

Once the location is successfully tracking in AR, we can disable all location hints. If your Custom Location Lens has interactions, this is a good time to give the Snapchatter a hint of what to do.
Make the API: Add an “onLocationFound” function to the script’s api that calls the disableAllHints function we wrote before and turns on the AR Interaction Hint, if there is one. This function can now be referenced by Behavior Components.
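A sketch of this controller API, with hints as plain stand-in objects:

```javascript
// Build the Hint Controller API: hide all location hints, show the AR hint.
function makeHintControllerApi(allHints, arInteractionHint) {
  var api = {};
  api.disableAllHints = function () {
    allHints.forEach(function (hint) { hint.enabled = false; });
  };
  api.onLocationFound = function () {
    api.disableAllHints();
    if (arInteractionHint) {
      arInteractionHint.enabled = true;
    }
  };
  return api;
}
// In Lens Studio, exposing the function on script.api is what lets a
// Behavior's "Call Object API" response find it by name:
//   script.api.onLocationFound = api.onLocationFound;
```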

Behavior Trigger

  1. Add a “Behavior”  by clicking the “+” in the Objects Panel.
  2. Select the new Behavior object.
  3. In the Inspector Panel, set Trigger to “Location Event” with Event type “Location Found” with Location Tracking set to the Camera’s Device Location Tracking.
  4. Set Response Type to “Call Object API.” 
  5. Set Target Type to “Script API”, and drag the Hint Controller from the Objects Panel into the Script Component input. 
  6. Finally, set Call Type to Call Function, and Function Name to “onLocationFound.”

Adding AR Content to the Scene

  1. Import Assets: Import the 3D assets that you want to appear at your location. For this course, we opened the Asset Library and imported “Love Jungle” by Clara Bacou.
  2. Place Assets: To position 3D assets in the scene, drag the 3D assets in as children of the Location Mesh. This way, they will render correctly within the Custom Location AR coordinate space.
  3. Set Up Occluders: If your 3D assets intersect with the ground or the Location Mesh, use an occluder (a mesh with an Occluder material) to make sure the 3D assets render correctly in AR. We added a plane, positioned it at the ground, and placed an occluder material on it.

WARNING: Do not move, scale or rotate the “Location Mesh” or its child “Mesh” Scene Objects. Manipulating transform values will cause a misalignment within the tracked AR Lens experience.

Revealing Content When AR Tracking Starts

Before the AR Tracking has been established, all location-based content should be hidden. Once tracking starts, we can show the AR Content, but the Location Mesh itself should remain disabled. It exists solely as a reference for working in Lens Studio.
  1. Add “Behavior” Object 
    1. Click the “+” sign in the Objects Panel to add another Behavior object.
    2. Name it “Behavior AR”
  2. Disable Location Mesh on Awake 
    1. Set Trigger to “On Awake”
    2. Set Response Type to “Set Enabled”
    3. Set Target to the Location Mesh, and Action to “Disable.”
  3. Hide Custom Location AR Content on Awake Drag another Behavior Script from the Resources Panel onto “Behavior AR.” This one will disable the AR Content on Awake. Use the same settings mentioned above, but this time set the Target to the parent of your AR Content.
  4. Show Custom Location AR Content once tracking starts: Drag another Behavior script from the Resources panel onto the “Behavior AR” object. This one will be used to show the AR content once the Custom Location AR is tracking. Set Trigger to “Location Event” and Event Type to “Location Found.” Drag the main camera from the Objects panel into the “Location Tracking” input. Set Response Type to “Set Enabled,” Target to your AR content, and Action to “Enable.”
Tip: If you need to debug the Location Mesh or AR Content in Lens Studio Preview, you can temporarily disable all of the Behavior script functionality by unchecking “Behavior AR” in the Objects Panel.
5. Smooth the Transition with Animation: If you don’t have an entrance animation for your 3D content, you can use tweens to smooth the transition. We will scale up each of the meshes in our example by dragging a TweenTransform script from the Resources panel onto each of our 3D assets.
Change “Type” to “Scale” and change the “End” values to match the scale you have in the Transform component.

Adding Spatial Triggers

One way to incorporate interactivity into your AR experience is by responding to where the Snapchatter is or what they are looking at. As an example, we will make the hearts in our scene burst with confetti whenever the Snapchatter aims at one from nearby.
Prep Interaction Script
  1. Create Script Press “+” in the Resources Panel and search for Script. Rename the script “InteractionTrigger”
  2. Edit Script Select the new “InteractionTrigger” and in the Inspector Panel choose “Open in” to edit the script. 
  3. Add inputs to trigger behavior based on the Snapchatter’s position. To do so, we will need a reference to the main Camera.
    Add a float input named “distanceThreshold” to limit how close the Snapchatter needs to be to the object.
    Add a second float input named “angleThreshold” to limit how strictly the Snapchatter needs to be looking at the object.
    Add a third float input “cooldownTime” to specify how long you want to wait before allowing the trigger to happen again.
    Lastly, add a string “triggerName” to create a Custom Trigger name that can be used by Behavior Scripts.
  4. Setup Transform Variables To compare the Snapchatter position to the Object position, first get their transforms using getSceneObject().getTransform() and save these as “cameraTransform” and “arTargetTransform”.
  5. Setup Update Function: Add a “checkForTrigger()” function that is bound to the UpdateEvent.
  6. Get Positions: Inside checkForTrigger(), first get the positions of the Snapchatter and the target using Transform.getWorldPosition() and store these as “arTargetPos” and “cameraPos.”

Aim Check

Get Direction to Object: Inside checkForTrigger(), subtract the camera’s position from the target’s position to get a vector pointing from the camera to the object. Name this “vectorCamToObject.”
Calculate Angle to Object: To measure how closely the Snapchatter is aiming at the object, use .angleTo() between the camera’s forward vector and vectorCamToObject. Note that the result is in radians.
Compare Angle to Threshold: Now we can compare that angle to the angle threshold we defined for the trigger. To do this, we also have to convert our threshold to radians: radians = degrees × (π / 180).
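The aim check can be sketched as plain JavaScript, with vectors as [x, y, z] arrays. In Lens Studio you would use vec3 and its built-in angleTo() instead of the angleBetween() helper here:

```javascript
// Convert an angle threshold from degrees to radians.
function degToRad(degrees) {
  return degrees * (Math.PI / 180);
}

// Angle between two vectors in radians, from the dot product
// (this is what angleTo() computes).
function angleBetween(a, b) {
  var dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  var lenA = Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
  var lenB = Math.sqrt(b[0] * b[0] + b[1] * b[1] + b[2] * b[2]);
  return Math.acos(dot / (lenA * lenB));
}

// True when the camera's forward vector is within angleThreshold degrees
// of the direction from the camera to the object.
function isLookingAtTarget(cameraForward, vectorCamToObject, angleThresholdDegrees) {
  return angleBetween(cameraForward, vectorCamToObject) <= degToRad(angleThresholdDegrees);
}
```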

Distance Check

Disregard Height Difference: Since the Snapchatter holds the camera at head height while the target may sit closer to the ground, ignore the vertical (y) difference between the two positions when measuring distance.
Calculate Distance to Object: To get how far the Snapchatter is from the object, use .distance() between the camera’s position and the target’s position.
Compare Distance to Threshold: Now we can compare that distance to the distanceThreshold we defined for the trigger.
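A sketch of the distance check with positions as [x, y, z] arrays, assuming y-up coordinates (as in Lens Studio). The y difference is dropped so the Snapchatter's height above a ground-level target doesn't inflate the distance:

```javascript
// Distance on the horizontal plane only, ignoring the height (y) difference.
function horizontalDistance(cameraPos, targetPos) {
  var dx = targetPos[0] - cameraPos[0];
  var dz = targetPos[2] - cameraPos[2];
  return Math.sqrt(dx * dx + dz * dz);
}

// True when the Snapchatter is within distanceThreshold of the target.
function isCloseToTarget(cameraPos, targetPos, distanceThreshold) {
  return horizontalDistance(cameraPos, targetPos) <= distanceThreshold;
}
```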

Finish Check

Decide Trigger: If lookingAtTarget is true AND closeToTarget is true, call a new function onTrigger().
Prevent Duplicates: To prevent the trigger from being sent every frame that we are close to the object, we will also add a “triggered” bool to the script that starts as false and gets set to true when the trigger is sent. If triggered is already true, we can skip the checks.
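Putting the decision and the duplicate-prevention flag together, a sketch might look like this:

```javascript
// Combine the aim and distance checks with a "triggered" flag so the
// trigger fires only once until it is reset.
function makeTriggerCheck(onTrigger) {
  var triggered = false;
  return {
    // Called each frame with the results of the aim and distance checks.
    check: function (lookingAtTarget, closeToTarget) {
      if (triggered) {
        return; // skip the checks while a trigger is still "live"
      }
      if (lookingAtTarget && closeToTarget) {
        triggered = true;
        onTrigger();
      }
    },
    // Called after the cooldown so the trigger can fire again.
    reset: function () {
      triggered = false;
    }
  };
}
```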

Add Trigger Reaction

Send Trigger to Behavior Scripts: Inside of onTrigger(), we can send a custom trigger to the Behavior scripts. Check whether the triggerName input has been filled in, then call sendCustomTrigger().
Other Behavior scripts must use the same triggerName to react appropriately.
Reset after Cooldown: To be able to trigger the same reaction multiple times, check if cooldownTime has been set, then create a “DelayedCallbackEvent” that sets triggered back to false after the specified cooldownTime.
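A sketch of onTrigger(), with the Lens Studio pieces injected as callbacks so the logic is self-contained: "send" stands in for global.behaviorSystem.sendCustomTrigger, and "delay" stands in for a DelayedCallbackEvent:

```javascript
// Build the onTrigger handler: send the custom trigger, then schedule
// a reset of the "triggered" flag after the cooldown.
function makeOnTrigger(triggerName, cooldownTime, send, delay, state) {
  return function onTrigger() {
    if (triggerName) {
      send(triggerName);
    }
    if (cooldownTime > 0) {
      // After cooldownTime seconds, clear the flag so it can fire again.
      delay(cooldownTime, function () {
        state.triggered = false;
      });
    }
  };
}
// In Lens Studio the delayed reset would look roughly like:
//   var evt = script.createEvent("DelayedCallbackEvent");
//   evt.bind(function () { triggered = false; });
//   evt.reset(script.cooldownTime);
```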

Reaction Example: Particle Burst

To showcase the spatial triggers, we will put an InteractionTrigger script on each of the hearts in our scene so that when a Snapchatter aims at a heart, sparkly particles burst out of it.
  1. Add InteractionTriggers to Target Objects: Drag the InteractionTrigger script onto each of the Heart objects. Fill in the script inputs. Drag the camera into the camera field, set cooldownTime to 2, and write a specific triggerName. We named ours trigger + the name of the object.
  2. Place Particles in Target Objects Drag the “GPU Particle Sparkle” Prefab from the Resources Panel onto each heart in the Objects panel so that the particles are a child of the heart object. Use the Scene Panel to confirm the position looks right.
  3. Hookup the event Click on each instance of the “GPU Particle Sparkle” prefab and type the same triggerName into the ParticleHandler script that is used in the associated Heart object’s InteractionTrigger script. This way, the particles will automatically react to the trigger behavior on the heart!

Test and Publish Lens

Testing a Lens: When your Lens is ready to be tested on your device, make sure that your Lens Studio account is connected with your Snapchat account. If you have not connected Lens Studio to Snapchat, please visit the Pairing to Snapchat guide.
If you are submitting a sponsored Lens, make sure that you are signed in to the Business Account that will be publishing the Lens. For additional instructions please check out this page.

Press “Send to Snapchat.” Interact with your Lens to make sure the AR content is tracking to your location and responds as expected. Ensure the Lens is exactly what you envisioned before you submit it to Snapchat.
Tip: Make a clear icon for the Lens to let Snapchatters know what the Lens will do. Clear stills or a short video of the Lens will help Snapchatters understand the Lens and improve Lens performance.

CONCLUSION

By completing this course, you will have a better understanding of how location-based augmented reality offers the possibility to experience the world around us in more engaging, innovative, and exciting ways. And with Snapchat’s technology, you can transform any location into a custom AR experience — with endless creative possibilities.
The best part? Creating a Custom Location Lens doesn’t stop at the tools we’ve covered in this course! To make your Custom Location AR experience even more exciting, you can use Hand Tracking to create an interaction at the location, or VoiceML to let users discover animations with their voice. Or, if you create a sponsored Lens to promote a local business, consider adding shopping integrations to drive Snapchatters to the business’s website, loyalty program, or storefront. And finally, if you want to use what you’ve created and bring it to audiences outside of our platform, you can leverage Camera Kit, Snapchat’s turnkey SDK solution.
Find More Courses