Until now, using image-to-image networks in Lens Studio was just out of reach. Models were simply too large for mobile deployment, which prevented creators from getting the most out of their Lenses. 


Now we’ve introduced a new method that demonstrates how to train and compress image-to-image networks, such as CycleGAN and Pix2Pix, for better deployment on your mobile devices. By changing the way you prepare the dataset for your image output, we’ve reduced and optimized the file size for your Snap Lenses. This new method builds on CAT, a paper written by our own Snap Research team, which you can read in full here.
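For a rough idea of what compressing one of these networks can involve, here is a minimal, hypothetical sketch of teacher-student distillation in PyTorch: a small "student" generator learns to reproduce the output of a large, pre-trained "teacher," which is one common way to shrink an image-to-image model for mobile. The architecture, channel widths, and loss below are illustrative placeholders, not the actual CAT training code.

```python
# Hypothetical sketch: distilling a large image-to-image generator into a
# much smaller one. All module names and hyperparameters are placeholders.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    """Toy encoder-decoder generator; `width` controls model size."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, width),
            conv_block(width, width),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# A wide, pre-trained "teacher" and a much narrower "student".
teacher = Generator(width=64).eval()   # assume weights are already trained
student = Generator(width=8)           # far fewer channels per layer

optimizer = torch.optim.Adam(student.parameters(), lr=2e-4)
distill_loss = nn.L1Loss()

# One illustrative distillation step: the student mimics the teacher's
# output, so it can later be exported at a fraction of the file size.
x = torch.rand(1, 3, 256, 256) * 2 - 1  # stand-in for a training image
with torch.no_grad():
    target = teacher(x)
loss = distill_loss(student(x), target)
loss.backward()
optimizer.step()
```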

This update makes it even easier for developers to build immersive Lens experiences using Image-to-Image Translation. Effects such as collection style transfer, object transfiguration, season transfer, and photo enhancement can now be compressed to under 1 megabyte for use on mobile devices. The new model is also templated, so you can drop it directly into Lens Studio and run it in real time across different devices.
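To give a sense of how a compressed model like this ends up in a Lens, here is a hedged sketch of the export step, assuming a PyTorch workflow: the trained generator is serialized to ONNX, a format Lens Studio's SnapML can import. The stand-in module, input resolution, opset version, and file name are all assumptions for illustration.

```python
# Hypothetical export step: serialize a trained, compressed generator to ONNX
# so it can be imported into Lens Studio as a SnapML model.
import torch
import torch.nn as nn

# Stand-in for the compressed generator trained earlier.
generator = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(8, 3, kernel_size=3, padding=1),
    nn.Tanh(),
).eval()

dummy_input = torch.rand(1, 3, 256, 256)  # example camera-frame resolution
torch.onnx.export(
    generator,
    dummy_input,
    "compressed_generator.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```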


We’re not only excited about the new Image-to-Image Translation model — we’re thrilled by the opportunities it presents for creators like you. Transition effects like day-to-night, black-to-white, summer-to-winter, and painting-to-photo are just a few of the incredible ways the Snap AR community can use machine learning to their creative advantage.

This new model is a perfect example of how we're empowering our community to use SnapML to develop their own models for creating incredible Lenses. If you’d like to learn more about Image-to-Image Translation and get the training code to try it yourself, click here. Or, to see other ML Lens Templates, including Style Transfer, which this latest release makes use of, click here.