How to Use DeepAR For AR Effects on Amazon IVS Live Streams

Written by amazonivs | Published 2022/05/27


We’re in the era of widespread video communications in our daily personal and professional lives.
Now that people are more accustomed to using video technology, AR stands to dial up both the functionality and fun factor of the medium.
From facilitating realistic makeup and beauty effects to virtual accessory try-ons and real-time background replacements, the powerful multiplatform SDK from DeepAR is being used by almost 8,000 developers worldwide for web-based and mobile applications.
The startup’s proprietary AR technology allows companies of all sizes to add AR effects to any iOS, Android, macOS or HTML5 application with just a few lines of code, an integration process that takes hours, not days. Once created, AR assets can be reused automatically for any platform.
According to DeepAR’s research, ecommerce companies that adopt AR typically see a 15-25% increase in clickthrough rate on product pages, up to 120% longer customer dwell time, more engaged customers who are twice as likely to make a purchase, and double the conversion rate.
The following details how the DeepAR Android SDK can be integrated with Amazon IVS for AR-enhanced live streams. 

Requirements

First, we are going to need to set up a few things:
1. Set up an AWS account.
2. Set up IAM permissions.
3. Create an Amazon IVS channel.
Once you have a channel created, you are going to need two values from it:
a. Ingest server
b. Stream key

Integration Steps

We will showcase the Amazon IVS and DeepAR integration with a simple Android app that shows a camera preview and lets you add fun AR masks and filters. To follow along with this tutorial, clone this GitHub repo.
Custom image-input sources allow an application to provide its own image input to the Amazon IVS broadcast SDK, instead of being limited to the preset cameras or screen share.
A custom image source can be as simple as a semi-transparent watermark or static "be right back" scene, or it can allow the app to do additional custom processing like adding beauty filters to the camera.
You can have multiple custom image sources, like a watermark plus a camera with beauty filters.
When you use a custom image-input source for custom control of the camera (such as using beauty-filter libraries that require camera access), the Amazon IVS broadcast SDK is no longer responsible for managing the camera.
Instead, the application is responsible for handling the camera’s lifecycle correctly. You will find more details of Amazon IVS broadcast SDK custom image sources here.
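As a rough illustration of the API shape, here is a minimal sketch of creating a custom image-input source and drawing a static frame into it. The broadcastSession variable and the "overlay" slot name are assumptions for this example; the slot would need to be defined in your broadcast configuration, and the broadcast classes come from the com.amazonaws.ivs.broadcast package.

```java
// Minimal sketch, not the sample app's code: create a custom image-input
// source, bind it to a mixer slot, and draw a static "be right back" frame.
// Assumes an existing BroadcastSession (broadcastSession) whose configuration
// defines a slot named "overlay" (both are assumptions for this example).
SurfaceSource overlaySource = broadcastSession.createImageInputSource();
broadcastSession.getMixer().bind(overlaySource, "overlay");

Surface surface = overlaySource.getInputSurface();
Canvas canvas = surface.lockCanvas(null);
canvas.drawColor(Color.DKGRAY);            // stand-in for a real BRB graphic
surface.unlockCanvasAndPost(canvas);
```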
First, download the DeepAR Android SDK.
Copy the deepar.aar located in the lib folder from the downloaded zip into the <repo>/deepar directory.
Now open the repo as an Android Studio project. Once the project loads, you may see some errors; if that happens, run a Gradle sync to resolve them.
Now it’s time to pair your app with your DeepAR account via a license key. To generate the license key, you need to create a project on the DeepAR developer site. For testing, create a project with the free plan. Give your project a name, then click to add an Android app; you will be asked for the application ID, which you can copy from the app’s build.gradle.
Now copy the generated license key and paste it into MainActivity.java.
Copy the ingest server and stream key from the Amazon IVS console, and paste them into MainActivity.java as well.
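The exact constant names in MainActivity.java may differ; as placeholders, the values look something like this (all three strings below are illustrative, not real credentials):

```java
// Placeholder values only; paste your own. The license key comes from the
// DeepAR developer site, the ingest endpoint and stream key from the
// Amazon IVS console. Constant names here are hypothetical.
private static final String DEEPAR_LICENSE_KEY = "your_deepar_license_key";
private static final String INGEST_ENDPOINT =
        "rtmps://<ingest-id>.global-contribute.live-video.net:443/app/";
private static final String STREAM_KEY = "sk_us-east-1_abc123...";
```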
In this example app, we use Android’s CameraX API to get frames from the camera, set up with a 1280x720 image resolution.
You can find this part in the bindImageAnalisys() method.
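A minimal sketch of that setup, assuming a ProcessCameraProvider (cameraProvider) has already been obtained and the Activity serves as the LifecycleOwner; variable names are illustrative:

```java
// Sketch of the CameraX use case at 1280x720, roughly what a method like
// bindImageAnalisys() configures. cameraProvider is an assumed
// ProcessCameraProvider obtained elsewhere in the Activity.
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setTargetResolution(new Size(1280, 720))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();

cameraProvider.bindToLifecycle(
        this,                                   // the Activity as LifecycleOwner
        CameraSelector.DEFAULT_FRONT_CAMERA,    // AR masks typically use the front camera
        imageAnalysis);
```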
The pipeline with DeepAR and the Amazon IVS broadcast SDK works as follows (a sketch follows the list):
1. CameraX delivers camera frames to the ARSurfaceProvider class.
2. ARSurfaceProvider feeds those frames into DeepAR.
3. DeepAR renders the processed preview onto the surface provided by the Amazon IVS broadcast SDK.
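Conceptually, steps 2 and 3 boil down to two DeepAR calls. Note that the receiveFrame overload has changed across DeepAR SDK releases, so treat this as a hedged sketch and check it against your SDK version; imageAnalysis, executor, deepAR, and ivsSurface are assumed to exist already:

```java
// Step 2 (sketch): forward each camera frame to DeepAR. The receiveFrame
// signature varies between DeepAR SDK versions; verify against yours.
imageAnalysis.setAnalyzer(executor, image -> {
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    deepAR.receiveFrame(buffer,
            image.getWidth(), image.getHeight(),
            image.getImageInfo().getRotationDegrees(),
            true /* mirror the front camera */);
    image.close();   // release the frame so CameraX can deliver the next one
});

// Step 3 (sketch): render DeepAR's output onto the surface obtained from the
// Amazon IVS broadcast SDK's custom image-input source (wired up below).
deepAR.setRenderSurface(ivsSurface, 1280, 720);
```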
Here is where the DeepAR SDK and Amazon IVS broadcast SDK come together.
Breaking it down:
● Create a BroadcastSession with the default device MICROPHONE so that the Amazon IVS broadcast SDK handles the audio.
● Create a custom image-input source from the broadcast session. Set its size to match the camera (in this case 720p), and set the rotation to zero.
● Bind the custom image source to a mixer slot via the broadcast session.
● Set DeepAR to render to the custom image source’s surface.
● Get the preview view from the broadcast session and add it to the view hierarchy to show the preview on screen.
● Start the broadcast session with the ingest server and stream key. This starts streaming the frames that DeepAR renders to the surface provided by the broadcast session, as sketched below.
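Putting those steps together, a hedged sketch of the wiring might look like the following. The "deepar" slot name, broadcastListener, previewHolder, and the two constants are assumptions for this example, and calls like setSize/setRotation should be verified against your broadcast SDK version:

```java
// Sketch only; not the sample repo's exact code.
BroadcastConfiguration config = Presets.Configuration.STANDARD_PORTRAIT;

// BroadcastSession with the default MICROPHONE device: the IVS SDK captures
// audio while the app supplies the video frames itself.
BroadcastSession broadcastSession = new BroadcastSession(
        context, broadcastListener, config,
        Presets.Devices.MICROPHONE(context));

// Custom image-input source sized like the camera feed (720p), rotation 0.
// (setSize/setRotation per the steps above; verify against your SDK version.)
SurfaceSource surfaceSource = broadcastSession.createImageInputSource();
surfaceSource.setSize(1280, 720);
surfaceSource.setRotation(0);

// Bind the image source to a mixer slot ("deepar" is an assumed slot name
// that must exist in config.mixer.slots).
broadcastSession.getMixer().bind(surfaceSource, "deepar");

// Point DeepAR's renderer at the source's input surface.
deepAR.setRenderSurface(surfaceSource.getInputSurface(), 1280, 720);

// Show the local preview in the app's layout.
View preview = broadcastSession.getPreviewView();
previewHolder.addView(preview);

// Start streaming; DeepAR's rendered frames now flow to the channel.
broadcastSession.start(INGEST_ENDPOINT, STREAM_KEY);
```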

Testing Time

Now run the app on a device. You should see a camera preview, and the left and right buttons let you switch between AR masks and filters.
Open the channel you created in the Amazon IVS console and open the live stream tab; you should see your broadcast, augmented with AR effects.
