Serverless Allergy Checker with Amazon Rekognition, Lex, Polly, DynamoDB, S3 and Lambda

Written by ceyhun.ozgun | Published 2017/10/12
Tech Story Tags: aws | facial-recognition | amazon-lex | aws-lambda | dynamodb


This post is part of a series of posts about AWS services on my previous blog.

We are living in an age where software and mobile are eating the world and big companies are turning AI-first. Improvements in the AI field are unprecedented.

In this fast-changing world, our responsibility is to provide solutions that use technology to help people. The Pollexy Project is a very good example of using technology this way.

After reading about that use case, I started thinking about a good use case for Amazon Rekognition, an image recognition service built on deep learning. Amazon Rekognition lets users detect objects and scenes in pictures. It can also be used to index faces in images and search for them.

Our twins like playing in the grass, but they have a grass allergy, so we can't spend too much time playing outside. While researching allergies, I learned that in some extreme cases drug allergies can cause anaphylactic reactions. That struck me a lot.

So I decided to use Amazon Rekognition to save and retrieve pictures of patients who have drug allergies. Such a system could be used in emergency services to quickly and easily record and check patient allergy information.

I also decided to use Amazon Lex and Polly so that emergency service personnel can use the application hands-free, by voice. This way, wearable devices like Google Glass could let personnel keep their hands free while recording and searching patient information with their voice.

The Application

The application is used to save patient information and to check a patient's information. It is a serverless application made of static web files and has no server-side component. The static files of the web application are hosted on S3, and the rest of the functionality is implemented with managed AWS services, called directly from the browser using the AWS JavaScript SDK.

The services used are below:

  • S3 is used to store the pictures of patients and the static files of the web application
  • DynamoDB is used to store patient information, including name, allergies and picture URL
  • Rekognition is used to index and search patients' faces
  • Lambda is used to index a patient's face when a patient record is added to DynamoDB
  • Lex is used to let users add and search for patients by voice alone
  • Polly is used to give voice responses to the user

The diagram below shows the steps of the save-patient use case, which is used when new patient information is recorded. Once the Lex bot has determined the patient name and allergen, the patient's picture is taken in the browser and sent to S3. Then the patient record is inserted into DynamoDB. A DynamoDB trigger fetches the picture and calls Rekognition to index the patient's face.

Saving patient allergy information use case

When we want to check whether a patient has an allergy, we take the patient's picture in the browser and search for the patient's face in the Rekognition face collection. The steps of the check use case are shown below.

Checking patient allergy information use case

The HTML5 user interface and the corresponding JavaScript code for recording audio to send to Lex, playing the response from Lex, and the chat interface are reused from my previous post and can be found in my GitHub repo. That application used a Java Spring server side to process commands and call AWS services with the Java SDK, but in this application all AWS services are called from the browser using the JavaScript SDK.

Preparation

You should have an AWS account and install the AWS CLI to create the required AWS resources.

Keep in mind that not all services are supported in all regions. I will use us-east-1, the US East (N. Virginia) region, because Amazon Lex is supported only in this region for now.

The code can be found here.

Note: For simplicity, the AWS access key ID and secret key used to access AWS services through the JavaScript SDK are put directly in the HTML file. In production you should definitely keep your AWS credentials private.

Steps

1. Get the code

2. Create and configure IAM policy, role and user

3. Create Rekognition collection

4. Create and configure DynamoDB table

5. Create Lambda DynamoDB trigger

6. Create and configure the S3 bucket

7. Create the Lex bot

8. Modify and review the application

9. Test the application

Let’s start.

1. Get the code

Clone from GitHub repo here.

2. Create and configure IAM policy, role and user

To access AWS services, we will create an IAM user with a policy that grants only the specific permissions needed, following the least-privilege principle. Please do not use your AWS root account or another account that has wider permissions.

The AWS CLI commands are below. First we create a user, then create a policy and attach that policy to the new user. Please replace AWS_ACCOUNT_ID with your AWS account ID.
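
The exact commands from the original snippet aren't reproduced here; a minimal sketch, assuming a user named allergy-checker-app, a policy named AllergyCheckerPolicy and a policy document saved as allergy-checker-policy.json (all of these names are placeholders), could look like this:

  # Create the application user
  aws iam create-user --user-name allergy-checker-app

  # Create the policy from a local policy document
  aws iam create-policy \
      --policy-name AllergyCheckerPolicy \
      --policy-document file://allergy-checker-policy.json

  # Attach the policy to the user (replace AWS_ACCOUNT_ID)
  aws iam attach-user-policy \
      --user-name allergy-checker-app \
      --policy-arn arn:aws:iam::AWS_ACCOUNT_ID:policy/AllergyCheckerPolicy

  # Create an access key for the user
  aws iam create-access-key --user-name allergy-checker-app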

After the access key is created, note the AccessKeyId and SecretAccessKey values in the output, because these credentials will be used by the JavaScript SDK to call AWS services from the browser. We will put these credentials into the HTML file.

3. Create Rekognition collection

Rekognition uses face collections to store face information extracted from images; the same collection is also used later to search for faces. We can create the collection with the command below.
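
A minimal version of that command, assuming the collection is called patients (the actual collection ID should match the one configured in index.html):

  aws rekognition create-collection --collection-id patients --region us-east-1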

4. Create and configure DynamoDB table

We will store patient information in the Patient table with a partition key named patientId, as in the picture below. The other fields are name, allergen, imageUrl and s3File.

Patient table

We can create the Patient table and enable streaming on it with the commands below.
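
A sketch of those commands, with the provisioned throughput values and the stream view type chosen as reasonable defaults rather than taken from the original snippet:

  aws dynamodb create-table \
      --table-name Patient \
      --attribute-definitions AttributeName=patientId,AttributeType=S \
      --key-schema AttributeName=patientId,KeyType=HASH \
      --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
      --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE \
      --region us-east-1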

Note the LatestStreamArn in the output; we will use that stream as the event source for our Lambda trigger. It will look like arn:aws:dynamodb:us-east-1:AWS_ACCOUNT_ID:table/Patient/stream/2017-10-09T19:41:06.419.

5. Create Lambda DynamoDB trigger

After creating the Patient table, we will associate a Lambda function with it that is executed when a record is added to the table. First we create a Lambda execution role, then attach the policy we created earlier to this role. This gives the Lambda function the permissions it needs to call other AWS services, in this case the indexFaces method of Rekognition.

Please replace AWS_ACCOUNT_ID with your AWS account ID and PATIENT_TABLE_STREAM_ARN with the stream ARN from the previous step. Also set the value of S3_BUCKET_NAME to the S3 bucket name you want to use.
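
A rough sketch of those commands, where the role name, function name, handler name and trust policy file name are my placeholders and the runtime shown is the Node.js version that was current at the time of writing:

  # Execution role the function will assume (trust policy file is assumed to exist)
  aws iam create-role \
      --role-name PatientFaceIndexerRole \
      --assume-role-policy-document file://lambda-trust-policy.json

  # Attach the policy created in step 2
  aws iam attach-role-policy \
      --role-name PatientFaceIndexerRole \
      --policy-arn arn:aws:iam::AWS_ACCOUNT_ID:policy/AllergyCheckerPolicy

  # Create the function from a zip containing PatientFaceIndexerLambda.js
  aws lambda create-function \
      --function-name PatientFaceIndexer \
      --runtime nodejs6.10 \
      --handler PatientFaceIndexerLambda.handler \
      --role arn:aws:iam::AWS_ACCOUNT_ID:role/PatientFaceIndexerRole \
      --zip-file fileb://PatientFaceIndexerLambda.zip

  # Use the Patient table stream as the event source
  aws lambda create-event-source-mapping \
      --function-name PatientFaceIndexer \
      --event-source-arn PATIENT_TABLE_STREAM_ARN \
      --starting-position LATEST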

The Lambda function's runtime is Node.js and the code is in the PatientFaceIndexerLambda.js file in the repository. It creates a Rekognition object when the module loads and uses this object to index faces when a patient is added to the Patient table. The patient picture's location is put in the s3File field.

The code calls the indexFaces method with the S3 bucket and key of the patient picture, our face collection ID, and the patientId as the ExternalImageId. When we later search for a face with an image, Rekognition returns the matching face together with this ExternalImageId, so we can find the rest of the patient information by that patientId.
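
For reference, the same indexFaces call can be expressed with the CLI; the collection ID, object key and patient ID below are placeholders:

  aws rekognition index-faces \
      --collection-id patients \
      --image '{"S3Object":{"Bucket":"S3_BUCKET_NAME","Name":"PATIENT_ID.jpg"}}' \
      --external-image-id PATIENT_ID \
      --region us-east-1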

6. Create and configure the S3 bucket

We will create an S3 bucket to store the patients' pictures. Strictly speaking, we don't need to keep the images to be able to search, but we store the originals so they can be shown when a matching face is found.

We should enable CORS so we can upload from different domains. For simplicity I have allowed all origins, but you should limit which domains can access the bucket in production. The CORS settings are in the s3-bucket-cors.json file; please adjust them accordingly. Also replace bucketName with the name you want to use.
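
A sketch of the bucket setup, with S3_BUCKET_NAME as a placeholder for your own bucket name:

  # Create the bucket (bucket names are globally unique)
  aws s3api create-bucket --bucket S3_BUCKET_NAME --region us-east-1

  # Apply the CORS settings from the repository
  aws s3api put-bucket-cors \
      --bucket S3_BUCKET_NAME \
      --cors-configuration file://s3-bucket-cors.json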

The application marks pictures as public-read after uploading them to S3 so they can be shown in the browser. In production, please set bucket permissions carefully; many recent data leaks have been caused by misconfigured S3 buckets.

7. Create the Lex bot

Now all the resources are ready except the Lex bot. Strictly speaking, we don't need a bot to use Rekognition, S3 and DynamoDB, but the Lex bot lets users operate the system by voice while keeping their hands free for other emergency tasks.

Create a Lex bot named AllergyChecker with two intents, AddPatient and CheckPatient. For more information on creating a Lex bot, see my post on Lex or the AWS Lex tutorial, Exercise 2: Create a Custom Amazon Lex Bot.

The AddPatient intent is used to add a patient to the system with a patient name and allergen. This intent uses the PatientName and Allergen slots. When Lex has elicited the slots, the web application takes the patient's picture and saves this information to the Patient table.

AddPatient intent

The slot type of the PatientName slot is AllergicPersonName. I first tried the built-in Amazon.Person slot type but ran into problems, so I created a custom slot type, which behaved better. I also had to use English names because Lex currently supports only English.

AllergicPersonName custom slot type

The Allergen slot also uses a custom slot type named Allergen.

Allergen custom slot type

The CheckPatient intent is used to check a patient by taking a picture. This intent has no slots. When the intent is ready for fulfillment, the web application takes the patient's picture and calls Rekognition to find the patient with a matching face. If the patient is found, the allergen information is shown along with the original patient picture.

CheckPatient intent

8. Modify and review the application

Now all the AWS resources are ready and we can modify the application.

Replace the configuration variables shown below in index.html.

Adding a patient to the Rekognition face collection is implemented in the PatientFaceIndexer Lambda function we created earlier.

Checking a patient with Rekognition is implemented in the checkPatient method in index.html. The searchFacesByImage method is called with the face collection ID, the image and a face match threshold; we want Rekognition to return only faces with a match confidence greater than 80%. If a match is found, the handleCheckPatientResponse method shows the patient's name, allergen and picture.
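
The equivalent Rekognition call can be sketched with the CLI as well; the browser code passes the captured image bytes directly instead of an S3 object, and the collection ID and object key here are placeholders:

  aws rekognition search-faces-by-image \
      --collection-id patients \
      --image '{"S3Object":{"Bucket":"S3_BUCKET_NAME","Name":"captured-face.jpg"}}' \
      --face-match-threshold 80 \
      --max-faces 1 \
      --region us-east-1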

9. Test the application

Open index.html in your browser. You need a WebRTC-compliant browser to be able to record audio and video. You can check your browser here.

Once the application is loaded, you can say ‘add’ or ‘check’ to use the application as shown in the video below.

Next Steps

First, I want to remind you to delete the resources you created for this post. Please be very strict about the security of the resources you create.

I have fulfilled the intents in the web app. In real applications, Lambda functions can be used for slot validation and fulfillment.

Also, for simplicity, audio is recorded for 4 seconds, and if all of the recorded data is silence, the audio is not sent to Lex. In a real application, we could create a ScriptProcessorNode to analyze the recorded audio in real time and stop recording once silence is detected for a specific duration. For more information, see the Web Audio API docs.

For simplicity, I have not used a user authentication and authorization mechanism, but in production you should definitely use one, such as Amazon Cognito. That way you can keep your AWS access key and secret key private instead of putting credentials in JS or HTML files.

For now, Lex supports only English speech commands. I hope it will support other languages and regions.

Summary

In this post, I have developed an allergy checker application that uses Amazon Rekognition to save and find patient allergy information by face. I have also shown how Amazon Lex and Amazon Polly can be used to accept voice commands and respond with natural speech.

I have used AWS JavaScript SDK to call AWS services.

The code can be found here.

For more information on AWS services, you can see my posts:

I would like to hear your comments about the application and different use cases for Rekognition.

If you liked this post, please share, like or follow.

