5 Commands you must learn when building AWS Serverless applications

Written by Terren_in_VA | Published 2017/11/25


This week is AWS re:Invent, the largest tech conference devoted to cloud computing. I will be there learning about the latest innovations and use cases. My primary interest this year is a popular style of building infrastructure referred to as "Serverless." It's simple, fast, and less expensive than traditional infrastructure that requires directly managing the hosts.

As my use of this model matures, I've learned how to control these services from the command line. The AWS console can be useful for getting started, but it is not an efficient way to manage infrastructure on a daily basis. In this post, I will share five of the most powerful commands I have learned, which every software developer building serverless applications should know.

1 — Check User or Role Credentials

Serverless models depend on credential-based security rather than traditional network-based security. That means learning the Identity and Access Management (IAM) service and how to check what policies exist for different entities, including the credentials you will be using to execute the CLI. This is the best place to get started.

# check the current aws credentials used, similar to whoami
aws iam get-user

# check the user policies that are attached to the user name
aws iam list-attached-user-policies --user-name myawsusername

# check the groups that the user name belongs to (group policies apply through these)
aws iam list-groups-for-user --user-name myawsusername

# check for access keys granted to a user name
aws iam list-access-keys --user-name myawsusername

# check for roles within an account
aws iam list-roles

Building secure applications in AWS requires granting only what is absolutely needed, and nothing more. Regularly checking what has been granted is essential to maintaining the principle of least privilege.
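
For example, once list-attached-user-policies returns a policy ARN, you can inspect exactly what that policy grants. This is only a sketch; the AWSLambdaExecute managed policy and the v1 version id below are placeholders for whatever your account actually returns.

# look up the policy metadata, including its default version id
aws iam get-policy --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute

# print the actual permission statements in that version
aws iam get-policy-version --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute --version-id v1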

2 — Execute a Lambda function

Let’s start with the basics. Automating test cases is critical for building robust applications. Each function can be invoked from the command line, with the event data passed in from a local file. Here are the steps to do so.

# read request data into a local variable
request=$(<request.json)

# invoke the function with the data, and write to a local file
aws lambda invoke --function-name myFunction --payload "$request" response.json

# read the response file into a local variable, then print it to the console
responseOutput=$(<response.json)
echo "$responseOutput"

Once you have this pattern down, automated testing becomes possible. This establishes the foundation for validating many test scenarios. A few additional lines in this script enable comparisons between the output file and the expected results, as sketched below. Like all test automation, this requires an upfront investment. The payback is significant, and it is a must when building robust applications.
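
As a minimal sketch of that comparison (expected.json is a hypothetical file holding the result you expect for this request), a plain diff against the invoke output is enough to signal pass or fail:

# compare the actual output with the expected result; a non-zero exit code means the test failed
diff response.json expected.json && echo "PASS" || echo "FAIL"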

3 — Move files between a local host and S3

Storage is used for many things, including media files, binaries, test data, and more. The Simple Storage Service (S3) is highly durable and very low cost, since you pay only for what you consume. S3 is a great alternative to block-based storage such as EBS volumes attached to EC2 hosts.

To use S3 effectively, think of it as an elastic drive, just one that is never mounted to a machine. The commands to manage this service are robust and mimic many of the Linux file commands you are already familiar with.

# copy a local file to an S3 bucket and folder
aws s3 cp foo.json s3://mybucket/myfolder/

# retrieve a file from an S3 bucket to the local host
aws s3 cp s3://mybucket/myfolder/foo.json foo.json

# list all buckets within an account
aws s3 ls

# list all of the objects and folders within a bucket
aws s3 ls s3://mybucket

# test removal (aka dry run) of an object from an S3 bucket
aws s3 rm s3://mybucket/myfolder/foo.json --dryrun

# remove an object from an S3 bucket
aws s3 rm s3://mybucket/myfolder/foo.json

Here is the official reference guide to continue your learning, including the optional commands that can filter objects using queries.
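
As one example of filtering, the lower-level s3api commands accept the CLI's --query option (a JMESPath expression). The bucket, prefix, and filter below are only illustrative:

# list only the object keys under a prefix that end in .json
aws s3api list-objects-v2 --bucket mybucket --prefix myfolder/ --query "Contents[?ends_with(Key, '.json')].Key" --output text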

4 — Update an existing Lambda function

Now that you can execute Lambda functions from the command line and know how to move data around, the next step is to automate the deployment pipeline. This can be done by creating a build package, staging it in S3 (using #3 above), and then deploying it to the runtime environment. Here are the commands required to do so.

# first create a zip file containing all elements in the package
# note: directory/ is what includes libraries in the zip
zip -r myfunction.zip lambda.js directory/ package.json

# then copy the zip file to S3
aws s3 cp myfunction.zip s3://mybucket/myfolder/

# finally deploy the package to the runtime environment
aws lambda update-function-code --function-name myFunction --s3-bucket mybucket --s3-key myfolder/myfunction.zip

Once you have this in place, add some automated test cases to verify that the deployment was successful (a quick check is sketched below). This is the foundation for an automated deployment pipeline.
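
A quick verification, assuming the same function and request file from step #2, is to confirm the function metadata changed and then run a smoke-test invocation:

# CodeSha256 and LastModified in the output should reflect the new package
aws lambda get-function-configuration --function-name myFunction

# smoke test the freshly deployed code
aws lambda invoke --function-name myFunction --payload "$request" response.json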

5 — Create a new Lambda function

Function-based architectures work best when the services are granular. Automating the provisioning process brings consistency to creating new functions, since many attributes of a function are the same (e.g. language, execution role).

# first create a zip file containing all elements in the package
# note: directory/ is what includes libraries in the zip
zip -r myfunction.zip lambda.js directory/ package.json

# create a new function based on the parameters and zip package
aws lambda create-function --function-name newFunction --runtime nodejs6.10 --role arn:aws:iam::1234567890:role/lex_execution --handler lambda.handler --zip-file "fileb://myfunction.zip"

# note: runtime options include nodejs6.10, java8, python2.7

There are many different optional parameters for this command, and there is also the option of using CloudFormation templates to do the same work. Learning this command makes creating a new function as easy as updating (and potentially overwriting) an existing one. A few of the most useful optional parameters are shown below.
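
As an illustration of those optional parameters (the memory, timeout, and environment values here are arbitrary), several runtime settings can be supplied at creation time:

# create a function with explicit memory, timeout, and environment settings
aws lambda create-function --function-name newFunction --runtime nodejs6.10 --role arn:aws:iam::1234567890:role/lex_execution --handler lambda.handler --zip-file "fileb://myfunction.zip" --memory-size 256 --timeout 30 --environment 'Variables={STAGE=dev}'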

Conclusion

Using these commands simplifies the administration of serverless architectures, letting you focus on writing code instead of managing infrastructure!

