Write recursive AWS Lambda functions the right way

Written by theburningmonk | Published 2017/07/27
Tech Story Tags: aws | aws-lambda | serverless | cloud-computing


You may not realise that you can write AWS Lambda functions in a recursive manner to perform long-running tasks. Here are two tips to help you do it right.

AWS Lambda limits the maximum execution time of a single invocation to 5 minutes. Whilst this limit might be raised in the future, it’s likely that you’ll still have to consider timeouts for any long-running tasks. For this reason, I personally think it’s a good thing that the current limit is too low for many long-running tasks — it forces you to consider edge cases early and avoid the trap of thinking “it should be long enough to do X” without considering possible failure modes.

Instead, you should write Lambda functions that perform long-running tasks (e.g. processing a large S3 file) as recursive functions.

See also: “Yubl’s road to Serverless — Part 4, Building a scalable push notification system” (hackernoon.com), where we built a system that integrates with BigQuery results and is capable of sending millions of push notifications.


Use context.getRemainingTimeInMillis()

When your function is invoked, the context object allows you to find out how much time is left in the current invocation.

See “The Context Object (Node.js)” in the AWS Lambda documentation (docs.aws.amazon.com): while a Lambda function is executing, it can interact with AWS Lambda to get useful runtime information.

Suppose you have an expensive task that can be broken into small tasks that can be processed in batches. At the end of each batch, use context.getRemainingTimeInMillis() to check if there’s still enough time to keep processing. Otherwise, recurse and pass along the current position so the next invocation can continue from where it left off.
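
Here’s a rough sketch of what that loop could look like. This is not the demo repo’s code: the processBatch helper and the position/totalItems fields on the event are made-up names, and it assumes a Node.js runtime with async/await and the AWS SDK for JavaScript v2.

```javascript
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// leave enough headroom to finish the current batch and kick off the recursion
const TIME_BUFFER_MS = 10000;

// hypothetical placeholder for your real batch-processing logic;
// returns the position the next batch should start from
async function processBatch(event, position) {
  // ... process a slice of the work here ...
  return position + 1;
}

module.exports.handler = async (event, context) => {
  let position = event.position || 0;

  while (position < event.totalItems) {
    position = await processBatch(event, position);

    // not enough time left for another batch? recurse instead of timing out
    if (position < event.totalItems &&
        context.getRemainingTimeInMillis() < TIME_BUFFER_MS) {
      await lambda.invoke({
        FunctionName: context.functionName,
        InvocationType: 'Event', // async, so this invocation can end now
        Payload: JSON.stringify(Object.assign({}, event, { position }))
      }).promise();

      return { status: 'recursed', position };
    }
  }

  return { status: 'done', position };
};
```

Invoking the function asynchronously (InvocationType: 'Event') means the current invocation doesn’t wait for the recursion to finish; the next invocation picks up from position and the cycle repeats until the work is done.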

Use local state for optimization

Whilst Lambda functions are ephemeral by design, containers are reused as an optimization, which means you can leverage in-memory state that persists across invocations.

You should use this opportunity to avoid loading the same data on each recursion. For example, if you’re processing a large S3 file, it’s more efficient (and cheaper) to cache its content in memory.

I notice that AWS has also updated their Lambda best practices page to advise you to take advantage of container reuse.

However, as Lambda can recycle the container between recursions, it’s possible to lose the cached state from one invocation to the next. Therefore, you shouldn’t assume the cached state will always be available during a recursion; always check whether it exists first.

Also, when dealing with S3 objects, you need to protect yourself against content changes, i.e. the S3 object is replaced, but the container instance is reused, so stale cached data is still available. When you call S3’s GetObject operation, you should set the optional If-None-Match parameter to the ETag of the cached data; if the object hasn’t changed, S3 returns 304 Not Modified and you can keep using the cache.
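
Here’s a minimal sketch of that caching pattern. Again, this is not the demo repo’s code: the module-level cache variable and the bucket/key fields on the event are assumptions, and it uses the AWS SDK for JavaScript v2, where the GetObject parameter is spelled IfNoneMatch and a match surfaces as an error with status code 304.

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// module-level, so it survives invocations for as long as the container is reused
let cache = null; // { etag, body }

async function loadFile(bucket, key) {
  const params = { Bucket: bucket, Key: key };

  // only revalidate if we actually have something cached
  if (cache) {
    params.IfNoneMatch = cache.etag; // maps to the If-None-Match header
  }

  try {
    const resp = await s3.getObject(params).promise();
    cache = { etag: resp.ETag, body: resp.Body.toString('utf8') }; // (re)populate the cache
  } catch (err) {
    if (err.statusCode !== 304) {
      throw err; // a real error, not "Not Modified"
    }
    // 304: the object hasn't changed, keep using the cached copy
  }

  return cache.body;
}

module.exports.handler = async (event, context) => {
  const content = await loadFile(event.bucket, event.key);
  // ... process `content` in batches, recursing as shown earlier ...
  return { length: content.length };
};
```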

For a complete, working example, have a look at this Lambda function that recursively processes an S3 file using the approach outlined in this post.

theburningmonk/lambda-recursive-s3-demo (github.com): a recursive AWS Lambda function for processing a large S3 file.

Like what you’re reading but want more help? I’m happy to offer my services as an independent consultant and help you with your serverless project: architecture reviews, code reviews, building proof-of-concepts, or advice on leading practices and tools.

I’m based in London, UK and currently the only UK-based AWS Serverless Hero. I have nearly 10 years of experience with running production workloads in AWS at scale. I operate predominantly in the UK but I’m open to travelling for engagements that are longer than a week. To see how we might be able to work together, tell me more about the problems you are trying to solve here.

I can also run an in-house workshop to help you get production-ready with your serverless architecture. You can find out more about the two-day workshop here, which takes you from the basics of AWS Lambda all the way through to common operational patterns for log aggregation, distributed tracing and security best practices.

If you prefer to study at your own pace, the same content is also available as a video course I have produced for Manning. We will cover topics including:

  • authentication & authorization with API Gateway & Cognito
  • testing & running functions locally
  • CI/CD
  • log aggregation
  • monitoring best practices
  • distributed tracing with X-Ray
  • tracking correlation IDs
  • performance & cost optimization
  • error handling
  • config management
  • canary deployment
  • VPC
  • security
  • leading practices for Lambda, Kinesis, and API Gateway

You can also get 40% off the face price with the code ytcui. Hurry though, this discount is only available while we’re in Manning’s Early Access Program (MEAP).


Written by theburningmonk | AWS Serverless Hero. Independent Consultant. Developer Advocate at Lumigo.
Published by HackerNoon on 2017/07/27