AWS Compute Blog

Understanding Container Reuse in AWS Lambda

Tim Wagner, AWS Lambda

AWS Lambda functions execute in a container (sandbox) that isolates them from other functions and provides the resources, such as memory, specified in the function’s configuration. In this article we discuss how Lambda creates and reuses these sandboxes, and the impact of those policies on the programming model.

Startup

The first time a function executes after being created or having its code or resource configuration updated, a new container with the appropriate resources will be created to execute it, and the code for the function will be loaded into the container. In nodejs, initialization code is executed once per container creation, before the handler is called for the first time.
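As a rough sketch of what that means in practice (the handler and the timestamp variable below are purely illustrative, standing in for whatever one-time setup your code does):

    // Runs once per container, when Lambda loads the module.
    var initializedAt = new Date().toISOString();   // e.g. open connections, read config, etc.

    exports.handler = function(event, context) {
        // Runs on every invocation; sees the same 'initializedAt' value
        // for as long as this container is reused.
        console.log('container initialized at', initializedAt);
        context.done(null, 'OK');
    };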

In nodejs, a Lambda function can complete in one of three ways:

  1. Timeout. The user-specified duration has been reached. Execution will be summarily halted regardless of what the code is currently doing.
  2. Controlled termination. One of the callbacks (which need not be the original handler entry point) invokes context.done() and then finishes its own execution. Execution will terminate regardless of what the other callbacks (if any) are doing.
  3. Default termination. If all callbacks have finished (even if none have called context.done()), the function will also end. If there is no call to context.done(), you’ll see the message “Process exited before completing request” in the log (in this case, it really means ‘exited without having called context.done()’).

There’s also effectively a fourth way to exit – by crashing or calling process.exit(). For example, if you include a binary library with a bug and it segfaults, you’ll effectively terminate execution of that container.
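To make the default-termination case concrete, here’s a minimal sketch; the setTimeout call just stands in for any callback-based work:

    exports.handler = function(event, context) {
        // Schedule a callback but never call context.done().
        setTimeout(function() {
            console.log('callback finished');
        }, 100);
        // Once the handler and the timeout callback have both finished,
        // the function ends by default termination and the log shows
        // "Process exited before completing request".
    };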

Since context.done plays an important role here, a quick reminder of how to use it: The first argument should be null to indicate a successful outcome of the function (undefined is treated similarly). Any other value will be interpreted as an error result. The stringified representation of a non-null first argument is automatically logged to the AWS CloudWatch Log stream. An error result may trigger Lambda to retry the function; see the S3 bucket notification and registerEventSource documentation for more information on retry semantics and the checkpointing of ordered event sources, such as Amazon DynamoDB Streams. The second argument to done() is an optional message string; if present, it will be displayed in the console for test invocations below the log output. (The message argument can be used for both success and error cases.)
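Putting those conventions together, here’s a minimal sketch; the event.ok field and the message strings are made up for illustration:

    exports.handler = function(event, context) {
        if (event.ok) {
            // Success: null (or undefined) first argument; the optional message
            // appears below the log output for test invocations in the console.
            context.done(null, 'everything went fine');
        } else {
            // Error: any non-null first argument marks the invocation as failed;
            // its stringified form goes to the CloudWatch Log stream, and Lambda
            // may retry, depending on the event source.
            context.done(new Error('something went wrong'), 'it did not go fine');
        }
    };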

For those encountering nodejs for the first time in Lambda, a common error is forgetting that callbacks execute asynchronously and calling context.done() in the original handler when you really meant to wait for another callback (such as an S3 PUT operation) to complete, which forces the function to terminate with its work incomplete. There are also some excellent nodejs packages that provide fine-grained control over callback patterns, including synchronization and ordering mechanisms, to make callback choreography easier; we’ll explore using some of them in Lambda in a future article.
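Here’s a sketch of that pitfall and its fix; the bucket name, key, and body are hypothetical, and in real code you’d typically take them from the event:

    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();

    exports.handler = function(event, context) {
        s3.putObject({ Bucket: 'my-bucket', Key: 'my-key', Body: 'hello' },
            function(err, data) {
                // Correct: terminate only after the asynchronous PUT has finished.
                if (err) {
                    context.done(err, 'S3 put failed');
                } else {
                    context.done(null, 'S3 put succeeded');
                }
            });
        // Incorrect: calling context.done(null) here would end the function
        // immediately, before the callback above ever runs.
    };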

Round 2

Let’s say your function finishes, some time passes, and then you call it again. Lambda may create a new container all over again, in which case the experience is just as described above. This will certainly be the case if you change your code.

However, if you haven’t changed the code and not too much time has gone by, Lambda may reuse the previous container. This offers some performance advantages to both parties: Lambda gets to skip the nodejs language initialization, and you get to skip initialization in your code. Files that you wrote to /tmp last time around will still be there if the sandbox gets reused.

Remember, you can’t depend on a container being reused, since it’s Lambda’s prerogative to create a new one instead.
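With that caveat in mind, a common pattern is to treat both module state and /tmp as a best-effort cache: populate them on first use, and quietly rebuild when Lambda hands you a fresh container. The cache file name below is just an example:

    var fs = require('fs');
    var CACHE_FILE = '/tmp/lookup-cache.json';   // survives only if the container is reused
    var cache = null;                            // module state also survives reuse

    function loadCache() {
        if (cache) return cache;                 // warm container: reuse the in-memory copy
        try {
            cache = JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));  // warm container: reuse /tmp
        } catch (e) {
            cache = { builtAt: new Date().toISOString() };            // cold container: rebuild
            fs.writeFileSync(CACHE_FILE, JSON.stringify(cache));
        }
        return cache;
    }

    exports.handler = function(event, context) {
        context.done(null, 'cache built at ' + loadCache().builtAt);
    };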

The Freeze/Thaw Cycle

We’ve talked about what happens in the original nodejs process that represents your Lambda function, but what if you spawned background threads or other processes? Outside of nodejs, Lambda doesn’t look at what else you might have done (or still be doing) to decide when to finish execution. If you need to wait for additional work to complete, you should represent that in nodejs with a callback (one that doesn’t call context.done() until the background job is finished).

But let’s say you have a background process running when the function finishes – what happens to it if the container is reused? In this case, Lambda will actually “freeze” the process and thaw it out the next time you call the function (but *only* if the container is reused, which isn’t a guarantee). So in the reuse case, your background processes will still be there, but they won’t have been executing while you were away. This can be really convenient if you use them as companion processes, since it avoids the overhead of recreating them (in the same way that Lambda avoids the overhead of recreating the nodejs process itself when it reuses a sandbox). In the future we’ll extend the duration limit of Lambda functions beyond 60 seconds, allowing you to do long-running jobs when your intent really is to keep things running.
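For example, if you shell out to a helper process as part of an invocation’s work, it’s safer to represent its completion with a callback rather than let it run past the end of the function; the echo command below is just a placeholder for real work:

    var exec = require('child_process').exec;

    exports.handler = function(event, context) {
        // The child process is part of this invocation's work, so don't call
        // context.done() until it has exited; otherwise it may be frozen
        // mid-flight and only resume if this container happens to be reused.
        exec('echo hello from a helper process', function(err, stdout) {
            context.done(err, err ? 'helper failed' : stdout.trim());
        });
    };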

Feedback

Lambda’s still in preview, and one of our goals for preview is to get feedback from users on the programming model and APIs. Let us know how we’re doing and any ideas you have for making Lambda easier to use!


-Tim