Networking & Content Delivery

Lambda@Edge Design Best Practices

This blog post is the first in a series about Lambda@Edge best practices to help you optimize using Lambda@Edge throughout the life cycle of your application. Topics will include how to create the best Lambda@Edge design for your use case, how to integrate Lambda@Edge in your CI/CD pipeline, and how to make sure your solution is working well and addressing your business needs. In this first post, I’ll focus on best practices for designing Lambda@Edge solutions. I’ll share some common use cases when our customers have implemented Lambda@Edge solutions, explain how to choose when to trigger a Lambda@Edge function, and, finally, provide recommendations to optimize performance and cost efficiency when you’re working with Lambda@Edge.

Lambda@Edge enables you to run Node.js functions across AWS locations globally without provisioning or managing servers. This capability allows you to deliver richer and more personalized content to your customers with low latency. Functions that customize content can run at different times, depending on what you want to accomplish. For example, you might decide to have CloudFront execute a function when viewers request content, or when CloudFront makes a request to your origin server. After you upload your Node.js code to Lambda@Edge, the service takes care of everything required to replicate, route, and scale your code with high availability, so functions can run at an AWS location close to your users. You pay only for the compute time that you use.

Common use cases

Our customers have already implemented Lambda@Edge solutions for many business use cases. The benefits of using Lambda@Edge can be divided into the following four categories:

Performance: One of the biggest benefits of using Lambda@Edge is to improve the cache hit ratio by either increasing the likelihood that content will be cached when it’s returned from the origin, or increasing the usability of content that’s already in cache. An improved cache hit ratio results in better application performance by avoiding latency caused by a cache miss. Here are some examples of how you can use Lambda@Edge to improve the cache hit ratio:

  • Add or modify cache control headers on responses
  • Implement follow redirection for 3xx responses from your origin, to reduce viewer response latency
  • Use query string or user agent normalization to reduce request variability
  • Dynamically route to different origins based on attributes of request headers, cookies, or query strings
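To make the query string normalization idea concrete, here’s a minimal sketch of a viewer request trigger that sorts parameters and lowercases their keys, so functionally equivalent requests map to a single cache key. The exact normalization rules are illustrative; adapt them to your application.

```javascript
'use strict';

// Viewer request trigger: normalize the query string so that
// equivalent requests (e.g. ?b=2&a=1 vs ?a=1&b=2) share one
// cache key, improving the cache hit ratio.
const handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.querystring) {
        // Lowercase parameter names and sort parameters alphabetically.
        const params = request.querystring.split('&')
            .map((p) => {
                const [key, ...rest] = p.split('=');
                return rest.length
                    ? `${key.toLowerCase()}=${rest.join('=')}`
                    : key.toLowerCase();
            })
            .sort();
        request.querystring = params.join('&');
    }

    // Return the (possibly modified) request to CloudFront.
    callback(null, request);
};

exports.handler = handler;
```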

Dynamic Content Generation: With Lambda@Edge, you can dynamically generate custom content based on attributes of requests or responses. For example, you can do the following:

  • Resize images based on request attributes
  • Render pages based on a logic-less template, such as Mustache
  • Do A/B testing
  • Generate a 302/301 redirection response for all requests to an expired or outdated resource
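As an example of generating content at the edge, the following sketch returns a 301 redirect for an outdated path without ever contacting the origin. The `/old-campaign` path and the target URL are placeholders for illustration.

```javascript
'use strict';

// Viewer request trigger: return a 301 redirect for requests to a
// retired path, short-circuiting the request before the cache.
const handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.uri.startsWith('/old-campaign')) {
        const response = {
            status: '301',
            statusDescription: 'Moved Permanently',
            headers: {
                location: [{
                    key: 'Location',
                    value: 'https://example.com/new-campaign',
                }],
            },
        };
        // Returning a response object stops further processing.
        return callback(null, response);
    }

    // Otherwise, let the request continue unchanged.
    callback(null, request);
};

exports.handler = handler;
```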

Security: Lambda@Edge can also be used to handle custom authentication and authorization. The following are some example use cases:

  • Sign requests to custom origins that enforce access control
  • Configure viewer token authentication, for example, by using a JWT/MD5/SHA token hash
  • Set up bot detection
  • Add HSTS or CSP security headers
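For instance, adding HSTS and CSP headers with an origin response trigger might look like the following sketch. The header values here are examples; tune them to your own security policy.

```javascript
'use strict';

// Origin response trigger: add security headers before the response
// is cached, so subsequent cache hits serve them without invoking
// the function again.
const handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    headers['strict-transport-security'] = [{
        key: 'Strict-Transport-Security',
        value: 'max-age=63072000; includeSubDomains; preload',
    }];
    headers['content-security-policy'] = [{
        key: 'Content-Security-Policy',
        value: "default-src 'self'",
    }];

    callback(null, response);
};

exports.handler = handler;
```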

Origin independence: In some scenarios, origins require additional logic for requests and responses. Instead of implementing this in code that runs on the origin server, you can execute a Lambda@Edge function in CloudFront, for a more seamless solution. For example, you can implement logic to do the following:

  • Create pretty URLs
  • Manage authentication and authorization for origin requests
  • Manipulate URLs or requests to match your origin directory structure
  • Implement custom load balancing and failover logic
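A pretty-URL rewrite, for example, can be a short origin request trigger that maps directory-style viewer URLs to the files your origin actually stores. The rewrite rules below are illustrative.

```javascript
'use strict';

// Origin request trigger: rewrite "pretty" viewer URLs like /about/
// to the object the origin actually serves, e.g. /about/index.html.
const handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.uri.endsWith('/')) {
        request.uri += 'index.html';
    } else if (!request.uri.includes('.')) {
        // No file extension: treat it as a directory-style URL.
        request.uri += '/index.html';
    }

    callback(null, request);
};

exports.handler = handler;
```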

Choose the right trigger

You can trigger Lambda@Edge functions on the following four CloudFront events:

  • Origin request: executes on a cache miss, before the request is forwarded to the origin
  • Origin response: executes on a cache miss, after a response is received from the origin
  • Viewer request: executes on every request, before CloudFront’s cache is checked
  • Viewer response: executes on all requests, after a response is received from the origin or the cache

General guidance is provided in the developer guide, to help you choose a Lambda@Edge trigger, based on what you want to do. In addition, the following questions can help you decide which Lambda@Edge trigger to use:

  • Do you want your function to be executed on a cache miss? Use origin triggers.
  • Do you want your function to be executed on all requests? Use viewer triggers.
  • Do you want to modify a cache key (URL, cookies, headers, query string)? Use a viewer request trigger.
  • Do you want to modify the response without caching the result? Use a viewer response trigger.
  • Do you want to dynamically select the origin? Use an origin request trigger.
  • Do you want to rewrite a URL to the origin? Use an origin request trigger.
  • Do you want to generate responses that will not be cached? Use a viewer request trigger.
  • Do you want to modify the response before it’s cached? Use an origin response trigger.
  • Do you want to generate a response that can be cached? Use an origin request trigger.
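As an example of the dynamic origin selection case, an origin request trigger can swap in a different custom origin based on a request attribute. The `x-user-group` header and both domain names below are hypothetical; in practice the attribute might come from a cookie or query string instead.

```javascript
'use strict';

// Origin request trigger: route a subset of viewers to an
// alternate custom origin.
const handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    const group = headers['x-user-group']
        ? headers['x-user-group'][0].value
        : 'default';

    if (group === 'beta' && request.origin && request.origin.custom) {
        // Point CloudFront at the alternate origin...
        request.origin.custom.domainName = 'beta.example.com';
        // ...and keep the Host header consistent with it.
        headers['host'] = [{ key: 'Host', value: 'beta.example.com' }];
    }

    callback(null, request);
};

exports.handler = handler;
```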

Optimize for cost efficiency

Lambda@Edge is charged based on the following two factors:

  1. Number of requests. Currently (at the time of this blog post), the cost is $0.60 per 1M requests.
  2. Function duration. Currently (at the time of this blog post), the cost is $0.00005001 for every GB-second used. For example, if you allocate 128MB of memory to be available per execution with your Lambda@Edge function, then your duration charge will be $0.00000625125 for every 128MB-second used. Note that Lambda@Edge functions are metered at a granularity of 50ms.
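As a rough sketch of how these two factors combine, the following helper estimates a monthly bill. The constants mirror the prices quoted above; check the pricing page for current figures before relying on them.

```javascript
'use strict';

// Rough monthly cost estimate from the two Lambda@Edge pricing
// factors. Prices are the ones quoted in this post and may change.
const REQUEST_PRICE = 0.60 / 1e6;    // $ per request
const GB_SECOND_PRICE = 0.00005001;  // $ per GB-second

function estimateMonthlyCost(requestsPerMonth, avgDurationMs, memoryMb) {
    // Duration is metered in 50 ms increments, rounded up.
    const billedSeconds = (Math.ceil(avgDurationMs / 50) * 50) / 1000;
    const gbSeconds = requestsPerMonth * billedSeconds * (memoryMb / 1024);
    return requestsPerMonth * REQUEST_PRICE + gbSeconds * GB_SECOND_PRICE;
}
```

For example, one million invocations of a 128 MB function averaging 50 ms would cost roughly $0.60 in requests plus about $0.31 in duration.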

Please refer to our pricing guide for current prices and several Lambda@Edge pricing examples. To help reduce the cost of using Lambda@Edge, follow these suggestions to make sure you invoke functions only when needed and optimize how resources are allocated.

First, optimize function invocation by triggering Lambda@Edge on the most specific CloudFront behavior. For example, in a solution where Lambda@Edge is used to authorize viewers, the Lambda@Edge function is triggered only for private content. In this use case, the origin’s private content is identified by specifying a “private/*” path pattern in a CloudFront cache behavior.

Next, choose the right trigger. Some Lambda@Edge logic can be implemented by using either origin or viewer triggers, such as when you add an HSTS header on HTTP responses to viewers. In these scenarios, choose an origin trigger rather than a viewer trigger, to optimize Lambda@Edge invocation and leverage the CloudFront cache.

Finally, optimize function resource allocation. While resource allocation for viewer triggers is limited to 128MB, for origin triggers you can allocate up to 3008MB. Any increase in memory size results in an equivalent increase in the CPU available, which can be crucial to optimizing your function’s execution time. You need to pick the best memory size configuration to balance these factors, depending on what your logic requires and your budget.

Optimize for performance

Lambda@Edge functions execute across AWS locations globally on the CloudFront network. When a viewer request hits an edge location, the request is terminated at the edge, and then Lambda@Edge executes the function at an AWS location close to the viewer.

When you compare using a CloudFront distribution with and without using Lambda@Edge, the latency perceived by viewers is different. This difference depends on several factors, including the CloudFront distribution configuration, trigger type, edge location, function code, and application logic. Here are some examples:

  • Consider an origin in the us-east-1 Region with a Lambda function behind an API Gateway. The application dynamically generates 3xx redirects for global viewers with an average first byte latency (FBL) of 260 ms (160 ms network FBL to the us-east-1 Region + 100 ms origin FBL). When redirection logic is moved to a Lambda@Edge function, the average application FBL drops to 110 ms (80 ms CloudFront FBL + 30 ms Lambda@Edge invocation time on a viewer request trigger).
  • Consider the delivery of static HTML files on CloudFront with a cache hit ratio of 95%, where Lambda@Edge is used to add HTTP security headers. When Lambda@Edge is configured to only execute on cache misses, the average FBL increases by only 0.5 ms (5% x 10 ms invocation time on an origin response trigger).

The Lambda@Edge Service team is constantly improving the performance of the service, so the latency figures above will evolve over time.

For each use case, you can improve your viewers’ experience by optimizing your implementation of Lambda@Edge. To do this, explore ways to reduce the Lambda@Edge function execution time and make sure your function executes within the functional and scaling limits set by the service.

First, reduce function execution duration by doing the following:

  • Optimize your function code for performance. For example, reuse the execution context to limit the re-initialization of variables and objects on every invocation. This is especially important if you have an externalized configuration that your Lambda@Edge code retrieves, stores, and references locally. Instead, consider if you can use a static initialization or constructor, global, static variables, and singletons.
  • Optimize the external network calls your function uses. For example, enable TCP Keep Alive and reuse connections (to HTTP, databases, and so on) that you established during previous invocations. In addition, when possible, make network calls to resources in the same region where your Lambda@Edge function is executing to reduce network latency. One way you can do this is by using DynamoDB global tables. Another recommendation is to make an external request to only the resources that you need in your function. For example, you can use query filters in Aurora or S3 Select. Also, if an external variable that you request doesn’t change often, doesn’t vary for different viewers, and doesn’t require immediate propagation, consider using a constant in your code and only update your function when the variable changes.
  • Optimize your deployment package. For example, avoid dependencies on external packages for simple functions that you can write yourself. When you do need an external resource, choose lightweight packages. In addition, use minification tools and bundlers like Browserify to compact your deployment package.

Second, pay attention to the functional and scaling limits of Lambda@Edge, and proactively request a limit increase if you need it.

To illustrate how you can estimate the limits your function needs, consider the following example. Suppose that you are delivering static images with CloudFront at a rate of 5000 requests per second (RPS) and a cache hit ratio of 90%, and that you are using a Lambda@Edge function with an origin response trigger to add HTTP security headers.

Lambda@Edge will be invoked at a rate of 10% x 5000 RPS = 500 RPS. Because this is a simple function, we can estimate an average execution time of 1 ms. To calculate the number of steady-state concurrent executions, add 10 ms to the average execution time and multiply it by the RPS we just calculated. In our example, the calculation is 500 RPS x (10 ms + 1 ms) = 5.5, which is rounded up to 6 concurrent executions. This is well below the default limit of 1000 concurrent executions per account per region.
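The estimate above can be expressed as a small helper, assuming the same 10 ms of overhead added to the average execution time:

```javascript
'use strict';

// Steady-state concurrency estimate: invocations per second times
// (average execution time + 10 ms of overhead), rounded up.
function estimateConcurrency(rps, cacheHitRatio, avgExecMs) {
    const invocationsPerSecond = rps * (1 - cacheHitRatio);
    return Math.ceil((invocationsPerSecond * (avgExecMs + 10)) / 1000);
}
```

With the numbers from the example (5000 RPS, 90% cache hit ratio, 1 ms average execution time), this returns the same 6 concurrent executions.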

Please note that if your traffic profile requires Lambda@Edge functions to burst to hundreds of concurrent executions within a few seconds, you must take some additional factors into consideration. During a sudden burst, Lambda@Edge immediately increases the number of concurrently executing functions by a predetermined amount. If that amount isn’t sufficient to accommodate the traffic surge, Lambda@Edge continues to increase the number of concurrent executions by 500 per minute in each region until either your account safety limit is reached or the concurrency is sufficient to process the increased load. Because of this automatic scaling, during the first minute of a traffic surge, Lambda@Edge’s concurrency is limited to that predetermined amount. Additionally, when scaling out, cold starts can increase function execution times by an order of magnitude or more compared to warm invocations.


By using CloudFront and Lambda@Edge, together with other AWS services like DynamoDB global tables, you can start building high-performing distributed serverless web applications for your use cases. In this blog post, I shared several common Lambda@Edge use cases, and recommended some best practices to improve the performance of your Lambda@Edge implementation, while making sure it fits in your budget. In the next blog post in this series, I’ll explain straightforward ways to develop and test Lambda@Edge functions, and how to integrate Lambda@Edge in your CI/CD pipeline.
