How do I troubleshoot Lambda function throttling with "Rate exceeded" and 429 "TooManyRequestsException" errors?
Last updated: 2020-02-20
I'm getting "Rate exceeded" and 429 "TooManyRequestsException" errors for my AWS Lambda function. Why is my function throttled?
Throttling is intended to protect your resources and downstream applications. Though Lambda automatically scales to accommodate your incoming traffic, your function can still be throttled for various reasons. Follow these instructions to troubleshoot the cause.
Verify what's throttled
The throttling that you're seeing might not be on your Lambda function itself. Throttling can also occur on AWS API calls that your function makes during invocation.
- Check whether you see throttling messages in Amazon CloudWatch Logs without corresponding data points in the Lambda Throttles metric. If there are no Throttles data points, then the throttling is happening on API calls made by your function code, not on the function itself.
- Check your function code for any throttled API calls. If certain API calls are throttled, be sure to use exponential backoff in your code to retry the API calls.
- If you determine that you need a higher transactions per second (TPS) quota for an API call, you can request a service quota increase, if the quota is adjustable.
Check concurrency metrics
- Review your Lambda metrics in Amazon CloudWatch. Check the ConcurrentExecutions metric for your function in the AWS Region where you see throttling.
- Compare the ConcurrentExecutions metric with the Throttles metric for the same timestamp. (View the Maximum statistic for ConcurrentExecutions and the Sum statistic for Throttles.) See if the maximum ConcurrentExecutions value is close to your account-level concurrency quota in the Region, with corresponding data points in the Throttles graph.
- Check if you're exceeding the initial burst concurrency quota for a particular Region. On the Metrics page for Lambda in the CloudWatch console, reduce the graph's time range to one minute. If you're limited by burst scaling, then you see a sudden spike of Throttles that corresponds to a stair-step pattern of ConcurrentExecutions on the graph. To work around burst concurrency limits, you can configure provisioned concurrency.
- Check for spikes in Duration metrics for your function. Concurrency depends on function duration. If your code is taking too long to execute, it might not have enough compute resources. Try increasing the function's memory setting. Then, use AWS X-Ray and CloudWatch Logs to isolate the cause of duration increases. If your function is in an Amazon Virtual Private Cloud (Amazon VPC), see How do I give internet access to my Lambda function in a VPC? for more information.
Note: Changing the memory setting can affect the charges that you incur for execution time.
- Check for an increase in Error metrics for your function. Increased errors can lead to retries and cause an overall increase in invocations. (For asynchronous invocations, Lambda retries failed invocations two more times.) Increased invocations can lead to an increase in concurrency. Use CloudWatch Logs to identify and eliminate errors, and have your function code handle exceptions.
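The metric checks above can be scripted. The following is a minimal sketch using boto3 (assumed to be installed, with AWS credentials configured); it builds the same CloudWatch queries described in this section, at one-minute resolution so that burst-scaling throttles are visible:

```python
from datetime import datetime, timedelta, timezone

def metric_params(function_name, metric_name, statistic, minutes=60):
    """Build GetMetricStatistics parameters for a per-function Lambda metric."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Lambda",
        "MetricName": metric_name,
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
        "Period": 60,  # one-minute resolution helps spot burst-scaling throttles
        "Statistics": [statistic],
    }

def fetch_throttling_metrics(function_name):
    """Fetch ConcurrentExecutions (Maximum) and Throttles (Sum) for comparison.

    Requires AWS credentials and network access; call from your own tooling.
    """
    import boto3  # imported here so the helpers above stay dependency-free

    cloudwatch = boto3.client("cloudwatch")
    concurrency = cloudwatch.get_metric_statistics(
        **metric_params(function_name, "ConcurrentExecutions", "Maximum"))
    throttles = cloudwatch.get_metric_statistics(
        **metric_params(function_name, "Throttles", "Sum"))
    return concurrency, throttles
```

For example, `fetch_throttling_metrics("my-function")` (where "my-function" is a placeholder name) returns both responses so you can line up their Datapoints by Timestamp.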
Configure reserved concurrency
- Verify if you've configured reserved concurrency on your Lambda function. Check the setting using the Lambda console, or by calling the GetFunction API.
Note: If a function is configured to have zero reserved concurrency, then the function is throttled because it can't process any events. Be sure to increase the value to a number greater than zero.
- Review the Maximum statistic of the ConcurrentExecutions metric in CloudWatch to see if your function reaches its reserved concurrency value at any point.
- Increase the reserved concurrency for your function to a concurrency value that keeps it from being throttled. Change the setting using the Lambda console, or by calling the PutFunctionConcurrency API.
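The GetFunctionConcurrency and PutFunctionConcurrency calls above can be sketched with boto3 (an assumption; the console works equally well). The guard against a value of zero reflects the note in this section:

```python
def check_reserved_concurrency(function_name):
    """Return the function's reserved concurrency, or None if none is configured."""
    import boto3  # requires AWS credentials

    response = boto3.client("lambda").get_function_concurrency(
        FunctionName=function_name)
    # The key is absent when no reserved concurrency is set.
    return response.get("ReservedConcurrentExecutions")

def set_reserved_concurrency(function_name, value):
    """Set reserved concurrency. A value of 0 throttles every invocation."""
    if value <= 0:
        raise ValueError("Reserved concurrency of 0 throttles all invocations")
    import boto3

    boto3.client("lambda").put_function_concurrency(
        FunctionName=function_name, ReservedConcurrentExecutions=value)
```

For example, if `check_reserved_concurrency("my-function")` returns 0, raise it with `set_reserved_concurrency("my-function", 100)` (the function name and value are placeholders).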
Use exponential backoff in your app
As a best practice, retry throttled requests by using exponential backoff in your application that's calling your Lambda function.
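A minimal, library-agnostic sketch of exponential backoff with full jitter is below. The ThrottledError class is a stand-in for whatever throttling signal your client raises; with boto3 you would instead catch botocore.exceptions.ClientError and check for the "TooManyRequestsException" error code:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for a throttling response such as TooManyRequestsException."""

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter backoff: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_backoff(fn, max_attempts=5):
    """Call fn(), retrying throttled requests with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the throttle to the caller
            time.sleep(backoff_delay(attempt))
```

You would wrap your invocation in a closure, for example `call_with_backoff(lambda: lambda_client.invoke(FunctionName="my-function"))`, where `lambda_client` and the function name are assumptions. Full jitter spreads retries out so that many throttled callers don't all retry at the same instant.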
Use a dead-letter queue
If you're using asynchronous event sources such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch Events, configure your function with a dead-letter queue (DLQ) to catch any events that are discarded because of repeated throttling. This can protect your data if you're seeing significant throttling.
Note: For Amazon Simple Queue Service (Amazon SQS) event sources, you must configure the DLQ on the Amazon SQS queue.
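Attaching a DLQ to the function can be done through UpdateFunctionConfiguration. A sketch with boto3 (an assumption; the console works too), where the queue ARN is a placeholder for an SQS queue or SNS topic that you own:

```python
def dead_letter_config(target_arn):
    """Build the DeadLetterConfig payload for UpdateFunctionConfiguration."""
    # The DLQ target for a Lambda function must be an SQS queue or SNS topic.
    if ":sqs:" not in target_arn and ":sns:" not in target_arn:
        raise ValueError("DLQ target must be an SQS queue or SNS topic ARN")
    return {"DeadLetterConfig": {"TargetArn": target_arn}}

def configure_dlq(function_name, target_arn):
    """Attach a dead-letter queue for events dropped after failed async invokes."""
    import boto3  # requires AWS credentials

    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name, **dead_letter_config(target_arn))
```

For example, `configure_dlq("my-function", "arn:aws:sqs:us-east-1:123456789012:my-dlq")` (all names and the account ID are placeholders). The function's execution role must also have permission to send to the target.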
Request a service quota increase
If your workload requires more concurrency than the current account-level quota in the Region, you can request a quota increase through Service Quotas, if the quota is adjustable.
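The request can also be made with the Service Quotas API. A sketch using boto3 (an assumption), which looks up the Lambda concurrency quota by name rather than hard-coding a quota code; the quota name "Concurrent executions" is an assumption you should verify in your account:

```python
def find_quota(quotas, quota_name):
    """Find a quota record by name in one ListServiceQuotas response page."""
    for quota in quotas:
        if quota["QuotaName"] == quota_name:
            return quota
    return None

def request_concurrency_increase(desired_value):
    """Request a higher Lambda concurrency quota, if the quota is adjustable."""
    import boto3  # requires AWS credentials

    client = boto3.client("service-quotas")
    pages = client.get_paginator("list_service_quotas").paginate(
        ServiceCode="lambda")
    quota = None
    for page in pages:
        quota = find_quota(page["Quotas"], "Concurrent executions")
        if quota:
            break
    if quota is None or not quota["Adjustable"]:
        raise RuntimeError("Quota not found or not adjustable")
    return client.request_service_quota_increase(
        ServiceCode="lambda",
        QuotaCode=quota["QuotaCode"],
        DesiredValue=float(desired_value),
    )
```

Quota increases aren't granted automatically; the call opens a request that AWS reviews, so plan ahead rather than relying on an immediate change.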