Why am I getting 503 Slow Down errors from Amazon S3 when the requests are within the supported request rate per prefix?

Last updated: 2020-01-14

The rate of requests to a prefix in my Amazon Simple Storage Service (Amazon S3) bucket is within the supported request rates per prefix. However, I'm still getting 503 Slow Down errors. Why is this happening?

Resolution

Amazon S3 supports a request rate of 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket. The capacity to serve that rate isn't allocated automatically when a prefix is created. Instead, Amazon S3 scales up in the background as the request rate for a prefix gradually increases.

If the request rate for objects in a prefix spikes suddenly, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased load. To avoid these errors, configure your application to increase its request rate gradually and to retry failed requests with an exponential backoff algorithm, as in the sketch that follows.
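
For illustration, here is a minimal sketch of that retry pattern in Python, assuming boto3 and hypothetical bucket and key names. The AWS SDKs also ship built-in retry modes (for example, botocore's standard and adaptive retry configurations), so in practice you can often just enable those instead of hand-rolling the loop:

import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_with_backoff(bucket, key, body, max_attempts=8):
    # Retry 503 Slow Down responses with exponential backoff plus full jitter.
    for attempt in range(max_attempts):
        try:
            return s3.put_object(Bucket=bucket, Key=key, Body=body)
        except ClientError as err:
            code = err.response.get("Error", {}).get("Code", "")
            if code not in ("SlowDown", "503"):
                raise  # not a throttling error, so don't retry
            # Sleep up to (2 ** attempt) * 100 ms, randomized so that many
            # clients don't retry in lockstep.
            time.sleep(random.uniform(0, (2 ** attempt) * 0.1))
    raise RuntimeError(f"still throttled after {max_attempts} attempts: {key}")

# Hypothetical usage:
# put_with_backoff("my-bucket", "logs/2020/01/14/app.log", b"payload")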

In the rare cases where your workload genuinely exceeds the supported request rates, it's a best practice to distribute objects and requests across multiple prefixes, for example by adding a short hash to each key name (see the sketch below).
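
One common way to do that, sketched here with a hypothetical key layout, is to prepend a short deterministic hash to each key so that traffic fans out across many prefixes instead of concentrating on one:

import hashlib

def spread_key(key):
    # A two-character hex shard spreads keys across up to 256 prefixes.
    shard = hashlib.md5(key.encode("utf-8")).hexdigest()[:2]
    return f"{shard}/{key}"

# Hypothetical example: "logs/2020/01/14/app.log" might become
# "a3/logs/2020/01/14/app.log", so reads and writes no longer all
# land on the single "logs/" prefix.

The trade-off is that listing objects in their natural key order now requires iterating over the shards.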

Note: If you've followed these best practices, you have a Developer, Business, or Enterprise Support plan, and you want to open a technical support case about 503 errors, then be sure to collect the request IDs of the failed requests first.
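
Here is a sketch of one way to capture those IDs with boto3 (the bucket and key names are hypothetical). S3 returns the request ID and the extended request ID in the x-amz-request-id and x-amz-id-2 response headers, and botocore exposes them in the error's response metadata:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    s3.get_object(Bucket="my-bucket", Key="some/key")  # hypothetical names
except ClientError as err:
    meta = err.response.get("ResponseMetadata", {})
    # Quote both values when you open the support case.
    print("Request ID (x-amz-request-id):", meta.get("RequestId"))
    print("Extended request ID (x-amz-id-2):", meta.get("HostId"))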

