Why am I getting 503 Slow Down errors from Amazon S3 when the requests are within the supported request rate per prefix?

Last updated: 2022-11-21

The rate of requests to a prefix in my Amazon Simple Storage Service (Amazon S3) bucket is within the supported request rates per prefix. But I'm still getting 503 Slow Down errors. Why am I getting these errors, and how can I resolve them?

Resolution

Amazon S3 supports a request rate of 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. The resources for this request rate aren't automatically assigned when a prefix is created. Instead, as the request rate for a prefix increases gradually, Amazon S3 automatically scales to handle the increased request rate.

Note: Amazon S3 doesn't assign additional resources as you create prefixes. Instead, Amazon S3 scales based on the request patterns that it observes. If you make requests at a rate close to the limit before Amazon S3 has scaled, then S3 returns 503 errors. Maintain a steady request rate and implement retries with exponential backoff. This gives Amazon S3 time to monitor the request patterns and scale in the backend to handle the request rate.
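For example, with the AWS SDK for Python (Boto3), you can turn on the SDK's built-in retry modes instead of writing your own retry loop. The following is a minimal sketch; the bucket name, key, and retry settings are placeholder assumptions:

import boto3
from botocore.config import Config

# Ask botocore to retry throttled requests automatically.
# "adaptive" mode adds client-side rate limiting on top of the
# exponential backoff that "standard" mode provides.
retry_config = Config(
    retries={
        "max_attempts": 10,  # total attempts, including the first call
        "mode": "adaptive",
    }
)

s3 = boto3.client("s3", config=retry_config)

# Throttled (503 Slow Down) responses are now retried with backoff
# inside the SDK before an error is raised to the caller.
s3.put_object(Bucket="example-bucket", Key="logs/app.log", Body=b"hello")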

If there's a sudden increase in the request rate for objects in a prefix, then Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased rate. To avoid these errors, configure your application to gradually increase the request rate, and retry failed requests with an exponential backoff algorithm.
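If you implement retries yourself, exponential backoff with jitter is the usual pattern. The following is a sketch, assuming Boto3; put_with_backoff is a hypothetical helper, and S3 reports throttling as HTTP 503 with the error code SlowDown:

import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_with_backoff(bucket, key, body, max_attempts=8, base=0.5, cap=30.0):
    """Retry throttled PUT requests with full-jitter exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return s3.put_object(Bucket=bucket, Key=key, Body=body)
        except ClientError as err:
            # Re-raise anything that isn't a throttling error.
            if err.response["Error"]["Code"] != "SlowDown":
                raise
            # Sleep a random interval in [0, min(cap, base * 2**attempt)).
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    raise RuntimeError(f"s3://{bucket}/{key} still throttled after {max_attempts} attempts")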

If your workload exceeds the supported request rates, then it's a best practice to distribute objects and requests across multiple prefixes.
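One way to do this is to prepend a short hash-derived shard to each key, so that objects and their request load fan out across a fixed set of prefixes. This is an illustrative sketch; the fanout of 16 and the key layout are assumptions that depend on your workload:

import hashlib

def prefixed_key(key: str, fanout: int = 16) -> str:
    """Prepend a hash-derived shard so keys spread across `fanout` prefixes."""
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % fanout
    return f"{shard:02x}/{key}"

# "logs/2022/11/21/app.log" becomes, for example, "07/logs/2022/11/21/app.log",
# so reads and writes spread over 16 prefixes instead of one.

Note that you must apply the same mapping when you read the objects back.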

Note: If you have a Developer, Business, or Enterprise Support plan, then you can open a technical support case about 503 errors. Before you do, make sure that you followed these best practices, and gather the request IDs for the failed requests.

