How can I increase Amazon S3 request limits to avoid throttling on my Amazon S3 bucket?

Last updated: 2021-04-19

My Amazon Simple Storage Service (Amazon S3) bucket is returning 503 Slow Down errors. How can I increase the Amazon S3 request limits to avoid throttling?

Resolution

You can send 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in an Amazon S3 bucket. There are no limits to the number of prefixes that you can have in your bucket.

Note: LIST and GET requests don't share the same limit. The performance of LIST calls depends on the number of delete markers at the top of the object versions for a given prefix. For more information, see When I run a list command for the objects in my Amazon S3 bucket, the command is unresponsive. How can I troubleshoot this?
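
Because the request-rate limits apply per prefix, spreading objects across more prefixes increases the total request rate that the bucket can sustain. The following is a minimal sketch of one way to do this in Python using hash-based key partitioning. The prefix count and key layout are hypothetical examples and aren't part of this article.

import hashlib

NUM_PREFIXES = 16  # each prefix supports its own 3,500 write / 5,500 read requests per second

def partitioned_key(original_key: str) -> str:
    """Prepend a short hash shard so requests fan out across NUM_PREFIXES prefixes."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    shard = int(digest[:4], 16) % NUM_PREFIXES
    return f"{shard:02d}/{original_key}"

# "logs/2021/04/19/app.log" might become "07/logs/2021/04/19/app.log",
# so reads and writes for different objects land on different prefixes.
print(partitioned_key("logs/2021/04/19/app.log"))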

If you receive only a few 503 Slow Down errors, then you can try to resolve the errors by implementing a retry mechanism with exponential backoff. If the errors persist after you implement retries, then gradually scale up your S3 request workloads. After that, distribute the objects and requests among multiple prefixes. For more information, see Best practices design patterns: Optimizing Amazon S3 performance.
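
The following is a minimal sketch of a retry with exponential backoff in Python using boto3. The function name, bucket, key, attempt count, and backoff cap are illustrative assumptions. The AWS SDKs also include built-in retry behavior that you can tune instead of writing your own loop.

import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_with_backoff(bucket, key, body, max_attempts=8):
    """Retry PUTs that return 503 Slow Down, doubling the wait each attempt (with jitter)."""
    for attempt in range(max_attempts):
        try:
            return s3.put_object(Bucket=bucket, Key=key, Body=body)
        except ClientError as err:
            if err.response["Error"]["Code"] != "SlowDown":
                raise  # only retry throttling errors
            # Exponential backoff with full jitter: up to 1s, 2s, 4s, ... capped at 60s.
            time.sleep(random.uniform(0, min(60, 2 ** attempt)))
    raise RuntimeError(f"Still throttled after {max_attempts} attempts")

put_with_backoff("example-bucket", "logs/app.log", b"example data")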
