How can I scale my Amazon S3 request rate to improve performance?

Last updated: 2022-06-27

I expect my Amazon Simple Storage Service (Amazon S3) bucket to get high request rates. What object key naming pattern should I use to get better performance?

Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits on the number of prefixes in a bucket, so you can increase read or write performance by parallelizing requests across multiple prefixes. Amazon S3 scales dynamically in response to sustained new request rates. While Amazon S3 is optimizing for a new request rate, you might temporarily receive HTTP 503 (Slow Down) responses; retry these requests. After the optimization completes, requests at the new rate no longer return 503 responses.
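The parallelization described above can be sketched as follows. This is a minimal illustration, not a production client: `fetch` is a placeholder standing in for a real S3 GET (for example, `boto3`'s `get_object`), and the prefix names and worker count are assumptions chosen for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Per-prefix GET/HEAD ceiling cited in the article.
GET_LIMIT_PER_PREFIX = 5500

def fetch(key: str) -> str:
    """Placeholder for a real S3 GET (e.g., boto3 s3.get_object).

    Returns the key unchanged so the sketch runs without AWS access.
    """
    return key

def parallel_fetch(keys):
    """Fan requests out across worker threads.

    Spreading keys over multiple prefixes lets each prefix scale
    independently, multiplying the aggregate request rate.
    """
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, keys))

# Keys spread across two prefixes ("images/" and "logs/"). Together,
# the two prefixes support roughly 2 x 5,500 = 11,000 GETs per second.
keys = [f"images/{i}.jpg" for i in range(4)] + [f"logs/{i}.txt" for i in range(4)]
print(parallel_fetch(keys))
print("aggregate GET ceiling:", 2 * GET_LIMIT_PER_PREFIX)
```

In a real application, you would replace `fetch` with an S3 client call and size the thread pool to match your target request rate.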

Because Amazon S3 scales request rates per prefix, randomized key naming patterns (such as adding hash characters to the start of key names) are no longer a best practice for performance.
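To make the change concrete, here is a minimal sketch contrasting the formerly recommended randomized-prefix pattern with a plain logical layout. The function names, hash length, and key layout are illustrative assumptions, not part of any AWS API.

```python
import hashlib

def old_style_key(name: str) -> str:
    """Formerly recommended: a short random-looking hash prefix to spread
    keys across partitions. No longer needed for performance."""
    return hashlib.md5(name.encode()).hexdigest()[:4] + "-" + name

def new_style_key(date: str, name: str) -> str:
    """A plain, human-readable logical prefix is now fine:
    each prefix scales to the per-prefix request rates on its own."""
    return f"logs/{date}/{name}"

print(old_style_key("report.csv"))                # e.g. "ab12-report.csv"
print(new_style_key("2022-06-27", "report.csv"))  # "logs/2022-06-27/report.csv"
```

Readable prefixes also make it easier to parallelize deliberately, since you can route workers by prefix instead of by opaque hash.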

For more information about Amazon S3 performance optimization, see Performance Guidelines for Amazon S3 and Performance Design Patterns for Amazon S3.
