How do I scale my request rate to Amazon S3 and improve performance?


My Amazon Simple Storage Service (Amazon S3) bucket receives high request rates, and I want to improve my request rate performance.

Resolution

Amazon S3 automatically scales by dynamically optimizing performance in response to sustained high request rates. Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix. There is no limit to the number of prefixes in a bucket, so you can parallelize reads and writes across multiple prefixes to increase performance. While Amazon S3 optimizes for a new request rate, you might receive temporary HTTP 503 Slow Down responses until the optimization completes. Because Amazon S3 automatically optimizes its prefixes for request rates, you no longer need to randomize key name prefixes to improve performance.

For more information about Amazon S3 performance optimization, see Performance guidelines for Amazon S3 and Performance design patterns for Amazon S3.
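
To illustrate the parallelization and retry guidance above, here is a minimal Python sketch that uses boto3. The bucket name, prefixes, and worker count are hypothetical placeholders, and botocore's adaptive retry mode is one way to back off when Amazon S3 returns a temporary 503 Slow Down while it scales:

# Minimal sketch (assumptions: "example-bucket" and the "logs/..."
# prefixes are hypothetical; tune max_workers for your workload).
from concurrent.futures import ThreadPoolExecutor, as_completed

import boto3
from botocore.config import Config

# Adaptive retry mode backs off automatically when S3 returns a
# temporary 503 Slow Down while it optimizes for a new request rate.
s3 = boto3.client(
    "s3",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

BUCKET = "example-bucket"                                # hypothetical
PREFIXES = ["logs/a/", "logs/b/", "logs/c/", "logs/d/"]  # hypothetical

def list_keys(prefix):
    # Collect every object key under one prefix.
    paginator = s3.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

def fetch(key):
    # Download one object; retries on 503 are handled by the client config.
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return key, len(body)

if __name__ == "__main__":
    # Each partitioned prefix supports at least 5,500 GET/HEAD requests
    # per second, so spreading reads across prefixes raises throughput.
    keys = [key for prefix in PREFIXES for key in list_keys(prefix)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        futures = [pool.submit(fetch, key) for key in keys]
        for future in as_completed(futures):
            key, size = future.result()
            print(key, size, "bytes")

Because each partitioned prefix supports its own request rate, spreading keys across more prefixes (and adding workers to match) raises the aggregate throughput your application can achieve.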

AWS OFFICIAL
Updated a month ago
2 Comments

The information presented on this page requires an update. It currently states, "Your application can achieve 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket," which might imply that each prefix can attain 3,500/5,500 TPS. The more accurate statement should be, "Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix." The crucial point here is the "per partitioned" aspect. Additionally, it's important to note that auto partitioning happens behind the scenes through S3 service monitors that run automatically, and the process can take from 30 to 60 minutes. Should the customer choose to do so, they can pre-partition by working with AWS Support.

AWS
replied 6 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 6 months ago