Amazon S3 delivers strong read-after-write consistency automatically for all applications, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost. With strong consistency, S3 simplifies the migration of on-premises analytics workloads by removing the need to make changes to applications, and reduces costs by removing the need for extra infrastructure to provide strong consistency.

After a successful write of a new object, or an overwrite or delete of an existing object, any subsequent read request immediately receives the latest version of the object. S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with any changes reflected. 

Introducing strong consistency for Amazon S3, featuring Dropbox (4:36)

What is strong consistency?

Amazon S3 pioneered object storage in the cloud with high availability, performance, and virtually unlimited scalability, with eventual consistency. Millions of customers of all sizes and industries have used Amazon S3 to store and protect any amount of data for a range of use cases. Increasingly, customers are using big data analytics applications that require access to an object immediately after a write. Without strong consistency, you would need to insert custom code into these applications, or provision databases, to keep objects consistent with changes in Amazon S3 across millions or billions of objects.

Amazon S3 now delivers strong read-after-write and list consistency automatically for all applications.

Read the documentation to learn more about the Amazon S3 consistency model.

Benefits

Reduced complexity

Strong read-after-write consistency and strong consistency for list operations are automatic, so you no longer need to use workarounds or make changes to your applications.

Cost-efficiency

S3 consistency is available at no additional cost and removes the need for additional third-party services and complex architectures.

Data consistency

After a successful write of a new object or an overwrite of an existing object, applications can immediately read the object and receive the latest version. You can also list the objects in a bucket immediately after a write, and any changes are reflected in the results returned.

How strong consistency for Amazon S3 works


For all existing and new objects, and in all regions, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket. For more details, read the documentation.

Customers

“Strong read-after-write consistency is a huge win for us because it helps drive down the time to complete our data processing pipelines by removing the software overhead needed to deliver consistency, and it also reduces our cost to operate the data lake by simplifying the data access architecture.”

Ashish Gandhi, Technical Lead Data Infrastructure - Dropbox

Read the case study 
Salesforce
"We’ve been using Amazon S3 and the new strong consistency model to enable users to access the petabytes of log data in production systems around the world. Strong consistency is important for our Presto-Hive based data processing workflows. Before the change in consistency model, we were planning for edge cases where eventually consistent directory listings could produce incorrect query results. Now, with S3 strong consistency, we are confident that our data platform will always provide accurate and consistent query results."

Anil Ranka, Senior Director - Infrastructure Engineering - Salesforce

Vincent Poon, Principal Engineer - Salesforce 

Performance

Amazon S3 provides industry-leading performance for cloud object storage. Amazon S3 supports parallel requests, which means you can scale your S3 performance with the size of your compute cluster, without making any customizations to your application. Performance scales per prefix, so you can use as many prefixes as you need in parallel to achieve the required throughput; there is no limit to the number of prefixes. Each S3 prefix supports at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, making it simple to increase performance significantly.

You do not need to randomize object prefixes to achieve this request rate performance. That means you can use logical or sequential naming patterns for S3 objects without any performance implications. Refer to the Performance Guidelines for Amazon S3 and Performance Design Patterns for Amazon S3 for the most current information about performance optimization.
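Because the request rates above apply per prefix, spreading writes across several prefixes and issuing them in parallel multiplies available throughput. A sketch of that pattern (the prefix names and the round-robin `spread_keys` helper are illustrative, not an official API; `boto3` is assumed for the upload step):

```python
from concurrent.futures import ThreadPoolExecutor


def spread_keys(keys, prefixes):
    """Assign each key to a prefix round-robin, so request load is
    spread evenly and each prefix's request-rate budget is used."""
    return [f"{prefixes[i % len(prefixes)]}/{k}" for i, k in enumerate(keys)]


def upload_parallel(bucket, keyed_bodies, workers=16):
    """Upload (key, body) pairs concurrently; parallel requests are
    how S3 throughput scales with your compute."""
    import boto3  # imported here so the pure helper above has no SDK dependency

    s3 = boto3.client("s3")
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(s3.put_object, Bucket=bucket, Key=k, Body=b)
            for k, b in keyed_bodies
        ]
        for f in futures:
            f.result()  # re-raise any upload error


# Example of the helper alone (no network access needed):
# spread_keys(["a.csv", "b.csv", "c.csv"], ["p0", "p1"])
# -> ["p0/a.csv", "p1/b.csv", "p0/c.csv"]
```

Sequential names within each prefix are fine, per the guidance above; the win comes from the number of prefixes and the parallelism, not from key randomization.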

Ready to get started?

Learn about S3 pricing

Pay only for what you use. There is no minimum fee.

Learn more 
Sign up for a free account

Instantly get access to the AWS Free Tier and start experimenting with Amazon S3. 

Sign up 
Start building with S3
Start building in the console

Get started building with Amazon S3 in the AWS Console.

Get started