Get Started with AWS for Free

Create a Free Account

AWS Free Tier includes 5GB storage, 20,000 Get Requests, and 2,000 Put Requests with Amazon S3.

View AWS Free Tier Details »



Q: What is Amazon S3?

Amazon S3 is storage for the Internet. It’s a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low cost.

Q: What can I do with Amazon S3?

Amazon S3 provides a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. Using this web service, developers can easily build applications that make use of Internet storage. Since Amazon S3 is highly scalable and you only pay for what you use, developers can start small and grow their application as they wish, with no compromise on performance or reliability. It is designed to be highly flexible: Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application, or a sophisticated web application such as the Amazon.com retail web site. Amazon S3 frees developers to focus on innovation, not figuring out how to store their data.

Q: How can I get started using Amazon S3?

To sign up for Amazon S3, click the “Sign up for This Web Service” button on the Amazon S3 detail page. You must have an Amazon Web Services account to access this service; if you do not already have one, you will be prompted to create one when you begin the Amazon S3 sign-up process. After signing up, please refer to the Amazon S3 documentation and sample code in the Resource Center to begin using Amazon S3.

Q: What are the technical benefits of Amazon S3?

Amazon S3 was carefully engineered to meet the scalability, reliability, speed, low-cost, and simplicity requirements of Amazon’s internal developers. Amazon S3 passes these same benefits on to any external developer. More information about the Amazon S3 design requirements is available on the Amazon S3 detail page.

Q: What can developers do now that they could not before?

Until now, a sophisticated and scalable data storage infrastructure like Amazon’s has been beyond the reach of small developers. Amazon S3 enables any developer to leverage the benefits of Amazon’s massive scale with no up-front investment or performance compromises. Developers are now free to innovate knowing that no matter how successful their businesses become, it will be inexpensive and simple to ensure their data is quickly accessible, always available, and secure.

Q: What kind of data can I store?

You can store virtually any kind of data in any format. Please refer to the Amazon Web Services Licensing Agreement for details.

Q: How much data can I store?

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

Q: How can I use Amazon S3’s lifecycle policy to lower my Amazon S3 storage costs?

With Amazon S3’s lifecycle policies, you can configure your objects to be archived to Amazon Glacier or deleted after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. For example, you could create a rule that archives all objects with the common prefix “logs/” 30 days from creation, and expires these objects after 365 days from creation. You can also create a separate rule that only expires all objects with the prefix “backups/” 90 days from creation.

Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For the time period, provide the date (e.g. January 31, 2013) or the number of days from creation date (e.g. 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes.

Lifecycle policies apply to both existing and new S3 objects, ensuring that you can optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration. For more information, please refer to the Lifecycle Management topic in the Amazon S3 developer guide.
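As a sketch, the two example rules above (archive “logs/” to Amazon Glacier after 30 days and expire it after 365; expire “backups/” after 90 days) could be expressed as the kind of lifecycle configuration document a client library such as boto3 sends. The bucket name and the commented-out API call are illustrative assumptions, not a definitive request format.

```python
# Lifecycle rules from the example above, as a Python structure.
# Bucket name and the (commented) boto3 call are illustrative.
lifecycle_configuration = {
    "Rules": [
        {
            # Archive "logs/" objects to Glacier 30 days after creation,
            # then expire (delete) them 365 days after creation.
            "ID": "archive-then-expire-logs",
            "Prefix": "logs/",
            "Status": "Enabled",
            "Transition": {"Days": 30, "StorageClass": "GLACIER"},
            "Expiration": {"Days": 365},
        },
        {
            # Expire "backups/" objects 90 days after creation.
            "ID": "expire-backups",
            "Prefix": "backups/",
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        },
    ]
}

# Applying the configuration would look something like:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle(Bucket="my-example-bucket",
#                         LifecycleConfiguration=lifecycle_configuration)
```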

Q: How can I delete large numbers of objects?

You can use Multi-Object Delete to delete large numbers of objects from Amazon S3. This feature allows you to send multiple object keys in a single request to speed up your deletes. Amazon does not charge you for using Multi-Object Delete.
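A Multi-Object Delete request accepts up to 1,000 keys, so deleting a very large set of objects means splitting the key list into batches. The sketch below builds the per-request payloads client-side; the bucket name and commented boto3 call are illustrative assumptions.

```python
# Batching keys for Multi-Object Delete: at most 1,000 keys per request.
def delete_batches(keys, batch_size=1000):
    """Yield Multi-Object Delete payloads of at most batch_size keys each."""
    for i in range(0, len(keys), batch_size):
        yield {"Objects": [{"Key": k} for k in keys[i:i + batch_size]]}

# Each payload would then be sent with something like:
#   s3.delete_objects(Bucket="my-example-bucket", Delete=payload)

batches = list(delete_batches([f"tmp/object-{n}" for n in range(2500)]))
# 2,500 keys become 3 requests instead of 2,500 individual DELETEs.
```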

Q: How can I configure my objects to be deleted after a specific time period?

You can use the Object Expiration feature to remove objects from your buckets after a specified number of days. You can define the expiration rules for a set of objects in your bucket through the Lifecycle Configuration policy that you apply to the bucket. Each Object Expiration rule allows you to specify a prefix and an expiration period. The prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). For expiration period, provide the number of days from creation date (i.e. age) after which you want your objects removed. You may create multiple rules for different prefixes. For example, you could create a rule that removes all objects with the prefix “logs/” 30 days from creation, and a separate rule that removes all objects with the prefix “backups/” 90 days from creation.

After an Object Expiration rule is added, the rule is applied to objects that already exist in the bucket as well as new objects added to the bucket. Once objects are past their expiration date, they are identified and queued for removal. You will not be billed for storage for objects on or after their expiration date, though you may still be able to access those objects while they are in queue before they are removed. As with standard delete requests, Amazon S3 doesn’t charge you for removing objects using Object Expiration. You can set Expiration rules for your versioning-enabled or versioning-suspended buckets as well.

For more information on using Expiration feature, please refer to the Object Expiration topic in the Amazon S3 Developer Guide.

Q: What does Amazon do with my data in Amazon S3?

Amazon will store your data and track its associated usage for billing purposes. Amazon will not otherwise access your data for any purpose outside of the Amazon S3 offering, except when required to do so by law. Please refer to the Amazon Web Services Licensing Agreement for details.

Q: Does Amazon store its own data in Amazon S3?

Yes. Developers within Amazon use Amazon S3 for a wide variety of projects. Many of these projects use Amazon S3 as their authoritative data store, and rely on it for business-critical operations.

Q: How is Amazon S3 data organized?

Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. Keys can be any string, and can be constructed to mimic hierarchical attributes.
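Because keys are flat strings, a delimiter such as “/” is enough to mimic folders: listing with a prefix returns only the “directory” you care about. The key names below are illustrative; the filter shows client-side what a prefixed LIST request returns.

```python
# Flat keys constructed to mimic a hierarchy (illustrative names).
keys = [
    "photos/2014/03/beach.jpg",
    "photos/2014/04/hike.jpg",
    "logs/2014-03-31.log",
]

# Client-side illustration of a prefixed listing:
march_photos = [k for k in keys if k.startswith("photos/2014/03/")]

# With a real client this would be roughly:
#   s3.list_objects(Bucket="my-example-bucket", Prefix="photos/2014/03/")
```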

Q: How do I interface with Amazon S3?

Amazon S3 provides simple, standards-based REST and SOAP web services interfaces that are designed to work with any Internet-development toolkit. The operations are intentionally made simple to make it easy to add new distribution protocols and functional layers.

Q: How reliable is Amazon S3?

Amazon S3 gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service was designed for 99.99% availability, and carries a service level agreement providing service credits if a customer’s availability falls below 99.9%.

Q: What data consistency model does Amazon S3 employ?

Amazon S3 buckets in the US Standard region provide eventual consistency. Amazon S3 buckets in all other regions provide read-after-write consistency for PUTs of new objects and eventual consistency for overwrite PUTs and DELETEs.

Q: What happens if traffic from my application suddenly spikes?

Amazon S3 was designed from the ground up to handle traffic for any Internet application. Pay-as-you-go pricing and unlimited capacity ensure that your incremental costs don’t change and that your service is not interrupted. Amazon S3’s massive scale enables us to spread load evenly, so that no individual application is affected by traffic spikes.

Q: What is the BitTorrent™ protocol, and how do I use it with Amazon S3?

BitTorrent is an open source Internet distribution protocol. Amazon S3’s bandwidth rates are inexpensive, but BitTorrent allows developers to further save on bandwidth costs for a popular piece of data by letting users download from Amazon and other users simultaneously. Any publicly available data in Amazon S3 can be downloaded via the BitTorrent protocol, in addition to the default client/server delivery mechanism. Simply add the ?torrent parameter at the end of your GET request in the REST API.
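The torrent form of a GET request is simply the object URL with “?torrent” appended. The bucket and key names below are illustrative.

```python
# Building the BitTorrent form of a GET request (names are illustrative).
bucket, key = "my-example-bucket", "videos/launch.mp4"
object_url = f"https://{bucket}.s3.amazonaws.com/{key}"
torrent_url = object_url + "?torrent"  # fetches a .torrent for the object
```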

Q: Does Amazon S3 offer a Service Level Agreement (SLA)?

Yes. The Amazon S3 SLA provides for a service credit if a customer’s monthly uptime percentage is below our service commitment in any billing cycle. More information can be found here.


Q: Where is my data stored?

You specify a region when you create your Amazon S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities. Please refer to Regional Products and Services for details of Amazon S3 service availability by region.

Q: How do I decide which region to store my data in?

There are several factors to consider based on your specific application. You may want to store your data in a Region that…

  • ...is near to your customers, your data centers, or your other AWS resources in order to reduce data access latencies.
  • ...is remote from your other operations for geographic redundancy and disaster recovery purposes.
  • ...enables you to address specific legal and regulatory requirements.
  • ...allows you to reduce storage costs. You can choose a lower priced region to save money. Please see the pricing section on the S3 detail page.

Q: I’m not in the US or Europe; can I use Amazon S3?

Anyone can use Amazon S3. You just have to decide which region you want Amazon S3 to store your data in.


Q: How much does Amazon S3 cost?

With Amazon S3, you pay only for what you use. There is no minimum fee. You can estimate your monthly bill using the AWS Simple Monthly Calculator.

We charge less where our costs are less. Some prices vary across Amazon S3 Regions and are based on the location of your bucket. There is no Data Transfer charge for data transferred within an Amazon S3 Region via a COPY request. Data transferred via a COPY request between Regions is charged at rates specified on the pricing section of the S3 detail page. There is no Data Transfer charge for data transferred between Amazon EC2 and Amazon S3 within the same Region or for data transferred between the Amazon EC2 Northern Virginia Region and the Amazon S3 US Standard Region. Data transferred between Amazon EC2 and Amazon S3 across all other Regions (e.g. between the Amazon EC2 Northern California and Amazon S3 US Standard Regions) is charged at rates specified on the pricing section of the S3 detail page.

For S3 pricing information, please visit the pricing section on the S3 detail page.

Q: Why do prices vary depending on which Amazon S3 Region I choose?

We charge less where our costs are less. For example, our costs are lower in the US Standard Region than in the US West (Northern California) Region.

Q: How will I be charged and billed for my use of Amazon S3?

There are no set-up fees or commitments to begin using the service. At the end of the month, your credit card will automatically be charged for that month’s usage. You can view your charges for the current billing period at any time on the Amazon Web Services web site, by logging into your Amazon Web Services account, and clicking “Account Activity” under “Your Web Services Account”.

With the AWS Free Usage Tier*, you can get started with Amazon S3 for free in all regions except the AWS GovCloud Region. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 standard storage, 20,000 Get Requests, 2,000 Put Requests, 15 GB of data transfer in, and 15 GB of data transfer out each month for one year.

Amazon S3 charges you for the following types of usage:
Note: The calculations below assume there is no AWS Free Tier in place.

Storage Used:

Amazon S3 storage pricing is summarized on the Amazon S3 Pricing Chart.

The volume of storage billed in a month is based on the average storage used throughout the month. This includes all object data and metadata stored in buckets that you created under your AWS account. We measure your storage usage in “TimedStorage-ByteHrs,” which are added up at the end of the month to generate your monthly charges.

Storage Example:
Assume you store 100GB (107,374,182,400 bytes) of standard Amazon S3 storage data in your bucket for 15 days in March, and 100TB (109,951,162,777,600 bytes) of standard Amazon S3 storage data for the final 16 days in March.

At the end of March, you would have the following usage in Byte-Hours:
Total Byte-Hour usage
= [107,374,182,400 bytes x 15 days x (24 hours / day)] + [109,951,162,777,600 bytes x 16 days x (24 hours / day)] = 42,259,901,212,262,400 Byte-Hours.

Let’s convert this to GB-Months:
42,259,901,212,262,400 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 52,900 GB-Months

This usage volume crosses three different volume tiers. The monthly storage price is calculated below assuming the data is stored in the US Standard Region:
1 TB Tier: 1024 GB x $0.0300 = $30.72
1 TB to 50 TB Tier: 50,176 GB (49×1024) x $0.0295 = $1,480.19
50 TB to 450 TB Tier: 1,700 GB (remainder) x $0.0290 = $49.30

Total Storage Fee = $30.72 + $1,480.19 + $49.30 = $1,560.21
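The tiered arithmetic in this example can be sketched in a few lines of Python. The tier sizes and per-GB prices are the ones quoted above for the US Standard Region; the function is a sketch of how volume tiers compose, not an official pricing calculator.

```python
# Reproducing the storage example: 52,900 GB-Months across three tiers.
def tiered_storage_fee(gb_months):
    """Apply the volume tiers quoted in the example above."""
    tiers = [
        (1024, 0.0300),          # first 1 TB
        (49 * 1024, 0.0295),     # 1 TB to 50 TB
        (float("inf"), 0.0290),  # 50 TB to 450 TB
    ]
    fee, remaining = 0.0, gb_months
    for size, price in tiers:
        billed = min(remaining, size)
        fee += round(billed * price, 2)
        remaining -= billed
        if remaining <= 0:
            break
    return round(fee, 2)

GB = 2 ** 30
# 100 GB for 15 days, then 100 TB for the final 16 days of March:
byte_hours = 100 * GB * 15 * 24 + 100 * 1024 * GB * 16 * 24
gb_months = byte_hours / GB / 744   # 52,900 GB-Months
fee = tiered_storage_fee(gb_months) # $1,560.21
```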

Network Data Transferred In:

Amazon S3 Data Transfer In pricing is summarized on the Amazon S3 Pricing Chart.

This represents the amount of data sent to your Amazon S3 buckets. Data Transfer is $0.000 per GB for buckets in the US Standard, US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), South America (Sao Paulo), and AWS GovCloud (US) Regions.

Network Data Transferred Out:

Amazon S3 Data Transfer Out pricing is summarized on the Amazon S3 Pricing Chart. For Amazon S3, this charge applies whenever data is read from any of your buckets from a location outside of the given Amazon S3 Region.

Data Transfer Out pricing rate tiers take into account your aggregate Data Transfer Out from a given region to the Internet across Amazon EC2, Amazon S3, Amazon RDS, Amazon SimpleDB, Amazon SQS, Amazon SNS and Amazon VPC. These tiers do not apply to Data Transfer Out from Amazon S3 in one AWS region to another AWS region.

Data Transfer Out Example:
Assume you transfer 1TB of data out of Amazon S3 from the US Standard Region to the Internet every day for a given 31-day month. Assume you also transfer 1TB of data out of an Amazon EC2 instance from the same region to the Internet over the same 31-day month.

Your aggregate Data Transfer would be 62 TB (31 TB from Amazon S3 and 31 TB from Amazon EC2). This equates to 63,488 GB (62 TB * 1024 GB/TB).

This usage volume crosses three different volume tiers. The monthly Data Transfer Out fee is calculated below assuming the Data Transfer occurs in the US Standard Region:
10 TB Tier: 10,240 GB (10×1024 GB/TB) x $0.120 = $1,228.80
10 TB to 50 TB Tier: 40,960 GB (40×1024) x $0.090 = $3,686.40
50 TB to 150 TB Tier: 12,288 GB (remainder) x $0.070 = $860.16

Total Data Transfer Out Fee = $1,228.80 + $3,686.40 + $860.16 = $5,775.36

Requests:

Amazon S3 Request pricing is summarized on the Amazon S3 Pricing Chart.

Request Example:
Assume you transfer 10,000 files into Amazon S3 and transfer 20,000 files out of Amazon S3 each day during the month of March. Then, you delete 5,000 files on March 31st.
Total PUT requests = 10,000 requests x 31 days = 310,000 requests
Total GET requests = 20,000 requests x 31 days = 620,000 requests
Total DELETE requests = 5,000 requests x 1 day = 5,000 requests

Assuming your bucket is in the US Standard Region, the Request fees are calculated below:
310,000 PUT Requests: 310,000 requests x $0.005/1,000 = $1.55
620,000 GET Requests: 620,000 requests x $0.004/10,000 = $0.25
5,000 DELETE requests = 5,000 requests x $0.00 (no charge) = $0.00
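The request-fee arithmetic above is straightforward to reproduce; the prices are the ones quoted for the US Standard Region ($0.005 per 1,000 PUTs, $0.004 per 10,000 GETs, DELETEs free). Note that the GET fee of $0.248 is shown rounded to $0.25.

```python
# Reproducing the request-fee example above.
put_fee = 310_000 * 0.005 / 1_000    # $1.55
get_fee = 620_000 * 0.004 / 10_000   # $0.248, shown as $0.25
delete_fee = 5_000 * 0.0             # DELETEs are free
total = round(put_fee, 2) + round(get_fee, 2) + round(delete_fee, 2)
```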

Please see here for details on billing of objects archived to Amazon Glacier.

* Your usage for the free tier is calculated each month across all regions except the AWS GovCloud Region and automatically applied to your bill – unused monthly usage will not roll over. Restrictions apply; see offer terms for more details.

Q: How am I charged for accessing Amazon S3 through the AWS Management Console?

Normal Amazon S3 pricing applies when accessing the service through the AWS Management Console. To provide an optimized experience, the AWS Management Console may proactively execute requests. Also, some interactive operations result in more than one request to the service.

Q: Do your prices include taxes?

Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of the Asia Pacific (Tokyo) Region is subject to Japanese Consumption Tax. Learn more.


Q: How secure is my data?

Amazon S3 is secure by default. Only the bucket and object owners originally have access to Amazon S3 resources they create. Amazon S3 supports user authentication to control access to data. You can use access control mechanisms such as bucket policies and Access Control Lists (ACLs) to selectively grant permissions to users and groups of users. You can securely upload/download your data to Amazon S3 via SSL endpoints using the HTTPS protocol. If you need extra security, you can use the Server Side Encryption (SSE) option or the Server Side Encryption with Customer-Provided Keys (SSE-C) option to encrypt data at rest. Amazon S3 provides the encryption technology for both SSE and SSE-C. Alternatively, you can use your own encryption libraries to encrypt data before storing it in Amazon S3.

Q: How can I control access to my data stored on Amazon S3?

Customers may use four mechanisms for controlling access to Amazon S3 resources: Identity and Access Management (IAM) policies, bucket policies, Access Control Lists (ACLs) and query string authentication. IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, companies can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do. With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address. With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object. With query string authentication, customers can create a URL to an Amazon S3 object which is only valid for a limited time. For more information on the various access control policies available in Amazon S3, please refer to the Access Control topic in the Amazon S3 Developer Guide.
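As an example of a bucket policy that restricts access based on an aspect of the request, the sketch below grants public read on “public/” objects only from a given IP range. The bucket name and CIDR block are illustrative assumptions; the policy grammar follows the standard AWS policy document format.

```python
import json

# A sketch of a bucket policy: public read on "public/*" objects,
# but only from one IP range. Bucket name and CIDR are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadFromOfficeOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/public/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

policy_json = json.dumps(policy)
# Applied with something like:
#   s3.put_bucket_policy(Bucket="my-example-bucket", Policy=policy_json)
```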

Q: Does Amazon S3 support data access auditing?

Yes, customers can optionally configure Amazon S3 buckets to create access log records for all requests made against them. These access log records can be used for audit purposes and contain details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed.

Q: What options do I have for encrypting data stored on Amazon S3?

You can choose to encrypt data using SSE, SSE-C, or a client library such as the Amazon S3 Encryption Client. All three enable you to store sensitive data encrypted at rest on Amazon S3.

SSE provides an integrated solution where Amazon handles key management and key protection using multiple layers of security. You should choose SSE if you prefer to have Amazon manage your keys.

SSE-C enables you to leverage Amazon S3 to perform the encryption and decryption of your objects while retaining control of the keys used to encrypt objects. With SSE-C, you don’t need to implement or use a client-side library to perform the encryption and decryption of objects you store in S3, but you do need to manage the keys that you send to S3 to encrypt objects when storing them on S3. Use SSE-C if you want to maintain your own encryption keys, but don’t want to implement or leverage a client-side encryption library.

Using an encryption client library, you retain control of keys used to encrypt and complete the encryption and decryption of objects client-side using an encryption library of your choice. Some customers prefer full end-to-end control of the encryption and storage of objects; that way, only encrypted objects are transmitted over the Internet to Amazon S3. It is important to note that transmitting unencrypted data over HTTPS/SSL is also secure. Use a client-side library if you want to maintain control of your encryption keys, are able to implement or use a client-side encryption library, and need to have your objects encrypted before they are sent to Amazon S3 for storage.

For more information on using Amazon S3 SSE or SSE-C, please refer to the topic on Using Encryption in the Amazon S3 Developer Guide.
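At the REST level, the difference between SSE and SSE-C shows up as request headers. The sketch below builds both header sets; the randomly generated key is illustrative (with SSE-C you would supply the same key on every request for that object, since S3 uses it and then discards it).

```python
import base64
import hashlib
import os

# SSE: a single header asking S3 to encrypt the object with AES-256.
sse_headers = {"x-amz-server-side-encryption": "AES256"}

# SSE-C: you supply a 256-bit key (base64-encoded) plus its MD5 with
# each request. The key generated here is illustrative.
key = os.urandom(32)
ssec_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key":
        base64.b64encode(key).decode("ascii"),
    "x-amz-server-side-encryption-customer-key-MD5":
        base64.b64encode(hashlib.md5(key).digest()).decode("ascii"),
}
```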

Q: How does Amazon protect SSE encryption keys?

With SSE, every protected object is encrypted with a unique key. This object key is itself encrypted by a separate master key. A new master key is issued at least monthly. Encrypted data, encryption keys and master keys are stored and secured on separate hosts for multiple layers of protection.

Q: Can I comply with EU data privacy regulations using Amazon S3?

Customers can choose to store all data in the EU by using the EU (Ireland) or EU (Frankfurt) region. It is your responsibility to ensure that you comply with EU privacy laws.

Q: Where can I find more information about security on AWS?

For more information on security on AWS please refer to our Amazon Web Services: Overview of Security Processes document.



Q: How durable is Amazon S3?

Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.
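The “one object every 10,000,000 years” figure follows directly from the durability number; the arithmetic is worth making explicit.

```python
# The arithmetic behind the durability claim above.
durability = 0.99999999999           # eleven nines
annual_loss_rate = 1 - durability    # ~1e-11 per object per year
objects = 10_000
expected_losses_per_year = objects * annual_loss_rate  # ~1e-7 objects/year
years_per_lost_object = 1 / expected_losses_per_year   # ~10,000,000 years
```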

Q: How is Amazon S3 designed to achieve 99.999999999% durability?

Amazon S3 redundantly stores your objects on multiple devices across multiple facilities in an Amazon S3 Region. The service is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy. When processing a request to store data, the service will redundantly store your object across multiple facilities before returning SUCCESS. Amazon S3 also regularly verifies the integrity of your data using checksums.

Q: What checksums does Amazon S3 employ to detect data corruption?

Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Q: What is Versioning?

Versioning allows you to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.

Q: Why should I use Versioning?

Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving.

Q: How do I start using Versioning?

You can start using Versioning by enabling a setting on your Amazon S3 bucket. For more information on how to enable Versioning, please refer to the Amazon S3 Technical Documentation.

Q: How does Versioning protect me from accidental deletion of my objects?

When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. You can set Lifecycle rules to manage the lifetime and the cost of storing multiple versions of your objects.

Q: Can I setup a trash, recycle bin, or rollback window on my Amazon S3 objects to recover from deletes and overwrites?

You can use Lifecycle rules along with Versioning to implement a rollback window for your Amazon S3 objects. For example, with your versioning-enabled bucket, you can set up a rule that archives all of your previous versions to the lower-cost Glacier storage class and deletes them after 100 days, giving you a 100 day window to roll back any changes on your data while lowering your storage costs.

Q: How can I ensure maximum protection of my preserved versions?

Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security. By default, all requests to your Amazon S3 bucket require your AWS account credentials. If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession. To learn more about enabling Versioning with MFA Delete, including how to purchase and activate an authentication device, please refer to the Amazon S3 Technical Documentation.

Q: How am I charged for using Versioning?

Normal Amazon S3 rates apply for every version of an object stored or requested. For example, let’s look at the following scenario to illustrate storage costs when utilizing Versioning (let’s assume the current month is 31 days long):

1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.

When analyzing the storage costs of the above operations, please note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:

Total Byte-Hour usage
= [4,294,967,296 bytes x 31 days x (24 hours / day)] + [5,368,709,120 bytes x 16 days x (24 hours / day)] = 5,257,039,970,304 Byte-Hours.

Conversion to Total GB-Months
5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Months

The storage fee is calculated below assuming data is stored in the US Standard Region:
0 to 1 TB Tier: 6.581 GB x $0.0300 = $0.20
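The same computation in Python makes the versioning billing explicit: both versions are billed, the 4 GB original for the full 31 days and the 5 GB overwrite for the final 16 days.

```python
# Reproducing the versioning storage example above.
GB = 2 ** 30
byte_hours = 4 * GB * 31 * 24 + 5 * GB * 16 * 24
gb_months = byte_hours / GB / 744          # ~6.581 GB-Months
fee = round(gb_months * 0.0300, 2)         # first-TB price from above
```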



Q: What is RRS?

Reduced Redundancy Storage (RRS) is a new storage option within Amazon S3 that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. RRS provides a lower cost, less durable, highly available storage option that is designed to sustain the loss of data in a single facility.

Q: Why would I choose to use RRS?

RRS is ideal for non-critical or reproducible data. For example, RRS is a cost-effective solution for sharing media content that is durably stored elsewhere. RRS also makes sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.

Q: What is the durability of Amazon S3 when using RRS?

RRS is designed to provide 99.99% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects. For example, if you store 10,000 objects using the RRS option, you can on average expect to incur an annual loss of a single object (i.e. 0.01% of 10,000 objects). This annual loss represents an expected average; it is not a guarantee that exactly 0.01% of objects will be lost in any given year.

The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage, and thus is even more cost effective. In addition, RRS is designed to sustain the loss of data in a single facility.

Q: How do I know if I lose an RRS object?

If an RRS object has been lost, Amazon S3 will return a 405 error on requests made to that object. Amazon S3 also offers notifications for Reduced Redundancy Storage (RRS) object loss. Customers can configure their bucket so that when Amazon S3 detects the loss of an RRS object, a notification will be sent through Amazon Simple Notification Service (SNS). This enables customers to replace lost RRS objects.

Q: How do I specify that I want to store my data using RRS?

All objects in Amazon S3 have a storage class setting. The default setting is STANDARD. You can use an optional header on a PUT request to specify the setting REDUCED_REDUNDANCY.
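Selecting RRS at write time is just a parameter on the PUT request. The sketch below shows what such a request might look like with a client library such as boto3; the bucket, key, and body are illustrative assumptions.

```python
# A sketch of a PUT that stores an object with reduced redundancy.
# Bucket, key, and body are illustrative.
put_kwargs = {
    "Bucket": "my-example-bucket",
    "Key": "thumbnails/cat-small.jpg",
    "Body": b"...",
    "StorageClass": "REDUCED_REDUNDANCY",  # default is STANDARD
}

# Sent with something like:
#   s3.put_object(**put_kwargs)
```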

Q: Are my RRS objects backed with the Amazon S3 Service Level Agreement?

Yes, you can utilize RRS without sacrificing the availability of your data. RRS is backed with the Amazon S3 Service Level Agreement, providing financial penalties if availability is less than 99.9% in a given month.

Q: How will my performance be impacted as a result of using RRS?

You should expect the same latency and throughput as standard Amazon S3 storage when using RRS.

Q: How am I charged for using RRS?

Storage pricing for RRS can be found on the pricing section of the Amazon S3 detail page. Standard Amazon S3 rates apply for bandwidth and requests.



Q: Does Amazon S3 provide capabilities for archiving objects to lower cost storage options?

Yes, Amazon S3 enables you to utilize Amazon Glacier’s extremely low-cost storage service as storage for data archival. Amazon Glacier stores data for as little as $0.01 per gigabyte per month, and is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance.

Q: How can I store my data using the Amazon Glacier option?

You can use Lifecycle rules to automatically archive sets of Amazon S3 objects to Amazon Glacier based on lifetime. Use the Amazon S3 Management Console, the AWS SDKs or the Amazon S3 APIs to define rules for archival. Rules specify a prefix and time period. The prefix (e.g. “logs/”) identifies the object(s) subject to the rule. The time period specifies either the number of days from object creation date (e.g. 180 days) or the specified date after which the object(s) should be archived. Any Amazon S3 Standard or Reduced Redundancy Storage objects which have names beginning with the specified prefix and which have aged past the specified time period are archived to Amazon Glacier. To retrieve Amazon S3 data stored in Amazon Glacier, initiate a restore job via the Amazon S3 APIs or Management Console. Restore jobs typically complete in 3 to 5 hours. Once the job is complete, you can access your data through an Amazon S3 GET object request.

You can use Lifecycle rules for any of your buckets, including versioned buckets. You can easily archive your object versions after an elapsed time period (a number of days from the date a version is overwritten or expires).

For more information on using Lifecycle rules for archival, please refer to the Object Archival topic in the Amazon S3 Developer Guide.
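As a sketch, a Lifecycle rule that archives objects under the "logs/" prefix to Amazon Glacier 180 days after creation looks like this in the bucket lifecycle configuration XML (the rule ID is an arbitrary label of your choosing):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>archive-logs-after-180-days</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>180</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```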

Q: Can I use the Amazon S3 APIs or Management Console to list objects that I’ve archived to Amazon Glacier?

Yes, like Amazon S3’s other storage options (Standard or Reduced Redundancy Storage), Amazon Glacier objects stored using Amazon S3’s APIs or Management Console have an associated user-defined name. You can get a real-time list of all of your Amazon S3 object names, including those stored using the Amazon Glacier option, using the Amazon S3 LIST API.

Q: Can I use Amazon Glacier APIs to access objects that I’ve archived to Amazon Glacier?

No. Because Amazon S3 maintains the mapping between your user-defined object name and Amazon Glacier’s system-defined identifier, Amazon S3 objects that are stored using the Amazon Glacier option are accessible only through the Amazon S3 APIs or the Amazon S3 Management Console.

Q: How can I restore my objects that are archived in Amazon Glacier?

To restore Amazon S3 data stored in Amazon Glacier, initiate a restore request using the Amazon S3 APIs or the Amazon S3 Management Console. Restore requests typically complete in 3 to 5 hours. The restore request creates a temporary copy of your data in RRS while leaving the archived data intact in Amazon Glacier. You can specify the amount of time in days for which the temporary copy is stored in RRS. You can then access your temporary copy from RRS through an Amazon S3 GET request on the archived object.
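As a sketch, the body of a restore request specifies only how long the temporary RRS copy should be retained; for example, to keep the restored copy available for 7 days:

```xml
<RestoreRequest>
  <Days>7</Days>
</RestoreRequest>
```

After the retention period elapses, the temporary copy is removed while the archived data remains intact in Amazon Glacier.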

Q: How long will it take to restore my objects archived in Amazon Glacier?

When processing a restore job, Amazon S3 first retrieves the requested data from Amazon Glacier (which typically takes 3-5 hours), and then creates a temporary copy of the requested data in RRS (which typically takes on the order of a few minutes). You can expect most restore jobs initiated via the Amazon S3 APIs or Management Console to complete in 3-5 hours.

Q: What am I charged for archiving objects in Amazon Glacier?

Amazon Glacier storage is priced from $0.01 per gigabyte per month. Archive and Restore requests are priced from $0.05 per 1,000 requests. For large restores, there is also a restore fee starting at $0.01 per gigabyte. When an archived object is restored, it resides in both RRS and Glacier; you are charged for both RRS and Glacier storage for as long as the object remains restored, after which point you are charged only for Glacier storage of the object. There is a pro-rated charge of $0.03 per GB for items that are deleted prior to 90 days. Because Amazon Glacier is designed to store data that is infrequently accessed and long lived, these restore and early-deletion charges will likely not apply to most customers. Standard Amazon S3 rates apply for bandwidth. To learn more, please visit the Amazon S3 detail page.

Q: How is my storage charge calculated for Amazon S3 objects archived to Amazon Glacier?

The volume of storage billed in a month is based on average storage used throughout the month, measured in gigabyte-months (GB-Months). Amazon S3 calculates the object size as the amount of data you stored, plus an additional 32 KB of Glacier data, plus an additional 8 KB of S3 standard storage data. Amazon Glacier requires the additional 32 KB per object for Glacier’s index and metadata so you can identify and retrieve your data. Amazon S3 requires the 8 KB to store and maintain the user-defined name and metadata for objects archived to Amazon Glacier. This enables you to get a real-time list of all of your Amazon S3 objects, including those stored using the Amazon Glacier option, using the Amazon S3 LIST API. For example, if you have archived 100,000 objects that are 1 GB each, your billable storage would be:

1.000032 gigabytes for each object x 100,000 objects = 100,003.2 gigabytes of Amazon Glacier storage.
0.000008 gigabytes for each object x 100,000 objects = 0.8 gigabytes of Amazon S3 Standard storage.

If you archive the objects for one month in the US Standard Region, you would be charged:
(100,003.20 GB-Months x $0.0100) + (0.8 GB-Months x $0.0300) = $1,000.056
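The arithmetic above can be reproduced with a short script (the rates and per-object overheads are the ones quoted in this FAQ; actual prices may differ by region):

```python
GLACIER_PRICE = 0.01       # $/GB-Month for Amazon Glacier, as quoted in this FAQ
S3_STANDARD_PRICE = 0.03   # $/GB-Month for S3 Standard (US Standard), as quoted

def monthly_archive_charge(num_objects, object_size_gb):
    """Billable storage and monthly charge for S3 objects archived to Glacier.

    Each object is billed as its size plus 32 KB (0.000032 GB) of Glacier
    index/metadata and 8 KB (0.000008 GB) of S3 standard storage.
    """
    glacier_gb = num_objects * (object_size_gb + 0.000032)
    s3_gb = num_objects * 0.000008
    charge = glacier_gb * GLACIER_PRICE + s3_gb * S3_STANDARD_PRICE
    return glacier_gb, s3_gb, charge

# The FAQ's example: 100,000 objects of 1 GB each.
glacier_gb, s3_gb, charge = monthly_archive_charge(100000, 1.0)
# glacier_gb ~ 100,003.2 GB, s3_gb ~ 0.8 GB, charge ~ $1,000.056
```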

Q: How much data can I restore for free?

You can restore up to 5% of the Amazon S3 data stored in Amazon Glacier for free each month. Typically this will be sufficient for backup and archival needs. Your 5% monthly free restore allowance is calculated and metered on a daily prorated basis. For example, if on a given day you have 12 terabytes of Amazon S3 data archived to Amazon Glacier, you can restore up to 20.5 gigabytes of data for free that day (12 terabytes x 5% / 30 days = 20.5 gigabytes, assuming it is a 30 day month).
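The daily allowance can be sketched as a one-line calculation (assuming 1 TB = 1,024 GB, which matches the FAQ's 12 TB → ~20.5 GB example):

```python
def daily_free_restore_gb(archived_tb, days_in_month=30):
    """Daily prorated free restore allowance: 5% of archived data per month.

    Assumes 1 TB = 1,024 GB and a 30-day month, as in the FAQ's examples.
    """
    archived_gb = archived_tb * 1024
    return archived_gb * 0.05 / days_in_month

allowance = daily_free_restore_gb(12)  # ~20.48 GB, which the FAQ rounds to 20.5
```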

Q: How will I be charged when restoring large amounts of data from Amazon Glacier?

You can restore up to 5% of your archived data, pro-rated daily, for free each month. For example, if on a given day you have 75 TB of S3 data archived in Amazon Glacier, you can restore up to 128 GB of data for free that day (75 terabytes x 5% / 30 days = 128 gigabytes, assuming it is a 30 day month). In this example, 128 GB is your daily free restore allowance. You are charged a Data Restore fee only if you exceed your daily restore allowance. Let's now look at how this Restore Fee - which is based on your monthly peak billable restore rate - is calculated.

Let’s assume you have 75 TB of data archived in Amazon Glacier and you would like to restore 140 GB. The data restore fee you pay is determined by how fast you want to restore the data. For example, you can request all the data at once and pay $21.60, or restore it evenly over eight hours and pay $10.80. If you further spread your restores evenly over 28 hours, your restores would be free because you would be restoring less than 128 GB per day. The more you spread out your restore requests, the lower your peak usage and the lower your cost.

Below we review how to calculate Restore Fees if you archived 75 TB of data and restored 140 GB in 4 hours, 8 hours, and 28 hours respectively.

Example 1: Archiving 75TB of data to Amazon Glacier and restoring 140GB in 4 hours.
First we calculate your peak restore rate. Your peak hourly restore rate each month is equal to the greatest amount of data you restore in any hour over the course of the month. If you initiate several restores in the same hour, these are added together to determine your hourly restore rate. We always assume that a restore request completes in 4 hours for the purpose of calculating your peak restore rate. In this case your peak rate is 140GB/4 hours, which equals 35 GB per hour.

Then we calculate your peak billable restore rate by subtracting the amount of data you get for free from your peak rate. To calculate your free data we look at your daily allowance and divide it by the number of hours in the day that you restored your data. So in this case your free data is 128 GB / 4 hours, or 32 GB free per hour. This makes your peak billable restore rate 35 GB/hour – 32 GB/hour, which equals 3 GB per hour.

To calculate how much you pay for the month we multiply your peak billable restore rate (3 GB per hour) by the data restore fee ($0.01/GB) by the number of hours in a month (720 hrs). So in this instance you pay 3 GB/hour * $0.01 * 720 hours, which equals $21.60 to restore 140 GB in 3-5 hours.

Example 2: Archiving 75TB of data to Amazon Glacier and restoring 140GB in 8 hours.
First we calculate your peak restore rate. Again, for the purpose of calculating your restore fee, we always assume restores complete in 4 hours. If you send requests to restore 140GB of data over an 8 hour period, your peak restore rate would then be 140GB / 8 hours = 17.50 GB per hour. (This assumes that your restores start and end in the same day).

Then we calculate your peak billable restore rate by subtracting the amount of data you get for free from your peak rate. To calculate your free data we look at your daily allowance and divide it by the number of hours in the day that you restored your data. So in this case your free data is 128 GB / 8 hours, or 16 GB free per hour. This makes your peak billable restore rate 17.5 GB/hour – 16 GB/hour, which equals 1.5 GB/hour.

To calculate how much you pay for the month we multiply your peak billable restore rate (1.5 GB/hour) by the restore fee ($0.01/GB) by the number of hours in a month (720 hrs). So in this instance you pay 1.5 GB/hour * $0.01 * 720 hours, which equals $10.80 to restore 140 GB.

Example 3: Archiving 75TB of data to Amazon Glacier and restoring 140GB in 28 hours.
If you spread your restores over 28 hours, you would no longer exceed your daily free retrieval allowance and would therefore not be charged a Data Restore Fee.
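The same-day cases in Examples 1 and 2 can be sketched as a small calculator (rates as quoted in this FAQ; restores spanning multiple days, as in Example 3, would instead need the free allowance applied per day):

```python
RESTORE_FEE_PER_GB = 0.01  # $/GB, as quoted in this FAQ
HOURS_PER_MONTH = 720

def restore_fee(restored_gb, spread_hours, daily_allowance_gb):
    """Monthly restore fee for a restore spread over `spread_hours` in one day.

    A restore request is assumed to complete in 4 hours for the purpose of
    computing the peak rate, so the effective spread is at least 4 hours.
    """
    hours = max(spread_hours, 4)
    peak_rate = restored_gb / hours                  # GB per hour
    free_rate = daily_allowance_gb / hours           # free GB per hour
    billable_rate = max(0.0, peak_rate - free_rate)  # peak billable restore rate
    return billable_rate * RESTORE_FEE_PER_GB * HOURS_PER_MONTH

# With 75 TB archived, the daily free allowance is 128 GB.
fee_4h = restore_fee(140, 4, 128)  # Example 1: ~$21.60
fee_8h = restore_fee(140, 8, 128)  # Example 2: ~$10.80
```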

Q: How am I charged for deleting objects from Amazon Glacier that are less than 3 months old?

Amazon Glacier is designed for use cases where data is retained for months, years, or decades. Deleting data that is archived to Amazon Glacier is free if the objects being deleted have been archived in Amazon Glacier for three months or longer. If an object archived in Amazon Glacier is deleted or overwritten within three months of being archived, there will be an early deletion fee. This fee is prorated: if you delete 1 GB of data 1 month after uploading it, you will be charged an early deletion fee for 2 months of Amazon Glacier storage; if you delete 1 GB after 2 months, you will be charged for 1 month of Amazon Glacier storage.
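The proration above can be sketched as follows (whole months and the FAQ's quoted $0.01/GB-Month rate assumed for simplicity):

```python
GLACIER_PRICE_PER_GB_MONTH = 0.01  # $, as quoted in this FAQ

def early_deletion_fee(size_gb, months_stored):
    """Prorated fee for deleting Glacier data before 3 months (whole months).

    Deleting after 1 month incurs 2 remaining months of storage; after
    2 months, 1 month; after 3 or more months, no fee.
    """
    remaining_months = max(0, 3 - months_stored)
    return size_gb * remaining_months * GLACIER_PRICE_PER_GB_MONTH

fee = early_deletion_fee(1, 1)  # 1 GB deleted after 1 month: ~$0.02
```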



Q: Can I host my static website on Amazon S3?

Yes, you can host your entire static website on Amazon S3 for an inexpensive, highly available hosting solution that scales automatically to meet traffic demands. Amazon S3 gives you access to the same highly scalable, reliable, fast, inexpensive infrastructure that Amazon uses to run its own global network of web sites. The service is designed for 99.99% availability, and carries a service level agreement providing service credits if a customer’s availability falls below 99.9%. To learn more about hosting your website on Amazon S3, please see our walkthrough on setting up an Amazon S3 hosted website.

Q: What kinds of websites should I host using Amazon S3 static website hosting?

Amazon S3 is ideal for hosting websites that contain only static content, including HTML files, images, videos, and client-side scripts such as JavaScript. Amazon EC2 is recommended for websites with server-side scripting and database interaction.

Q: Can I use my own host name with my Amazon S3 hosted website?

Yes, you can easily and durably store your content in an Amazon S3 bucket and map your domain name (e.g. “example.com”) to this bucket. Visitors to your website can then access this content by typing in your website’s URL (e.g., “http://example.com”) in their browser.
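As a sketch, the mapping is typically done with a DNS CNAME record pointing at the bucket's website endpoint (this hypothetical example assumes a bucket named "www.example.com" hosted in the US Standard region; endpoint names vary by region):

```
; Hypothetical DNS record: bucket "www.example.com" in the US Standard region
www.example.com.  CNAME  www.example.com.s3-website-us-east-1.amazonaws.com.
```

Note that the bucket name must match the hostname for the website endpoint to serve the request.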

Q: Does Amazon S3 support website redirects?

Yes, Amazon S3 provides multiple ways to enable redirection of web content for your static websites. Redirects enable you to change the Uniform Resource Locator (URL) of a web page on your Amazon S3 hosted website (e.g. from www.example.com/oldpage to www.example.com/newpage) without breaking links or bookmarks pointing to the old URL. You can set rules on your bucket to enable automatic redirection. You can also configure a redirect on an individual S3 object.
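As a sketch, a bucket-level rule that redirects the old page to the new one can be written in the website configuration's routing-rules XML:

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>oldpage</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyWith>newpage</ReplaceKeyWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```

Alternatively, an individual object can be redirected by setting the `x-amz-website-redirect-location` metadata on that object.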

Q: Is there an additional charge for hosting static websites on Amazon S3?

There is no additional charge for hosting static websites on Amazon S3. The same pricing dimensions of storage, requests, and data transfer apply to your website objects. For S3 pricing information, please visit the pricing section on the S3 detail page.
