Getting Started Using Amazon S3 Intelligent-Tiering

30-minute tutorial

Overview

Amazon S3 Intelligent-Tiering is an Amazon S3 storage class designed to optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change, without performance impact or operational overhead. S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. This tutorial shows you how to begin storing your data in the Amazon S3 Intelligent-Tiering storage class so that you can start experiencing automatic storage cost savings.

S3 Intelligent-Tiering automatically stores objects in three access tiers: one tier optimized for frequent access, a lower-cost tier optimized for infrequent access, and a very-low-cost tier optimized for rarely accessed data. For a small monthly object monitoring and automation charge, S3 Intelligent-Tiering moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier for savings of 40%. After 90 days of no access, objects are moved to the Archive Instant Access tier for savings of 68%. If the objects are accessed later, S3 Intelligent-Tiering automatically moves them back to the Frequent Access tier.

To save even more on data that doesn’t require immediate retrieval, you can activate the optional asynchronous Archive Access tier and Deep Archive Access tier. When these tiers are turned on, objects not accessed for 90 consecutive days are automatically moved directly to the Archive Access tier, with up to 71% in storage cost savings. After 180 consecutive days of no access, objects are moved to the Deep Archive Access tier, with up to 95% in storage cost savings. If the objects are accessed later, S3 Intelligent-Tiering moves them back to the Frequent Access tier. To retrieve an object stored in the optional Archive Access tier or Deep Archive Access tier, you must initiate a restore request and wait until the object is moved into the Frequent Access tier.
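The tiering rules above can be summarized in a small sketch (plain Python, not an AWS API): given the number of consecutive days since an object was last accessed, and whether the optional archive tiers are activated, it returns the access tier the object would occupy. The 90- and 180-day opt-in thresholds used here are the defaults discussed in this tutorial; the actual thresholds are configurable.

```python
def access_tier(days_since_access, archive_access=False, deep_archive_access=False):
    """Illustrative only: the S3 Intelligent-Tiering access tier an object
    occupies after the given number of consecutive days without access."""
    if deep_archive_access and days_since_access >= 180:
        return "DEEP_ARCHIVE_ACCESS"      # opt-in, asynchronous (restore required)
    if archive_access and days_since_access >= 90:
        return "ARCHIVE_ACCESS"           # opt-in, asynchronous (restore required)
    if days_since_access >= 90:
        return "ARCHIVE_INSTANT_ACCESS"   # automatic, ~68% savings
    if days_since_access >= 30:
        return "INFREQUENT_ACCESS"        # automatic, ~40% savings
    return "FREQUENT_ACCESS"              # any access moves objects back here
```

Accessing an object at any point resets its clock, moving it back to the Frequent Access tier.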

For the workload featured in this tutorial, you will activate only the optional Deep Archive Access tier for objects not accessed for 180 consecutive days.

You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content.

What you will accomplish

  • Create an Amazon S3 bucket
  • Directly upload objects to the Amazon S3 Intelligent-Tiering storage class
  • Transition objects stored in S3 Standard or S3 Standard-Infrequent Access (S3 Standard-IA) to the S3 Intelligent-Tiering storage class
  • Enable the optional S3 Intelligent-Tiering asynchronous archive tiers and achieve the highest storage cost savings for very rarely accessed data
  • Restore your objects stored in the opt-in archiving tiers

Prerequisites

 AWS experience

Beginner

 Time to complete

30 minutes

 Cost to complete

Less than $1 (see the Amazon S3 pricing page)

 Services used

Amazon S3

 Last updated

July 25, 2022

Implementation

    • 1.1 — Sign in to the Amazon S3 console
      • From the AWS console services search bar, enter ‘S3’. Under the services search results section, select S3.
    • 1.2 — Create an S3 bucket
      • In the Amazon S3 menu on the left, choose Buckets, and then choose Create bucket in the Buckets section.
    • 1.3 — Name the bucket and choose a Region
      • Enter a descriptive name for your bucket. Bucket names must be globally unique, so if you encounter an error with the name you selected, try another combination. Then select the AWS Region in which you would like your bucket created.
    • 1.4 — Keep Block Public Access enabled
      • The default Block Public Access setting is appropriate for this workload, so leave the default settings in this section.
    • 1.5 — Leave ACLs disabled
      • Next, leave the default setting with ACLs disabled; ACLs are not necessary for this workload because access to the bucket and its objects is specified using only bucket policies.
    • 1.6 — Add a bucket tag
      • Then, add a bucket tag to help track costs associated with this workload. AWS uses bucket tags to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs. For more information, see Using Cost Allocation Tags in the AWS Billing User Guide.
    • 1.7 — Enable default encryption
      • Now enable Default encryption for the bucket. These settings apply to any objects uploaded to the bucket for which you have not defined at-rest encryption details during the upload process. For this workload, enable server-side encryption with Amazon S3 managed keys (SSE-S3). If your workload requirements are not satisfied by SSE-S3, you can use AWS Key Management Service (AWS KMS) instead. For more information about how Amazon S3 uses AWS KMS, see the AWS Key Management Service Developer Guide.
    • 1.8 — Create the bucket
      • In the Advanced settings, leave Object Lock disabled because this workload doesn’t need it, and then create the S3 bucket by choosing Create bucket.
  • Now that your bucket has been created and configured, you are ready to upload data to the Amazon S3 Intelligent-Tiering storage class.
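The same bucket setup can also be scripted with boto3, the AWS SDK for Python. The sketch below only builds the request payloads; the bucket name, Region, and tag values are examples, and the actual client calls (which require boto3 and AWS credentials) are left as comments.

```python
bucket = "example-intelligent-tiering-tutorial"  # example name; must be globally unique

# Parameters for s3.create_bucket (Region chosen here as an example)
create_params = {
    "Bucket": bucket,
    "CreateBucketConfiguration": {"LocationConstraint": "eu-west-1"},
}

# Default encryption with Amazon S3 managed keys (SSE-S3), as in step 1.7
encryption_params = {
    "Bucket": bucket,
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
}

# Cost-allocation tag from step 1.6 (key and value are examples)
tagging_params = {
    "Bucket": bucket,
    "Tagging": {"TagSet": [{"Key": "workload", "Value": "s3-int-tiering-tutorial"}]},
}

# import boto3
# s3 = boto3.client("s3", region_name="eu-west-1")
# s3.create_bucket(**create_params)
# s3.put_bucket_encryption(**encryption_params)
# s3.put_bucket_tagging(**tagging_params)
```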

    • 2.1 — Upload an object
      • If you have logged out of your AWS Management Console session, log back in. Navigate to the S3 console and select the Buckets menu option. From the list of available buckets, select the bucket name of the bucket you just created.
    • 2.2 — Open the upload form
      • Next, select the Objects tab. Then, from within the Objects section, choose Upload.
    • 2.3 — Add a file
      • Then, in the Upload section, choose Add files. Navigate to your local file system to locate the file you would like to upload. Select the appropriate file, and then choose Open. Your file will be listed in the Files and folders section.
    • 2.4 — Select the Intelligent-Tiering storage class
      • In the Properties section, select Intelligent-Tiering. Leave the rest of the options at their default settings, and choose Upload.
    • 2.5 — Review the upload summary
      • After your file upload has completed, you will be presented with a summary indicating whether the upload succeeded or failed. In this case, the file has uploaded successfully. Then, choose Close.
    You have successfully uploaded your file to your bucket using the S3 Intelligent-Tiering storage class. Next, we will discuss transitioning objects that are already stored in the S3 Standard or in the S3 Standard-IA storage classes to the S3 Intelligent-Tiering storage class.
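When uploading programmatically with boto3, the storage class is set per request. A minimal sketch (bucket and key names are examples; the client call is commented out because it requires credentials):

```python
put_params = {
    "Bucket": "example-intelligent-tiering-tutorial",  # example name
    "Key": "data/report.csv",                          # example key
    "StorageClass": "INTELLIGENT_TIERING",             # upload directly to S3 Intelligent-Tiering
}

# import boto3
# s3 = boto3.client("s3")
# with open("report.csv", "rb") as body:
#     s3.put_object(Body=body, **put_params)
```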
  • When data is programmatically uploaded to Amazon S3, some clients might not be compatible with the S3 Intelligent-Tiering storage class. As a result, those clients will upload the data in the Amazon S3 Standard storage class. In this case, you can use Amazon S3 Lifecycle to immediately transition objects from the S3 Standard storage class to the S3 Intelligent-Tiering storage class.
     
    In this step, you will learn how to set an S3 Lifecycle configuration on your bucket.

    • 3.1 — Navigate to your bucket
      • If you have logged out of your AWS Management Console session, log back in. Navigate to the S3 console and select the Buckets menu option. From the list of available buckets, select the bucket name of the bucket you created in Step 1.
    • 3.2 — Go to the Lifecycle rules section
      • Select the Management tab and then select Create lifecycle rule in the Lifecycle rules section.
    • 3.3 — Create lifecycle rule
      When you create an S3 Lifecycle rule, you have the option to limit the scope of the rule by prefix, by tag, or by object size, specifying a minimum and a maximum object size between 0 bytes and 5 TB. By default, objects smaller than 128 KB are never transitioned to the S3 Intelligent-Tiering storage class because they are not eligible for auto-tiering.

      For this workload, we want to apply the Lifecycle rule to all objects in the bucket, so we won’t apply any filters.
      • Enter a descriptive Lifecycle rule name.
      • Select Apply to all objects in the bucket.
      • Select the I acknowledge that this rule will apply to all objects in the bucket checkbox.
      • In the Lifecycle rule actions, select the Move current versions of objects between storage classes checkbox. For more information, see Using versioning in S3 buckets.
      • In the Transition current versions of objects between storage classes section, under Choose storage class transitions, select Intelligent-Tiering, and enter 0 for Days after object creation.
      • Finally, choose Create rule.

    In this step, we created a Lifecycle rule to immediately transition files uploaded in the S3 Standard storage class into the S3 Intelligent-Tiering storage class.
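The same rule can be expressed as an S3 Lifecycle configuration. The sketch below builds the payload that boto3’s put_bucket_lifecycle_configuration expects (the rule ID and bucket name are examples; the client call is commented out because it requires credentials):

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "all-objects-to-intelligent-tiering",  # example rule name
            "Status": "Enabled",
            "Filter": {},  # empty filter: apply the rule to all objects in the bucket
            "Transitions": [
                # Days=0 transitions newly created objects immediately
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-intelligent-tiering-tutorial",
#     LifecycleConfiguration=lifecycle_config,
# )
```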

  • To save even more on data that doesn’t require immediate retrieval, you can activate the optional asynchronous Archive Access and Deep Archive Access tiers. When these tiers are activated, objects not accessed for 90 consecutive days are automatically moved directly to the Archive Access tier with up to 71% in storage cost savings. Objects are then moved to the Deep Archive Access tier after 180 consecutive days of no access with up to 95% in storage cost savings.

    To access objects archived in the optional asynchronous Archive Access and Deep Archive Access tiers, you first need to restore them. Step 6 of this tutorial will guide you through the restore process.

    For this workload, we will activate only the Deep Archive Access tier.

    • 4.1 — Navigate to your bucket
      • If you have logged out of your AWS Management Console session, log back in. Navigate to the S3 console and select the Buckets menu option. From the list of available buckets, select the bucket name of the bucket you created in Step 1.
    • 4.2 — Open the Properties tab
      • Select the Properties tab.
    • 4.3 — Create an Archive configuration
      • Navigate to the Intelligent-Tiering Archive configurations section and choose Create configuration.
    • 4.4 — Name the configuration
      • In the Archive configuration settings section, specify a descriptive Configuration name for your S3 Intelligent-Tiering Archive configuration.
    • 4.5 — Limit the scope of the configuration
      • For this workload, we want to archive only a subset of the dataset, based on object tags. To do so, under Choose a configuration scope, select Limit the scope of this configuration using one or more filters.
      • In the Object Tags section, choose Add tag, and enter “opt-in-archive” as the Key and “true” as the Value of the tag.
      • Make sure that the Status of the configuration is Enabled.

    • 4.6 — Set the archiving period and create the configuration
      • Objects in the S3 Intelligent-Tiering storage class can be archived to the Deep Archive Access tier after they haven’t been accessed for a configurable period between six months and two years. For this workload, we want to archive objects that haven’t been accessed for six months, to ensure that we archive only data that is not being used. To do so, in the Archive rule actions section, select Deep Archive Access tier, enter 180 as the number of consecutive days without access before objects are archived to the Deep Archive Access tier, and choose Create.
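Programmatically, steps 4.4 through 4.6 correspond to a single Intelligent-Tiering Archive configuration. The sketch below builds the payload for boto3’s put_bucket_intelligent_tiering_configuration (the configuration ID and bucket name are examples; the client call is commented out because it requires credentials):

```python
archive_config = {
    "Id": "opt-in-deep-archive",  # example configuration name
    "Filter": {"Tag": {"Key": "opt-in-archive", "Value": "true"}},  # scope: tagged objects only
    "Status": "Enabled",
    "Tierings": [
        # Archive after 180 consecutive days without access
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
    ],
}

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_intelligent_tiering_configuration(
#     Bucket="example-intelligent-tiering-tutorial",
#     Id=archive_config["Id"],
#     IntelligentTieringConfiguration=archive_config,
# )
```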
  • In Step 4, we enabled the Deep Archive Access tier only for objects with tag “opt-in-archive:true”. Now you’re going to learn how to apply the correct tag during the upload process to enable the Deep Archive Access tier.
    • 5.1 — Navigate to your bucket
      • If you have logged out of your AWS Management Console session, log back in. Navigate to the S3 console and select the Buckets menu option. From the list of available buckets, select the bucket name of the bucket you created in Step 1.
    • 5.2 — Open the upload form
      • Next, select the Objects tab. Then, from within the Objects section, choose Upload.
    • 5.3 — Add a file
      • Then, choose Add files. Navigate to your local file system to locate the file you would like to upload. Select the appropriate file and then choose Open. Your file will be listed in the Files and folders section.
    • 5.4 — Select the Intelligent-Tiering storage class
      • In the Properties section, select Intelligent-Tiering. For more information about the Amazon S3 Intelligent-Tiering storage class, see the Amazon S3 User Guide.
    • 5.5 — Tag the object and upload
      • Because we want the file to be archived after six months of no access, in the Tags – optional section, choose Add tag, enter “opt-in-archive” as the Key and “true” as the Value, and then choose Upload.
    • 5.6 — Review the upload summary
      • After your file upload has completed, you will be presented with a summary indicating whether the upload succeeded or failed. In this case, the file has uploaded successfully. Choose Close.
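In a programmatic upload, steps 5.4 and 5.5 map to two put_object parameters: StorageClass and Tagging, the latter being a URL-encoded string. A sketch with example bucket and key names (the client call is commented out because it requires credentials):

```python
from urllib.parse import urlencode

put_params = {
    "Bucket": "example-intelligent-tiering-tutorial",  # example name
    "Key": "archive-candidates/old-report.csv",        # example key
    "StorageClass": "INTELLIGENT_TIERING",
    # Object tags are passed as a URL-encoded key=value string
    "Tagging": urlencode({"opt-in-archive": "true"}),
}

# import boto3
# s3 = boto3.client("s3")
# with open("old-report.csv", "rb") as body:
#     s3.put_object(Body=body, **put_params)
```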
  • Before you can download a file stored in the Deep Archive Access tier, you must initiate the restore request and wait until the object is moved into the Frequent Access tier.

    In this step, you will learn how to restore a file.
    • 6.1 — Navigate to your bucket
      • If you have logged out of your AWS Management Console session, log back in. Navigate to the S3 console and select the Buckets menu option. From the list of available buckets, select the bucket name of the bucket you created in Step 1.
    • 6.2 — Select the archived file
      • In the Objects tab, select the file stored in the Intelligent-Tiering Deep Archive Access tier.
    • 6.3 — Note that the file must be restored
      • In the Properties tab, you will notice that both the Download and Open buttons are grayed out, and a banner notifies you that in order to access the file you must first restore it.
    • 6.4 — Initiate the restore
      • To initiate the restore, choose Initiate restore.
    • 6.5 — Choose the retrieval option
      • In the Initiate restore form, you can select the type of restore. Bulk retrieval typically completes within 48 hours, while Standard retrieval typically completes within 12 hours; both options are available at no charge. See Archive retrieval options for more information. For this workload, select the Standard retrieval option because we need the restore to complete within 12 hours. Now you can initiate the restore by choosing Initiate restore.
    • 6.6 — Review the restore summary
      • After initiating the restore, you will be presented with a summary indicating whether the restore was initiated successfully or failed. In this case, the restore has initiated successfully. Choose Close.
    • 6.7 — Monitor the restore status
      • In the Properties tab of the file, you can monitor the status of the restoration process.
    • 6.8 — Download the restored file
      • Once the restore operation has completed (typically within 12 hours), you are able to download the file by choosing Download.
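The restore can also be scripted. The sketch below builds the parameters for boto3’s restore_object with example bucket and key names; as a working assumption, no Days value is given, because an S3 Intelligent-Tiering object moves back to the Frequent Access tier rather than producing a temporary restored copy. The client calls are commented out because they require credentials.

```python
restore_params = {
    "Bucket": "example-intelligent-tiering-tutorial",  # example name
    "Key": "archive-candidates/old-report.csv",        # example key
    # Retrieval tier as chosen in step 6.5 (Standard: typically within 12 hours)
    "RestoreRequest": {"GlacierJobParameters": {"Tier": "Standard"}},
}

# import boto3
# s3 = boto3.client("s3")
# s3.restore_object(**restore_params)
#
# head_object reports restore progress in its "Restore" field while the
# restore is still in flight:
# s3.head_object(Bucket=restore_params["Bucket"], Key=restore_params["Key"])
```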
  • In the following steps, you clean up the resources you created in this tutorial. It is a best practice to delete resources that you are no longer using so that you do not incur unintended charges.
    • 7.1 — Delete test objects
      • If you have logged out of your AWS Management Console session, log back in. Navigate to the S3 console and select the Buckets menu option. First you will need to delete the test object(s) from your test bucket. Select the radio button to the left of the bucket you created for this tutorial, and then choose Empty.
      • In the Empty bucket page, type “permanently delete” into the Permanently delete all objects confirmation box. Then, choose Empty to continue.
      • Next, you will be presented with a banner indicating if the deletion has been successful.
    • 7.2 — Delete test bucket
      • Finally, you need to delete the test bucket you have created. Return to the list of buckets in your account. Select the radio button to the left of the bucket you created for this tutorial, and then choose Delete.
      • Review the warning message. If you desire to continue deletion of this bucket, type the bucket name into the Delete bucket confirmation box and choose Delete bucket.
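The cleanup can likewise be scripted: delete every object, then delete the bucket. The sketch below builds the delete_objects payload from an example key list (the client calls are commented out because they require credentials; in practice you would page through list_objects_v2 to collect the keys):

```python
bucket = "example-intelligent-tiering-tutorial"  # example name
keys = ["data/report.csv", "archive-candidates/old-report.csv"]  # example keys

delete_params = {
    "Bucket": bucket,
    "Delete": {
        "Objects": [{"Key": k} for k in keys],
        "Quiet": True,  # suppress per-object results in the response
    },
}

# import boto3
# s3 = boto3.client("s3")
# s3.delete_objects(**delete_params)   # empty the bucket first...
# s3.delete_bucket(Bucket=bucket)      # ...then delete the bucket itself
```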

Congratulations!

You have learned how to create an Amazon S3 bucket, upload objects to the Amazon S3 Intelligent-Tiering storage class, activate the optional Deep Archive Access tier, and restore objects stored in the Deep Archive Access tier.

To learn more about the Amazon S3 Intelligent-Tiering storage class, visit the documentation and product page.