AWS News Blog

Amazon S3 – Object Expiration

Update (March 2020) – In the years that have passed since this post was published, the number of rules that you can define per bucket has been raised from 100 to 1000.


Amazon S3 is a great way to store files for the short or for the long term.

If you use S3 to store log files or other files that have a limited lifetime, you have probably had to build some sort of mechanism in-house to track object ages and to initiate a bulk deletion process from time to time. Although our new Multi-Object Delete function will help you to make this process faster and easier, we want to go even further.

S3’s new Object Expiration function allows you to define rules to schedule the removal of your objects after a pre-defined time period. The rules are specified in the Lifecycle Configuration policy that you apply to a bucket. You can update this policy through the S3 API or from the AWS Management Console.
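As a concrete sketch of updating the policy through the API, here is what a Lifecycle Configuration might look like using the modern boto3 SDK (an assumption on my part; the post itself predates boto3, and the bucket name and rule ID below are hypothetical):

```python
# Sketch: attaching a Lifecycle Configuration to a bucket with boto3.
# The bucket name "my-log-bucket" and rule ID "expire-old-logs" are
# hypothetical examples, not values from the post.

lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-old-logs",      # optional rule name
            "Prefix": "logs/",            # keys beginning with logs/
            "Status": "Enabled",
            "Expiration": {"Days": 30},   # remove ~30 days after creation
        }
    ]
}

def apply_lifecycle(bucket_name, config):
    """Attach the lifecycle configuration to the given bucket."""
    import boto3  # imported here so the config above is usable without AWS
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name,
        LifecycleConfiguration=config,
    )

# apply_lifecycle("my-log-bucket", lifecycle_config)  # requires credentials
```

The same configuration can also be created or edited from the AWS Management Console.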

Each rule has the following attributes:

  • Prefix – Initial part of the key name (e.g. logs/), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
  • Status – Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
  • Expiration – Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object’s creation date.
  • Id – Optional, gives a name to the rule.
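To make the prefix-matching behavior concrete, here is an illustrative sketch (not code from the post) of how a key is matched against a set of rules; a key is subject to an enabled rule when it starts with that rule's prefix, and an empty prefix matches every key:

```python
# Illustrative rules mirroring the attributes described above.
# The rule IDs, prefixes, and day counts are hypothetical examples.
rules = [
    {"Id": "expire-logs", "Prefix": "logs/", "Status": "Enabled", "Days": 30},
    {"Id": "expire-all", "Prefix": "", "Status": "Disabled", "Days": 365},
]

def matching_rules(key, rules):
    """Return the enabled rules whose prefix matches the given key."""
    return [
        r for r in rules
        if r["Status"] == "Enabled" and key.startswith(r["Prefix"])
    ]
```

A key like logs/2011-12-27.txt matches the enabled logs/ rule, while a key outside that prefix matches nothing here, since the empty-prefix rule is disabled.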

You can define up to 100 expiration rules for each of your Amazon S3 buckets; however, the rules must specify distinct prefixes to avoid ambiguity. After an Object Expiration rule is added, the rule is applied to objects that already exist in the bucket as well as any new objects added to the bucket after the rule is created. We calculate the expiration date for an object by adding the expiration period to that object’s creation time and rounding off the resulting time to midnight of that day. If you make a GET or a HEAD request on an object that has been scheduled for expiration, the response will include an x-amz-expiration header that includes this expiration date and the corresponding rule Id.
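The expiration-date arithmetic described above can be sketched as follows (an illustration, not S3's actual implementation; the precise rounding S3 applies, such as which time zone's midnight, is not specified here):

```python
# Sketch of the expiration-date calculation: creation time plus the
# expiration period, rounded off to midnight of that day.
from datetime import datetime, timedelta

def expiration_date(created, days):
    """Add the expiration period to the creation time, round to midnight."""
    expires = created + timedelta(days=days)
    return expires.replace(hour=0, minute=0, second=0, microsecond=0)

# Hypothetical example: an object created mid-day on Dec 27, 2011,
# covered by a 30-day rule, expires at midnight on Jan 26, 2012.
d = expiration_date(datetime(2011, 12, 27, 14, 30), 30)
```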

We evaluate the expiration rules once each day. Any object whose expiration date has passed will be queued for removal. You will not be billed for any associated storage for those objects on or after their expiration date. If server access logging has been enabled for that S3 bucket, an S3.EXPIRE.OBJECT record will be generated when an object is removed.

You can use the Object Expiration feature on buckets whose objects are stored using Standard or Reduced Redundancy Storage. You cannot, however, use it in conjunction with S3 Versioning (this is, as they say, for your own protection). You will have to delete all expiration rules for the bucket before enabling versioning on that bucket.

Using Object Expiration rules to schedule periodic removal of objects can help you avoid having to implement processes to perform repetitive delete operations. We recommend that you use Object Expiration for performing recurring deletions that can be scheduled, and use Multi-Object Delete for efficient one-time deletions.

You can use this feature to expire objects that you create, or objects that AWS has created on your behalf, including S3 logs, CloudFront logs, and data created by AWS Import/Export.

For more information on the use of Object Expiration, please see the Object Expiration topic in the Amazon S3 Developer Guide.

— Jeff;

Modified 08/18/2020 – In an effort to ensure a great experience, expired links in this post have been updated or removed from the original post.
TAGS:
Jeff Barr
Jeff Barr is Chief Evangelist for AWS. He started this blog in 2004 and has been writing posts just about non-stop ever since.