AWS Storage Blog

Managing resources effectively on Amazon S3 using AWS CloudFormation

Effectively storing and managing data has become a critical factor in many organizations’ success – and the amount of data stored, analyzed, and moved continues to increase rapidly. Many organizations use Amazon S3 for simply storing their data in its native format, a benefit of object storage in S3 buckets. Oftentimes, that data in S3 becomes a foundational piece of application designs critical to an organization. As an organization, and the amount of data it collects, grows, it becomes essential to have guidelines in place for efficient synthesis and optimal management of those resources on S3. Without consistently enacting guidelines and automating processes, organizations can struggle to manage their data at scale, resulting in issues around security, compliance, and application performance – all detriments to organizational success.

AWS CloudFormation provides infrastructure as code (IaC) capability to customers that helps them effectively and efficiently handle the provisioning process of Amazon S3 buckets for their data at scale. Customers can use CloudFormation to ensure consistent automated processes, like making sure buckets are created with the right security guardrails – every time.

In this blog post, I have categorized best practices for using CloudFormation to manage Amazon S3 resources into three main sections: planning, security, and monitoring and logging. This CloudFormation template demonstrates the Amazon S3 properties discussed throughout this post. The AWS CloudFormation resources described in this post can be used in your own custom template to help you automate and scale your resource management on Amazon S3. Using these resources can minimize both management overhead and cost – both essential to any business’s bottom line.


Planning

In this section, I discuss Amazon S3 bucket naming considerations and how to properly configure resources in your CloudFormation stack.

S3 bucket name considerations

  • Each S3 bucket name is globally unique, and all AWS accounts share the namespace. In general, avoid using generic names as bucket names; instead, use CloudFormation pseudo parameters, such as AWS::Region or AWS::StackName, to create unique bucket names. To specify a bucket name, use the BucketName property.
  • If you do not specify a name, AWS CloudFormation generates a unique ID and uses that ID for the bucket name. If you specify a BucketName, then you cannot perform updates that require replacement of this resource. If you must replace the resource, specify a new name. Additionally, you can set the UpdateReplacePolicy attribute to Retain so that the old physical bucket is removed from AWS CloudFormation’s scope but still exists in your account.
      SampleS3Bucket:
        Type: AWS::S3::Bucket
        DeletionPolicy: Retain
        UpdateReplacePolicy: Retain
        Properties:
          BucketName: !Sub bucket-${AWS::AccountId}-${AWS::Region}-sample

Configuring resources in your CloudFormation stack

In this section, I cover best practices for setting up your CloudFormation stack to meet your requirements.

Use CloudFormation ChangeSets to update stacks with critical S3 buckets

Avoid performing direct update operations on CloudFormation stacks that contain critical S3 resources. Instead, use change sets to preview the changes CloudFormation will make to your stack, and then decide whether to apply those changes.
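As a minimal CLI sketch of this workflow (the stack name, change set name, and template file below are placeholders), you can create, review, and then execute a change set:

```shell
# Create a change set instead of updating the stack directly
aws cloudformation create-change-set \
    --stack-name my-s3-stack \
    --change-set-name preview-bucket-changes \
    --template-body file://template.yaml

# Review the proposed changes, including any Replacement actions on S3 buckets
aws cloudformation describe-change-set \
    --stack-name my-s3-stack \
    --change-set-name preview-bucket-changes

# Apply the changes only after confirming they are safe
aws cloudformation execute-change-set \
    --stack-name my-s3-stack \
    --change-set-name preview-bucket-changes
```

Pay particular attention to any resource in the change set whose action is Modify with Replacement set to True, because replacing an S3 bucket deletes the original resource from the stack.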

Cross-account S3 bucket creation

CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation. Use StackSets to create S3 buckets in different accounts, and add S3 bucket policies appropriately.
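As a sketch of this approach (the stack set name, template file, account IDs, and Regions are placeholders), you can create a stack set and then deploy stack instances across accounts and Regions in one operation:

```shell
# Create a stack set from a template that defines the S3 bucket
aws cloudformation create-stack-set \
    --stack-set-name s3-buckets \
    --template-body file://template.yaml

# Deploy stack instances to multiple accounts and Regions with a single call
aws cloudformation create-stack-instances \
    --stack-set-name s3-buckets \
    --accounts 111111111111 222222222222 \
    --regions us-east-1 eu-west-1
```

Note that StackSets requires the appropriate administration and execution IAM roles (or service-managed permissions through AWS Organizations) to be in place before stack instances can be created.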

Add tags to Amazon S3 resources

Adding tags to resources helps you identify, manage, and categorize resources by purpose, owner, environment, or other criteria. To enable tags for an S3 bucket resource, use the Tags property and add an arbitrary set of key-value pairs:

      SampleS3Bucket:
        Type: AWS::S3::Bucket
        DeletionPolicy: Retain
        Properties:
          Tags:
            - Key: name
              Value: samples3bucket


After specifying the bucket name, you should configure the important bucket properties for security and access control, data protection, and S3 Block Public Access.

Security and access control

In this section, I cover different tools and features you can use to secure your Amazon S3 resources. With granular controls over access and permissions, organizations can meet their compliance requirements, and with CloudFormation they can do so with minimal management overhead.

Enable bucket policy

S3 bucket policies can be used for granting permission to Amazon S3 resources. Customers can specify which actions are allowed or denied for which principals on the bucket that the bucket policy is attached to. To add an S3 bucket policy, use the resource type AWS::S3::BucketPolicy to control access to the S3 bucket. The following example denies any request that does not use HTTPS:

    S3BucketPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket:
          Ref: S3Bucket
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Action:
                - 's3:*'
              Effect: Deny
              Principal: '*'
              Resource:
                - Fn::Join:
                    - ''
                    - - 'arn:aws:s3:::'
                      - Ref: S3Bucket
                      - /*
                - Fn::Join:
                    - ''
                    - - 'arn:aws:s3:::'
                      - Ref: S3Bucket
              Condition:
                Bool:
                  'aws:SecureTransport': 'false'

Enable Object Ownership

S3 Object Ownership is a new Amazon S3 feature that enables bucket owners to automatically assume ownership of objects that are uploaded to their buckets by other AWS accounts. This helps to standardize ownership of new objects in your bucket, and to share and manage access to these objects at scale via resource-based policies such as a bucket policy or an Access Point policy. To enable object ownership, use the OwnershipControls property to control and specify the ownership settings:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        OwnershipControls:
          Rules:
            - ObjectOwnership: BucketOwnerPreferred

Amazon S3 Access Points

S3 Access Points give you fine-grained control over access to your shared datasets. Instead of managing a single and possibly complex policy on a bucket, you can create an Access Point for each application, and then use an IAM policy to regulate the Amazon S3 operations via the Access Point. To add an S3 Access Point resource, use the resource type AWS::S3::AccessPoint for your S3 bucket:

    S3AccessPoint:
      Type: AWS::S3::AccessPoint
      Properties:
        Bucket:
          Ref: S3Bucket

Enforce encryption

Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that S3 encrypts all new objects when you store them in the bucket. S3 encrypts the objects using server-side encryption. Customers can use Amazon S3 managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) keys (SSE-KMS). To enforce encryption, use the BucketEncryption property to specify default encryption for a bucket using server-side encryption:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256

Data protection

Protecting your data in the event of unlikely failure or malicious intrusion – whether purposeful or incidental – is essential to maintaining smooth business operations. The following S3 resources, which you can use in your CloudFormation template, are helpful for protecting your data’s availability, durability, and resiliency.

Enable Versioning

S3 Versioning is a means of keeping multiple variants of an object in the same bucket. It is useful to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. To enable versioning, use the VersioningConfiguration property and set the status for the S3 bucket:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        VersioningConfiguration:
          Status: Enabled

Use S3 Replication

S3 Replication allows for automatic, asynchronous copying of objects across different AWS Regions by using Amazon S3 Cross-Region Replication (CRR) or between buckets in the same AWS Region by using Amazon S3 Same-Region Replication (SRR). Note that replication requires versioning to be enabled on both the source and destination buckets. To enable replication, use the ReplicationConfiguration property and set the replication rules for the S3 bucket resource:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        VersioningConfiguration:
          Status: Enabled
        ReplicationConfiguration:
          Role: 'arn:aws:iam::123456789012:role/replication_role'
          Rules:
            - Id: MyRule1
              Status: Enabled
              Prefix: MyPrefix
              Destination:
                Bucket: 'arn:aws:s3:::BUCKET-NAME'
                StorageClass: STANDARD
            - Status: Enabled
              Prefix: MyOtherPrefix
              Destination:
                Bucket: 'arn:aws:s3:::BUCKET-NAME'

Implement Object Lock

S3 Object Lock is a new S3 feature that blocks object version deletion during a customer-defined retention period so that you can enforce retention policies as an added layer of data protection or for regulatory compliance. With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. You can use it to prevent an object from being deleted or overwritten for a fixed amount of time or indefinitely. To enable Object Lock, use the ObjectLockConfiguration property that applies to every new object in the specified bucket:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        ObjectLockEnabled: true
        ObjectLockConfiguration:
          ObjectLockEnabled: Enabled
          Rule:
            DefaultRetention:
              Days: 3
              Mode: COMPLIANCE

S3 Lifecycle policies

To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. To enable a lifecycle policy, use the LifecycleConfiguration property and specify the lifecycle rules for objects in the S3 bucket:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        LifecycleConfiguration:
          Rules:
            - Id: DeleteObjectAfter7Days
              Status: Enabled
              ExpirationInDays: 7

Prevent accidental deletion

To avoid accidental deletion of an S3 bucket created in a CloudFormation stack, specify the DeletionPolicy attribute on the bucket resource to prevent the bucket from being deleted during a stack delete operation.

      SampleS3Bucket:
        Type: AWS::S3::Bucket
        DeletionPolicy: Retain

S3 Block Public Access

S3 Block Public Access settings let you proactively block any attempt to make a bucket public or to specify a public ACL for objects in the bucket. With S3 Block Public Access (BPA), account administrators and bucket owners can easily set up centralized controls to limit public access to their Amazon S3 resources. These access controls are enforced regardless of how the resources are created, simplifying the procedure. To enable BPA, use the PublicAccessBlockConfiguration property to define how Amazon S3 handles public access for the specified S3 bucket:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        PublicAccessBlockConfiguration:
          BlockPublicAcls: true
          BlockPublicPolicy: true
          IgnorePublicAcls: true
          RestrictPublicBuckets: true

Monitoring and logging

In this section, I discuss monitoring and logging techniques that provide detailed information about S3 buckets and objects.

S3 server access logging

S3 server access logging is useful for security and access audits. To enable server access logging, use the LoggingConfiguration property to define where access logs are stored for the specified S3 bucket:

    SampleS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        LoggingConfiguration:
          DestinationBucketName: !Ref S3LoggingBucket

S3 Storage Lens

S3 Storage Lens provides organization-wide visibility into object storage usage and activity trends, and makes actionable recommendations to improve cost-efficiency and apply data protection best practices. To enable Storage Lens, use the resource type AWS::S3::StorageLens to create an Amazon S3 Storage Lens configuration:

    StorageLens:
      Type: 'AWS::S3::StorageLens'
      Properties:
        StorageLensConfiguration:
          Id: sample-lens
          IsEnabled: true
          AccountLevel:
            BucketLevel:
              ActivityMetrics:
                IsEnabled: true

AWS CloudFormation drift detection

Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration. Use AWS CloudFormation to detect drift on an entire stack, or on individual resources within the stack, such as AWS::S3::Bucket, to identify whether any manual change has been made outside the scope of CloudFormation.
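As a sketch of a drift-detection workflow (the stack name and logical resource ID below are placeholders), you can start drift detection and then review the results:

```shell
# Start drift detection on the whole stack
aws cloudformation detect-stack-drift --stack-name my-s3-stack

# Or check a single resource, such as the S3 bucket
aws cloudformation detect-stack-resource-drift \
    --stack-name my-s3-stack \
    --logical-resource-id SampleS3Bucket

# Review the drift status of each resource in the stack
aws cloudformation describe-stack-resource-drifts --stack-name my-s3-stack
```

Resources reported as MODIFIED or DELETED indicate changes made outside of CloudFormation that you should reconcile before the next stack update.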

Manage stack resources through AWS CloudFormation

After you launch a stack containing an S3 bucket resource, use the AWS CloudFormation console, API, or AWS CLI to update resources in your stack. Do not make changes to stack resources outside of AWS CloudFormation. Doing so can create a mismatch between your stack’s template and the current state of your stack resources, which can cause errors if you update or delete the stack. For more information, see the AWS CloudFormation best practices recommendations.


Using the guidelines covered in this blog post, customers can effectively manage their Amazon S3 resources while scaling. Customers using AWS CloudFormation with Amazon S3 are able to avoid any manual intervention involved with updating buckets. They minimize their own lift in the future by properly and securely configuring their buckets from the time that they create them. This enables customers to have required security guardrails in place from the outset, and simplifies managing bucket permissions and logging bucket events.

The AWS CloudFormation guidelines in this post can help you model Amazon S3 resources, provision them quickly and consistently, and manage them throughout their lifecycles. Because these best practices might not be appropriate or sufficient for your environment, please treat them as helpful considerations. You can also leverage the sample AWS CloudFormation template provided as part of this blog post.

Thanks for reading this blog post on best practices for managing S3 resources using AWS CloudFormation. If you have any comments or questions about anything covered, please don’t hesitate to leave a comment in the comments section.

Kanika Kapoor


Kanika Kapoor is a Support Engineer on the Amazon S3 team at AWS. She specializes in S3, and is a subject matter expert in AWS CloudFormation. Kanika enjoys practicing customer obsession by solving complex issues for customers. Outside of work, she enjoys cooking and recently started learning to play the violin.