How do I copy Amazon S3 objects from another AWS account?

4 minute read

I want to copy Amazon Simple Storage Service (Amazon S3) objects across AWS accounts, and make sure that the destination account owns the copied objects.

Resolution

Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshoot AWS CLI errors. Also, make sure that you're using the most recent AWS CLI version.

Important: Objects in Amazon S3 aren't automatically owned by the AWS account that uploads them. When you change the S3 Object Ownership setting, it's a best practice to use the Bucket owner enforced setting. However, this option turns off all access control lists (ACLs) on the bucket and on every object in the bucket.

When you use the Bucket owner enforced setting in S3 Object Ownership, the bucket owner automatically owns every object in the bucket. This setting simplifies access management for data that's stored in Amazon S3. For existing buckets, an S3 object remains owned by the AWS account that uploaded it until you explicitly turn off ACLs.
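If you're able to turn off ACLs, you can apply the Bucket owner enforced setting with the AWS CLI. The following is a minimal sketch that assumes a bucket named DOC-EXAMPLE-BUCKET and credentials for the bucket-owning account:

    # Apply the Bucket owner enforced setting (turns off ACLs on the bucket and its objects)
    aws s3api put-bucket-ownership-controls \
      --bucket DOC-EXAMPLE-BUCKET \
      --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'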

If your existing method relies on ACLs to share objects, then identify the principals that use ACLs to access objects. For more information, see Prerequisites for turning off ACLs.
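To help identify those principals, you can review the grants in the bucket and object ACLs. The following commands are a sketch, assuming a bucket named source-DOC-EXAMPLE-BUCKET and an object key of object.txt:

    # List the grantees and permissions in the bucket ACL
    aws s3api get-bucket-acl --bucket source-DOC-EXAMPLE-BUCKET

    # List the grantees and permissions in a single object's ACL
    aws s3api get-object-acl --bucket source-DOC-EXAMPLE-BUCKET --key object.txt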

If you can't turn off your ACLs, then complete the following steps to take ownership of objects until you can adjust your bucket policy:

  1. In the source account, create an AWS Identity and Access Management (IAM) customer managed policy that grants an IAM identity (user or role) the required permissions. The identity must be able to retrieve objects from the source bucket and put objects into the destination bucket. You can use an IAM policy that's similar to the following example:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:GetObject"
          ],
          "Resource": [
            "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET",
            "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET/*"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:PutObject",
            "s3:PutObjectAcl"
          ],
          "Resource": [
            "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET",
            "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*"
          ]
        }
      ]
    }
    Note: The preceding example IAM policy includes only the minimum required permissions to list objects and copy objects across buckets in different accounts. Customize the allowed S3 actions based on your use case. For example, if the user must copy objects that have object tags, then you must also grant permissions for s3:GetObjectTagging. If you experience an error, then perform these steps as an admin user.
  2. In the source account, attach the customer managed policy to the IAM identity. (For an AWS CLI sketch of steps 2 through 4, see the example after this list.)
  3. In the destination account, set S3 Object Ownership on the destination bucket to Bucket owner preferred. New objects that you upload with the ACL set to bucket-owner-full-control are then automatically owned by the destination bucket's account.
  4. In the destination account, modify the destination's bucket policy to grant the source account permissions to upload objects. Also, include a condition in the bucket policy that requires object uploads to set the ACL to bucket-owner-full-control. Use a statement that's similar to the following example:
    {
      "Version": "2012-10-17",
      "Id": "Policy1611277539797",
      "Statement": [
        {
          "Sid": "Stmt1611277535086",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::222222222222:user/Jane"
          },
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*",
          "Condition": {
            "StringEquals": {
              "s3:x-amz-acl": "bucket-owner-full-control"
            }
          }
        },
        {
          "Sid": "Stmt1611277877767",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::222222222222:user/Jane"
          },
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET"
        }
      ]
    }
    Note: Replace destination-DOC-EXAMPLE-BUCKET with the name of the destination bucket. Replace arn:aws:iam::222222222222:user/Jane with the ARN of the IAM identity from the source account.
    The preceding example bucket policy includes only the minimum required permissions to upload an object with the required ACL. Customize the allowed S3 actions based on your use case.
  5. Make sure that the ACL is set to bucket-owner-full-control so that the source account's IAM identity can upload objects to the destination bucket. For example, the source IAM identity must run the cp AWS CLI command with the --acl option:
    aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/object.txt s3://destination-DOC-EXAMPLE-BUCKET/object.txt --acl bucket-owner-full-control
    In the preceding example, the command copies the file object.txt. To copy all objects under a folder (prefix), run the following command:
    aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/directory/ s3://destination-DOC-EXAMPLE-BUCKET/directory/ --recursive --acl bucket-owner-full-control
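If you prefer to script steps 2 through 4 instead of using the console, the following AWS CLI sketch shows one way to do it. The policy name CrossAccountS3CopyPolicy and the file names grant-cross-account-copy.json and destination-bucket-policy.json are hypothetical; the user Jane in account 222222222222 matches the earlier bucket policy example:

    # Step 2 (run in the source account): create the customer managed policy
    # and attach it to the IAM user (policy name and file names are placeholders)
    aws iam create-policy \
      --policy-name CrossAccountS3CopyPolicy \
      --policy-document file://grant-cross-account-copy.json
    aws iam attach-user-policy \
      --user-name Jane \
      --policy-arn arn:aws:iam::222222222222:policy/CrossAccountS3CopyPolicy

    # Step 3 (run in the destination account): set S3 Object Ownership to Bucket owner preferred
    aws s3api put-bucket-ownership-controls \
      --bucket destination-DOC-EXAMPLE-BUCKET \
      --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'

    # Step 4 (run in the destination account): apply the bucket policy
    aws s3api put-bucket-policy \
      --bucket destination-DOC-EXAMPLE-BUCKET \
      --policy file://destination-bucket-policy.json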

Important: If your S3 bucket has default encryption with AWS Key Management Service (AWS KMS) activated, then you must also modify the AWS KMS key permissions. For instructions, see My Amazon S3 bucket has default encryption using a custom AWS KMS key. How can I allow users to download from and upload to the bucket?
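For example, if the destination bucket is encrypted with a customer managed AWS KMS key, the key policy must allow the source account's identity to use the key. The following statement is a minimal sketch only (AllowCrossAccountCopyToUseKey is a placeholder Sid), not a complete key policy, and it reuses the identity from the earlier bucket policy example:

    {
      "Sid": "AllowCrossAccountCopyToUseKey",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:user/Jane"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    }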

Related information

Bucket owner granting cross-account bucket permissions

How do I change object ownership for an Amazon S3 bucket when the objects are uploaded by other AWS accounts?

Using a resource-based policy to delegate access to an Amazon S3 bucket in another account

AWS OFFICIAL, updated 24 days ago
5 Comments

Followed your YouTube video steps (https://www.youtube.com/watch?v=KenjP7lJjOA&t=185s). At the end, in the CLI, I got this fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

replied 9 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 9 months ago

Hi, this seems to only work when you state the individual file in the aws s3 cp command, i.e., '123.csv'.

Is there not a way to transfer ALL the objects within a folder (which is also technically an object)?

replied 7 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 7 months ago

If you have the list of key names (object names) in a text file or a single-field .csv file, then the same CLI action can be performed with a simple for loop:

for a in $(cat key-name-list.txt); do aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/$a s3://destination-DOC-EXAMPLE-BUCKET/$a --acl bucket-owner-full-control; done

If your .csv has the key name listed in the second field, like:

bucketname, keyname, last_modified_date, ...
axim-vpay-input, adobeconnectapp_1.log, 2023-02-01T21:31:12.000Z
axim-vpay-input, system-logs/messages, 2023-07-11T11:19:37.000Z
axim-vpay-input, app.js, 2023-08-18T03:46:20.000Z

Then this for loop should do the job:

for a in $(cat my-key-list.csv | cut -f 2 -d,); do aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/$a s3://destination-DOC-EXAMPLE-BUCKET/$a --acl bucket-owner-full-control; done

AWS
razguru
replied 7 months ago