AWS Storage Blog

Limit access to Amazon S3 buckets owned by specific AWS accounts

Customers use Amazon S3 to store and protect data for a range of use cases, including data lakes, enterprise applications, backup, and archive. Amazon S3 provides easy-to-use management features, fine-grained access controls, strong consistency, and durability to meet a range of business, organizational, and compliance requirements. A common data loss prevention requirement is ensuring that individuals and applications only have access to specific S3 buckets within certain defined AWS accounts. Those S3 buckets could be within the organization that houses that individual or application, or they could belong to a trusted business partner.

In this post, we show how you can simplify your identity and network-based policies using a new Amazon S3 service-specific condition key. Previously, you had to list individual buckets in an AWS Identity and Access Management (IAM) policy. This required you to maintain a list of allowed buckets and presented a scaling challenge in environments with a large and dynamic set of S3 buckets. Now, with the s3:ResourceAccount condition key, you can write straightforward policies that limit access to Amazon S3 buckets owned by specific AWS accounts. We discuss specific use cases for this new condition key and provide sample policies showing its usage.

IAM refresher

Before we dive deeper into the newly launched s3:ResourceAccount condition key, let’s quickly refresh some IAM concepts. IAM enables you to manage access to AWS services and resources securely. An IAM policy defines an ‘Action’ (such as s3:GetObject) that a ‘Principal’ (such as an IAM user or role) can take on a specific AWS ‘Resource’ (such as an S3 object). IAM policies also let you use the ‘Condition’ element to add an additional layer of fine-grained access control.

Condition is an optional IAM policy element that you can use to specify the circumstances under which the policy allows or denies permission. A condition consists of a condition key, an operator, and a value. There are two types of condition keys: service-specific and global. Service-specific condition keys apply to a single AWS service. For example, the condition key s3:prefix scopes specific Amazon S3 actions down to key names with a specific prefix. Global condition keys, on the other hand, work across a broad range of AWS services. For example, the global condition key aws:SourceIp can be used to allow principals to perform actions only from within a specified IP range. Let’s now dive into the newly launched condition key.
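
To make this concrete, here is a minimal sketch of a Condition element in an identity-based policy. The bucket name, key prefix, and IP range are placeholders for illustration only; the statement allows listing the example bucket, but only for keys under the given prefix and only for requests coming from the specified network range.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowListingProjectPrefixFromCorpNetwork",
         "Effect":"Allow",
         "Action":"s3:ListBucket",
         "Resource":"arn:aws:s3:::example-app-bucket",
         "Condition":{
            "StringLike":{
               "s3:prefix":"projects/*"
            },
            "IpAddress":{
               "aws:SourceIp":"203.0.113.0/24"
            }
         }
      }
   ]
}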

New service-specific Amazon S3 condition key

s3:ResourceAccount is an Amazon S3 service-specific condition key that simplifies IAM policies. It enables you to easily restrict access to S3 buckets that specific AWS accounts own, without the need to list individual buckets one by one in a policy. This means that customers can put in place straightforward policies that assert that all direct use of S3 buckets remains within the control of a known set of accounts. s3:ResourceAccount can be used in identity-based policies attached to users and roles within your AWS account and in VPC endpoint policies. Let’s now look at two specific use cases where this condition key can be used to add value.

Limit access to only authorized AWS accounts’ S3 buckets using identity-based policies

Consider the following scenario with three AWS accounts: a production account running your application, a shared services account (222222222222) that owns multiple data buckets, and another account (333333333333) whose buckets your application should not access.

You want to enable a production application running on Amazon EC2 instances to access an application configuration bucket in the production account and multiple data buckets in the shared services account. Instead of listing all allowed S3 buckets in the shared services account in the IAM policy attached to the EC2 instance profile, you can now craft a straightforward policy:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowS3AccessToAppConfigBucket",
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject"
         ],
         "Resource":"arn:aws:s3:::bucket-containing-myapp-config/path/to/objects/*"
      },
      {
         "Sid":"AllowS3AccessInTrustedAccounts",
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject"
         ],
         "Resource":"*",
         "Condition":{
            "StringEquals":{
               "s3:ResourceAccount":[
                  "222222222222"
               ]
            }
         }
      }
   ]
}

The second statement in the preceding policy, AllowS3AccessInTrustedAccounts, uses the s3:ResourceAccount condition key to allow s3:GetObject and s3:PutObject on any S3 bucket owned by the shared services account (222222222222). Access to buckets in other accounts (such as 333333333333) is not granted by this policy and therefore remains implicitly denied.
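
Because StringEquals accepts a list of values, you can extend this pattern to several trusted accounts by adding them to the same condition. As a sketch, the following condition fragment (the second account ID is hypothetical) would match buckets owned by either account:

"Condition":{
   "StringEquals":{
      "s3:ResourceAccount":[
         "222222222222",
         "999999999999"
      ]
   }
}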

Create data perimeter protection using Amazon S3 VPC endpoint policies

In this use case, you would like to create a data perimeter around S3 access from your VPC, ensuring that only Trusted Principals access Trusted Resources:

  • Trusted Principal – a principal accessing S3 buckets should be from a trusted AWS account (444444444444)
  • Trusted Resource – an S3 bucket being accessed by a principal should be owned by a trusted AWS account (444444444444)

To build this data perimeter, you can attach policies with the appropriate conditions to your S3 VPC endpoints. Global condition keys such as aws:PrincipalAccount and aws:PrincipalOrgID already make it simple to restrict access to principals belonging to a specific AWS account or AWS organization. With the newly launched s3:ResourceAccount condition key, you can now similarly restrict access to S3 buckets that belong only to trusted AWS accounts.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3VPCEndpointAllowAccessToTrustedPrincipalsAndAccounts",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "*",
            "Principal": {
                "AWS": "*"
            },
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "o-xxxxxxxxxx",
                    "s3:ResourceAccount": "444444444444"
                }
            }
        }
    ]
}

A VPC endpoint policy is not a grant, but a guardrail: it sets the maximum permissions available through the endpoint, but does not itself grant access to Amazon S3. In this policy, s3:GetObject and s3:PutObject are allowed on all S3 resources provided that the principal making the S3 API call belongs to the organization o-xxxxxxxxxx (trusted principal) and the bucket being accessed is owned by AWS account 444444444444 (trusted resource). By combining these two conditions in the endpoint policy, you can prevent accidental or intentional exfiltration of data using a non-trusted AWS account’s (555555555555) IAM credentials, and you can prevent writing to a bucket owned by a non-trusted account.
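
You could also express the resource side of this guardrail as an explicit deny, so that it continues to hold even if broader allow statements are added to the endpoint policy later. A minimal sketch of such an additional statement, assuming the same trusted account ID and relying on the fact that an explicit deny always overrides an allow, might look like this:

{
    "Sid": "DenyAccessToBucketsOutsideTrustedAccount",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {
            "s3:ResourceAccount": "444444444444"
        }
    }
}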

Conclusion

In this post, we discussed the newly launched s3:ResourceAccount condition key. This feature simplifies identity and network-based policies by providing a straightforward way to limit access to Amazon S3 buckets owned by specific AWS accounts, which helps prevent access to unauthorized buckets and minimizes the risk of data loss. We also covered two use cases. The first uses the new condition key to limit access to buckets belonging to a specific AWS account without listing individual buckets one by one; this reduces the size and complexity of policies, and therefore the chance of error and the frequency of policy updates. The second uses the condition key in a VPC endpoint policy as a straightforward data perimeter guardrail to prevent accidental or intentional exfiltration of data to unauthorized accounts’ buckets.

Thanks for reading about this new feature. If you have any comments or questions, please don’t hesitate to leave them in the comments section.

Ilya Epshteyn

Ilya Epshteyn is a Principal Solutions Architect with AWS. He helps customers to innovate on the AWS platform by building highly available, scalable, and secure architectures. He enjoys spending time outdoors and building Lego creations with his kids.

Harsha Sharma

Harsha W. Sharma is a Solutions Architect with AWS in New York. Harsha joined AWS in 2016, and he works with Global Financial Services customers to design and develop architectures on AWS, supporting their journey on the AWS Cloud.