AWS Storage Blog
Securing data in a virtual private cloud using Amazon S3 Access Points
Using a virtual private cloud (VPC) is common among enterprises looking to run scalable virtual networks in a private, fully customizable environment. This enables organizations to ensure security, isolation, and centralization for their virtual operations. AWS offers a VPC service in the form of Amazon VPC, a natural home for VPC users given the plethora of fully managed, cost-efficient cloud services on the AWS platform. Even within a VPC, organizations must enforce granular permissions on data to restrict access to sensitive information, meet compliance requirements, and reduce the risk of unintended data exposure.
To meet compliance requirements and restrict access to sensitive data, many customers want to restrict data sharing within their Amazon VPC based upon AWS Identity and Access Management (AWS IAM) policies. Many of these customers must also allow for granular controls for data access within Amazon S3 for users assigned to particular IAM roles. With multiple S3 bucket policies to manage, controlling S3 bucket access on a granular level can become complex.
In a previous blog post, our AWS colleague used Amazon S3 Access Points in combination with VPC endpoint policies to simplify managing access to shared datasets on Amazon S3. In that post, you learned how to ensure that users could only access certain buckets from within a VPC, using a VPC endpoint policy. You also learned how to use an S3 Access Point to enforce your data management and access rules without having to constantly edit your VPC endpoint policy upon new bucket creation.
In this blog post, we expand further and demonstrate how using Amazon S3 Access Points, governed by IAM roles and policies, can help enforce access controls on S3 buckets. This helps you ensure that data is accessible only from selected VPCs. We also cover automating the infrastructure deployment using AWS CloudFormation, and testing the effectiveness of different Access Point policies combined with various AWS IAM roles.
Solution overview
Amazon S3 Access Points simplify managing and securing data access at scale for applications using shared datasets on S3. Each Access Point has a unique hostname and enforces distinct permissions and network controls for every request made through it.
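For reference, an Access Point that only accepts requests from a specific VPC can be created with a single API call. Here is a minimal AWS CLI sketch; the account ID, names, and VPC ID are placeholder examples (the CloudFormation template in this post creates the Access Points for you):

aws s3control create-access-point \
    --account-id 111122223333 \
    --name finance-ap \
    --bucket amzn-s3-demo-bucket \
    --vpc-configuration VpcId=vpc-0123456789abcdef0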
Large organizations that use S3 Access Points to meet data-sharing requirements across departments with private Amazon S3 buckets need a way to simplify access management while adhering to strict security standards. In our example, a customer has three organizational departments: Finance, Marketing, and Operations. Each department has different needs for the data in the S3 bucket and requires different controls.
Organizational and departmental requirements in this example:
- An Amazon S3 bucket that holds the restricted data that different departments must securely access and manage. The S3 bucket should not be public, and it should only be accessible from within the VPC. Furthermore, the VPC should have no outbound or inbound internet access.
- The Finance department uses application role 1, which should allow members of the department to upload data to the S3 bucket if the prefix matches /Application1 or /Application3. The role should permit no other actions, such as download or delete (see the policy sketch after this list).
- The Marketing department uses application role 2, which should allow members of the department to download objects from the S3 bucket. The role should permit no other actions, such as upload or delete.
- The Operations department uses application role 3, which should allow members of the department to download objects if the prefix matches /Application3. The role should also permit members of the department to delete objects in any of the application folders inside the S3 bucket.
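To illustrate how such requirements translate into policy, one plausible Access Point policy for the Finance role might look like the following minimal sketch. The account ID, Region, role name, and Access Point name are placeholder examples; the CloudFormation template in this post creates the actual policies for you:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/ApplicationRole1" },
    "Action": "s3:PutObject",
    "Resource": [
      "arn:aws:s3:ap-southeast-2:111122223333:accesspoint/access-point-1/object/Application1/*",
      "arn:aws:s3:ap-southeast-2:111122223333:accesspoint/access-point-1/object/Application3/*"
    ]
  }]
}

Note the /object/ segment in the Resource ARNs: Access Point policies address objects as arn:aws:s3:region:account-id:accesspoint/access-point-name/object/key.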
For this solution, we use the following services to securely grant access to shared data in S3 buckets to three different IAM roles.
- Amazon EC2: An EC2 instance from which you assume the IAM roles, to demonstrate the security boundaries applied to each role.
- Amazon S3: An S3 bucket stores the data, and access is restricted to the VPC through S3 Access Points, with no other access.
- Amazon S3 Access Points: The S3 bucket has three Access Points tied to different IAM roles.
- AWS IAM: IAM roles govern access to the S3 Access Points and the operations allowed through them.
- AWS Systems Manager Session Manager: You use Session Manager to log in to the Amazon EC2 instance and test the different IAM roles.
- Amazon VPC: Our VPC has two private subnets and no internet or NAT gateway. The VPC reaches AWS services such as Amazon S3 and Systems Manager through VPC endpoints (see the sketch after this list).
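Because the VPC has no internet path, the solution relies on a gateway endpoint for Amazon S3 and interface endpoints for Session Manager (ssm, ssmmessages, and ec2messages). As a sketch, the S3 gateway endpoint could be created like this; the VPC ID and route table ID are placeholders:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.ap-southeast-2.s3 \
    --route-table-ids rtb-0123456789abcdef0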
Figure 1: Securing data in a virtual private cloud using Amazon S3 Access Points
Here is how the process works, as shown in Figure 1:
- The solution uses an S3 bucket that is only accessible from within the VPC. The bucket allows no public access, and you cannot access data from outside the VPC, resulting in tight security. To restrict data access, the S3 bucket has Block Public Access turned on and delegates access control to Access Points in the AWS account (see the bucket policy sketch after this list).
- The VPC has no internet gateway or network address translation (NAT) gateway attached to it, restricting inbound and outbound internet access.
- Amazon S3 Access Points enable fine-grained control over the operations users can perform on objects in the S3 bucket. Each Access Point has a policy restricting access to specific IAM roles and operations, such as s3:GetObject. In this solution, there are three Access Points, one for each of the organization’s departments. Check the documentation on configuring IAM policies for using access points for more details on how S3 Access Points support IAM.
- As you can only access the Amazon S3 objects from within the VPC, we stand up an EC2 instance from which to assume the IAM roles and access the S3 objects through the Access Points. The Amazon EC2 instance in the private subnet has IAM permissions to assume those roles.
- Each Amazon S3 Access Point has its own policy that allows access to specific IAM roles. You control data access using S3 Access Point policies combined with IAM roles.
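As an illustration of the delegation mentioned above, a bucket policy can defer access decisions to Access Points owned by the account by using the s3:DataAccessPointAccount condition key. A minimal sketch, with a placeholder bucket name and account ID:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": "*",
    "Resource": [
      "arn:aws:s3:::amzn-s3-demo-bucket",
      "arn:aws:s3:::amzn-s3-demo-bucket/*"
    ],
    "Condition": {
      "StringEquals": { "s3:DataAccessPointAccount": "111122223333" }
    }
  }]
}

With a policy like this in place, the effective permissions for requests made through an Access Point are governed by the Access Point policy and the IAM role.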
Prerequisites
For this tutorial, you should have the following prerequisites:
- An AWS account
- An IAM role or user with permissions to create the required resources
AWS CloudFormation template deployment and solution tutorial
In this section, we deploy the CloudFormation template before setting up the roles for the different departments in this example.
Deploy the AWS CloudFormation template
To deploy the CloudFormation template in the ap-southeast-2 Region, select the Launch Stack button to launch the stack, and then complete the following steps:
- Give your stack a name.
- Under Parameters, enter values for the following parameters based on your requirements or leave them as defaults.
- AccessPoint1Name: Name for the Amazon S3 Access Point 1.
- AccessPoint2Name: Name for the Amazon S3 Access Point 2.
- AccessPoint3Name: Name for the Amazon S3 Access Point 3.
- Env: Environment in which you deploy this solution.
- InstanceType: Select an instance type that you will use for testing.
- LatestAmiId: AMI ID to use. Defaults to latest Amazon Linux 2.
- PrivateSubnet1CIDR: IP range (CIDR notation) for the private subnet in the first Availability Zone.
- PrivateSubnet2CIDR: IP range (CIDR notation) for the private subnet in the second Availability Zone.
- VPCCIDR: CIDR range for the VPC.
Figure 2: Example values in CloudFormation Parameters section
- On the Specify stack details page, select Next, and then, on the Configure stack options page, select Next.
- On the Review page, check the box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create stack (Figure 3).
Figure 3: Check the box to acknowledge the conditions
- After you create the stack, the CloudFormation template automatically creates the resources needed for this solution tutorial.
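If you prefer the command line, the following sketch is equivalent to the console steps. The stack name and template path are placeholders for your copy of the template, and the --capabilities flag corresponds to the IAM acknowledgment checkbox:

aws cloudformation create-stack \
    --stack-name s3-access-points-demo \
    --template-body file://template.yaml \
    --capabilities CAPABILITY_NAMED_IAM \
    --region ap-southeast-2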
Setting up and checking roles
Once you have deployed the CloudFormation template, you can log in to the EC2 instance using Session Manager, which provides a browser-based shell. In the AWS Management Console, select the provisioned instance and choose Connect, select the Session Manager tab, and then choose Connect to open a browser-based shell.
Once inside the browser-based shell, you can set the Region for your Access Point test:
AWS_REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep '\"region\"' | cut -d\" -f4)
aws configure set region ${AWS_REGION}
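If your instance enforces IMDSv2, the metadata request must carry a session token. A variant of the same lookup:

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
AWS_REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document | grep '"region"' | cut -d\" -f4)
aws configure set region ${AWS_REGION}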
To validate your setup, you assume the IAM roles one at a time and access each Access Point separately from the EC2 instance in the private subnet. Because you need the details of the IAM roles and S3 Access Points, note the values from the CloudFormation stack output and export them in your shell:
export ARN_OF_ROLE1="Replace with Value of ApplicationRole1"
export ARN_OF_ROLE2="Replace with Value of ApplicationRole2"
export ARN_OF_ROLE3="Replace with Value of ApplicationRole3"
export ACCESS_POINT1_ARN="Replace with Value of S3AccessPoint1Arn"
export ACCESS_POINT2_ARN="Replace with Value of S3AccessPoint2Arn"
export ACCESS_POINT3_ARN="Replace with Value of S3AccessPoint3Arn"
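Rather than copying each value by hand, you can list the stack outputs with the AWS CLI, assuming the credentials you use have the cloudformation:DescribeStacks permission (the stack name below is a placeholder):

aws cloudformation describe-stacks \
    --stack-name s3-access-points-demo \
    --query "Stacks[0].Outputs" \
    --output table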
Application role 1
You can now assume the IAM role for application 1 from the Amazon EC2 instance by using AWS Security Token Service (AWS STS) and storing the temporary credential values provided by AWS STS in environment variables of your shell.
STS_OUTPUT=$(aws sts assume-role --role-arn ${ARN_OF_ROLE1} --role-session-name Application1 --endpoint-url https://sts.${AWS_REGION}.amazonaws.com)
export AWS_ACCESS_KEY_ID=$(echo $STS_OUTPUT | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo $STS_OUTPUT | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo $STS_OUTPUT | jq -r '.Credentials.SessionToken')
Note: The resource names in the following outputs are examples. When you deploy the solution, the values match the resources in your account.
You can test that you have assumed the right role. The output should have Application1 in the UserId field:
aws sts get-caller-identity --endpoint-url https://sts.${AWS_REGION}.amazonaws.com
Output:
{
"Account": "${AWS:: 111122223333}",
"UserId": "AROAXXXXXXXXXXXPZ2:Application1",
"Arn": "arn:aws:sts::${AWS::AccountId}:assumed-role/${AWS::StackName}-ApplicationRole1-ZEYI1X44MSF1/Application1"
}
You are now ready to test the roles and Access Points. Create a test file called s3access.txt filled with random data:
cd ~; head -c 1048576 </dev/urandom >s3access.txt
Application role 1 is only allowed to:
- Use s3:PutObject on the bucket if the prefix matches /Application1/.
- Use s3:PutObject on the bucket if the prefix matches /Application3/.
You can try the preceding operations now.
aws s3api put-object --key Application1/s3access.txt --bucket ${ACCESS_POINT1_ARN} --body s3access.txt
Output:
{
"ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
}
For the /Application3/ prefix:
aws s3api put-object --key Application3/s3accessApp3.txt --bucket ${ACCESS_POINT1_ARN} --body s3access.txt
Output:
{
"ETag": "\"c9ae470dd77f3915395a1c6cd1fb4498\"",
"ServerSideEncryption": "AES256"
}
You will get an “Access Denied” error if you try to get the same object, as the policy in place only allows this role to perform put-object operations.
aws s3api get-object --key Application1/s3access.txt --bucket ${ACCESS_POINT1_ARN} s3access.txt
Output:
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Application role 2
To assume application role 2, you need to request temporary credentials from AWS STS for application role 2. This requires resetting the environment variables so that the new values can be stored.
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
You can now assume the IAM role for Application 2 from the Amazon EC2 instance by using AWS STS. You can proceed to store the temporary credential values provided by AWS STS in the environment variables of your shell.
STS_OUTPUT=$(aws sts assume-role --role-arn ${ARN_OF_ROLE2} --role-session-name Application2 --endpoint-url https://sts.${AWS_REGION}.amazonaws.com)
export AWS_ACCESS_KEY_ID=$(echo $STS_OUTPUT | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo $STS_OUTPUT | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo $STS_OUTPUT | jq -r '.Credentials.SessionToken')
The role only allows application role 2 to:
- Use s3:GetObject on the bucket.
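For reference, the corresponding Access Point policy could look like the following sketch, again with placeholder names; note that the role can read any object, not just a particular prefix:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/ApplicationRole2" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:ap-southeast-2:111122223333:accesspoint/access-point-2/object/*"
  }]
}

Now test a download through Access Point 2, retrieving the object that application role 1 uploaded earlier: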
aws s3api get-object --key Application1/s3access.txt --bucket ${ACCESS_POINT2_ARN} s3access_new.txt
Output:
{
"AcceptRanges": "bytes",
"ContentType": "binary/octet-stream",
"LastModified": "Thu, 09 Jul 2020 09:13:48 GMT",
"ContentLength": 0,
"ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
"Metadata": {}
}
ls s3access_new.txt
Output:
s3access_new.txt
Application role 2 cannot upload anything to the bucket, as the policy in place forbids that:
aws s3api put-object --key Application1/s3access_new.txt --bucket ${ACCESS_POINT2_ARN} --body s3access_new.txt
Output:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Application role 2 also cannot use any other Access Point; for example, the policy on Access Point 1 denies it:
aws s3api put-object --key Application1/s3access.txt --bucket ${ACCESS_POINT1_ARN} --body s3access_new.txt
Output:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Application role 3
To assume application role 3, you need to request temporary credentials from AWS STS for application role 3. This requires resetting the environment variables so that the new values can be stored.
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
You can now assume the IAM role for application 3 from the Amazon EC2 instance by using AWS STS and storing the temporary credential values provided by AWS STS in environment variables of your shell.
STS_OUTPUT=$(aws sts assume-role --role-arn ${ARN_OF_ROLE3} --role-session-name Application3 --endpoint-url https://sts.${AWS_REGION}.amazonaws.com)
export AWS_ACCESS_KEY_ID=$(echo $STS_OUTPUT | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo $STS_OUTPUT | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo $STS_OUTPUT | jq -r '.Credentials.SessionToken')
Application role 3 permissions:
- Use s3:GetObject on the bucket if the prefix matches Application3/.
- Use s3:DeleteObject if the prefix matches Application*/.
aws s3api get-object --key Application3/s3accessApp3.txt --bucket ${ACCESS_POINT3_ARN} s3accessApp3.txt
Output:
{
"AcceptRanges": "bytes",
"ContentType": "binary/octet-stream",
"LastModified": "Tue, 14 Jul 2020 14:12:34 GMT",
"ContentLength": 1048576,
"ETag": "\"c9ae470dd77f3915395a1c6cd1fb4498\"",
"ServerSideEncryption": "AES256",
"Metadata": {}
}
ls s3accessApp3.txt
Output:
s3accessApp3.txt
You receive an “Access Denied” error if you try any other prefix or operation that doesn’t match the rules set in the policy:
aws s3api get-object --key Application1/s3access.txt --bucket ${ACCESS_POINT3_ARN} s3access.txt
Output:
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
On the other hand, the delete-object operations complete successfully:
aws s3api delete-object --bucket ${ACCESS_POINT3_ARN} --key Application3/s3accessApp3.txt
aws s3api delete-object --bucket ${ACCESS_POINT3_ARN} --key Application3
aws s3api delete-object --bucket ${ACCESS_POINT3_ARN} --key Application1/s3access.txt
aws s3api delete-object --bucket ${ACCESS_POINT3_ARN} --key Application1
Cleaning up
To avoid incurring future charges, delete the resources you created for this solution by deleting the CloudFormation stack.
Things to know
- Amazon S3 access points support AWS Identity and Access Management (IAM) resource policies that allow you to control the use of the Access Point by resource, user, or other conditions. For an application or user to be able to access objects through an Access Point, both the Access Point and the underlying bucket must permit the request.
- Adding an S3 Access Point to a bucket doesn’t change the bucket’s behavior when accessed through the existing bucket name or ARN. All existing operations against the bucket will continue to work as before. Restrictions that you include in an Access Point policy apply only to requests made through that Access Point.
- You can monitor and audit Access Point operations, such as CreateAccessPoint and DeleteAccessPoint, through AWS CloudTrail logs.
- You can control Access Point usage using AWS Organizations support for AWS service control policies (SCPs). You can create an SCP that requires all Access Points to be restricted to a VPC, firewalling your data within your private networks (see the example SCP after this list).
- Each Access Point is associated with exactly one bucket, which you must specify when you create the Access Point. After you create an Access Point, you can’t associate it with a different bucket. However, you can delete an Access Point and then create another one with the same name associated with a different bucket.
- After you create an Access Point, you can’t change its VPC configuration.
- You can only use access points to perform operations on objects. You can’t use access points to perform other Amazon S3 operations, such as modifying or deleting buckets. For a complete list of S3 operations that support access points, see Access Point compatibility with S3 operations and AWS services.
- Access Points work with some, but not all, AWS services and features. For example, you can’t configure Cross-Region Replication to operate through an Access Point. For a complete list of AWS services that are compatible with S3 Access Points, see Access Point compatibility with S3 operations and AWS services.
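Here is the kind of SCP referenced above: a minimal sketch that denies creating any Access Point whose network origin is not a VPC, using the documented s3:AccessPointNetworkOrigin condition key:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "s3:CreateAccessPoint",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": { "s3:AccessPointNetworkOrigin": "VPC" }
    }
  }]
}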
Conclusion
In this blog, we demonstrated how you can use Amazon S3 Access Points to simplify S3 bucket access controls so that data access is governed by IAM roles and policies and is only available within your VPC. The demonstrated solution enables the granular control of data access that is essential for many organizations that must comply with data-security regulations or that seek to mitigate risk through more restrictive access policies.
Amazon S3 Access Points simplify how you manage data access to shared datasets in Amazon S3 for users in different departments. You no longer have to manage a single, complex bucket policy with hundreds of different permission rules that you must write, read, track, and audit. This saves time and makes cloud infrastructure management more efficient. You can now create application-specific access points that permit access to shared datasets with policies tailored to each application. This keeps the data-sharing scope small and focused on a specific task in your design.
Thanks for reading this blog post on using Amazon S3 Access Points with Amazon VPC and AWS IAM to control data access permissions. If you have questions about this post, don’t hesitate to submit comments in the comments section.