AWS Storage Blog

Enforcing encryption in transit with TLS1.2 or higher with Amazon S3

Update April 8, 2024: As of February 27th, 2024, all AWS service API endpoints (including for Amazon S3) now require a minimum of TLS version 1.2. Therefore, the S3 bucket and S3 Access Point policy examples in this post that enforce a minimum of TLS version 1.2 are no longer necessary, as this is now the default for S3 API endpoints. The concepts remain valid should you want to enforce a higher encryption protocol version such as TLS 1.3.


In 2022, we published a blog post explaining that we would be updating the TLS configuration for all AWS service API endpoints to a minimum of TLS version 1.2. This update means you will need to use TLS version 1.2 or higher for your connections, as we update the TLS configurations in a continued, gradual rollout that will complete by December 31, 2023.

For Amazon Simple Storage Service (S3) customers, “how can I enforce encryption in transit?” is a common request. Since its launch in 2006, Amazon S3 has provided access to objects through HTTP or HTTPS, and provides mechanisms to enforce encryption in transit, such as S3 bucket policies. You can also enforce encrypted access to your objects through a content delivery network (CDN) service like Amazon CloudFront.

In 2020, Amazon S3 introduced a new AWS Identity and Access Management (IAM) policy condition key, allowing you to enforce the TLS encryption protocol version in your Amazon S3 buckets. This feature allows you to implement security controls at the S3 bucket level that enforce the use of the TLS encryption protocol for Amazon S3 requests.

In this blog post, we demonstrate how to enforce the TLS encryption protocol in transit using Amazon S3 policies. The examples show how to start enforcing TLS connections using Amazon CloudFront, and how to configure more granular policies that define which TLS protocol versions are accepted for your connections. Next, we demonstrate how to use S3 Access Points to apply different encryption requirements to the same S3 objects. The available policy options and S3 Access Points allow you to enforce your security requirements on your Amazon S3 buckets, helping you meet your security and corporate compliance requirements.

Enforcing encryption in transit with the use of Amazon CloudFront

If you are not able to use TLS v1.2 or higher and need support for TLS v1.0 or v1.1, you can use Amazon CloudFront to access your data in Amazon S3. (We have a Knowledge Center article here explaining in more detail how to set up CloudFront.) When taking this approach, we recommend configuring your S3 bucket policy to only allow access from your CloudFront distribution, and you can create an S3 bucket policy to enforce this restriction. As the S3 endpoints will soon restrict access to TLS v1.2 or higher, CloudFront can keep your legacy applications that use previous versions of TLS working with your S3 objects.

In this first example, we configure objects to be accessible only through a CloudFront distribution. You can define which encryption protocols are acceptable from viewers, and users can only access the objects through the CloudFront distribution, as enforced by the Amazon S3 bucket policy.

In the following bucket policy, you can see how to restrict access to your bucket to the CloudFront Origin Access Control (OAC). As shown in Figure 1, you can enforce encryption between the viewer and CloudFront, and from CloudFront to the Origin.

{
    "Version": "2012-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": " arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront:: 111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            }
        },
        {
            "Sid": "DenyAllOthers",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": " arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringNotEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront:: 111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            }
        }
    ]
}

Note: This policy document has two statements. The first statement allows access from the CloudFront service principal when the request originates from your CloudFront distribution ARN. The second statement denies any s3:GetObject request that does not originate from that CloudFront distribution ARN.

Figure 1: Using CloudFront to enforce encryption in-transit with Amazon S3

If you want more details on how to configure CloudFront OAC, you can read this blog post.

With the preceding example bucket policy, you can enforce that only the CloudFront OAC principal is able to get objects from your bucket. You can then configure CloudFront to restrict which protocol versions are used to encrypt traffic in transit by choosing one of the available CloudFront security policies. As AWS will be updating the TLS configuration for all AWS service API endpoints to a minimum of TLS version 1.2, there is an existing post here with detailed instructions on how to configure CloudFront with S3 to support previous versions of TLS.
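If you use a custom domain with an ACM certificate on the distribution, the minimum viewer protocol version is controlled by the distribution's security policy. The following is a minimal, illustrative fragment of a CloudFront distribution configuration (not part of the original examples; the certificate ARN is a placeholder and most other required distribution settings are omitted) showing where these settings live:

{
    "DefaultCacheBehavior": {
        "ViewerProtocolPolicy": "redirect-to-https"
    },
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1"
    }
}

In this sketch, ViewerProtocolPolicy ensures viewers connect over HTTPS, and the TLSv1 security policy allows legacy clients to negotiate TLS v1.0 or v1.1 with CloudFront; see the CloudFront documentation for the full list of supported security policies.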

This type of enforcement is useful when you are providing access to data in an Amazon S3 bucket over the internet or a public network. However, if you want to enforce encryption in transit for an internal system or private network, you can use the condition key aws:SecureTransport or the newer condition key s3:TlsVersion.
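For reference, the following widely used bucket policy pattern (shown here as an illustrative sketch reusing the example bucket name from this post) uses aws:SecureTransport to deny any request that is not sent over HTTPS:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonSecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1",
                "arn:aws:s3:::awsexamplebucket1/*"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}

This denies both bucket-level and object-level actions over plain HTTP, while the s3:TlsVersion condition key covered in the next section lets you go further and require a specific minimum TLS version.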

Enforcing a specific in transit encryption protocol

You can compare the s3:TlsVersion condition key with the TLS version negotiated between the client and the S3 service. If the API request matches the condition requirements, the API operation is allowed. The s3:TlsVersion condition key can be evaluated using the following IAM policy operators:

  • NumericNotEquals
  • NumericEquals
  • NumericLessThan
  • NumericLessThanEquals
  • NumericGreaterThan
  • NumericGreaterThanEquals

Note: AWS recommends against using NumericEquals or NumericNotEquals to check the TLS version. Because TLS versioning is progressive, you should allow newer versions and avoid pinning to a single version.

You can use the preceding operators to evaluate whether the TLS version used by the request satisfies the TLS protocol version required in the bucket policy.

The following Amazon S3 bucket policy example shows how to enforce that a bucket only accepts requests made over connections using TLS version 1.2 or higher.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3::awsexamplebucket1/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "true"
                 },
                "NumericLessThan": {
                    "s3:TlsVersion": [
                        "1.2"
                    ]
                }
            }
        }
    ]
}

Enforcing different in transit encryption protocols with S3 Access Points

There are scenarios in which you have to provide access to Amazon S3 data to a system that may not support the latest TLS version. For example, you could provide access to embedded systems, IoT devices, or systems that have low computing capacity or run software that doesn't support newer protocols such as TLS 1.3. In this case, you can combine an S3 Access Point policy with an S3 bucket policy. The S3 bucket policy can contain a broader security policy that provides access to systems that encrypt with older protocols such as TLS 1.2, while the S3 Access Point policy enforces encryption with newer protocols such as TLS 1.3. The different systems use different S3 endpoint names to access the same object, but the encryption requirement differs based on the endpoint they are accessing. S3 Access Points are available in all AWS Regions at no additional cost. To learn more about S3 Access Points, you can refer to the documentation.

The following example is an Amazon S3 bucket policy with a minimum requirement of encryption in transit:

{
  "Id": "BucketPolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Effect": "Deny",
      "Principal": "*",

      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket",
        "arn:aws:s3:::awsexamplebucket/*"
      ],
      "Condition": {
          "NumericLessThan": {
              "s3:TlsVersion": [
                  "1.2"
              ]
          }
      }
    }
  ]
}

The following example is an Amazon S3 Access Point policy that enforces access with TLS version 1.3 or higher:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:us-east-2:123456789012:accesspoint/ap-awsexamplebucket/object/*",
            "Condition": {
                "NumericLessThan": {
                    "s3:TlsVersion": [
                        "1.3"
                    ]
                }
            }
        }
    ]
}

With the preceding policies applied, clients that don't support newer encryption protocols access the data through the bucket endpoint, and clients that support newer encryption protocols access the data through the access point, as presented in the following diagram.

Figure 2: Access by two different clients, one with support for newer encryption protocols with S3 Access Points and another client accessing using older encryption protocols

To access objects through S3 Access Points, you must use the Access Point ARN. For example, to download an object through an S3 Access Point using AWS CLI, use the following command:

aws s3 cp s3://arn:aws:s3:us-east-2:123456789012:accesspoint/ap-awsexamplebucket/test.txt .

The same Amazon S3 Access Point can also be accessed as if it were a regular S3 bucket by using the S3 Access Point alias. When you apply an access point policy to enforce encryption, the policy is effective through the alias as well. You can use S3 Access Point aliases to give access to AWS services, including Amazon EMR, AWS Storage Gateway, and Amazon Athena, open-source packages, such as Apache Spark and Apache Hive, and AWS Partner Network (APN) solutions without any code changes.
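For example (an illustrative sketch reusing the account ID and access point name from the earlier policy example), you can look up the alias with the AWS CLI and then use it wherever a bucket name is accepted:

# Look up the access point alias (returned in the "Alias" field of the output)
aws s3control get-access-point --account-id 123456789012 --name ap-awsexamplebucket

# Use the returned alias in place of a bucket name
aws s3 cp s3://<access-point-alias>/test.txt .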

Testing your policies

A simple way to test the effectiveness of this enforcement control is to generate a presigned URL and run a request with the command line tool curl, setting the maximum TLS version that your client will accept. The curl tool comes pre-installed in many Linux distributions and in macOS, and can be installed on Windows once downloaded.

To generate a presigned URL, you can use the AWS CLI. If you don't have the AWS CLI, you can download and install it from here.

The following is an example of how to generate a presigned URL for a file test.txt that is inside the bucket awsexamplebucket1 in the US East (Ohio) Region.

aws s3 presign s3://awsexamplebucket1/test.txt --region=us-east-2

This will generate an output similar to the following:

https://awsexamplebucket1.s3.us-east-2.amazonaws.com/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLEG4W3R6JHA%2F20210520%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20210520T063041Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=c47bf7f05135adc9e40fe3b50eb4132f3050127aad2e525bd20fcdde3c50e39c

You can test the download of your file with curl while specifying that the maximum encryption protocol is TLS v1.2. With this command, you can download the object from the bucket endpoint, but not from the S3 Access Point, which requires TLS v1.3 or higher.

curl "https://awsexamplebucket1.s3.us-east-2.amazonaws.com/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLEG4W3R6JHA%2F20210520%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20210520T063041Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=c47bf7f05135adc9e40fe3b50eb4132f3050127aad2e525bd20fcdde3c50e39c" --tlsv1.2 --tls-max 1.2

Note: This test will limit the TLS protocol version to only v1.2. The intent is to compare the results of requests to the S3 bucket and Access Point. With this protocol version you should be able to access the S3 bucket but not the Access Point.

To generate a presigned URL for an object through the Amazon S3 Access Point, you just need to change the bucket name to the Access Point ARN.

The following is an example of generating a presigned URL for an object test.txt from an Amazon S3 Access Point ARN: arn:aws:s3:us-east-2:123456789012:accesspoint/ap-awsexamplebucket1

aws s3 presign s3://arn:aws:s3:us-east-2:123456789012:accesspoint/ap-awsexamplebucket1/test.txt --region=us-east-2

This will generate an output similar to this:

https://ap-awsexamplebucket1-123456789012.s3-accesspoint.us-east-2.amazonaws.com/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLEG4W3R6JHI%2F20210520%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20210520T063858Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=fd737819ea753bdab7a9b6f8f866665b4ad90ae75500a121996b9c575aaed0b4

Run the curl command again (still limiting the connection to TLS v1.2), but now with the presigned URL from the access point. If you have correctly configured the S3 Access Point policy and the S3 bucket policy, you will see a response similar to the following:

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>T4GCEXAMPLE9B</RequestId><HostId>UwigItOa3wZbTH4gnKuAp6mC7O0UjMhoFdtI8swPEf277L4rCHiIklsURLK0twyX4LEutFB+8mE=</HostId></Error>

This happens because the access point policy enforces a minimum protocol version of TLSv1.3. To complete the operation successfully, try the command again, passing --tlsv1.3 and --tls-max 1.3 instead of the v1.2 values.
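For example, reusing the Access Point presigned URL generated earlier (your URL and signature will differ):

curl "https://ap-awsexamplebucket1-123456789012.s3-accesspoint.us-east-2.amazonaws.com/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLEG4W3R6JHI%2F20210520%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20210520T063858Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=fd737819ea753bdab7a9b6f8f866665b4ad90ae75500a121996b9c575aaed0b4" --tlsv1.3 --tls-max 1.3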

Note: The client that you are using may not support the TLSv1.3 protocol. Refer to your operating system recommendations to review support.
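If you are unsure whether your client supports TLSv1.3, one quick check (a general tip, not specific to Amazon S3) is to look at the TLS library listed in the curl version output; builds linked against OpenSSL 1.1.1 or later support TLSv1.3:

curl --version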

Conclusion

In this blog post, we demonstrated different mechanisms to enforce encryption in transit for your data, and the different options for enforcing it based on the capabilities of your clients. Using Amazon S3 bucket policy conditions to enforce encryption protocol versions, you can meet security requirements for your organization or security standards for your industry. You can combine the configurations presented here with your existing Amazon S3 bucket policies to increase the security and access control of your data. If you have systems that need to use protocol versions older than TLSv1.2, we explained how to use CloudFront to provide access to your data with TLSv1.0 or TLSv1.1. S3 Access Points can provide a secure, scalable, and cost-effective mechanism to grant access to your data based on the requester or access path.

If you have feedback or questions about this post, feel free to submit comments in the comments section.

Want more AWS Storage how-to content, news, and feature announcements? Follow us on Twitter (@aws_storage).

Rafael Koike

Rafael M. Koike is a Principal Solutions Architect supporting Enterprise customers in the Southeast and is part of the Storage TFC. Rafael has a passion for building, and his expertise in security, storage, networking, and application development has been instrumental in helping customers move to the cloud securely and quickly.

Jonathan Delfour

Jonathan Delfour is a Principal Technical Account Manager specializing in Energy customers, providing top-notch support as part of the AWS Enterprise Support team. His technical guidance and unwavering commitment to excellence ensure that customers can leverage the full potential of AWS, optimizing their operations and driving success.

Lee Kear

Lee Kear has been working in IT since she received her Master’s Degree in Computer Science from the Georgia Institute of Technology in 1999. She started working at AWS in 2012 as a systems engineer on the Amazon S3 team. She became the first Storage Specialist Solutions Architect specializing in S3 in 2016 and she still enjoys this role today. She loves to help customers use S3 in the most efficient, performant, and cost effective way possible for their use case. Outside of work, she enjoys traveling with her wife.