AWS Storage Blog

Changing your Amazon S3 encryption from S3-Managed to AWS KMS

Customers who use Amazon Simple Storage Service (Amazon S3) often take advantage of S3-Managed Encryption Keys (SSE-S3) for server-side encryption (SSE) of objects. For many customers, SSE-S3 meets their security requirements, as it protects their data at rest. For others, SSE-S3 may have met their requirements initially, but those requirements changed over time. For example, a customer may win new business that requires compliance with a different set of standards. As another example, analytics customers often start by running proofs of concept with non-sensitive data. As they derive value from their analytics platform, they add more data from different sources, and this aggregation of data often changes its classification. They may then be required to implement additional controls for handling the encryption keys, giving them more control over who can access them. They may also seek to separate logging and auditing, or need to support PCI DSS compliance requirements for separate authentication of storage and cryptography. For further details on the difference between AWS managed keys and customer managed keys, you can read this blog post.

To meet stronger security and compliance requirements, some customers may want to change their encryption model from SSE-S3 to SSE-KMS, which uses the AWS Key Management Service (AWS KMS) for encryption. Doing so can provide some additional benefits, including protection from overly permissive policies, such as a bucket policy that grants broad access to the data rather than granting it to individual users or roles. With encryption using KMS keys, anyone accessing the data needs both Amazon S3 permissions and access to the KMS key in order to decrypt it. Customers choosing to use AWS KMS with customer managed keys also get the following benefits, which can support additional compliance requirements:

  • You maintain the ownership of keys with the ability to revoke access, rendering access to the data impossible.
  • You can create, rotate, and disable customer managed keys (CMKs) from the AWS KMS console, with auditable usage, in line with your own compliance requirements.
  • The security controls in AWS KMS can help you meet encryption-related compliance requirements.

In this post, I demonstrate four things:

  • How to set up default encryption on a bucket to use KMS keys for encryption.
  • How to change existing objects to use KMS keys for encryption.
  • The additional protection using AWS KMS offers against overly permissive policies.
  • How to set a bucket policy that only allows uploads if a specific KMS key is requested for encryption.

While the method in this post can provide the benefits in the preceding list, you must carefully weigh the tradeoffs that come with more control over encryption. AWS KMS establishes request-per-second (RPS) quotas to ensure that it can provide a fast and resilient service. For example, the default request quota for AWS KMS is between 5,500 and 30,000 RPS, depending on the AWS Region; for more information, see AWS KMS limits. AWS KMS request quotas are adjustable, except for the custom key store quota. If you must exceed a quota, you can request an increase through Service Quotas, using either the Service Quotas console or the RequestServiceQuotaIncrease API operation. For details, see requesting a quota increase in the Service Quotas User Guide. If Service Quotas for AWS KMS are not available in your AWS Region, visit the AWS Support Center and create a case.
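As a quick sanity check before migrating, you can compare your expected peak request rate against the lowest default quota. The numbers below are illustrative placeholders, not measurements; substitute your own workload figures and your Region's actual quota:

```shell
# Compare an estimated peak per-second request rate (every GET of an
# SSE-KMS object triggers a KMS Decrypt call) against the lowest
# default AWS KMS quota. Both values below are illustrative only.
PEAK_REQUESTS_PER_SEC=1200
KMS_QUOTA_RPS=5500
awk -v r="$PEAK_REQUESTS_PER_SEC" -v q="$KMS_QUOTA_RPS" \
    'BEGIN { if (r <= q) print "within default quota"; else print "consider a quota increase" }'
# prints: within default quota
```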

Using AWS KMS with customer managed keys also has cost considerations. To help understand this impact, let’s assume you store 10 TB of data as 1-GB objects in S3 Standard in the Europe (London) Region. Over the month, you download the objects 2,000,000 times and overwrite 10,000 of them with updated versions, all within the same Region.

  • S3 costs = $240.86 per month

If you used SSE-S3 this would be the total cost. However, if you changed the encryption to SSE-KMS, you must factor in 10,000 encryption requests and 2,000,000 decryption requests over the month.

  • One AWS KMS CMK = $1
  • Encryption and decryption = $6
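You can reproduce the request-charge figure above with quick arithmetic. Assuming a list price of $0.03 per 10,000 AWS KMS API requests (verify current pricing for your Region), the 10,000 encryption and 2,000,000 decryption requests cost:

```shell
# 10,000 Encrypt + 2,000,000 Decrypt requests, at an assumed
# $0.03 per 10,000 KMS API requests
awk 'BEGIN { printf "$%.2f\n", (10000 + 2000000) / 10000 * 0.03 }'
# prints: $6.03
```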

To size your transition to SSE-KMS, you can use either the S3 Inventory report or Amazon Macie to identify the number of objects and byte counts. You can use this to create a cost model for your migration. To understand how quotas may affect you, see the AWS KMS Developer Guide.
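For example, once you export an S3 Inventory report as CSV, you can total the object count and bytes with a one-liner. The layout below (bucket, key, size in bytes) is a simplified, hypothetical excerpt; real inventory files contain more columns, so adjust the field number to match your report's configuration:

```shell
# Hypothetical, simplified S3 Inventory excerpt: bucket,key,size-in-bytes
cat > inventory-sample.csv <<'EOF'
kms-encryption-demo,test-1.log,13
kms-encryption-demo,test-2.log,15
kms-encryption-demo,test-3.log,14
EOF

# Sum object count and total bytes to feed into your cost model
awk -F',' '{ n++; bytes += $3 } END { printf "%d objects, %d bytes\n", n, bytes }' inventory-sample.csv
# prints: 3 objects, 42 bytes
```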

Throughout this post, I use a combination of the AWS Command Line Interface (AWS CLI) and the AWS Management Console to interact with the AWS services. If you haven’t already installed the AWS CLI, then follow this guide to do so.

For the rest of this post, where you see commands, change the parameters to suit your environment. This includes replacing the bucket name “kms-encryption-demo” and any ARNs or placeholders like “<access key from the create-access-key command>.” Note that the code in this blog post is provided as an example of how you can script an encryption key change. Carefully test and understand these changes before using them in production.

In this first step, I create a new bucket and upload an object to demonstrate the differences in accessing S3 content under different encryption scenarios.

Setting default bucket encryption

Initially you are going to create a bucket with SSE-S3 encryption enabled and upload a file.
Run the following commands in the AWS CLI (remember to edit as appropriate):

aws s3 mb s3://kms-encryption-demo
aws s3api put-bucket-encryption --bucket kms-encryption-demo --server-side-encryption-configuration '{
	"Rules": [{
		"ApplyServerSideEncryptionByDefault": {
			"SSEAlgorithm": "AES256"
		}
	}]
}'
echo 'Lots of data' > test-1.log
echo 'Even more data' > test-2.log
echo 'The most data' > test-3.log
aws s3 cp test-1.log s3://kms-encryption-demo/

Finally, query the object you uploaded to validate server-side encryption has been set correctly. Do so with the following command:

aws s3api head-object --bucket kms-encryption-demo --key test-1.log

If you look at the response you receive from the AWS CLI, you can see that the object has S3 server-side encryption set. You can see this by looking at the field ServerSideEncryption, which is set to “AES256.”
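If you only want the encryption field rather than the whole response, you can add `--query ServerSideEncryption --output text` to the head-object command. The snippet below demonstrates the same extraction locally against a trimmed, made-up example response, so it runs without S3 access:

```shell
# Trimmed, hypothetical example of a head-object response (not a real API reply)
cat > head-object-sample.json <<'EOF'
{
    "ContentLength": 13,
    "ServerSideEncryption": "AES256"
}
EOF

# The live equivalent would be:
#   aws s3api head-object --bucket kms-encryption-demo --key test-1.log \
#       --query ServerSideEncryption --output text
python3 -c 'import json; print(json.load(open("head-object-sample.json"))["ServerSideEncryption"])'
# prints: AES256
```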

Create an AWS KMS key

AWS KMS is a simple-to-use key management service that makes it easy for you to create, manage, and control keys for use with a wide range of AWS services to encrypt and decrypt your data.

In this next step, you create a new key. You then set the default encryption on the bucket to use the KMS key, and then upload a new file to validate it is encrypted with the new key. To achieve this, you first create a KMS key by following this article.

When configuring AWS KMS use the following values:

  • Alias: kms-demo
  • Description: AWS KMS Demo
  • Advanced options: Keep defaults
  • Tags: Keep blank
  • Key Administrator Permissions: Your user name or group
  • Key Usage Permissions: Your user name or group

Set default encryption on the bucket to use our new key

Once the key has been created, you must tell S3 to use it for the bucket you created earlier. Do so by running the following command in the AWS CLI:

aws s3api put-bucket-encryption --bucket kms-encryption-demo --server-side-encryption-configuration '{
	"Rules": [{
		"ApplyServerSideEncryptionByDefault": {
			"SSEAlgorithm": "aws:kms",
			"KMSMasterKeyID": "arn:aws:kms:eu-west-2:1111111111111:key/90258e51-2441-3332-ff43-62a87177c8ac"
		}
	}]
}'

Upload a new file and check the encryption applied to the object

Now run the following commands to upload a new file to the bucket and check the encryption in use:

aws s3 cp test-2.log s3://kms-encryption-demo/
aws s3api head-object --bucket kms-encryption-demo --key test-1.log
aws s3api head-object --bucket kms-encryption-demo --key test-2.log

If you look at the response you receive from the AWS CLI, you can see that the first object still has SSE-S3 (“AES256”) encryption set. The second object, however, has the value “SSEKMSKeyId” set to the KMS key you created earlier.

Test our existing file access with a new user

To demonstrate the granular permissions that AWS KMS provides, create a new user with full access to the S3 bucket and objects and try to access both files.

Do so by running these commands in the AWS CLI:

aws iam create-user --user-name kms-demo
echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::kms-encryption-demo",
                "arn:aws:s3:::kms-encryption-demo/*"
            ]
        }
    ]
}' > policy.json
aws iam create-policy --policy-name kms-demo --policy-document file://policy.json
aws iam attach-user-policy --user-name kms-demo --policy-arn <ARN returned from the create-policy command, e.g. arn:aws:iam::11111111111:policy/kms-demo>
aws iam create-access-key --user-name kms-demo
export AWS_ACCESS_KEY_ID=<access key from the create-access-key command>
export AWS_SECRET_ACCESS_KEY=<secret key from the create-access-key command>
aws s3api get-object --bucket kms-encryption-demo --key test-1.log test-1-download.log
aws s3api get-object --bucket kms-encryption-demo --key test-2.log test-2-download.log

After completing these commands, you can see that the user kms-demo can still successfully access test-1.log because the default S3 encryption is used. However, if the same user tries to access the second file (test-2.log), you get “AccessDenied,” as they do not have access to the KMS key to decrypt the object.
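Before moving on, switch back to your original credentials; the remaining steps need administrator permissions that the kms-demo user does not have:

```shell
# Drop the kms-demo user's credentials from the current shell session,
# falling back to your default AWS CLI profile
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
```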

Updating all objects in a bucket to use AWS KMS keys for encryption

To make changes across all objects in a bucket, you can use the AWS CLI. Note that other metadata, such as ACLs or tags, is not always carried over, and you may need to specify it explicitly. To find out more, there is a great blog post going into detail about how to use the AWS CLI here, and an example of how to do this would be:

aws s3 cp s3://kms-encryption-demo/ s3://kms-encryption-demo/ --recursive --sse-kms-key-id arn:aws:kms:eu-west-2:1111111111111:key/90258e51-2441-3332-ff43-62a87177c8ac --sse aws:kms

If there are millions of items in the S3 bucket, this could take a while to complete. You can offload the job of babysitting the task by using S3 Batch Operations. Details on achieving this can be found in this blog post.

Note that, as explained in the cost example at the beginning of this blog post, there are additional costs associated with performing this operation, and they can become significant across billions of objects. These include charges to get and put the objects, for AWS KMS encryption, and for AWS KMS decryption upon retrieval. Before converting your objects from SSE-S3 to SSE-KMS, model the costs to understand the expenses that will be incurred for your specific use case.
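To see how the KMS request charge alone scales, the same arithmetic applied to one billion objects (one Encrypt request per copied object, again assuming $0.03 per 10,000 requests, and ignoring the S3 request and decryption charges) gives:

```shell
# KMS Encrypt charge for re-encrypting 1,000,000,000 objects,
# at an assumed $0.03 per 10,000 requests
awk 'BEGIN { printf "$%.2f\n", 1000000000 / 10000 * 0.03 }'
# prints: $3000.00
```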

Forcing AWS KMS key encryption

In this final section, you set a bucket policy that prevents users from overriding the default AWS KMS encryption that was set up in the initial step. Do so by running these commands in the AWS CLI:

echo '{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "RequireKMSEncryption",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::kms-encryption-demo/*",
            "Condition": {
                "StringNotLikeIfExists": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:eu-west-2:1111111111111:key/90258e51-2441-3332-ff43-62a87177c8ac"
                }
            }
        }
    ]
}' > bucket_policy.json
aws s3api put-bucket-policy --bucket kms-encryption-demo --policy file://bucket_policy.json
aws s3 cp test-3.log s3://kms-encryption-demo/ --sse AES256

The last command attempts to upload “test-3.log” with SSE-S3 (AES256) encryption explicitly specified. The upload is rejected with an “Access Denied” error, because the bucket policy you just attached denies any upload that requests encryption with anything other than the KMS key you generated.

If you attempt the same operation without specifying an encryption type, the upload succeeds, because the bucket’s default encryption applies your KMS key:

aws s3 cp test-3.log s3://kms-encryption-demo/
aws s3api head-object --bucket kms-encryption-demo --key test-3.log

Performing a final check on the object shows us that the correct encryption has been set this time.

Cleaning up

To prevent incurring ongoing charges, you should clean up the resources you created during this tutorial. Do so by running these commands in the AWS CLI:

aws kms schedule-key-deletion --key-id 90258e51-2441-3332-ff43-62a87177c8ac --pending-window-in-days 7
aws iam detach-user-policy --user-name kms-demo --policy-arn arn:aws:iam::11111111111:policy/kms-demo
aws iam delete-user --user-name kms-demo
aws iam delete-policy --policy-arn arn:aws:iam::11111111111:policy/kms-demo
aws s3 rb s3://kms-encryption-demo --force
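The commands above remove the AWS resources; you may also want to delete the local files created during the walkthrough (the filenames below match the ones used earlier in this post):

```shell
# Remove the local working files created in the earlier steps
rm -f test-1.log test-2.log test-3.log policy.json bucket_policy.json
```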


In this post, I have shown you:

  1. How to set default encryption on a bucket to automatically encrypt new object uploads.
  2. The additional protection that SSE-KMS offers against overly permissive access policies.
  3. How to update the encryption for a small number of objects using the AWS CLI, with pointers to resources for doing the same at scale with S3 Batch Operations.
  4. How to enforce object uploads to only allow them if specific types of encryption are specified.

The methods demonstrated in this blog post for changing S3 encryption to SSE-KMS can help you meet your compliance requirements. This can be helpful for customers who find their compliance needs changing over time, as they must adhere to more stringent policies for data security. Because a user requires permission not only to Amazon S3 but also to the AWS KMS key, this aligns with a defense-in-depth approach, as described in the AWS Well-Architected security pillar.

These methods are just some of many additional controls that can help improve your security posture. Combining them with services like AWS Config and AWS Organizations (service control policies) provides further controls to help you monitor and enforce the desired policies for your S3 buckets. Thanks for reading the post, and let me know if you have questions or feedback in the comments section.