AWS Storage Blog

Auditing Amazon S3 encryption methods for object uploads in real time

Encryption of data at rest is increasingly necessary for maintaining compliance and providing another layer of protection for data against unauthorized access. Amazon S3 offers multiple methods for server-side encryption (SSE) of new objects in your bucket. As not all encryption options are equal, customers often ask how to track and control the method of encryption of objects according to internal security controls in real time.

Some customers also ask how to create alert notifications when an object is created with no encryption, or an encryption method is used that is non-compliant with security controls.

In this post, I walk through creating custom Amazon CloudWatch metrics that enable you to track which server-side encryption method is being used for objects created in your bucket.

The AWS CloudTrail service offers logging and auditing of Amazon S3 data events for reads and writes made to S3 objects. For the purpose of this blog post, I show how to create a solution to audit S3 PUT object operations to ensure that the correct server-side encryption option is used.

Amazon S3 encryption

Amazon S3 offers a number of encryption options for customers to encrypt their data. Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it is received by the service, and decrypts it for you when you access it. The S3 server-side encryption options differ in how the encryption keys are managed and in the features they offer.

More information on the differences between these server-side encryption options can be found in the Amazon S3 documentation.

Amazon S3 enables customers to set SSE-S3 or SSE-KMS in PUT, POST, and COPY object requests by specifying the x-amz-server-side-encryption header. Customers can also take advantage of S3's default bucket encryption for cases where the header is not specified in the request.
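As a sketch of how this looks in code (assuming boto3-style parameter names, where `ServerSideEncryption` maps to the `x-amz-server-side-encryption` header), the following helper builds the keyword arguments you would pass to an S3 `put_object` call:

```python
def put_object_args(bucket, key, body, sse="aws:kms", kms_key_arn=None):
    """Build PutObject kwargs; the SSE values map to the
    x-amz-server-side-encryption request header."""
    args = {"Bucket": bucket, "Key": key, "Body": body}
    if sse is not None:
        args["ServerSideEncryption"] = sse  # "AES256" (SSE-S3) or "aws:kms" (SSE-KMS)
        if sse == "aws:kms" and kms_key_arn:
            args["SSEKMSKeyId"] = kms_key_arn  # omit to use the default KMS key
    return args

# With boto3 you would then call:
# boto3.client("s3").put_object(**put_object_args("examplebucket", "test/myobject", b"data"))
```

Passing `sse=None` omits the header entirely, in which case the bucket's default encryption setting (if any) applies.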

Customers often standardize on a single server-side encryption method and configuration to more easily control and audit object encryption use across all of their buckets and accounts.

Things to know:

For the filters I create, there are six possible outcomes of object encryption on Amazon S3 when creating objects:

  • No encryption
  • Encryption using Amazon S3-managed keys using default bucket encryption
  • Encryption using Amazon S3-managed keys and specified using the x-amz-server-side-encryption request header
  • Encryption using AWS KMS with default bucket encryption
  • Encryption using AWS KMS and specifying the customer master key (CMK) in the x-amz-server-side-encryption request header
  • Encryption using customer-provided encryption keys

Tracking the encryption method on object creation operations can be done easily using CloudTrail logs when Amazon S3 data events are enabled. More information can be found in the CloudTrail documentation.

Setting up your auditing system

Here is a summary of the steps to set this up:

  1. Set up Amazon S3 data events on CloudTrail.
  2. Set up an Amazon EventBridge rule.
  3. Set up CloudWatch metric filters with an alarm.
  4. (Optional) Add an S3 bucket policy to enforce an encryption method.

Prerequisites

To follow along with the process outlined in this post, you must have set up a CloudTrail trail in your AWS account or AWS Organization. You also need an existing S3 bucket that is receiving newly created objects. The bucket must be owned by the same AWS account as the CloudTrail trail or an AWS account belonging to the AWS Organization.

Step 1: Set up Amazon S3 data events on CloudTrail

Ensure that Amazon S3 data events on CloudTrail are enabled for WRITE events for the S3 buckets you want to track in your CloudTrail trail. You have the option of selecting all S3 buckets in your account or a specific list of buckets. See the CloudTrail documentation for more information on logging data events.
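If you prefer to enable this programmatically, the following sketch builds the payload for CloudTrail's `PutEventSelectors` API (the trail name and bucket ARNs are placeholders; the bare `arn:aws:s3` value logs data events for all buckets in the account):

```python
def s3_write_event_selectors(bucket_arns=None):
    """Build the EventSelectors payload for CloudTrail PutEventSelectors.

    Specific buckets are addressed as "arn:aws:s3:::bucket/" (the trailing
    slash covers all objects); the bare "arn:aws:s3" value covers every bucket.
    """
    values = ([arn.rstrip("/") + "/" for arn in bucket_arns]
              if bucket_arns else ["arn:aws:s3"])
    return [{
        "ReadWriteType": "WriteOnly",      # only write (PUT/POST/COPY) events needed
        "IncludeManagementEvents": True,
        "DataResources": [{"Type": "AWS::S3::Object", "Values": values}],
    }]

# With boto3: boto3.client("cloudtrail").put_event_selectors(
#     TrailName="my-trail", EventSelectors=s3_write_event_selectors())
```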

Step 2: Set up an EventBridge rule

It is time to create an EventBridge rule that can read the Amazon S3 data events on CloudTrail, filter them, and send them to a target of your choice. For this example, I set a custom CloudWatch log group as my target.

The SSEApplied value in the additionalEventData portion of the CloudTrail data event informs us of the encryption type applied to the object. The absence of this value informs us that no server-side encryption was used.

Here is an example of a data event for a PutObject operation made where the object was encrypted with default SSE-KMS encryption:

{
    "version": "0",
    "id": "c0e84bfc-75ce-3f0f-c2dd-573e2614df55",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.s3",
    "account": "111122223333",
    "time": "2020-06-16T17:57:50Z",
    "region": "us-west-2",
    "resources": [],
    "detail": {
        "eventVersion": "1.07",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDA1234567890ABDCE",
            "arn": "arn:aws:iam::111122223333:user/matthew",
            "accountId": "111122223333",
            "accessKeyId": "AKIA1234567890ABDCE",
            "userName": "matthew"
        },
        "eventTime": "2020-06-16T17:57:50Z",
        "eventSource": "s3.amazonaws.com",
        "eventName": "PutObject",
        "awsRegion": "us-west-2",
        "requestParameters": {
            "bucketName": "examplebucket",
            "Host": "examplebucket.s3.us-west-2.amazonaws.com",
            "key": "test/myobject"
        },
        "responseElements": {
            "x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-west-2:111122223333:key/12345678-d1eb-abcd-97e4-987654321097",
            "x-amz-server-side-encryption": "aws:kms",
            "x-amz-version-id": "EvL3oCr9HUg14Hr8XaUO8nvbhqfd_.vV"
        },
        "additionalEventData": {
            "SignatureVersion": "SigV4",
            "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
            "bytesTransferredIn": 0,
            "SSEApplied": "Default_SSE_KMS",
            "AuthenticationMethod": "AuthHeader",
            "x-amz-id-2": "tzVha6QfX3Ygs4hJj0mkcvDcJlXzTw9jt05e01+OxN7jk0B3V2ALAJhMM0y3r4cO59iOlntsf8E=",
            "bytesTransferredOut": 0
        },
        "requestID": "F3DAD4614724AFFF",
        "eventID": "e0c88b9a-f859-407d-8460-a9159dbd08f5",
        "readOnly": false,
        "resources": [
            {
                "type": "AWS::S3::Object",
                "ARN": "arn:aws:s3:::examplebucket/test/myobject"
            },
            {
                "accountId": "111122223333",
                "type": "AWS::S3::Bucket",
                "ARN": "arn:aws:s3:::examplebucket"
            }
        ],
        "eventType": "AwsApiCall",
        "managementEvent": false,
        "recipientAccountId": "111122223333",
        "eventCategory": "Data"
    }
}

The key point here is that the SSEApplied value only appears with the following Amazon S3 API calls: PutObject, CopyObject, and CreateMultipartUpload (also known as InitiateMultipartUpload). Therefore, you must ensure your EventBridge rule only applies to the PUT, POST, and COPY object operations; otherwise, you will see an incorrect count of non-encrypted objects.
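Putting the two rules together (SSEApplied maps to one of five encryption methods, and its absence means no server-side encryption), the classification that the metric filters in step 3 perform can be sketched in Python:

```python
# Map the SSEApplied value (or its absence) to the six outcomes listed earlier.
SSE_OUTCOMES = {
    "Default_SSE_S3": "sse-s3-default",    # bucket default encryption, SSE-S3
    "SSE_S3": "sse-s3-header",             # request header, SSE-S3
    "Default_SSE_KMS": "sse-kms-default",  # bucket default encryption, SSE-KMS
    "SSE_KMS": "sse-kms-header",           # request header, SSE-KMS
    "SSE_C": "sse-c-header",               # customer-provided encryption keys
}

def classify_encryption(event):
    """Return the encryption outcome for one CloudTrail S3 data event."""
    extra = event.get("detail", {}).get("additionalEventData", {})
    return SSE_OUTCOMES.get(extra.get("SSEApplied"), "non-encrypted")
```

Applied to the sample event above, this returns "sse-kms-default", because SSEApplied is Default_SSE_KMS.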

In the EventBridge console, go to the Rules tab and choose the Create rule button.

  1. Create a rule with a name of your choice.
  2. Under the Define Pattern section, select the Event pattern radio button.
  3. Select the Pre-defined pattern by service radio button.

Then set each field to the following values:

  • Service Provider: AWS
  • Service Name: S3
  • Event Type: Object Level Operations
  • Specific operation(s): PutObject, CopyObject, CreateMultipartUpload

Your EventBridge rule can optionally be set for a specific list of S3 buckets, but for the purposes of this blog post, set this rule to Any Bucket. The following screenshot shows how the Define Pattern section looks on the EventBridge rule page:

On the EventBridge rule page the fields are filled out with the service provider, service name, event type, and specific operations

To validate that you have set this correctly, the EventBridge console will show the following event pattern:

{
    "source": [
        "aws.s3"
    ],
    "detail-type": [
        "AWS API Call via CloudTrail"
    ],
    "detail": {
        "eventSource": [
            "s3.amazonaws.com"
        ],
        "eventName": [
            "PutObject",
            "CopyObject",
            "CreateMultipartUpload"
        ]
    }
}

Your EventBridge rule can have additional targets, but for the purposes of this walkthrough, just add a CloudWatch log group as the target. Set your CloudWatch log group to /aws/events/sse-objects.
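The same rule can be created with the EventBridge API. This sketch builds the event pattern shown above; the rule name, region, account ID, and target ARN in the usage comment are placeholders:

```python
import json

def s3_object_ops_pattern(buckets=None):
    """Build the EventBridge event pattern for S3 object-level write operations."""
    pattern = {
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutObject", "CopyObject", "CreateMultipartUpload"],
        },
    }
    if buckets:  # optionally restrict the rule to a list of bucket names
        pattern["detail"]["requestParameters"] = {"bucketName": buckets}
    return json.dumps(pattern)

# With boto3:
# events = boto3.client("events")
# events.put_rule(Name="sse-objects", EventPattern=s3_object_ops_pattern())
# events.put_targets(Rule="sse-objects", Targets=[{
#     "Id": "log-group",
#     "Arn": "arn:aws:logs:us-west-2:111122223333:log-group:/aws/events/sse-objects",
# }])
```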

The Select targets section should look like this in the EventBridge page:

The Select targets section should look like this in the EventBridge page when adding a CloudWatch log group as the target

Then choose Create at the bottom of the page.

If your bucket is receiving PUT, POST, and COPY object operations, you should now see a new log group called /aws/events/sse-objects on the CloudWatch log groups page.

If you click on that log group, you see that events appear in the log stream with details on the Amazon S3 object operation.

Step 3: Set up your CloudWatch log group metric filters

Now it is time to set up log group metric filters for your encryption types.

Make sure you are on your log group’s page in the CloudWatch console. Click on the Metric Filters tab and create six metric filters by clicking on the Create metric filter button.

In each metric filter, you must set a filter pattern that matches the server-side encryption method used to encrypt the object. The filter name can be any name that helps you identify your metric filter. The metric namespace and metric name are important for grouping your custom metrics under a unique namespace while identifying each filter with a unique metric. For more information on these fields, see the CloudWatch Logs documentation.

The following are the six metric filters and the values for each field:

Metric filter 1: Tracking objects created with no server-side encryption.

Filter name: NON_ENCRYPTED
Filter pattern: { $.detail.additionalEventData.SSEApplied NOT EXISTS }
Metric namespace: S3ObjectEncryption
Metric name: non-encrypted
Metric value: 1
Default value: 0

Metric filter 2: Tracking objects created with a default server-side encryption setting of SSE-S3.

Filter name: SSE_S3_DEFAULT
Filter pattern: { $.detail.additionalEventData.SSEApplied = "Default_SSE_S3" }
Metric namespace: S3ObjectEncryption
Metric name: sse-s3-default
Metric value: 1
Default value: 0

Metric filter 3: Tracking objects created with server-side encryption header specifying SSE-S3.

Filter name: SSE_S3_HEADER
Filter pattern: { $.detail.additionalEventData.SSEApplied = "SSE_S3" }
Metric namespace: S3ObjectEncryption
Metric name: sse-s3-header
Metric value: 1
Default value: 0

Metric filter 4: Tracking objects created with a default server-side encryption setting of SSE-KMS.

Filter name: SSE_KMS_DEFAULT
Filter pattern: { $.detail.additionalEventData.SSEApplied = "Default_SSE_KMS" }
Metric namespace: S3ObjectEncryption
Metric name: sse-kms-default
Metric value: 1
Default value: 0

Metric filter 5: Tracking objects created with server-side encryption header specifying SSE-KMS.

Filter name: SSE_KMS_HEADER
Filter pattern: { $.detail.additionalEventData.SSEApplied = "SSE_KMS" }
Metric namespace: S3ObjectEncryption
Metric name: sse-kms-header
Metric value: 1
Default value: 0

Metric filter 6: Tracking objects created with server-side encryption header specifying SSE-C.

Filter name: SSE_C_HEADER
Filter pattern: { $.detail.additionalEventData.SSEApplied = "SSE_C" }
Metric namespace: S3ObjectEncryption
Metric name: sse-c-header
Metric value: 1
Default value: 0
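The six filters can also be created in one pass with the CloudWatch Logs `PutMetricFilter` API. This sketch generates the request parameters for each filter, using the log group name created in step 2:

```python
# (filter name, metric name, filter pattern) for the six outcomes.
FILTERS = [
    ("NON_ENCRYPTED", "non-encrypted",
     "{ $.detail.additionalEventData.SSEApplied NOT EXISTS }"),
    ("SSE_S3_DEFAULT", "sse-s3-default",
     '{ $.detail.additionalEventData.SSEApplied = "Default_SSE_S3" }'),
    ("SSE_S3_HEADER", "sse-s3-header",
     '{ $.detail.additionalEventData.SSEApplied = "SSE_S3" }'),
    ("SSE_KMS_DEFAULT", "sse-kms-default",
     '{ $.detail.additionalEventData.SSEApplied = "Default_SSE_KMS" }'),
    ("SSE_KMS_HEADER", "sse-kms-header",
     '{ $.detail.additionalEventData.SSEApplied = "SSE_KMS" }'),
    ("SSE_C_HEADER", "sse-c-header",
     '{ $.detail.additionalEventData.SSEApplied = "SSE_C" }'),
]

def metric_filter_requests(log_group="/aws/events/sse-objects",
                           namespace="S3ObjectEncryption"):
    """Yield the kwargs for one put_metric_filter() call per filter."""
    for filter_name, metric_name, pattern in FILTERS:
        yield {
            "logGroupName": log_group,
            "filterName": filter_name,
            "filterPattern": pattern,
            "metricTransformations": [{
                "metricName": metric_name,
                "metricNamespace": namespace,
                "metricValue": "1",
                "defaultValue": 0.0,
            }],
        }

# With boto3:
# logs = boto3.client("logs")
# for kwargs in metric_filter_requests():
#     logs.put_metric_filter(**kwargs)
```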

The following screenshot is an example of what the CloudWatch console will look like with these fields filled in:

Screenshot is an example of what the CloudWatch console will look like with these metric filter fields filled in - metric filter1

After you complete these steps, you should see six metric filters created in the CloudWatch metric filters page. Each block represents a metric filter with a link to each custom metric indicating an encryption method. It should look like this:

Six metric filters created in the CloudWatch metric filters page - each block represents a metric filter with a link to each custom metric

You can optionally add alarms to any of these filters to alert you if any of these values breach a given threshold. For example, if you must not have any non-encrypted or SSE-S3 encrypted objects in your bucket, you can set alarms for when any of these values go over 0.
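As a sketch (the alarm name and evaluation period are assumptions), the parameters for such an alarm via the CloudWatch `PutMetricAlarm` API could look like:

```python
def non_encrypted_alarm(metric_name="non-encrypted",
                        namespace="S3ObjectEncryption"):
    """Build PutMetricAlarm kwargs: alarm when any non-encrypted object appears."""
    return {
        "AlarmName": "s3-non-encrypted-objects",  # illustrative name
        "Namespace": namespace,
        "MetricName": metric_name,
        "Statistic": "Sum",
        "Period": 300,                            # evaluate 5-minute sums
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "notBreaching",       # no events means no alarm
    }

# With boto3: boto3.client("cloudwatch").put_metric_alarm(**non_encrypted_alarm())
```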

For more information on creating CloudWatch alarms, see the CloudWatch documentation.

The following screenshot shows an alarm added to a filter; this one focuses on non-encrypted objects

Once done, you can browse to the CloudWatch Metrics tab and go to your custom namespace, S3ObjectEncryption.

You can graph these metrics by selecting Sum under the Statistic dropdown in the Graphed metrics tab to easily show the trend. Note that these metrics only appear from the time you created the preceding metric filters.
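The same sums can be retrieved programmatically with the CloudWatch `GetMetricData` API. This sketch builds one query per custom metric (the hourly period is an assumption):

```python
def sum_queries(metric_names, namespace="S3ObjectEncryption", period=3600):
    """Build MetricDataQueries that sum each custom metric per hour."""
    return [{
        "Id": name.replace("-", "_"),  # query ids must start with a lowercase letter
        "MetricStat": {
            "Metric": {"Namespace": namespace, "MetricName": name},
            "Period": period,
            "Stat": "Sum",
        },
    } for name in metric_names]

# With boto3:
# boto3.client("cloudwatch").get_metric_data(
#     MetricDataQueries=sum_queries(["non-encrypted", "sse-kms-default"]),
#     StartTime=start, EndTime=end)
```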

This can optionally be added to a CloudWatch dashboard, and the following graph is an example of what this would look like:

an Object Encryption Methods graph on an Amazon CloudWatch dashboard with all the different types of encryption at the bottom

Step 4: (Optional) Configure an S3 bucket policy to enforce a method of encryption

You can also enforce a specific method of server-side encryption by using an Amazon S3 bucket policy, S3 Access Point policies, or AWS Organizations service control policies.

Using the Amazon S3 condition key s3:x-amz-server-side-encryption, you can enforce any of the encryption methods discussed above except SSE-C.

Here is an example of an IAM identity policy that only allows PUT, POST, and COPY object requests specifying SSE-KMS encryption with a specific KMS key. This policy also allows operations with no encryption specified in the request – these would then use the S3 bucket default encryption setting.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow PutObject only with specific KMS encryption key",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:eu-west-1:123456789123:key/7436850b-4f22-4e11-8c88-8db5aaf58be1"
                }
            }
        },
        {
            "Sid": "Allow PutObject with no encryption specified",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}

A bucket policy can also be used. For requests made by IAM identities in the same account as the S3 bucket, you need to use a deny statement. More information on why this is required can be found in the IAM policy evaluation documentation.

We recommend setting S3 default bucket encryption to your intended encryption method to catch requests made without the x-amz-server-side-encryption value, thereby preventing non-encrypted objects from being created. If default encryption is set, you can remove the third policy statement in the following example bucket policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Prevent SSE-S3 encryption",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "Prevent encryption with other KMS keys",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                },
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:eu-west-1:123456789123:key/7436850b-4f22-4e11-8c88-8db5aaf58be1"
                }
            }
        },
        {
            "Sid": "Prevent PutObject with no encryption or SSE-C encryption",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}
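The default bucket encryption recommended above can be configured with the S3 `PutBucketEncryption` API. This sketch builds the request (the bucket name and KMS key ARN are placeholders):

```python
def default_encryption_request(bucket, kms_key_arn=None):
    """Build PutBucketEncryption kwargs: SSE-KMS if a key is given, else SSE-S3."""
    if kms_key_arn:
        rule = {"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms", "KMSMasterKeyID": kms_key_arn}}
    else:
        rule = {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    return {
        "Bucket": bucket,
        "ServerSideEncryptionConfiguration": {"Rules": [rule]},
    }

# With boto3:
# boto3.client("s3").put_bucket_encryption(**default_encryption_request("examplebucket"))
```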

Cleaning up

If you followed along for testing purposes, you might want to clean up to avoid incurring unnecessary costs. If so, delete the CloudWatch metric filters, delete the EventBridge rule, and disable the Amazon S3 data events configuration in your CloudTrail trail.

Conclusion

In this post, I demonstrated how to audit that your Amazon S3 server-side encryption rules are followed across different S3 resources. This not only helps ensure your data is protected, but also that it is protected the way you want it to be, with the right encryption. By using S3 data events on CloudTrail together with EventBridge and CloudWatch metric filters, you can track which encryption method is being used in real time. You can avoid lapses in your encryption measures and ensure your data is protected while you meet your compliance standards.

I also provided the steps to set up alarms and remediation steps for when your objects are not encrypted, or are encrypted with a method you do not expect. I explained how you can take advantage of Amazon S3 resource policies, such as S3 bucket policies, to enforce a particular server-side encryption method. Doing so enables you to prevent objects from being encrypted with the wrong encryption type and possibly falling out of compliance with your internal security controls.

Thanks for reading this blog post. If you have any comments or questions, please don’t hesitate to leave them in the comments section.

Matthew Clark

Matthew Clark is a Systems Dev Engineer on the Amazon S3 team at AWS. Originally from Cape Town, South Africa, he spends his days working on improving the S3 service and working with S3 customers. When not working, he loves spending time with his family and exploring Seattle, but longs for a good boerewors roll with some Chakalaka.