Infrastructure & Automation

Manage Amazon S3 Event Notifications using a Lambda function

Is your Amazon Simple Storage Service (Amazon S3) bucket shared across multiple AWS CloudFormation stacks? If so, you most likely struggle with finding an efficient way to manage S3 Event Notifications.

By default, when you configure event notifications on a shared S3 bucket, each configuration update replaces the bucket's entire notification configuration, overwriting preexisting notifications. This behavior exists because notifications are managed centrally on the S3 bucket, not by the individual teams that manage the stacks.

If you want to change the S3 bucket configuration to accommodate all notifications, you must engage the team that manages the bucket. As a result, the individual teams that manage the stacks and own the individual services cannot configure their own notifications.

To solve this, you can use an AWS Lambda function and custom AWS CloudFormation resources to bypass the centralized S3 notification configuration, allowing you to manage notifications for each service. With this approach, each service’s configuration can manage adding or removing notifications for various S3 actions without affecting the existing notifications configured by other services.

In this post, I walk through the process of creating the stacks, Lambda function, and custom resources to help you manage S3 Event Notifications.

About this blog post

Time to read: ~15 min.
Time to complete: ~30 min.
Cost to complete: ~$0
Learning level: Intermediate (200)
AWS services: AWS Lambda, Amazon S3, Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), AWS Identity and Access Management (IAM), AWS CloudFormation

Process overview

In the exercises that follow, you use the AWS Management Console to configure three stacks:

  • Stack A contains Amazon SQS and a custom resource.
  • Stack B contains Amazon SNS and a custom resource.
  • Stack C contains an S3 bucket, Lambda function, and IAM policy and role.

Figure: Three CloudFormation stacks

The deployment includes the following:

  1. Create an Amazon S3 bucket in stack C.
  2. Create an IAM policy in stack C.
  3. Create an IAM role in stack C.
  4. Create a Lambda function in stack C.
  5. Create stack A using Amazon SQS and the custom resource.
  6. Create stack B using Amazon SNS and the custom resource.

Prerequisites

This post assumes that you have the following:

  • An AWS account with permissions to create the resources used in this post.

Walkthrough

Complete the following steps using the AWS Management Console. Alternatively, you can follow along using the sample AWS CDK project from our GitHub repository.

The first step includes signing in to the AWS Management Console and creating an S3 bucket.

Step 1: Create an S3 bucket

  1. Navigate to the Amazon S3 console.
  2. In the navigation pane, choose Buckets.
  3. Choose Create bucket.
  4. In the Bucket name field, enter a Domain Name System–compliant name for your bucket.
  5. In the AWS Region field, choose the Region where you want the bucket to reside.
  6. For Block Public Access settings for this bucket, select the settings that you want to apply to the bucket.
  7. Choose Create bucket.

Step 2: Create an IAM policy

  1. Navigate to the IAM console.
  2. In the navigation pane, choose Policies.
  3. Choose Create policy.
  4. Choose the JSON tab, and enter the following IAM policy, replacing <bucket-name> with the name of the bucket you created in the previous step:
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutBucketNotification",
            "s3:GetBucketNotification"
         ],
         "Resource":"arn:aws:s3:::<bucket-name>"
      },
      {
         "Effect":"Allow",
         "Action":[
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
         ],
         "Resource":"*"
      }
   ]
}
  5. Choose Next: Tags.
  6. Choose Next: Review.
  7. On the Review policy page, enter a name and, optionally, a description.
  8. Choose Create policy.
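The first statement in this policy corresponds to the two S3 SDK calls the Lambda function makes; the second lets the function write its CloudWatch logs. The following sketch records that mapping and verifies nothing is missing (the mapping object is illustrative, not an AWS API):

```javascript
// Sketch: SDK calls made by the Lambda function, mapped to the IAM actions
// granted in the policy above. This mapping object is illustrative.
const requiredByCall = {
    getBucketNotificationConfiguration: 's3:GetBucketNotification',
    putBucketNotificationConfiguration: 's3:PutBucketNotification'
};

const allowedActions = ['s3:PutBucketNotification', 's3:GetBucketNotification'];
const missing = Object.values(requiredByCall).filter(a => !allowedActions.includes(a));
console.log(missing.length === 0); // true
```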

Step 3: Create an IAM role

  1. Navigate to the IAM console.
  2. In the navigation pane, choose Roles and then Create role.
  3. For Select type of trusted entity, choose AWS service.
  4. For Choose a use case, choose Lambda, and then choose Next: Permissions.
  5. Search for and locate the IAM policy you created in step 2. Attach the policy to the IAM role, and then choose Next: Tags.
  6. Choose Next: Review.
  7. Enter a role name and, optionally, a description.
  8. Choose Create role.

Step 4: Create the Lambda function

The Lambda function manages S3 Event Notifications, creating, updating, and deleting them in response to custom resource events from stacks A and B. You must deploy the Lambda function before you can create the CloudFormation custom resources in stacks A and B.
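The custom resources in stacks A and B invoke this function with a standard CloudFormation custom resource event. The following is an abbreviated sketch of the payload the handler receives; all field values below are illustrative placeholders:

```javascript
// Sketch: an abbreviated CloudFormation custom resource event
// (all values are illustrative placeholders).
const event = {
    RequestType: 'Create',        // 'Update' or 'Delete' on later stack operations
    ResponseURL: 'https://cloudformation-custom-resource-response.example/presigned',
    StackId: 'arn:aws:cloudformation:us-east-1:123456789012:stack/stack-a/guid',
    RequestId: 'unique-request-id',
    LogicalResourceId: 'SampleBucketNotification',
    ResourceProperties: {
        ServiceToken: 'arn:aws:lambda:us-east-1:123456789012:function:ManageS3Notifications',
        BucketName: 'my-sample-bucket',
        NotificationConfiguration: {
            QueueConfigurations: [{ Id: 'SampleQueueNotification' /* ... */ }]
        }
    }
};

// The handler reads the bucket and desired notifications from ResourceProperties.
console.log(event.ResourceProperties.BucketName); // my-sample-bucket
```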

  1. Navigate to Functions on the Lambda console.
  2. Choose Create function.
  3. Choose Author from scratch.
  4. Under Basic information, specify a function name, and choose Node.js 14 for the runtime.
  5. Under Change default execution role > Execution role, choose Use an existing role, and specify the role you created.
  6. Choose Create function.
  7. On the Lambda function details page, choose Code.
  8. For Code source, open the index.js file, and replace its contents with the following code:
const aws = require('aws-sdk'); 
const s3 = new aws.S3();
const url = require('url');
const https = require('https');

exports.handler = async function (event, context) {
    log(JSON.stringify(event, undefined, 2));
    try {
        const props = event.ResourceProperties;
        const getParams = {
            Bucket: props.BucketName
        };
        const currentConfiguration = await s3.getBucketNotificationConfiguration(getParams).promise();
        const mergedConfiguration = mergeConfigurations(event.RequestType, props.NotificationConfiguration, currentConfiguration);
        const putParams = {
            Bucket: props.BucketName,
            NotificationConfiguration: mergedConfiguration
        };
        log({ bucket: props.BucketName, previousConfiguration: JSON.stringify(currentConfiguration), newConfiguration: JSON.stringify(mergedConfiguration) });
        await s3.putBucketNotificationConfiguration(putParams).promise();
        return await submitResponse('SUCCESS');
    } catch (e) {
        logError(e);
        return await submitResponse('FAILED', e.message + `\nMore information in CloudWatch Log Stream: ${context.logStreamName}`);
    }
    function mergeConfigurations(request, inputConfig, currentConfig) {
        const mergedConfig = {}
        for (const [key, value] of Object.entries(currentConfig)) {
            // Default to use existing configuration
            mergedConfig[key] = value;

            const input = inputConfig[key];
            if (input && input.length) {
                // If input configuration exists, merge it with existing configuration
                const inputIds = new Set(input.map(obj => obj.Id));
                if (request == 'Delete') {
                    mergedConfig[key] = value.filter(obj => !inputIds.has(obj.Id));
                } else {
                    const filterConfig = value.filter(obj => !inputIds.has(obj.Id));
                    mergedConfig[key] = filterConfig.concat(input);
                }
            }
        }
        return mergedConfig;
    }
    async function submitResponse(responseStatus, reason) {
        const responseBody = JSON.stringify({
            Status: responseStatus,
            Reason: reason || 'See the details in CloudWatch Log Stream: ' + context.logStreamName,
            PhysicalResourceId: event.PhysicalResourceId || event.LogicalResourceId,
            StackId: event.StackId,
            RequestId: event.RequestId,
            LogicalResourceId: event.LogicalResourceId,
            NoEcho: false,
        });
        log({ responseBody });
        const parsedUrl = url.parse(event.ResponseURL);
        const options = {
            hostname: parsedUrl.hostname,
            port: 443,
            path: parsedUrl.path,
            method: 'PUT',
            headers: {
                'content-type': '',
                'content-length': Buffer.byteLength(responseBody),
            },
        };
        return new Promise((resolve) => {
            const request = https.request(options, (res) => {
                log({ statusCode: res.statusCode, statusMessage: res.statusMessage });
                resolve();
            });
            request.on('error', (error) => {
                log({ sendError: error });
                // Resolve anyway; CloudFormation times out if the response never arrives
                resolve();
            });
            request.write(responseBody);
            request.end();
        });
    }
    function log(obj) {
        console.log(event.RequestId, event.StackId, event.LogicalResourceId, obj);
    }
    function logError(obj) {
        console.error(event.RequestId, event.StackId, event.LogicalResourceId, obj);
    }
};
  9. Choose Deploy.
  10. Choose Configuration > General configuration.
  11. Choose Edit, set the Timeout value to 300 seconds, and choose Save.
  12. Choose Configuration > Concurrency.
  13. Choose Edit, set the Reserved concurrency value to 1, and choose Save.

Confirm that the Lambda function was deployed successfully by checking for the Changes deployed message at the top of the page.
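To build intuition for the function's behavior before wiring it to CloudFormation, the following standalone sketch reproduces the merge logic from the handler with hypothetical sample configurations. A Create request from one stack leaves another stack's notifications untouched, and a Delete request removes only the notifications whose Ids it owns:

```javascript
// Standalone sketch of the handler's merge logic; the sample Ids are hypothetical.
function mergeConfigurations(requestType, inputConfig, currentConfig) {
    const mergedConfig = {};
    for (const [key, value] of Object.entries(currentConfig)) {
        mergedConfig[key] = value; // default: keep the existing configuration
        const input = inputConfig[key];
        if (input && input.length) {
            const inputIds = new Set(input.map(obj => obj.Id));
            if (requestType === 'Delete') {
                // Drop only the notifications this stack owns
                mergedConfig[key] = value.filter(obj => !inputIds.has(obj.Id));
            } else {
                // Replace this stack's notifications; keep everyone else's
                mergedConfig[key] = value.filter(obj => !inputIds.has(obj.Id)).concat(input);
            }
        }
    }
    return mergedConfig;
}

// Stack B's SNS notification already exists; stack A adds its SQS notification.
const current = {
    QueueConfigurations: [],
    TopicConfigurations: [{ Id: 'SampleSnsNotification' }]
};
const stackAInput = { QueueConfigurations: [{ Id: 'SampleQueueNotification' }] };

const afterCreate = mergeConfigurations('Create', stackAInput, current);
console.log(afterCreate.QueueConfigurations.map(c => c.Id)); // [ 'SampleQueueNotification' ]
console.log(afterCreate.TopicConfigurations.map(c => c.Id)); // [ 'SampleSnsNotification' ]

// Deleting stack A removes only stack A's notification.
const afterDelete = mergeConfigurations('Delete', stackAInput, afterCreate);
console.log(afterDelete.QueueConfigurations.length);         // 0
console.log(afterDelete.TopicConfigurations.map(c => c.Id)); // [ 'SampleSnsNotification' ]
```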

Step 5: Create stack A

Create stack A with a new template file that provisions Amazon SQS and the custom resource. The custom resource in this stack uses the Lambda function to configure S3 Event Notifications that send events to the Amazon SQS queue.

  1. Copy and paste the following code into a text editor, and save it as a YAML file. In a later step, you will import this file into the AWS Management Console.
---
Description: Stack that synthesizes S3 event notifications to an SQS queue (qs-1s4376pnc)
Parameters:
  BucketName:
    Type: String
    Description: Bucket to enable S3 event notifications
  FunctionName:
    Type: String
    Description: Lambda function that can manage S3 event notifications
Resources:
  SampleQueue:
    Type: AWS::SQS::Queue
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
  SampleQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
        - Action:
          - sqs:SendMessage
          - sqs:GetQueueAttributes
          - sqs:GetQueueUrl
          Condition:
            ArnLike:
              aws:SourceArn:
                Fn::Join:
                - ''
                - - 'arn:'
                  - Ref: AWS::Partition
                  - ":s3:::"
                  - Ref: BucketName
          Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Resource:
            Fn::GetAtt:
            - SampleQueue
            - Arn
        Version: '2012-10-17'
      Queues:
      - Ref: SampleQueue
  SampleBucketNotification:
    Type: AWS::CloudFormation::CustomResource
    Properties:
      ServiceToken:
        Fn::Join:
        - ''
        - - 'arn:'
          - Ref: AWS::Partition
          - ":lambda:"
          - Ref: AWS::Region
          - ":"
          - Ref: AWS::AccountId
          - ":function:"
          - Ref: FunctionName
      BucketName:
        Ref: BucketName
      NotificationConfiguration:
        QueueConfigurations:
        - Id: SampleQueueNotification
          Events:
          - s3:ObjectCreated:*
          Filter:
            Key:
              FilterRules:
              - Name: prefix
                Value: CategoryA/
          QueueArn:
            Fn::GetAtt:
            - SampleQueue
            - Arn
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
  2. Navigate to the AWS CloudFormation console.
  3. In the navigation pane, choose Stacks.
  4. Choose Create stack > With new resources (standard).
  5. For Prerequisite—Prepare template, choose Template is ready.
  6. For Specify template, choose Upload a template file, and upload the YAML file.
  7. Choose Next.
  8. Enter a name for the stack.
  9. Specify the BucketName and FunctionName parameters, based on the resources generated in previous steps.
  10. Choose Next for the next two pages.
  11. Choose Create stack.

Note: If the CloudFormation custom resource hangs during creation, verify that the Lambda function deployed properly.
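A common cause of a hung custom resource is a ServiceToken that does not resolve to the full ARN of the Lambda function. As a sanity check, this sketch shows the ARN the Fn::Join must produce; the Region, account ID, and function name below are placeholders:

```javascript
// Sketch: the full Lambda function ARN that ServiceToken must resolve to.
// All values below are example placeholders.
function lambdaFunctionArn(partition, region, accountId, functionName) {
    return `arn:${partition}:lambda:${region}:${accountId}:function:${functionName}`;
}

const serviceToken = lambdaFunctionArn('aws', 'us-east-1', '123456789012', 'ManageS3Notifications');
console.log(serviceToken); // arn:aws:lambda:us-east-1:123456789012:function:ManageS3Notifications
```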

Step 6: Create stack B

Create stack B with a new template file that provisions Amazon SNS and a custom resource that points to the Lambda function. In this configuration, S3 Event Notifications are sent to an Amazon SNS topic. The custom resource calls the Lambda function, which merges the topic notification into the bucket's existing notification configuration.

  1. Copy and paste the following code into a text editor and save it as a YAML file. In a later step, you will import this file into the AWS Management Console.
---
Description: Stack that synthesizes S3 event notifications to an SNS topic (qs-1s4376pnc)
Parameters:
  BucketName:
    Type: String
    Description: Bucket to enable S3 event notifications
  FunctionName:
    Type: String
    Description: Lambda function that can manage S3 event notifications
Resources:
  SampleTopic:
    Type: AWS::SNS::Topic
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
  SampleTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Statement:
        - Action: sns:Publish
          Condition:
            ArnLike:
              aws:SourceArn:
                Fn::Join:
                - ''
                - - 'arn:'
                  - Ref: AWS::Partition
                  - ":s3:::"
                  - Ref: BucketName
          Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Resource:
            Ref: SampleTopic
        Version: '2012-10-17'
      Topics:
      - Ref: SampleTopic
  SampleBucketNotification:
    Type: AWS::CloudFormation::CustomResource
    Properties:
      ServiceToken:
        Fn::Join:
        - ''
        - - 'arn:'
          - Ref: AWS::Partition
          - ":lambda:"
          - Ref: AWS::Region
          - ":"
          - Ref: AWS::AccountId
          - ":function:"
          - Ref: FunctionName
      BucketName:
        Ref: BucketName
      NotificationConfiguration:
        TopicConfigurations:
        - Id: SampleSnsNotification
          Events:
          - s3:ObjectCreated:*
          Filter:
            Key:
              FilterRules:
              - Name: prefix
                Value: CategoryB/
          TopicArn:
            Ref: SampleTopic
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
  2. Navigate to the AWS CloudFormation console.
  3. In the navigation pane, choose Stacks.
  4. Choose Create stack > With new resources (standard).
  5. For Prerequisite—Prepare template, choose Template is ready.
  6. For Specify template, choose Upload a template file, and upload your previously created YAML file.
  7. Choose Next.
  8. Enter a name for the stack.
  9. Specify the BucketName and FunctionName parameters, based on the resources generated in previous steps.
  10. Choose Next for the next two pages.
  11. Choose Create stack.

Note: If the CloudFormation custom resource hangs during creation, verify that the Lambda function deployed properly.

Test S3 notifications

First, confirm that a change to the event notification configuration of stack A does not affect the notifications from stack B. Second, confirm that files uploaded to S3 trigger a notification to the configured destination.

Step 1: Update stack A with a new prefix filter

Update the template of stack A by replacing the current S3 Event Notifications prefix filter value of CategoryA/ with NewCategoryA/. The prefix filter limits notifications to objects whose keys begin with the specified value.
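Prefix filtering is a simple key comparison: a notification fires only for objects whose keys start with the configured prefix. A quick sketch, using hypothetical object keys:

```javascript
// Sketch: which object keys an S3 prefix filter matches (sample keys are hypothetical).
const matchesPrefix = (key, prefix) => key.startsWith(prefix);

const keys = ['NewCategoryA/report.csv', 'CategoryA/archive.csv', 'CategoryB/other.csv'];
const matched = keys.filter(key => matchesPrefix(key, 'NewCategoryA/'));
console.log(matched); // [ 'NewCategoryA/report.csv' ]
```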

  1. Using a text editor, open your YAML file, replace its contents with the following code, and save the file.
---
Description: Stack that synthesizes S3 event notifications to an SQS queue (qs-1s4376pnc)
Parameters:
  BucketName:
    Type: String
    Description: Bucket to enable S3 event notifications
  FunctionName:
    Type: String
    Description: Lambda function that can manage S3 event notifications
Resources:
  SampleQueue:
    Type: AWS::SQS::Queue
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
  SampleQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
        - Action:
          - sqs:SendMessage
          - sqs:GetQueueAttributes
          - sqs:GetQueueUrl
          Condition:
            ArnLike:
              aws:SourceArn:
                Fn::Join:
                - ''
                - - 'arn:'
                  - Ref: AWS::Partition
                  - ":s3:::"
                  - Ref: BucketName
          Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Resource:
            Fn::GetAtt:
            - SampleQueue
            - Arn
        Version: '2012-10-17'
      Queues:
      - Ref: SampleQueue
  SampleBucketNotification:
    Type: AWS::CloudFormation::CustomResource
    Properties:
      ServiceToken:
        Fn::Join:
        - ''
        - - 'arn:'
          - Ref: AWS::Partition
          - ":lambda:"
          - Ref: AWS::Region
          - ":"
          - Ref: AWS::AccountId
          - ":function:"
          - Ref: FunctionName
      BucketName:
        Ref: BucketName
      NotificationConfiguration:
        QueueConfigurations:
        - Id: SampleQueueNotification
          Events:
          - s3:ObjectCreated:*
          Filter:
            Key:
              FilterRules:
              - Name: prefix
                Value: NewCategoryA/
          QueueArn:
            Fn::GetAtt:
            - SampleQueue
            - Arn
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
  2. Navigate to the AWS CloudFormation console.
  3. In the navigation pane, choose Stacks.
  4. Choose the name of stack A.
  5. Choose Update.
  6. Choose Replace current template, and then Upload a template file.
  7. Upload your updated YAML file.
  8. Choose Next for the next three pages.
  9. Choose Update stack.

Step 2: Verify event notification change

Verify that the new prefix filter name is updated in the S3 bucket’s properties.

  1. Navigate to the Amazon S3 console.
  2. In the navigation pane, choose Buckets.
  3. Choose your S3 bucket.
  4. Choose Properties, and confirm under Event notifications that the notification for the prefix filter CategoryA/ was updated to NewCategoryA/.

Step 3: Add a new file to S3

Add a small file to the S3 bucket. You can use any file, but keep it small because Amazon S3 charges for storage.

  1. Navigate to the Amazon S3 console.
  2. In the navigation pane, choose Buckets.
  3. Choose the name of your S3 bucket.
  4. Choose Create folder.
  5. For Folder name, enter NewCategoryA.
  6. Choose Create folder.
  7. Under the list of objects, choose NewCategoryA/.
  8. Choose Upload.
  9. Choose Add files, and attach any file.
  10. Choose Upload.

Step 4: Verify Amazon SQS message

Verify that the new S3 notification for the new file appears in the list of notifications.

  1. Navigate to the Amazon SQS console.
  2. In the navigation pane, choose Queues.
  3. Choose the SQS queue you created in the previous section.
  4. Choose Send and receive messages.
  5. Choose Poll for messages.
  6. Choose the message to view the event notification.
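The SQS message body is a JSON document describing the S3 event. The sketch below parses an abbreviated sample; the bucket name and object key are illustrative, and real messages carry additional metadata:

```javascript
// Sketch: parsing an abbreviated S3 event notification from an SQS message body.
// The bucket name and object key below are illustrative.
const messageBody = JSON.stringify({
    Records: [{
        eventSource: 'aws:s3',
        eventName: 'ObjectCreated:Put',
        s3: {
            bucket: { name: 'my-sample-bucket' },
            object: { key: 'NewCategoryA/report.csv' }
        }
    }]
});

const record = JSON.parse(messageBody).Records[0];
const summary = `${record.eventName} s3://${record.s3.bucket.name}/${record.s3.object.key}`;
console.log(summary); // ObjectCreated:Put s3://my-sample-bucket/NewCategoryA/report.csv
```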

Cleanup

If you followed along using the AWS CDK project, change to the local project directory and run this terminal command to remove the resources:

cdk destroy

If you followed along using the AWS Management Console, complete these steps in order:

  1. Delete stacks A and B from the AWS CloudFormation console.
  2. Delete the Lambda function.
  3. Delete the IAM role and policy for the Lambda function.
  4. Delete the S3 bucket.

Conclusion

In this post, I showed you how to set up an AWS environment that uses a Lambda function to manage S3 Event Notifications in a shared S3 bucket. With this configuration, each stack can define its own event notifications using custom resources from AWS CloudFormation.

If you followed along using the AWS CDK project in GitHub, then you are well on your way to configuring your own environment. If you haven’t accessed the sample project yet, I encourage you to take a look because it’s designed to get you up and running quickly.

For more information, see the Amazon S3 Event Notifications documentation.

Philip Chen

Philip Chen is a cloud application architect at AWS. He works with customers to design cloud solutions that are built to achieve business goals and outcomes. He is passionate about his work and enjoys the creativity that goes into architecting solutions. See Philip’s LinkedIn profile to connect.