Category: AWS Lambda

Fanout S3 Event Notifications to Multiple Endpoints

John Stamper, AWS Solution Architect

S3 fan out use-case diagram


Use Cases

The above architecture is an event-driven general-purpose parallel data processing system – data enters S3, notification of new data is sent to SNS, which packages the S3 event notification as a message and delivers it to subscribers. This architecture is ideal for workloads that need more than one data derivative of an object. The purpose of the subscribers is to create a layer of processing that accommodates a wide variety of data sizes and subsequently sends the results of processing to some storage layer. The architecture is not prescriptive with regard to the post-processing storage layer, which is out of scope for this article. In the illustration above, black arrows depict data and blue arrows depict event notifications.

Example use cases are described below.

Image Processing
Master image must be processed to produce multiple image derivatives, e.g. resized, OpenCV result.

Application Log Processing
Application log data must be processed to produce multiple log derivatives, e.g. formatted for operations, security, marketing.

Content Transformation
Documents of one format, e.g. Microsoft Word, must be converted to multiple other formats such as PDF, RTF, MHTML, and ODT.

SNS supports message delivery to several types of subscribers, notably Lambda functions, SQS queues, and HTTP/HTTPS endpoints. Lambda functions make it easy to respond to data without the need for servers. To process data with a long-running or existing application, you can also use SNS to easily send the message to an SQS queue or HTTP/HTTPS endpoint. At this stage messages can be processed by EC2, which offers a wide range of compute/memory/storage options. Overall, the architecture can provide an event-driven parallel data processing system that can leverage the entire AWS compute offering.

This article will focus on the steps to configure an S3 bucket to send an event notification to an SNS topic and subscribe two Lambda functions to the topic. The resulting architecture is a simple implementation of S3 event notification fanout to Lambda functions for processing, which is applicable for workloads that require multiple data derivatives of the same object. In the center of the architecture is the ‘event manifold’, similar to a mechanical manifold, which intakes an event notification at one end (S3), transforms it to a message, and distributes it to all subscribers (data processing elements). This architecture allows customers to build an event-driven parallel data processing architecture that is fast, flexible, and easy to maintain over time. Below is an illustration of the architecture to be assembled.

S3 fan out simple use-case diagram


Step 1 – Create the Bucket

To create a Bucket, follow the documentation here. For this article, set the bucket name to event-manifold-bucket.


Step 2 – Create the Topic

To create a Topic, follow the documentation here. For this article, set the topic name to event-manifold-topic.


Step 3 – Update the Topic Policy to allow Event Notifications from an S3 Bucket

The Topic’s Policy must permit an S3 Bucket to publish event notifications to it. To update the policy:

  1. Select the event-manifold-topic
  2. Select Edit Topic Policy from the Actions button
  3. Edit Topic Policy

  4. Select the Advanced Tab
  5. Clear the existing Policy and replace it with the following policy statement:
    Topic Policy JSON

    • Replace ‘region’ with the region in which the Topic is located, e.g. us-west-1.
    • Replace ‘account id’ with the account id of the Topic, e.g. 123456789012.
    • Replace ‘topic name’ with event-manifold-topic.
    • Replace ‘bucket name’ with event-manifold-bucket.
  6. Select the Update Policy button
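The replaced policy statement (shown as an image above) follows the standard pattern for allowing an S3 bucket to publish to a topic. The following is a reconstruction with the article’s placeholders left in place, so verify it against the policy shown in your console:

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:region:account id:topic name",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:bucket name" }
      }
    }
  ]
}
```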

At this point we have the S3 bucket and an SNS Topic, and the Topic is configured to permit our specific bucket to call the Publish API on it. Next we will configure the S3 bucket to send notifications to the Topic. We will choose the event type ObjectCreated (All) for our general-purpose data processing architecture.


Step 4 – Configure the S3 Bucket to send Event Notifications to the SNS Topic

  1. Select the ‘Events’ portion of the S3 bucket created in Step 1.
  2. Enter a name for the notification, e.g. s3fanout
  3. Enter an event type for the notification, e.g. ObjectCreated (All)
  4. Select SNS Topic radio button of the Send To radio button group
  5. Select Add SNS topic ARN from the SNS Topic drop down list
  6. Enter the SNS Topic ARN created in Step 2
  7. Click the Save button

The picture below is an example.

S3 Event


Step 5 – Create the IAM Role for the Lambda functions

In this step you will create an IAM role which grants permissions for the Lambda functions to write to CloudWatch Logs and read objects from the originating S3 bucket.

  1. In the IAM portion of the AWS console, click the Policies link on the left.
  2. Select the Create Policy button at the top.
  3. In the Create Policy window, select the button which corresponds to the Policy Generator
  4. In the Permissions window, add two statements
    1. Statement One
      1. Effect = Allow
      2. AWS Service = Amazon CloudWatch Logs
      3. Actions = CreateLogGroup, CreateLogStream, PutLogEvents
      4. ARN = arn:aws:logs:*:*:*
      5. Select the Add Statement button
    2. Statement Two
      1. Effect = Allow
      2. AWS Service = Amazon S3
      3. Actions = GetObject
      4. ARN = arn:aws:s3:::event-manifold-bucket/*
      5. Select the Add Statement button
  5. Select the Next Step button.
  6. In the Review Policy window, enter a name for the Policy, e.g. ‘Fanout-Lambda-Policy’.
  7. Select the Create Policy button.
  8. In the IAM portion of the AWS console, click the Roles link on the left.
  9. Click the Create New Role button at the top.
  10. In the Select Role Name window, enter a name for the Role, e.g. CloudWatchLogs-Write-S3Bucket-Read. Click the Next Step button.
  11. In the Select Role Type window, select the button which corresponds to AWS Lambda.
  12. In the Attach Policy window, select the policy you previously created, ‘Fanout-Lambda-Policy’.
  13. In the Review Policy window, select the Create Role button.
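The generated policy document should look roughly like the following sketch (note that s3:GetObject applies to objects rather than the bucket itself, so the resource needs the /* suffix on the bucket ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::event-manifold-bucket/*"
    }
  ]
}
```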


Step 6 – Create the Lambda functions

In this step you will create two Lambda functions in the same region as the SNS Topic, data-processor-1 and data-processor-2. Each function will be edited inline and their execution role will be the role CloudWatchLogs-Write-S3Bucket-Read created in Step 5. This role provides visibility into what the functions are doing through simple logging statements and allows the functions to read the objects from S3.

  1. In the Lambda portion of the AWS console, click the Create a Lambda Function button.
  2. On the Select Blueprint page, select the sns-message blueprint option.
  3. On the Configure event sources page, select event-manifold-topic from the SNS topic dropdown list. Click Next.
  4. On the Configure Function page, enter a name for the function, e.g. ‘data-processor-1’.
  5. As an option, enter a description of the function in the Description field.
  6. Use the default Runtime, Node.js.
  7. Use the default Code entry type, Edit code inline.
  8. Use the default Handler, index.handler.
  9. Select the CloudWatchLogs-Write-S3Bucket-Read role from the Role drop down list.
  10. Click the Next button.
  11. On the Review page, select the Enable now radio button.
  12. Click the Create Lambda Function button at the bottom of the page.

Repeat steps 1-12 to create a second Lambda function, setting the name to data-processor-2 in step 4.

At this point you have two Lambda functions, each of which is programmed to receive an event notification record from an SNS Topic and log to CloudWatch Logs the incoming event data and the SNS message portion of the record. The picture below shows the code.

Simple Lambda Function


Step 7 – Modify the Lambda functions to process SNS messages of S3 event notifications

The code to process an SNS message delivery is shown above. An SNS message delivery is a JSON object containing an array named ‘Records’ with one element within the array – the SNS Message Delivery Object. The element contains several items of data about the event. An example of an SNS message delivery to a Lambda function is below.

Sample SNS Message Object

For the S3 event notification fanout architecture (S3 publish event notification -> SNS Topic -> SNS message delivery of S3 publish event notification -> Lambda function), the JSON object received by the Lambda function differs from that of a plain SNS message delivery. The S3 event notification is contained within the Sns.Message attribute of the SNS Message Delivery Object. An example of an SNS message delivery of an S3 event notification is shown below.

SNS Message Delivery Object

Some extra code is needed for the Lambda function to process the object created in S3. First the code must capture the Sns.Message object from the incoming record. Next, that object must be processed to unbundle the JavaScript object of the S3 event notification from the Sns.Message attribute. Example Lambda code to do this is shown below.

exports.handler = function(event, context) {
   var snsMsgString = JSON.stringify(event.Records[0].Sns.Message);
   var snsMsgObject = getSNSMessageObject(snsMsgString);
   var srcBucket = snsMsgObject.Records[0].s3.bucket.name;
   var srcKey = snsMsgObject.Records[0].s3.object.key;
   console.log('SRC Bucket: ' + srcBucket);
   console.log('SRC Key: ' + srcKey);
   context.succeed();
};

The function getSNSMessageObject(string) must be included in your Lambda function and is shown below.

function getSNSMessageObject(msgString) {
   var x = msgString.replace(/\\/g, '');
   var y = x.substring(1, x.length - 1);
   var z = JSON.parse(y);
   return z;
}
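To see why the unescaping in getSNSMessageObject works, here is a small self-contained example (the event payload below is invented for illustration): stringifying the message twice mimics the escaped form the handler receives via JSON.stringify(event.Records[0].Sns.Message), and the function recovers the original object.

```javascript
// The same unescape-and-parse helper shown above.
function getSNSMessageObject(msgString) {
    var x = msgString.replace(/\\/g, '');
    var y = x.substring(1, x.length - 1);
    var z = JSON.parse(y);
    return z;
}

// A minimal stand-in for an S3 event notification (illustrative values).
var s3Event = { Records: [{ s3: { bucket: { name: 'event-manifold-bucket' },
                                  object: { key: 'photo.jpg' } } }] };

// Stringifying twice produces the escaped string the Lambda handler sees.
var snsMsgString = JSON.stringify(JSON.stringify(s3Event));
var obj = getSNSMessageObject(snsMsgString);
console.log(obj.Records[0].s3.object.key); // photo.jpg
```

Note that stripping every backslash assumes the payload itself contains no escaped characters, which holds for typical S3 event notifications.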

Below is the Lambda function for data-processor-1.

Advanced Lambda Function


Test the architecture

The architecture illustrated in the beginning has been created and assembled. When new objects are created in the event-manifold-bucket, S3 will send the event notification to the SNS Topic event-manifold-topic, which will subsequently deliver a message to both Lambda functions of the new object creation. You can see this by inspecting the logs in Amazon CloudWatch.

CloudWatch Log Groups

By selecting the Log Group for each Lambda function, you can see that both functions received the notification of the S3 create object event and they each have the data they need to pull the record from S3 and process it.

Data Processor 1 Log Stream
Data Processor 2 Log Stream

As an alternative to viewing the log output of the Lambda functions in CloudWatch, you can also view metrics in CloudWatch provided by SNS Topics, including NumberOfMessagesPublished, PublishSize, NumberOfNotificationsDelivered, and NumberOfNotificationsFailed.


Alternative Architecture

The architecture described in this article is one option available to customers who require an event-driven general parallel data-processing system. Another option to achieve S3 Fanout of Event Notifications is to configure a S3 bucket to send the event notification directly to a ‘master’ Lambda function. In this approach, the ‘master’ Lambda function replaces the SNS topic (the event manifold) and must be programmed to send data to the various elements of the processing layer.

Leveraging a Lambda function to serve as the event manifold provides the architect a high degree of choice with regard to the processing elements, due to the flexibility offered by the current runtime environments of Lambda, Node.js and Java 8. In addition, processing elements do not need to ‘unbundle’ the S3 event notification from the Sns.Message attribute of the Message Delivery Object. In exchange for high choice and reduced software maintenance, the architect takes on additional maintenance of the ‘master’ Lambda function – one unit for every downstream data processing element. Below is an illustration of the alternative architecture.

S3 fan out alternative architecture
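To make the trade-off concrete, here is a hypothetical sketch of the fan-out core of such a ‘master’ Lambda function; the processor names are invented for illustration, and the Lambda client is passed in as a parameter so the routine can be exercised without AWS:

```javascript
// Hypothetical fan-out core of a 'master' Lambda event manifold: invoke
// each downstream processor asynchronously, forwarding the raw S3 event.
function fanout(lambdaClient, processors, event, callback) {
    var pending = processors.length;
    var failed = false;
    processors.forEach(function(name) {
        lambdaClient.invoke({
            FunctionName: name,
            InvocationType: 'Event',           // asynchronous invocation
            Payload: JSON.stringify(event)     // forward the raw S3 event
        }, function(err) {
            if (failed) return;
            if (err) { failed = true; return callback(err); }
            if (--pending === 0) callback(null, processors.length);
        });
    });
}

// In the real function, exports.handler would call fanout with an
// AWS.Lambda client from the aws-sdk module, e.g.:
// fanout(new AWS.Lambda(), ['data-processor-1', 'data-processor-2'],
//        event, function(err) { err ? context.fail(err) : context.succeed(); });
```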


Fast, Flexible, Easy to Maintain

The speed of the system is optimal since notifications of S3 events are event-driven and are delivered in parallel to subscribers, which results in parallel processing of data.

The flexibility of the system is high – to date SNS supports two endpoints that are capable of runtime processing: Lambda and HTTP/HTTPS endpoints, and SQS facilitates processing by queue consumers. Additional processing subscribers can be easily added or removed per business requirements.

The subscriber processing layer elements and the post-processing storage layer drive the maintenance of the system. The ingest storage layer (S3) is a highly scalable, reliable, and low-latency data storage infrastructure, and the event manifold component (SNS) is a highly scalable, flexible, cost-effective notification service; neither requires ongoing maintenance.



The above architectures describe and illustrate event-driven general purpose parallel data processing systems. The first architecture utilizes at least three AWS services: Amazon S3, Amazon SNS, and AWS Lambda, with SNS serving as the ‘event manifold’. The alternative architecture utilizes at least two AWS services: Amazon S3 and AWS Lambda, with a Lambda function serving as the ‘event manifold’. These architectures are designed to support Internet scale data processing workloads, require low operational maintenance, provide the architect the option of leveraging the entire AWS Compute family for processing, and are flexible to dynamic business requirements for derivatives of data objects.

Building Scalable and Responsive Big Data Interfaces with AWS Lambda

Tim Wagner, AWS Lambda

Great post on the AWS Big Data Blog by Martin Holste, a co-founder of the Threat Analytics Platform at FireEye, on using AWS Lambda to create scalable applications without infrastructure:

Building Scalable and Responsive Big Data Interfaces with AWS Lambda.

Follow my Lambda adventures on Twitter

SquirrelBin: A Serverless Microservice Using AWS Lambda

Tim Wagner, AWS Lambda General Manager

Will Gaul, AWS Lambda Software Developer

With the recent release of Amazon API Gateway, developers can now create custom RESTful APIs that trigger AWS Lambda functions, allowing for truly serverless backends that include built-in authorization, traffic management, monitoring, and analytics. To showcase what’s possible with this new integration and just how easy it is to build a service that runs entirely without servers, we’ve built SquirrelBin, a simple website that allows users to CRUD runnable little nuggets of code called acorns. Let’s take a look at how we made SquirrelBin…

The SquirrelBin Architecture

The following diagram illustrates SquirrelBin’s architecture:
SquirrelBin Architecture Diagram

Notice what isn’t in the diagram above: servers. In fact, no infrastructure is required for any part of the experience.

There is a clean separation between data management and presentation. The website itself is a fully client-side single page app written in Angular and hosted statically on Amazon S3, with DNS managed by Amazon Route 53. To manage acorns, the app makes REST calls to Amazon API Gateway. These endpoints then trigger Lambda functions that either interact with SquirrelBin’s underlying Amazon DynamoDB acorn-store or run the actual acorn code. Because SquirrelBin’s API is on a completely separate Lambda-powered stack, it would be easy to create additional clients, such as an iOS app that calls the Lambda functions via the AWS Mobile SDK, or an Alexa Skill that enables you to run acorns with your voice.

You might have noticed that the Lambda control plane consists of five functions, one for each CRUD operation. All of these functions are just instances of the new microservice-http-endpoint blueprint available now in the Lambda console:
AWS Lambda Console Blueprints

Yep, we wrote no code for any of the CRUD operations for the site!

While identical, each function is only responsible for handling requests for its respective endpoint. This architecture presents a number of advantages:

  • Isolated deployments and failures: A problem with your API no longer takes down your entire backend. Each Lambda function operates individually and can be edited without affecting other functions.
  • Per-endpoint analytics for free: Each function will publish metrics on request count, errors, and more to CloudWatch, enabling you to quickly answer questions like, “How many acorns have been created in the last 24 hours?”
  • Modularity, simplicity, and separation of concerns: Because each function is only responsible for doing One Thing Well TM, it becomes easier to manage end-to-end configuration, code logic, and service integrations.

That covers the CRUD operations. Executing an acorn is just as easy: a simple version of code execution that supports Node.js can be written in just four lines of code:

exports.handler = function(event, context) {
    if (event.language === 'javascript') context.succeed(eval(event.code.replace(/require/g, '')));
    else context.fail('Language not supported');
};
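As a quick sketch of how that handler behaves, the harness below fakes the Lambda context object (the succeed/fail stubs are stand-ins for illustration only):

```javascript
// The four-line acorn runner, assigned to a local for demonstration.
var handler = function(event, context) {
    if (event.language === 'javascript') context.succeed(eval(event.code.replace(/require/g, '')));
    else context.fail('Language not supported');
};

// Fake context: print the result instead of returning it to Lambda.
handler({ language: 'javascript', code: '1 + 1' },
        { succeed: function(r) { console.log('result:', r); },  // result: 2
          fail: function(e) { console.log('error:', e); } });
```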

Developing SquirrelBin

We began the development process with an understanding of our desired architecture and set up each component within minutes, all from within the AWS console. (In fact, the longest part of the backend setup was waiting for the DNS record to propagate!)

The website itself was developed in tandem, at first using hard-coded mock data and Angular’s $timeout service to simulate REST calls. Once the basic page layouts were complete we swapped in the API Gateway URL and began interacting with live data. In total, SquirrelBin runs in about 150 lines of client-side JavaScript. You can see the full source self-hosted on SquirrelBin here.

We hope that this post has illustrated the simplicity and power of serverless backends made possible with Amazon API Gateway and AWS Lambda, and has inspired you to try making your own!

Until next time, happy Lambda (and SquirrelBin) coding!

-Will and Tim

Follow Tim’s Lambda adventures on Twitter

AWS NY Summit Presentations

Tim Wagner, AWS Lambda

The 2015 AWS NY Summit featured a lot of exciting content for AWS Lambda and ECS. If you weren’t able to join us there, here’s a summary of slideshares and videos with compute-related announcements and content:


Werner Vogels Keynote: Announcing Amazon API Gateway

Announcement excerpt:

Full keynote:

Werner and Matt Wood announce the new Amazon API Gateway and its integration with AWS Lambda.

Breakout session: AWS Lambda, Event-driven Code in the Cloud

AWS Lambda Breakout Slides
See the Slides
Tim’s talk on AWS Lambda, with announcements about HTTP endpoint support, new features, and a sneak peek at the upcoming release of versioning. Also a fun guest appearance by Ricky Robinett of Twilio on integration with Lambda.


Breakout session: Build and Manage Your APIs with Amazon API Gateway

See the Slides
Simon Poile’s deep dive on the new Amazon API Gateway and its integration with AWS Lambda.

Breakout session: Amazon EC2 Container Service: Manage Docker-Enabled Apps in Amazon EC2

Amazon EC2 Container Service: Manage Docker-Enabled Apps in Amazon EC2 Breakout Slides
See the Slides
Brandon Chavis, AWS Solutions Architect, discusses how Amazon ECS makes it easy to build and deploy Docker-based applications.

Breakout session: Build Your Mobile App Faster with AWS Mobile Services

Build Your Mobile App Faster with AWS Mobile Services Breakout Slides
See the Slides
John Burry, AWS Principal Solutions Architect, discusses how you can quickly deliver mobile solutions with scalable backends using AWS services such as Amazon Cognito, Amazon API Gateway, and AWS Lambda.

Follow my Lambda adventures on Twitter

Commenting Support in the AWS Compute Blog

Meta-announcement: We’re in the process of enabling commenting support for the AWS compute blog. You’ll see it starting to appear on newer posts and getting phased in for a subset of older ones. Looking forward to engaging with our readers directly!

-Tim, Deepak, and our many guest authors

Writing AWS Lambda Functions in Clojure

Tim Wagner, AWS Lambda General Manager

Bryan Moffatt, AWS Lambda Software Developer

AWS Lambda’s Java support also makes it easy to write Lambda functions in other JVM-based languages. Previously we looked at doing this for Scala; today we’ll see how it can be done with Clojure.

Getting Started with Clojure

We’ll build our Clojure project with Leiningen, but you can use Boot, Maven, or another build tool as well. To follow along below, make sure you’ve first installed Leiningen, that lein is on your path, and that you have a Java 8 SDK installed.

Next, open a command line prompt where you want to do your development (for example, “C:\tmp\clojure-demo”) and create the following directory structure:

clojure-demo
└── src
    └── java
At the same level as src, create a file named ‘project.clj’ with the following content:

(defproject lambda-clj-examples "0.1.0"
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.clojure/data.json "0.2.6"]
                 [com.amazonaws/aws-lambda-java-core "1.0.0"]]
  :java-source-paths ["src/java"]
  :aot :all)

Ok, time to write some code…

Clojure Meets Lambda: Hello World!

Let’s start with the classic: Create a file in your src directory named ‘hello.clj’ with the following content:

(ns hello
  (:gen-class
   :methods [^:static [handler [String] String]]))

(defn -handler [s]
  (str "Hello " s "!"))

Now at the root of your tree, execute

lein uberjar

When this completes, it should have created a subdirectory called ‘target’ containing the file ‘lambda-clj-examples-0.1.0-standalone.jar’, which is ready to be uploaded to AWS Lambda.

You can use the command line or the Lambda console to do the upload/creation. If you’re using the CLI, the command will look like the following, but you’ll need to use a valid role argument (this also assumes you’re currently in the clojure-demo directory). If you use the console, the handler is the same as the one in the command below (“hello::handler”).

$ aws lambda create-function \
  --function-name clj-hello \
  --handler hello::handler \
  --runtime java8 \
  --memory 512 \
  --timeout 10 \
  --role arn:aws:iam::awsaccountid:role/lambda_exec_role \
  --zip-file fileb://./target/lambda-clj-examples-0.1.0-standalone.jar

You can invoke and test from the command line or the console; the console view looks like this if I test it with a sample input of “Tim” (including the quotes):

Scala Upload in the AWS Lambda Console

Fun with Java

With HelloWorld under our belt, let’s tackle Java integration, as a first step on the road toward processing some Amazon S3 events. First, let’s extend our code slightly as follows:

(ns hello
  (:gen-class
   :methods [^:static [handler [String] String]]))

(defn -handler [s]
  (str "Hello " s "!"))

; Add POJO handling

(defn -handlepojo [this event]
  (str "Hello " (.getFirstName event) " " (.getLastName event)))

(gen-class
  :name PojoHandler
  :methods [[handlepojo [example.MyEvent] String]])

Next, create a subdirectory called “example” in src/java, and in it create a file called “MyEvent.java” with the following content:

package example;

public class MyEvent {

    private String firstName;
    private String lastName;

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getLastName() {
        return lastName;
    }
}
Again in ‘C:\tmp\clojure-demo’, execute

lein uberjar

and create a new Lambda function from the command line or console; the CLI command looks like

$ aws lambda create-function \
  --function-name clj-hellopojo \
  --handler PojoHandler::handlepojo \
  --runtime java8 \
  --memory 512 \
  --timeout 10 \
  --role arn:aws:iam::awsaccountid:role/lambda_exec_role \
  --zip-file fileb://./target/lambda-clj-examples-0.1.0-standalone.jar

if you test this function with Lambda test input like

{
  "firstName": "Tim",
  "lastName": "Wagner"
}

you should get a response like “Hello Tim Wagner”. Now that we have Java integration working, let’s tackle a more real-world example.

Processing an Amazon S3 Event in Clojure

Now we’ll process a more interesting type – a bucket notification sent by Amazon S3 when an object is added. We’ll also see how to make a POJO (in this case the S3 event class) fit more gracefully into Clojure.

The S3 event itself is fairly complex; here’s the sample event from the Lambda console:

  "Records": [
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      "requestParameters": {
        "sourceIPAddress": ""
      "responseElements": {
        "x-amz-request-id": "C3D13FE58DE4C810",
        "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          "arn": "arn:aws:s3:::mybucket"
        "object": {
          "key": "HappyFace.jpg",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e"

We could tackle this just like we handled the name POJO in the previous section. To do that, add

[com.amazonaws/aws-lambda-java-events "1.0.0"]

to the list of dependencies in your lein configuration and proceed as we did above for the name POJO.

Alternatively, we can treat this complex type in a more idiomatic way within Clojure. We’ll need to integrate with Lambda’s Java environment using a raw stream handler and then craft a “native” type by parsing the content. To see this in action, let’s create a new file in src called “stream_handler.clj” that contains the following:

(ns stream-handler
  (:gen-class
   :implements [com.amazonaws.services.lambda.runtime.RequestStreamHandler])
  (:require [clojure.data.json :as json]
            [clojure.string :as s]
            [clojure.java.io :as io]
            [clojure.pprint :refer [pprint]]))

(defn handle-event [event]
  (pprint event)
  {:who-done-it (get-in event [:records 0 :request-parameters :source-ip-address])
   :bucket-owner (get-in event [:records 0 :s3 :bucket :owner-identity :principal-id])})

(defn key->keyword [key-string]
  (-> key-string
      (s/replace #"([a-z])([A-Z])" "$1-$2")
      (s/replace #"([A-Z]+)([A-Z])" "$1-$2")
      (s/lower-case)
      (keyword)))

(defn -handleRequest [this is os context]
  (let [w (io/writer os)]
    (-> (json/read (io/reader is) :key-fn key->keyword)
        (handle-event)
        (json/write w))
    (.flush w)))

Again, build with

lein uberjar

and then upload with a command (or console actions) like

$ aws lambda create-function \
  --function-name clj-s3 \
  --handler stream_handler \
  --runtime java8 \
  --memory 512 \
  --timeout 10 \
  --role arn:aws:iam::awsaccountid:role/lambda_exec_role \
  --zip-file fileb://./target/lambda-clj-examples-0.1.0-standalone.jar

To test this in the Lambda console, configure the sample event template to be “S3 Put” (which is shown above) and invoke it. You should get

{
  "who-done-it": "",
  "bucket-owner": "EXAMPLE"
}

as a result.

You can see the “Clojure-ified” form of the S3 event in the logs; here’s what it looks like if you’re testing in the Lambda console:

START RequestId: 69dee059-27eb-11e5-89ff-5113959a98f1
{:records
 [{:event-name "ObjectCreated:Put",
   :event-version "2.0",
   :event-source "aws:s3",
   :response-elements
   {:x-amz-request-id "C3D13FE58DE4C810",
    :x-amz-id-2 "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"},
   :aws-region "us-east-1",
   :event-time "1970-01-01T00:00:00.000Z",
   :user-identity {:principal-id "EXAMPLE"},
   :s3
   {:s3schema-version "1.0",
    :configuration-id "testConfigRule",
    :bucket
    {:name "sourcebucket",
     :owner-identity {:principal-id "EXAMPLE"},
     :arn "arn:aws:s3:::mybucket"},
    :object
    {:key "HappyFace.jpg",
     :size 1024,
     :e-tag "d41d8cd98f00b204e9800998ecf8427e"}},
   :request-parameters {:source-ip-address ""}}]}
END RequestId: 69dee059-27eb-11e5-89ff-5113959a98f1

We hope this article helps developers who love Clojure get started using it in Lambda. Happy Lambda (and Clojure) coding!

-Tim and Bryan

Follow Tim’s Lambda adventures on Twitter

Continuous Integration/Deployment for AWS Lambda functions with Jenkins and Grunt – Part 2

Daniele Stroppa, AWS Solution Architect

In a previous post we showed how to make use of tools such as Grunt (a JavaScript task runner that can be used to automate tasks such as building and packaging) and the grunt-aws-lambda plugin to execute and test your Lambda function in your local environment. In this post we’ll take it a step further and show how to use Jenkins to streamline the AWS Lambda function deployment workflow.


Setup the build environment

For our build environment we’ll launch an Amazon EC2 instance using the Amazon Linux AMI and install and configure the required packages. Make sure that the security group you select for your instance allows traffic on ports TCP/22 and TCP/80, and that the IAM role you select for your EC2 instance allows the GetFunction, CreateFunction, UpdateFunctionCode, and UpdateFunctionConfiguration Lambda actions, as well as the IAM PassRole action, so that it is able to deploy the Lambda function, e.g.:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1432812345671",
            "Effect": "Allow",
            "Action": [
                "lambda:GetFunction",
                "lambda:CreateFunction",
                "lambda:UpdateFunctionCode",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt14328112345672",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}


Install and configure Jenkins, Git and Nginx

Connect to your instance using your private key and switch to the root user.
First, let’s update the repositories and install Nginx and Git.

# yum update -y
# yum install -y nginx git

To install Jenkins on Amazon Linux, we need to add the Jenkins repository and install Jenkins from there.

# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
# rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
# yum install -y jenkins

As Jenkins typically uses port TCP/8080, we’ll configure Nginx as a proxy. Edit the Nginx config file (/etc/nginx/nginx.conf) and change the server configuration to look like this:

server {
    listen       80;
    server_name  _;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
Start the Jenkins and Nginx services and make sure they will be running after a reboot:

# service jenkins start
# service nginx start
# chkconfig jenkins on
# chkconfig nginx on

Point your browser to the public DNS name of your EC2 instance and you should be able to see the Jenkins home page:

Jenkins Home Page

The Jenkins installation is currently accessible through the Internet without any form of authentication. Before proceeding to the next step, let’s secure Jenkins. Select Manage Jenkins on the Jenkins home page, click Configure Global Security and then enable Jenkins security by selecting the Enable Security checkbox.

For the purpose of this walkthrough, select Jenkins’s Own User Database under Security realm and make sure to select the Allow users to sign up checkbox. Under Authorization, select Matrix-based security. Add a user (e.g. admin) and provide necessary privileges to this user.

Configure Global Security

After that’s complete, save your changes. You will then be asked to provide a username and password to log in. Click Create an account, provide your username – i.e. admin – and fill in the user details. You will now be able to log in securely to Jenkins.


Install and configure the Jenkins plugins

The last step in setting up our build environment is to install and configure the Jenkins plugins required to deploy your Lambda function. We’ll also need a plugin to interact with the code repository of our choice, GitHub in our case.

From the Jenkins dashboard select Manage Jenkins and click Manage Plugins. On the Available tab, search for and select the following plugins:

Then click the Install button. After the plugin installation is completed, select Manage Jenkins from the Jenkins dashboard and click Configure System. Look for the NodeJS section and click the Add NodeJS button. Add a new Node.js version, specify a name (e.g. Node.js 0.10.33) and select Install automatically. Click the Add Installer button and select Install from nodejs.org. Select the version of Node.js that you want to install (e.g. 0.10.33) and add grunt@0.4.* in the Global npm packages to install box. Click Save to confirm your changes.

Jenkins Node JS

Note that it is recommended to install the same version of Node.js that is supported by Lambda.


Configure the AWS CLI

Now we are ready to set up and configure the AWS Command Line Interface (CLI).
Make sure that Jenkins is able to use the AWS CLI. Switch to the jenkins user and configure the AWS CLI, providing your credentials:

# aws configure

Note that you might have to change the jenkins user shell from /bin/false to /bin/bash to be able to login successfully.


Configure the Jenkins build

On the Jenkins dashboard, click on New Item, select the Freestyle project job, add a name for the job, and click OK. Configure the Jenkins job:

  • Under GitHub Project add the path of your GitHub repository. In addition to the function source code, the repository contains the Gruntfile.js and event.json as explained in the first part of this walkthrough.

Github AWSLabs Project

  • Under Source Code Management provide the Repository URL for Git and your GitHub credentials.

Source Code Management

  • In the Build Triggers section, select Build when a change is pushed to GitHub
  • In the Build Environment section, select Provide Node & npm bin/ folder to PATH and choose your Node installation
  • In the Build section, add an Execute Shell step and add the commands to install the required packages and to create the Lambda function bundle:
# npm install
# grunt lambda_package
  • Add an AWS Lambda deployment step, fill in your AWS credentials, your AWS Region (e.g. us-east-1), the Lambda function name (e.g. CreateThumbnail) and change the Update Mode to Code to only update your Lambda function code. Set the Artifact Location to dist/ as this is where Grunt generates the Lambda function bundle.

Deploy Lambda Function

To trigger the build process on Jenkins upon pushing to the GitHub repository we need to configure a service hook on GitHub. Go to the GitHub repository settings page, select Webhooks and Services and add a service hook for Jenkins (GitHub plugin). Add the Jenkins hook url: http://&lt;user&gt;:&lt;password&gt;@&lt;jenkins-hostname&gt;/github-webhook/.

Jenkins Github Plugin

Now we have configured the Jenkins job so that whenever a change is committed to the GitHub repository, it will trigger the build process on Jenkins.


Let’s kick things off

From your local repository, push the application code to GitHub:

# git add *
# git commit -m "Kicking off Jenkins build"
# git push origin master

This will trigger the Jenkins job. After the job is completed, upload an image (e.g. sample.jpg) to your input S3 bucket (e.g. lambdapics) and verify that a thumbnail version of your image exists in the output S3 bucket (e.g. lambdapicsresized/resized-sample.jpg).



In this two-part walkthrough we demonstrated how to use Grunt and the grunt-aws-lambda plugin to run your Lambda function in your local environment, and how Jenkins can help automate the deployment of a Lambda function. See the documentation for further information on AWS Lambda.

Continuous Integration/Deployment for AWS Lambda functions with Jenkins and Grunt – Part 1

Daniele Stroppa Daniele Stroppa, AWS Solution Architect

Developing, testing and deploying your AWS Lambda functions can be a tedious process at times: write your function in your preferred editor/IDE, package it with any additional node module, upload it to AWS and test it using the console. Ideally, you would develop and test your function locally, upload it to your repository and let your CI tool deploy it for you.

In this post we’ll show how to make use of tools such as Grunt (a Javascript task runner that can be used to automate tasks such as building and packaging) and the grunt-aws-lambda plugin to execute and test your AWS Lambda functions locally.


Create the AWS Resources

Throughout this post we’ll use the CreateThumbnail AWS Lambda function described in the AWS Lambda documentation. Note that you’ll need to install ImageMagick in your local development environment to follow this guide.

Follow Step 1: Create a Lambda Function in the Getting Started guide and create a function called CreateThumbnail and an appropriate IAM Role. Do not worry about the function code for now as we will upload it later on.

Follow Step 1.1: Create Buckets and Upload a Sample Object to create the source and destination buckets (e.g. lambdapics and lambdapicsresized).


Setup the development environment

Let’s start by preparing our local development environment.

Install the AWS CLI and Node.js

If you haven’t done so yet, install and configure the AWS CLI, providing your credentials:

# aws configure

Install Node.js following the instructions for your platform of choice. Node.js comes with npm installed, so you should already have npm available.

Create your AWS Lambda function project

Create a directory for your project and create the initial package.json file:

# mkdir create-thumbs-lambda && cd create-thumbs-lambda
# npm init

Edit the package.json file, add any dependencies for your Lambda function (e.g. the GraphicsMagick and Async modules) and add the AWS SDK, grunt and grunt-aws-lambda to the devDependencies. Your package.json should look something like the following:

{
   "name": "create-thumbs-lambda",
   "version": "0.0.1",
   "description": "AWS Lambda function to create an image thumbnail",
   "main": "CreateThumbnail.js",
   "dependencies": {
      "gm": "^1.17.0",
      "async": "^0.9.0"
   },
   "devDependencies": {
      "aws-sdk": "latest",
      "grunt": "0.4.*",
      "grunt-aws-lambda": "0.8.0",
      "npm": "^2.8.3"
   },
   "scripts": {
      "test": "echo \"Error: no test specified\" && exit 1"
   },
   "author": "Amazon Web Services Inc.",
   "license": "Apache 2.0"
}

Finally, install the required Node.js modules locally:

# sudo npm install


Develop, Test, Repeat

Now let’s create the Lambda function. Copy the code from Step 2.1: Create a Lambda Function Deployment Package into a new file. Edit the Lambda function so that the last callback in the async.waterfall section will look like this:

function (err) {
   if (err) {
      msg = 'Unable to resize ' + srcBucket + '/' + srcKey +
      ' and upload to ' + dstBucket + '/' + dstKey +
      ' due to an error: ' + err;
   } else {
      msg = 'Successfully resized ' + srcBucket + '/' + srcKey +
      ' and uploaded to ' + dstBucket + '/' + dstKey;
   }
   context.done(err, msg);
}

This prevents the grunt-aws-lambda plugin from reporting a failure even when the function completes successfully. Save your Lambda function as CreateThumbnail.js.
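
To see why this matters, here is a standalone sketch of how that final callback reports the outcome through context.done. It uses a stubbed context object and placeholder bucket/key values (no AWS calls are made); the names are illustrative only:

```javascript
// Stub of the Lambda context object: records what the function reports.
function makeContext(log) {
    return {
        done: function (err, msg) {
            log.push({ err: err, msg: msg });
        }
    };
}

// Placeholder values standing in for the real bucket/key variables.
var srcBucket = 'lambdapics', srcKey = 'sample.jpg';
var dstBucket = 'lambdapicsresized', dstKey = 'resized-sample.jpg';

// The same final-callback shape as in CreateThumbnail.js: the error is
// passed straight through to context.done, so a null error means success.
function finalCallback(context, err) {
    var msg;
    if (err) {
        msg = 'Unable to resize ' + srcBucket + '/' + srcKey +
              ' and upload to ' + dstBucket + '/' + dstKey +
              ' due to an error: ' + err;
    } else {
        msg = 'Successfully resized ' + srcBucket + '/' + srcKey +
              ' and uploaded to ' + dstBucket + '/' + dstKey;
    }
    context.done(err, msg);
}

var log = [];
finalCallback(makeContext(log), null);
console.log(log[0].msg);
// → Successfully resized lambdapics/sample.jpg and uploaded to lambdapicsresized/resized-sample.jpg
```

Because a null error is passed to context.done, grunt lambda_invoke sees a clean run and prints "Done, without errors." instead of flagging a failure.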

Next, create a file named Gruntfile.js with the following content:

var grunt = require('grunt');
grunt.loadNpmTasks('grunt-aws-lambda');

grunt.initConfig({
   lambda_invoke: {
      default: {
         options: {
            file_name: 'CreateThumbnail.js'
         }
      }
   },
   lambda_deploy: {
      default: {
         function: 'CreateThumbnail'
      }
   },
   lambda_package: {
      default: {
      }
   }
});

grunt.registerTask('deploy', ['lambda_package', 'lambda_deploy']);

The grunt-aws-lambda plugin requires an event.json file containing the event that will trigger the Lambda function execution.
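
For illustration, a pared-down S3 put notification pointing at the lambdapics bucket might look like the following; the object key and size are placeholder values, not output from a real event:

```json
{
   "Records": [
      {
         "eventSource": "aws:s3",
         "eventName": "ObjectCreated:Put",
         "s3": {
            "bucket": { "name": "lambdapics" },
            "object": { "key": "sample.jpg", "size": 1024 }
         }
      }
   ]
}
```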


Let’s test our Lambda function, making sure that it runs with no errors before uploading:

# grunt lambda_invoke

If everything ran without any error, you’ll see a message similar to this:

Successfully resized lambdapics/amazon-web-services-lambda.jpg and uploaded to lambdapicsresized/resized-amazon-web-services-lambda.jpg

Done, without errors.

Once your function is executing without errors, you can package it and upload it with a simple command:

# grunt deploy



In this walkthrough we demonstrated how to use Grunt and the grunt-aws-lambda plugin to run your Lambda function in your local environment. In the second part we’ll show how to continuously integrate and deploy your Lambda function with Jenkins. See the documentation for further information on AWS Lambda.

AWS Lambda launches in Tokyo region

Tim Wagner Tim Wagner, AWS Lambda

We’re happy to announce our newest region: Tokyo! In addition to US East (Virginia), US West (Oregon), and EU (Ireland), AWS Lambda is now also available in the Tokyo (ap-northeast-1) region. Amazon SNS, Amazon S3, and Amazon Kinesis event sources are available in the Tokyo region.

Happy (Asia Pacific) Lambda coding!

Follow my Lambda adventures on Twitter

Hands-Free Slack: AWS Lambda meets Amazon Echo

Tim Wagner Tim Wagner, AWS Lambda

Amazon Echo is voice-based home automation. With it, you can listen to music, check the weather, or search the web…just by using your voice. And just like mobile phones and tablets, lots of exciting apps are going to be written for this new platform, using the Alexa Skills Kit (ASK) and AWS Lambda. Together, they offer a completely managed platform for voice recognition and cloud compute.

To demonstrate how easy it is to add completely new experiences to Echo (or any Alexa-powered device), I modified the Alexa “color” demo skill that’s built into the Lambda console to allow me to post messages to a Slack channel using my voice. With a command like, “Alexa, tell Slack to send, ‘The demo is ready!’”, I can send a Slack message hands-free. The following video illustrates this in action.

Here’s the Alexa “intent schema” that I used:
Alexa Intent Schema
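
A schema consistent with the handler code below defines the two intents and the Message slot. Roughly, it might look like this (the LITERAL slot type is an assumption based on the ASK of that era; check your own skill configuration):

```json
{
  "intents": [
    {
      "intent": "MyMessageIntent",
      "slots": [ { "name": "Message", "type": "LITERAL" } ]
    },
    { "intent": "WhatsMyMessageIntent", "slots": [] }
  ]
}
```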

You’ll also need to enable incoming webhooks (requests) in your Slack channel:
Slack incoming webhook configuration

As mentioned in the video, the code is a trivial modification of the “MyColor” skill; I changed the color parameter to a message, and added an HTTP request to Slack at the point where the message is created. If you try this yourself, you’ll have to replace the http “path” option with your own Slack token to get it to work. Finally, testing your code (and Slack integration) from the Lambda console is super useful, but you’ll need to replace the “Color” section in the Alexa sample event (Alexa Event – MyColorIs) with a “Message” equivalent to get it to work with the code below.
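
For reference, a hand-built IntentRequest test event along those lines might look like the following; the session and request IDs are made-up placeholders:

```json
{
  "session": {
    "new": false,
    "sessionId": "session1234",
    "application": { "applicationId": "[your own app id goes here]" },
    "attributes": {}
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "request5678",
    "intent": {
      "name": "MyMessageIntent",
      "slots": {
        "Message": { "name": "Message", "value": "The demo is ready!" }
      }
    }
  }
}
```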

var https = require('https');
var options = {
  host: 'hooks.slack.com', // Slack's incoming webhook endpoint
  port: 443,
  path: '/services/[put your own Slack access token here]',
  method: 'POST'
};

/**
 * This sample shows how to create a simple Lambda function for handling speechlet requests.
 */
// Route the incoming request based on type (LaunchRequest, IntentRequest,
// etc.) The JSON body of the request is provided in the event parameter.
exports.handler = function (event, context) {
    try {
        console.log("event.session.application.applicationId=" + event.session.application.applicationId);

        /**
         * Uncomment this if statement and replace the application ID with yours
         * to prevent other voice applications from using this function.
         */
        // if (event.session.application.applicationId !== "[your own app id goes here]") {
        //     context.fail("Invalid Application ID");
        // }

        if (event.session.new) {
            onSessionStarted({requestId: event.request.requestId}, event.session);
        }

        if (event.request.type === "LaunchRequest") {
            onLaunch(event.request,
                     event.session,
                     function callback(sessionAttributes, speechletResponse) {
                         context.succeed(buildResponse(sessionAttributes, speechletResponse));
                     });
        } else if (event.request.type === "IntentRequest") {
            onIntent(event.request,
                     event.session,
                     function callback(sessionAttributes, speechletResponse) {
                         context.succeed(buildResponse(sessionAttributes, speechletResponse));
                     });
        } else if (event.request.type === "SessionEndedRequest") {
            onSessionEnded(event.request, event.session);
            context.succeed();
        }
    } catch (e) {
        context.fail("Exception: " + e);
    }
};

/**
 * Called when the session starts.
 */
function onSessionStarted(sessionStartedRequest, session) {
    console.log("onSessionStarted requestId=" + sessionStartedRequest.requestId
                + ", sessionId=" + session.sessionId);
}

/**
 * Called when the user launches the app without specifying what they want.
 */
function onLaunch(launchRequest, session, callback) {
    console.log("onLaunch requestId=" + launchRequest.requestId
                + ", sessionId=" + session.sessionId);

    // Dispatch to the app's welcome response.
    getWelcomeResponse(callback);
}


/**
 * Called when the user specifies an intent for this application.
 */
function onIntent(intentRequest, session, callback) {
    console.log("onIntent requestId=" + intentRequest.requestId
                + ", sessionId=" + session.sessionId);

    var intent = intentRequest.intent;
    var intentName = intentRequest.intent.name;

    if ("MyMessageIntent" === intentName) {
        setMessageInSession(intent, session, callback);
    } else if ("WhatsMyMessageIntent" === intentName) {
        getMessageFromSession(intent, session, callback);
    } else {
        console.log("Unknown intent");
        throw "Invalid intent";
    }
}

/**
 * Called when the user ends the session.
 * Is not called when the app returns shouldEndSession=true.
 */
function onSessionEnded(sessionEndedRequest, session) {
    console.log("onSessionEnded requestId=" + sessionEndedRequest.requestId
                + ", sessionId=" + session.sessionId);
}

/**
 * Helpers that build all of the responses.
 */
function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
    return {
        outputSpeech: {
            type: "PlainText",
            text: output
        },
        card: {
            type: "Simple",
            title: "SessionSpeechlet - " + title,
            content: "SessionSpeechlet - " + output
        },
        reprompt: {
            outputSpeech: {
                type: "PlainText",
                text: repromptText
            }
        },
        shouldEndSession: shouldEndSession
    };
}
function buildResponse(sessionAttributes, speechletResponse) {
    return {
        version: "1.0",
        sessionAttributes: sessionAttributes,
        response: speechletResponse
    };
}

/**
 * Functions that control the app's behavior.
 */
function getWelcomeResponse(callback) {
    // If we wanted to initialize the session to have some attributes we could add those here.
    var sessionAttributes = {};
    var cardTitle = "Welcome";
    var speechOutput = "Welcome to the Alexa and Lambda demo app, "
                + "You can give me a message to send to our team's Slack channel by saying, "
                + "my message is...";
    // If the user either does not reply to the welcome message or says something that is not
    // understood, they will be prompted again with this text.
    var repromptText = "You can give me your message by saying, "
                + "my message is...";
    var shouldEndSession = false;

    callback(sessionAttributes,
             buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
}

/**
 * Sets the message in the session and prepares the speech to reply to the user.
 */
function setMessageInSession(intent, session, callback) {
    var cardTitle = intent.name;
    var messageSlot = intent.slots.Message;
    var repromptText = "";
    var sessionAttributes = {};
    var shouldEndSession = false;
    var speechOutput = "";

    if (messageSlot) {
        var message = messageSlot.value;
        console.log("Message slot contains: " + message + ".");
        sessionAttributes = createMessageAttributes(message);
        speechOutput = "Your message has been sent. You can ask me to repeat it by saying, "
                + "what's my message?";
        repromptText = "You can ask me to repeat your message by saying, what's my message?";
        var req = https.request(options, function(res) {
            res.on('data', function (chunk) {
                callback(sessionAttributes,
                         buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
            });
        });
        req.on('error', function(e) {
            console.log('problem with request: ' + e.message);
        });
        req.write('{"channel": "#aws-lambda", "username": "webhookbot", "text": "[via Alexa]: ' + message + '", "icon_emoji": ":ghost:"}');
        req.end();
    } else {
        speechOutput = "I didn't hear your message clearly, please try again";
        repromptText = "I didn't hear your message clearly, you can give me your "
                + "message by saying, my message is...";
        callback(sessionAttributes,
                 buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
    }
}

function createMessageAttributes(message) {
    return {
        message: message
    };
}

function getMessageFromSession(intent, session, callback) {
    var cardTitle = intent.name;
    var message;
    var repromptText = null;
    var sessionAttributes = {};
    var shouldEndSession = false;
    var speechOutput = "";

    if (session.attributes) {
        message = session.attributes.message;
    }

    if (message) {
        speechOutput = "Your message is " + message + ", goodbye";
        shouldEndSession = true;
    } else {
        speechOutput = "I didn't hear your message clearly. As an example, you can say, My message is 'hello, team!'";
    }

    // Setting repromptText to null signifies that we do not want to reprompt the user.
    // If the user does not respond or says something that is not understood, the app session
    // closes.
    callback(sessionAttributes,
             buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
}

Alexa, tell Slack to send, “Thanks for reading and happy Lambda coding!”

Follow my Lambda adventures on Twitter