

Writing AWS Lambda Functions in Scala

Tim Wagner, AWS Lambda General Manager

Sean Reque, AWS Lambda Software Developer


AWS Lambda’s Java support also makes it easy to write Lambda functions in other JVM-based languages. Let’s take a look at how you can do that for Scala.

Getting Started with Scala

If you’re an old hand at Scala, skip ahead…otherwise: we’ll step you through getting up and running with Scala on a Windows machine; the process on other platforms will be similar.

First, you’ll need to download and install Scala’s simple build tool (sbt).

Next, open a command line prompt where you want to do your development (I chose “C:/tmp” to keep it simple) and run ‘sbt’, which will auto-update itself. (Depending on your settings, you might need admin privileges for this to succeed.) Then create the following directory structure:

C:\tmp\lambda-demo
  project
  src
    main
      scala
        example

Inside the ‘project’ subdirectory, create a file named ‘plugins.sbt’ with the following content:


addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.12.0")

Inside ‘C:\tmp\lambda-demo’ (as a peer to ‘src’ and ‘project’), add a file named ‘build.sbt’ with the following content:


javacOptions ++= Seq("-source", "1.8", "-target", "1.8", "-Xlint")

lazy val root = (project in file(".")).
  settings(
    name := "lambda-demo",
    version := "1.0",
    scalaVersion := "2.11.4",
    retrieveManaged := true,
    libraryDependencies += "com.amazonaws" % "aws-lambda-java-core" % "1.0.0",
    libraryDependencies += "com.amazonaws" % "aws-lambda-java-events" % "1.0.0"
  )

mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
  }
}

You’re now ready to start doing Scala in Lambda!

Writing Your First Lambda-in-Scala Function

Let’s start simple with a function that can extract the object (key) name from an Amazon S3 event. We’ll return it, to make the function easy to debug, then wire it up to an S3 bucket to see event processing working end-to-end. At that point you’ll be able to start modifying the code to do whatever additional analysis or transformation you like on the actual content of the object, or extend the example to process other argument types or event sources.

First, in ‘C:\tmp\lambda-demo\src\main\scala\example\Main.scala’, add the following sample code:


package example;

import scala.collection.JavaConverters._
import java.net.URLDecoder
import com.amazonaws.services.lambda.runtime.events.S3Event

class Main {
  def decodeS3Key(key: String): String = URLDecoder.decode(key.replace("+", " "), "utf-8")

  def getSourceBuckets(event: S3Event): java.util.List[String] = {
    val result = event.getRecords.asScala.map(record => decodeS3Key(record.getS3.getObject.getKey)).asJava
    println(result)
    return result
  }
}

Next, fire up sbt in ‘lambda-demo’ and execute the ‘compile’ command followed by the ‘assembly’ command. You should end up with a Jar file ‘C:\tmp\lambda-demo\target\scala-2.11\lambda-demo-assembly-1.0.jar’. (Your version may differ depending on when you try this example.)

We’ll discuss the programming model below, but first let’s complete the process of creating and testing a Lambda function. You can do this via the CLI or the console; below I’ve illustrated what it looks like in the console:

Scala Upload in the AWS Lambda Console

I’m using the default suggestions for memory, duration, and role (basic execution). Note the handler: ‘example.Main::getSourceBuckets’. Click ‘Create Lambda function’ and you should be ready to test.
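
If you prefer the CLI, the equivalent create-function call looks roughly like this (the role ARN, memory, and timeout values here are illustrative; adjust them to your own account and needs):

$ aws lambda create-function \
  --function-name lambda-demo \
  --runtime java8 \
  --role arn:aws:iam::<account number>:role/lambda_basic_execution \
  --handler example.Main::getSourceBuckets \
  --zip-file fileb://target/scala-2.11/lambda-demo-assembly-1.0.jar \
  --memory-size 256 \
  --timeout 10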

Testing Your Scala Function

At this point your Lambda function should be working, and it behaves like any other Lambda function. You can use a test invoke with the “S3 Put” sample event, and you should see [“HappyFace.jpg”] (the S3 key name in the PUT event sample) as the result. To test your function end to end, click on “Go to function list”, add an S3 bucket as an event source, and upload a sample file. You can go to the Amazon CloudWatch Logs page to check the result; you should see output similar to this:

Amazon CloudWatch Log result of triggering the Scala function with an S3 upload

From here, you can start extending your code to retrieve and transform the content of the file, add other event sources, etc. Be sure to keep your execution role permissions consistent with the AWS operations you perform in your Scala code.

Scala Programming Model

Writing Lambda functions in Scala requires dealing with some “Javaisms” at the points of entry and exit. You can see this in the code above: the use of java.util.List and the .asScala and .asJava converters. These are necessary because the built-in Lambda serializer doesn’t understand native Scala types. Since Java and Scala share primitives, Lambda function parameter and return types like ints and strings work fine in both languages without any explicit conversion. Java collections and POJOs (more specifically, their [de]serialization) require a little more work to coexist in Scala. You can still employ native Scala types in your Lambda function by using Lambda’s byte stream interface together with your own serialization library, such as the Jackson Scala module. To add this library to your project, add the following line to the dependencies section of your build.sbt file:


    libraryDependencies += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.5.2"

Here’s an example that uses the Scala Jackson module: it defines a Scala class, NameInfo, and uses the byte stream interface to deserialize the argument passed to the Lambda function as this Scala class. It then outputs a greeting message as a result.


package example;

case class NameInfo(firstName: String, lastName: String)

class Main {
  import java.io.{InputStream, OutputStream, PrintStream}

  val scalaMapper = {
    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule
    new ObjectMapper().registerModule(new DefaultScalaModule)
  }

  def greeting(input: InputStream, output: OutputStream): Unit = {
    val name = scalaMapper.readValue(input, classOf[NameInfo])
    val result = s"Greetings ${name.firstName} ${name.lastName}." 
    output.write(result.getBytes("UTF-8"))
  }
}

Invoking this function with input like:


{
    "firstName": "Robert",
    "lastName": "Dole"
}

produces "Greetings Robert Dole" as a result.

We hope this article helps fans of both Scala and Lambda enjoy them together. Happy Lambda (and Scala) coding!

-Tim and Sean

AWS Lambda Announces Java Support

Tim Wagner, AWS Lambda

Java comes to Lambda!

Support for Java has been one of our most requested features, so I’m very happy to announce that it’s here! Check out the overview on the AWS blog or the docs.

Java has a more heavyweight initialization than Node.js, so the console defaults to a larger memory setting and duration (timeout). Once it’s running, however, subsequent uses of a “warm” Java process are typically faster than Node.js requests. We’re continuing to invest in speeding up the initialization time to make the cold / infrequent use case even better.

If you’re using the AWS Eclipse Plugin to author and upload Lambda functions, add the AWS Core, AWS Java SDK, EC2, and Lambda plugins, and remember to have an S3 bucket in the same region as your Lambda function for the Eclipse plugin to stage code uploads.

Happy (Java) Lambda coding!

-Tim
Follow my Lambda adventures on Twitter

50% Increase in Memory Capacity for AWS Lambda Functions

Tim Wagner, AWS Lambda

AWS Lambda has increased the maximum memory capacity for Lambda functions from 1 GB to 1.5 GB. Setting the memory for a Lambda function also implicitly sets the CPU, network, and other resource allocation, so this means you also have access to more compute power when you choose one of the new larger settings.

You access these settings when you create a function or update its configuration, and the settings are available regardless of whether you use the CLI, SDKs, or console. Here’s what it looks like in the latter:
Memory settings in the AWS Lambda console

The expanded limits are available in all regions in which Lambda operates. Happy Lambda coding!

-Tim
Follow my Lambda adventures on Twitter

New Deployment Options for AWS Lambda

Tim Wagner, AWS Lambda General Manager

Emma Zhao, AWS Lambda Software Developer


This blog introduces two new ways to deploy AWS Lambda functions…and as a bonus, we’ll create a “Lambda auto-deploy” service as well!

Deploying AWS Lambda code from Amazon S3 buckets

Many developers use Amazon S3, the AWS object storage system, as an easy-to-use repository for storing build and deployment artifacts. AWS Lambda now has support for uploading code directly from S3, without requiring you to first download it to a client. Using it is simple: In a call to CreateFunction or UpdateFunctionCode, you can now provide the S3 bucket, key (object name), and optional version as an alternative to supplying the code directly, and Lambda will simply load your code directly from S3. (If the bucket owner and the user making these calls aren’t the same, make sure the latter has permission to read the file.)

The CreateFunction parameters now look like this; the “S3” trio are new:

{
    "Code": {
        "S3Bucket": "string",
        "S3Key": "string",
        "S3ObjectVersion": "string",
        "ZipFile": blob
    },
    "Description": "string",
    "FunctionName": "string",
    "Handler": "string",
    "MemorySize": number,
    "Role": "string",
    "Runtime": "string",
    "Timeout": number
}
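
For example, deploying new code straight from S3 looks roughly like this from the CLI (the bucket and key names here are illustrative):

$ aws lambda update-function-code \
  --function-name MyFunction \
  --s3-bucket my-build-artifacts \
  --s3-key MyFunction-1.0.zip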

Here’s what the feature looks like in the AWS Lambda console:
Lambda Deployment From S3 Bucket

AWS CloudFormation support for Lambda Functions

Building on the new Lambda feature, AWS CloudFormation now also supports AWS Lambda functions in templates.

Here’s the CloudFormation template:

{
  "Type" : "AWS::Lambda::Function",
  "Properties" : {
    "Code" : Code,
    "Description" : String,
    "Handler" : String,
    "MemorySize" : Integer,
    "Role" : String,
    "Runtime" : String,
    "Timeout" : Integer
  }
}

and the “Code” section looks like this:

{
  "S3Bucket" : String,
  "S3Key" : String,
  "S3ObjectVersion" : String
}

Unsurprisingly, it looks a lot like the CreateFunction call in Lambda that it’s making on your behalf. With this new feature in CloudFormation, you can now stand up stacks of resources that include Lambda functions. For example, if you create an S3 bucket in your stack and you have a Lambda function that you use to process notification events when objects are created in that bucket, you can now deploy them together using CloudFormation, name them using stack parameters, and enjoy all the other CloudFormation goodness.
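
As a sketch, a function resource in such a template might look like the following (the resource names, bucket, and key are illustrative, and the execution role is assumed to be defined elsewhere in the stack):

{
  "NotificationProcessor": {
    "Type": "AWS::Lambda::Function",
    "Properties": {
      "Code": {
        "S3Bucket": "my-build-artifacts",
        "S3Key": "NotificationProcessor.zip"
      },
      "Description": "Processes object-created events from the stack's bucket",
      "Handler": "index.handler",
      "MemorySize": 128,
      "Role": { "Fn::GetAtt": ["ProcessorExecutionRole", "Arn"] },
      "Runtime": "nodejs",
      "Timeout": 10
    }
  }
}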

CloudFormation also supports using Lambda functions to execute custom resources in a stack, making it easy to add custom processing to a stack rollout without needing any infrastructure to execute the code.

Bonus Section: Lambda Auto-Deployer

Wouldn’t it be nice if there were a microservice that would watch for code zips being uploaded to S3 and then automatically deploy them to Lambda for you? Let’s build it – with the new S3 upload capability in Lambda and the existing S3 bucket notifications that can call Lambda functions, it’s really easy:

  • Create an S3 bucket or pick an existing one to hold your code zips.
  • Optional: Turn on versioning and retention (cleanup) policies on that bucket. Not required, but S3 offers them and they’re nice to have.
  • Create the initial version of your Lambda function. Doesn’t even have to be real code yet, just make a placeholder so you can set the configuration (memory, duration, execution role) as you like.
  • Create a “LambdaDeployment” function using the code below, and configure it to receive events from your S3 bucket. (Don’t forget to change YOUR_BUCKET_NAME, YOUR_CODE, and YOUR_FUNCTION_NAME to match your actual circumstances.)
console.log('Loading function');
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();
exports.handler = function(event, context) {
    var key = event.Records[0].s3.object.key;
    var bucket = event.Records[0].s3.bucket.name;
    var version = event.Records[0].s3.object.versionId;
    if (bucket == "YOUR_BUCKET_NAME" && key == "YOUR_CODE.zip" && version) {
        var functionName = "YOUR_FUNCTION_NAME";
        console.log("uploaded to lambda function: " + functionName);
        var params = {
            FunctionName: functionName,
            S3Key: key,
            S3Bucket: bucket,
            S3ObjectVersion: version
        };
        lambda.updateFunctionCode(params, function(err, data) {
            if (err) {
                console.log(err, err.stack);
                context.fail(err);
            } else {
                console.log(data);
                context.succeed(data);
            }
        });
    } else {
        context.succeed("skipping zip " + key + " in bucket " + bucket + " with version " + version);
    }
};

If you’re not using versions, skip the version check. Remember that your S3 bucket and Lambda function must be in the same region. That’s all there is to it – a durable, fault-tolerant, versioned code deployment service in 29 lines of code!

It’s also easy to extend this simple example with other bells and whistles:

  • If you want to process multiple functions, you can skip the key check and name the function using the key (the name of the zip file, minus the “.zip” suffix) or any other method you like to determine the function name based on the bucket and key.
  • If you don’t want to create the function manually the first time, you can check to see if the function exists (for example by calling GetFunction) and if not use CreateFunction instead of UpdateFunctionCode.
  • You can override the configuration with UpdateFunctionConfiguration, and you can retrieve the existing configuration with GetFunction if you want to leave some portions of the configuration unchanged while updating others.
  • You can stash the S3 event’s versionId field in the function’s description field as a reminder of which version you’re running, or include it in the function name to keep each version distinct and separately available.
  • To enable rollbacks, you can modify the function (and your S3 upload procedure) to use a layer of indirection: store the version you want to be “current” as another object in your S3 bucket, change your code to watch that file (ignoring the ZIPs themselves), and update it whenever you want to change the version. Your code will need to fetch the content of the pointer file when it changes instead of simply using the metadata in the event to make the Lambda UpdateFunctionCode call; a sketch of this approach follows the list.
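
Here’s one way that pointer-file variation might look. It assumes the pointer is a small JSON object stored as ‘current.json’ (both the file name and its {"key": ..., "version": ...} format are made up for illustration):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var lambda = new AWS.Lambda();
exports.handler = function(event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = event.Records[0].s3.object.key;
    // Only react to updates of the pointer file, not to the code zips themselves
    if (key != "current.json") {
        return context.succeed("ignoring " + key);
    }
    // Fetch the pointer to learn which zip (and version) should be "current"
    s3.getObject({ Bucket: bucket, Key: key }, function(err, data) {
        if (err) return context.fail(err);
        var pointer = JSON.parse(data.Body.toString());
        lambda.updateFunctionCode({
            FunctionName: "YOUR_FUNCTION_NAME",
            S3Bucket: bucket,
            S3Key: pointer.key,
            S3ObjectVersion: pointer.version
        }, function(err, result) {
            if (err) context.fail(err);
            else context.succeed(result);
        });
    });
};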


Happy Lambda coding!

Tim and Emma

Dynamic GitHub Actions with AWS Lambda

Tim Wagner, AWS Lambda General Manager

Will Gaul, AWS Lambda Software Developer


GitHub webhooks allow you to easily generate notifications whenever certain actions occur. One built-in webhook is Amazon Simple Notification Service (SNS), which can transmit these messages to a variety of endpoints…including AWS Lambda, which means you can now easily run JavaScript code in response to something happening in a GitHub repository. In this post we’ll make a simple issue responder bot using a Lambda function, but you could use the same technique to trigger deployments or other actions.

GitHub Bots

Lots of larger GitHub projects have created bots to help manage their projects.

We’ll make our own using a Lambda function that responds to GitHub events, with Amazon SNS helping out by transmitting events between the two systems. Our sample bot will then use the GitHub APIs to comment on issues (it’s not a very sophisticated bot, but you get the idea). You’ll need both AWS and GitHub accounts to follow the steps below, along with a basic working knowledge of git and JavaScript.

Step 1: Create an SNS Topic

  1. Go to the Amazon SNS console.
  2. Click “Create topic”.
  3. Fill in the name and display name fields with whatever you’d like, then click “Create topic”.
  4. Copy the topic ARN for later use.

This topic will be the “middleman” between GitHub and Lambda: GitHub will publish event notifications to the SNS topic, and SNS in turn will invoke your Lambda function.
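
If you prefer the command line, the topic can be created with a single call, which returns the topic ARN you’ll need later (the topic name here is just an example):

$ aws sns create-topic --name github-event-bridge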

Creating the SNS Topic

Completed SNS Topic

Step 2: Create an IAM User to Publish As

  1. Go to the Amazon IAM console.
  2. Click “Users” then “Create New Users”.

Creating the IAM Publisher User

  3. Enter a name for the GitHub publisher user. Make sure “Generate an access key for each user” is checked.
  4. Click “Create”.

Completing the IAM User Creation

  5. Click “Show User Security Credentials”, then copy or download the access and secret keys for later use.

Viewing the IAM User Credentials

Sample User Credentials

  6. Return to the main IAM console page.
  7. Click “Users”, then click the name of your newly created user to edit its properties.
  8. Scroll down to “Permissions”, ensure that section is open, and expand the “Inline Policies” section. Click the link (“click here”) to create a new inline policy.
  9. Select the “Custom Policy” radio button, then press “Select”.
  10. Type a name for your policy, then paste the following statements, which authorize publication to the SNS topic you created in Step 1 (here’s where you use the topic ARN you saved). Then click “Apply Policy”.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sns:Publish"
      ],
      "Resource": [
        <SNS topic ARN goes here>
      ],
      "Effect": "Allow"
    }
  ]
}

Creating the IAM User’s Publishing Policy

Choosing a Custom Policy

Editing the Publishing Policy

This IAM user represents the GitHub publishing process. The policy ensures that this user is only able to publish to the topic we just made. We’ll share this user’s credentials with GitHub in a later step. As a security best practice, you should create a unique user for each system that you provide access to, rather than sharing user credentials, and you should always scope access to the minimum set of resources required (in this case, the SNS topic).

Step 3: Set up the GitHub Webhook

  1. Navigate to your GitHub repo.
  2. Click on “Settings” in the sidebar.
  3. Click on “Webhooks & Services”.
  4. Click the “Add service” dropdown, then click “AmazonSNS”.
  5. Fill out the form (supplying the IAM user credentials you created in Step 2), then click “Add service”. (Note that the label says “topic”, but it requires the entire ARN, not just the topic name.)

Opening the GitHub Settings Page

Selecting the SNS GitHub Webhook

Configuring the SNS GitHub Webhook

We’re halfway there: Now GitHub actions will publish to your SNS topic. Next we need to do something interesting with them when they arrive there…

Step 4: Create a Lambda Function

  1. Open the AWS Lambda console.
  2. Click on “Create a Lambda function”.
  3. Choose the “SNS Message” code template and “Basic execution role”.
  4. Click “Create Lambda function”.
  5. On the Lambda function list page, click the “Actions” dropdown then pick “Add event source”.
  6. Select “SNS” as the event source type.
  7. Choose the SNS topic you created in Step 1, then click “Submit”. (Lambda will fill in the ARN for you.)

Creating a new AWS Lambda Function

Configuring the Lambda Bot Function

Submitting the Lambda Bot Function

Assigning the SNS GitHub Topic Event Source

Completing the SNS GitHub Topic Event Source Wire-up

Lambda Function Ready for Testing

Now we have a basic Lambda function subscribed to the SNS topic, listening to GitHub event messages. It doesn’t do very much yet, but we’ll improve on that shortly. First, though, let’s test it to make sure everything is working properly. We’ll check everything in stages, leading up to an end-to-end integration test. This section is optional, but it will help you verify the setup and also demonstrates a number of useful debugging techniques for event processing.

Step 5: Test the Setup

  1. In the Lambda console functions list, make sure your GitHub bot function is selected, then choose “Edit/Test” from the Actions dropdown. Choose “SNS” as the sample event type, then click “Invoke” to test your function.

Testing the Lambda Function

Result of Testing the Lambda Function

  2. In the AWS SNS console, open the “Topics” tab, select your GitHub publication topic, then use the “Other topic actions” to select “Delivery status”. Complete the wizard to set up CloudWatch Logs delivery confirmations, then press the “Publish to topic” button to send a test message to your topic (and from there to your Lambda function). You can then go to the CloudWatch Logs console to view a confirmation of the delivery and (if everything is working correctly) also see it reflected in the CloudWatch events for your Lambda function and in your Lambda function’s logs as well.

Testing SNS Delivery using Delivery Confirmations

Configuring SNS Delivery Confirmations

  3. In the “Webhooks & Services” panel in your GitHub repository, click the “Test service” button.

Sending a GitHub Test Event

  4. Open the AWS Lambda console.
  5. In the function list, under “CloudWatch metrics at a glance” for your function, click on any one of the “logs” links.
  6. Click on the timestamp column header to sort the log streams by time of last entry.
  7. Open the most recent log stream.
  8. Verify that the event was received from GitHub.

Viewing Invoke Metrics in the Lambda Console

Viewing Log Streams in the CloudWatch Logs Console

Testing SNS Delivery

GitHub Test Event in the CloudWatch Log

  9. Repeat the test, this time by creating a new issue in GitHub; you should see a similar result in the logs.

This demonstrates end-to-end event processing, and for some GitHub event handlers it may be all you need; refer to the GitHub event documentation for a detailed breakdown of event types and formats. For example, you can determine that an event is a “push” operation and create custom deployment actions with code like the following in your Lambda function:

var message = JSON.parse(event.Records[0].Sns.Message); // SNS delivers the payload as a JSON string
if (message.hasOwnProperty('pusher')) {
    // My custom push logic goes here...
}

Often, however, your event handler will also need to work with the GitHub APIs to retrieve additional information or make changes to your repository. Continue on to see how we can add the GitHub APIs to our Lambda function to create an issue responder bot.

Step 6: Create GitHub Credentials for Your Bot

  1. Create a new GitHub account for your bot (or use your existing account).
  2. Click the gear icon in the top right.
  3. Click “Personal access tokens” in the sidebar.
  4. Click “Generate a personal access token”.
  5. Add a token description, leaving everything else as is, then click “Generate token”.
  6. Copy the token for later use.

Generating a GitHub API Access Token

This step sets up the GitHub account the bot will use (creating a dedicated account is optional) and the API access token it needs. (For real production usage, it is recommended that you register an OAuth application instead of using personal access tokens.)

Step 7: Update your Lambda Function to be a Bot

  1. In your favorite shell do the following:
$ mkdir lambda-bot
$ cd lambda-bot
$ npm install github
$ touch index.js
  2. Open index.js in your favorite editor and change it to the following text:
var GitHubApi = require('github');
var github = new GitHubApi({
    version: '3.0.0'
});


exports.handler = function(event, context) {
    // SNS delivers the GitHub payload as a JSON string
    var githubEvent = JSON.parse(event.Records[0].Sns.Message);
    console.log('Received GitHub event:', githubEvent);

    if (!githubEvent.hasOwnProperty('issue') || githubEvent.action !== 'opened') {
        // Not an event for opening an issue; nothing more to do
        context.succeed();
        return;
    }

    // Authenticate to comment on the issue
    github.authenticate({
        type: 'oauth',
        token: 'YOUR TOKEN HERE'
    });

    var poster = githubEvent.issue.user.login;

    github.issues.createComment({
        user: githubEvent.repository.owner.login,
        repo: githubEvent.repository.name,
        number: githubEvent.issue.number,
        body: "Hi @" + poster + "!\n" +
              "\n" +
              "Thank you for your interest in this project! Unfortunately, we're " +
              "really busy at the moment, but we'll get to your issue as soon as " +
              "possible. Have a great day!"
    }, context.done);
};
  3. Replace ‘YOUR TOKEN HERE’ with your saved token credential from step 6.
  4. Back in your shell, execute
zip -r archive.zip index.js node_modules
  5. Open the AWS Lambda console.
  6. Update your function by uploading archive.zip.
  7. Create an issue on your repo as a test.

You should see your bot reply back!

What We Did

We created a simple GitHub bot using an AWS Lambda function, with SNS serving as the event bridge. This same technique can be used to kick off automated deployment steps when new code is pushed or take any other custom action in response to a variety of GitHub events. We also saw a variety of debugging techniques that helped verify the various stages of the event pipeline.

Two things we really enjoyed in this exercise: First, there’s no need to go back and rewrite anything to “make it real”: Our setup is ready to scale to large teams and concurrent events without needing a single change, thanks to the built-in scalability that GitHub, SNS, and Lambda collectively offer. The second observation is just how quick and how much fun this was to put together – without any of the drudgery usually associated with getting servers provisioned and deployed, we were able to create and test an end-to-end solution in the time it would normally take just to get code onto a machine.

We hope this post helps illustrate the power of dynamic GitHub actions as well as the powerful integration between SNS and Lambda. Happy coding!

-Tim and Will

Easy Authorization of AWS Lambda Functions

Tim Wagner, AWS Lambda

Authorization and security is a critical feature of every AWS service, including Lambda. But enabling developers to authorize and secure their Lambda functions isn’t enough — Lambda should also be easy to use, quick to set up, and flexible to configure. In this post we talk about how Lambda was designed to achieve both outcomes.

tl;dr: If you’re using the Lambda console to process events and they come from the same account that owns your function, we take care of setting up authorization for you, and you can skip this article. Read on to learn more about how authorization works, to see the command line approach, or for advanced use cases like cross-account access.

First, let’s define a few terms:

  • Policy: A policy is a set of capabilities. It answers the “who can do what” question.
  • Role-based Authorization: In this approach to authorization, policies are attached to real users and temporary or simulated users (roles). The policy defines what the role can do, and services enforce that the policy terms are met. For example, if a user named Joe has a policy that lets him create Lambda functions, then Lambda will check this privilege each time Joe attempts to make a new function and will allow that activity to proceed.
  • Resource-based Authorization: Policies can also be attached to resources, such as Lambda functions. While not required, resource policies also often refer to “foreign” resources, such as restricting the set of S3 buckets that are allowed to send events to a specific Lambda function. Resource policies are especially useful in authorizing such “on-behalf-of” activities and for enabling cross-account access.

Note that role- and resource-based authorization are additive, not exclusive, and AWS Lambda supports both types. Let’s take a look at some scenarios and see how authorization is handled in each.

Reminder: Each Lambda function has an execution role that determines the capabilities of the function (e.g., which AWS services it can call). The account that owns the Lambda function (and thus controls its behavior) is not necessarily the same as the role/user that calls the function. For clarity, we’ll distinguish invokers (and their invocation role when calling the Lambda API) from execution (and the execution role used for the Lambda function) in the descriptions below. In this article we’re mostly focusing on the invocation aspect rather than what the function is allowed to do once it starts running. See the Lambda Getting Started docs for more on the latter.

Scenario 1: Calling a Lambda function from a web service in the same account.

In this scenario the signer of the request determines the identity (user or role) of the invoker, and that in turn identifies one or more policies that specify what’s allowed. The union of those policies determines whether the function call is permitted to occur. In this scenario the invoker and the Lambda function owner are the same AWS account, but they’re not required to be the same role. Resource policies aren’t needed in this simple scenario; you just ensure that the user (or role) calling Lambda has permission to invoke functions. The following policy enables a caller to access a specific Lambda function owned by the same account in the us-east-1 region:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["lambda:InvokeFunction"],
      "Effect": "Allow",
      "Resource": "arn:aws:lambda:us-east-1:<account number>:<function name>"
    }
  ]
}

By changing the resource name to “arn:aws:lambda:*:*:*” you can allow access to any function in any region (the account check is still applied even if the resource doesn’t list it).
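
As a sketch of setting this up from the command line, you could attach the policy above to a user as an inline policy (the user and policy names here are illustrative, and invoke-policy.json is assumed to contain the JSON above):

$ aws iam put-user-policy \
  --user-name Joe \
  --policy-name AllowLambdaInvoke \
  --policy-document file://invoke-policy.json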

Scenario 2: Calling a Lambda function from a web service in a different account.

This is the same as the scenario above except that the account of the invoker and the account of the Lambda function owner are different. In this case resource policies are the easiest way to authorize the activity: The resource policy on the function being called will enable access by the “foreign” account of the caller. Here’s a sample of how to authorize account 012345678912 to call “MyFunction” from the command line:

$ aws lambda add-permission \
  --function-name MyFunction \
  --region us-west-2 \
  --statement-id Id-123 \
  --action "lambda:InvokeFunction" \
  --principal 012345678912 \
  --profile adminuser

You can also view the resource policies that apply to a function by calling get-policy:

$ aws lambda get-policy \
--function-name function-name \
--profile adminuser

The profile argument allows you to specify the role with which the add-permission call itself is made. If you don’t supply the profile argument, the CLI will attempt to pick up credentials from its configuration (set using the “aws configure” command) or environment variables. If you’re on EC2 and using instance credentials you can skip this argument and the CLI will automatically pick up the instance’s role. See the CLI credential documentation for more details.

Note that the user (or role) making the call still needs permission to invoke the Lambda function as in Scenario 1. Think of this as an “and” condition: As a user (or role) you’ll need permission to call Lambda functions AND you need the function owner’s permission to use his or her particular function.

Scenario 3: Triggering a Lambda function from an Amazon S3 bucket notification in another account.

In Scenario 2 the call from another account was made directly to Lambda. In this scenario the call is indirect: S3 sends the event on behalf of the bucket owner instead of that account making the call itself. The add-permission call is slightly different:

$ aws lambda add-permission \
  --function-name MyFunction \
  --region us-west-2 \
  --statement-id Id-123 \
  --action "lambda:InvokeFunction" \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::<source-bucket> \
  --source-account <account number> \
  --profile adminuser

Note that the principal becomes the S3 service (it gets a service name as opposed to an account number) and the actual account number moves to the “source-account” parameter. There’s also a new argument, source-arn, which contains the name of the bucket. This mechanism gives you great flexibility in how (and what) you choose to authorize S3 to do on your behalf:

  • Specify both bucket name and owning account. This is our recommended approach because it provides the strictest security: By providing both arguments to add-permission, you authorize a specific S3 bucket owned by a specific account to send bucket notification events to your Lambda function.
  • Specify only the bucket name. This enables notification events from the named bucket to be sent to your Lambda function regardless of who currently owns the bucket.
  • Specify only the owning account. This enables events from any bucket owned by this account to be sent to your Lambda function.
  • Specify neither (not recommended). Any S3 bucket (owned by any account) can send events to your function. As the function owner you should use care in selecting this option, since you’ll have no control over the set of events reaching your function.

Note: Amazon SNS (Simple Notification Service) events sent to Lambda work the same way, with “sns.amazonaws.com” replacing “s3.amazonaws.com” as the principal.

Scenario 4: Processing Amazon Kinesis records or Amazon DynamoDB updates with AWS Lambda.

This scenario is like Scenario 1 above, except that things get turned around: Instead of authorizing a user or role to call your function, you authorize your function to read from Amazon Kinesis or Amazon DynamoDB. The policy looks similar to scenario 1’s policy, but the name of the service changes and you need a slightly different set of actions; here’s how it looks for Kinesis:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:ListStreams",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kinesis:us-east-1:<account number>:stream/<kinesis stream name>"
    }
  ]
}

Your execution role also needs the “standard” capabilities to create and update logs and make calls on your behalf; see the AWS Lambda documentation for details on setting up execution roles and authorizing code to use other AWS services. You can roll the Kinesis authorization into an existing execution role or attach it as a separate managed policy. Read more about managed policies and IAM roles to help you choose among the different options. If you’re keeping it all together, here’s how it might look with both Kinesis access and standard logging access (in the policy below, we’re enabling access to any Kinesis stream owned by the same account; you can use the ARN technique above to restrict it to a specific stream):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:DescribeStream",
        "kinesis:ListStreams",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}

I hope this post was helpful in explaining the different ways you can make your Lambda functions available to callers across different accounts and resources. As always, we appreciate your feedback and comments. Happy Lambda coding!

-Tim

Running Arbitrary Executables in AWS Lambda

In previous posts on this blog we’ve talked about how Lambda manages container lifetimes and how you can use custom JavaScript libraries in Lambda, even native ones. In this post I cover how you can run arbitrary executables, including interpreters for languages like Python and shells like bash.

First, a note on security: Lambda’s built-in sandbox lets you safely run code written in any language, because Lambda doesn’t rely on the language runtime to provide isolation between functions. You get the same protections regardless of whether Lambda starts a process for you or whether you start it yourself, and regardless of the language in which it’s written. With that out of the way, let’s look at how easy it is:

To start a background process, you can use the child_process module in Node.js to execute a binary that you’ve uploaded with your function or any executable visible to your function, such as /bin/bash or /usr/bin/python. Node.js supports both asynchronous process creation with child_process.spawn, which returns an EventEmitter you can attach callbacks to, and synchronous creation with child_process.spawnSync, which waits for the spawned process to exit before returning control to your code.

Including your own executables is easy; just package them in the ZIP file you upload, and then reference them (including the relative path within the ZIP file you created) when you call them from Node.js or from other processes that you’ve previously started. Ensure that you include the following at the start of your function code:

process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

You can use all the usual forms of interprocess communication as well as files in /tmp to communicate with any of the processes you create.

If you compile your own binaries, ensure that they’re either statically linked or built for the matching version of Amazon Linux. The current version of Amazon Linux in use within AWS Lambda can always be found on the Supported Versions page of the Lambda docs.

For scenarios like running a Python script, you may find it easiest to simply run the executable once for each request. If you ever did old-school CGI programming, this is a modern twist on it.
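
Here’s a minimal sketch of that per-request pattern, shelling out to a bundled Python script (transform.py is a hypothetical script included in your ZIP, and the sketch assumes a runtime where child_process.spawnSync is available):

var spawnSync = require('child_process').spawnSync;

exports.handler = function(event, context) {
    // Run the bundled script once for this request, passing the event on stdin
    var result = spawnSync('/usr/bin/python', ['transform.py'], {
        input: JSON.stringify(event),
        cwd: process.env['LAMBDA_TASK_ROOT']
    });
    if (result.status !== 0) {
        return context.fail(new Error(result.stderr.toString()));
    }
    context.succeed(result.stdout.toString());
};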

Other executables might want to retain state or may take too long to initialize to restart each time. Fortunately, Lambda has you covered there as well: Processes running when your request ends will be frozen and (if your container is reused to serve a later request) thawed out again next time, so you don’t have to restart long-running background processes repeatedly. The prologue section of your JavaScript function is a good place to locate such “once per container” activities. See Understanding Container Reuse for a fuller description of container and process reuse.
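
In practice the “prologue” pattern looks like this: anything outside the handler runs once per container, so a long-running helper started there is frozen and thawed along with the container rather than restarted on every request (the my-helper binary is hypothetical):

var spawn = require('child_process').spawn;

// Started once, during the container's first use; survives across warm invocations
var helper = spawn('./my-helper', [], { stdio: 'pipe' });

exports.handler = function(event, context) {
    // Talk to the already-running helper over stdin/stdout
    helper.stdout.once('data', function(data) {
        context.succeed(data.toString());
    });
    helper.stdin.write(JSON.stringify(event) + '\n');
};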

AWS Lambda offers you lots of flexibility, without the hassle (or lockin) of curated libraries or platform-specific language runtimes that differ from standard releases. Bring your own code, we’re happy to run it for you!

AWS San Francisco Summit: Compute-related Presentations

Tim Wagner, AWS Lambda

AWS San Francisco Summit 2015

Lots of exciting news last week in San Francisco around event computing and container computing.

Andy Jassy keynote, including Amazon ECS and AWS Lambda GA announcements

In the summit keynote, Andy Jassy announced that the Amazon ECS and AWS Lambda services are both now generally available for production use.

Breakout session: Event-Driven Compute in the Cloud, Tim Wagner

Breakout session: Amazon EC2 Container Service: Manage Docker-enabled Apps in EC2, Chris Barclay

Breakout session: Build your Mobile App Faster with AWS Mobile Services, Jinesh Varia

Lots of exciting announcements, and more to come!

-Tim

Using Amazon SNS to Trigger Lambda Functions

Tim Wagner, AWS Lambda

Sending messages to SNS can now also trigger Lambda functions, allowing you to add arbitrary compute capabilities to any service or application that knows how to send messages, such as Amazon CloudWatch alarms.

Intelligent IT: Triggering Code by Sending it Messages

SNS is an easy and scalable way to send notifications and already supports a variety of targets, including SQS, email, and both HTTP and mobile endpoints. SNS now can also trigger a Lambda function in response to a message, allowing you to turn existing messaging frameworks, such as CloudWatch alarms, into workflows that can execute arbitrary code and call any AWS API.
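
To give a feel for the wiring, here’s about the smallest possible SNS-triggered function; the published message arrives as a string inside the event record:

exports.handler = function(event, context) {
    var message = event.Records[0].Sns.Message;
    console.log('Received SNS message:', message);
    context.succeed(message);
};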

Intelligent Messaging

Now that SNS can call Lambda (and as always, Lambda can call SNS), you can do some amazing things by combining the reach of SNS messages with the ability to easily run arbitrary code in Lambda:

  • Targeting. You can generate messages using templates stored in Amazon DynamoDB or Amazon S3 and compute or fill in values on the fly.
  • Routing and filtering. Turn one message into many or filter many down to one.
  • Logging and auditing. Easily retain a full or partial copy of messages. Watch for and elide sensitive information or stamp a watermark into each message as it flows by.

The AWS Mobile blog has a great step-by-step example that demonstrates how you can create a message log in Amazon DynamoDB using a simple AWS Lambda function to record each message.

Can’t wait to see what other creative ideas developers have for using Lambda and SNS together!