Category: Amazon API Gateway

Using API Gateway with VPC endpoints via AWS Lambda

by Stefano Buliani | in Amazon API Gateway

To isolate critical parts of their app’s architecture, customers often rely on Virtual Private Cloud (VPC) and private subnets. Today, Amazon API Gateway cannot directly integrate with endpoints that live within a VPC without internet access. However, it is possible to proxy calls to your VPC endpoints using AWS Lambda functions.

This post guides you through the setup necessary to configure API Gateway, Lambda, and your VPC to proxy requests from API Gateway to HTTP endpoints in your VPC private subnets. With this solution, you can use API Gateway for authentication, authorization, and throttling before a request reaches your HTTP endpoint.

For this example, we have written a very basic Express application that receives GET and POST requests on its root resource (“/”). The application is deployed on an EC2 instance within a private subnet of a VPC. We use a Lambda function that connects to our private subnet to proxy requests from API Gateway to the Express HTTP endpoint. The CloudFormation template below deploys the API Gateway API and the AWS Lambda functions, and sets the correct permissions on both resources. Our CloudFormation template requires four parameters:

  • The IP address or DNS name of the instance running your Express application
  • The port used by the Express app (for example, 8080)
  • The security group of the EC2 instance (for example, sg-xx3xx6x0)
  • The subnet ID of your VPC’s private subnet (for example, subnet-669xx03x)

Click the link below to deploy the CloudFormation template. The rest of this blog post dives deeper into each component of the architecture.

The Express application

We have written a very simple web service using Express and Node.js. The service accepts GET and POST requests to its root resource and responds with a JSON object. You can use the sample code below to start the application on your instance. Before you create the application, make sure that you have installed Node.js on your instance.

Create a new folder on your web server called vpcproxy. In the new folder, create a new file called index.js and paste the code below in the file.

var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.json());

app.get('/', function(req, res) {
        if (req.query.error) {
                return res.status(403).json({error: "Random error"}).end();
        }
        res.json({ message: 'Hello World!' });
});

app.post('/', function(req, res) {
        res.json({ message: 'Hello World!', body: req.body });
});

app.listen(8080, function() {
        console.log("app started");
});

To install the required dependencies, from the vpcproxy folder, run the following command: npm install express body-parser

After the dependencies are installed, you can start the application by running: node index.js

API Gateway configuration

The API Gateway API declares all of the same methods that your Express application supports. Each method is configured to transform requests into a JSON structure that AWS Lambda can understand, and responses are generated using mapping templates from the Lambda output.

The first step is to transform a request into an event for Lambda. The mapping template below captures all of the request information and includes the configuration of the backend endpoint that the Lambda function should interact with. This template is applied to all requests for any endpoint.

#set($allParams = $input.params())
{
  "requestParams" : {
    "hostname" : "",
    "port" : "8080",
    "path" : "$context.resourcePath",
    "method" : "$context.httpMethod"
  },
  "bodyJson" : $input.json('$'),
  "params" : {
    #foreach($type in $allParams.keySet())
    #set($params = $allParams.get($type))
    "$type" : {
      #foreach($paramName in $params.keySet())
      "$paramName" : "$util.escapeJavaScript($params.get($paramName))"
      #if($foreach.hasNext),#end
      #end
    }
    #if($foreach.hasNext),#end
    #end
  },
  "stage-variables" : {
    #foreach($key in $stageVariables.keySet())
    "$key" : "$util.escapeJavaScript($stageVariables.get($key))"
    #if($foreach.hasNext),#end
    #end
  },
  "context" : {
    "account-id" : "$context.identity.accountId",
    "api-id" : "$context.apiId",
    "api-key" : "$context.identity.apiKey",
    "authorizer-principal-id" : "$context.authorizer.principalId",
    "caller" : "$context.identity.caller",
    "cognito-authentication-provider" : "$context.identity.cognitoAuthenticationProvider",
    "cognito-authentication-type" : "$context.identity.cognitoAuthenticationType",
    "cognito-identity-id" : "$context.identity.cognitoIdentityId",
    "cognito-identity-pool-id" : "$context.identity.cognitoIdentityPoolId",
    "http-method" : "$context.httpMethod",
    "stage" : "$context.stage",
    "source-ip" : "$context.identity.sourceIp",
    "user" : "$context.identity.user",
    "user-agent" : "$context.identity.userAgent",
    "user-arn" : "$context.identity.userArn",
    "request-id" : "$context.requestId",
    "resource-id" : "$context.resourceId",
    "resource-path" : "$context.resourcePath"
  }
}

After the Lambda function has processed the request and response, API Gateway is configured to transform the output into an HTTP response. The output from the Lambda function is a JSON structure that contains the response status code, body, and headers:

{
  "status": 200,
  "bodyJson": {
      "message":"Hello World!"
  },
  "headers": {
      "content-type":"application/json; charset=utf-8",
      "date":"Wed, 25 May 2016 18:41:22 GMT",
      ...
  }
}

These values are then mapped in API Gateway using header mapping expressions and mapping templates for the response body.

First, all known headers are mapped:

"responseParameters": {
    "method.response.header.etag": "integration.response.body.headers.etag",
    "method.response.header.x-powered-by": "integration.response.body.headers.x-powered-by",
    "method.response.header.date": "integration.response.body.headers.date",
    "method.response.header.content-length": "integration.response.body.headers.content-length"
}

Then the body is extracted from Lambda’s output JSON using a very simple body mapping template: $input.json('$.bodyJson')

Response codes other than 200 are handled using regular expressions to match the status code in API Gateway (for example, \{\"status\"\:400.*), and the parseJson method of the $util object to extract the response body.

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
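To make the selection pattern concrete, here is a hedged Node.js sketch (sample values only) of the errorMessage string the proxy produces for a non-200 backend response; it is this string that the regular expression matches:

```javascript
// Illustrative only: mimic the proxy's non-200 path, which stringifies the
// whole output object before returning it as a Lambda error
var output = {
    status: 400,
    bodyJson: JSON.stringify({ error: "Bad request" }),
    headers: { "content-type": "application/json; charset=utf-8" }
};
var errorMessage = JSON.stringify(output);

// The selection pattern \{"status"\:400.* matches strings shaped like this one
console.log(/^\{"status":400/.test(errorMessage)); // true
```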

All of this configuration is included, in Swagger format, in the CloudFormation template for this tutorial. The Swagger definition is generated dynamically with the Fn::Join function, based on the four parameters that the template requires.

The AWS Lambda function

The proxy Lambda function is written in JavaScript. It captures all of the request details forwarded by API Gateway, creates a similar request using the standard Node.js http package, and forwards it to the private endpoint. Responses from the private endpoint are encapsulated in a JSON object, which API Gateway turns into an HTTP response. The private endpoint configuration is passed to the Lambda function from API Gateway in the event model. The Lambda function code is also included in the CloudFormation template.

var http = require('http');

exports.myHandler = function(event, context, callback) {
    // setup request options and parameters
    var options = {
        host: event.requestParams.hostname,
        port: event.requestParams.port,
        path: event.requestParams.path,
        method: event.requestParams.method
    };

    // if you have headers set them otherwise set the property to an empty map
    if (event.params && event.params.header && Object.keys(event.params.header).length > 0) {
        options.headers = event.params.header;
    } else {
        options.headers = {};
    }

    // Force the user agent and the "forwarded for" headers because we want to
    // take them from the API Gateway context rather than letting Node.js set the Lambda ones
    options.headers["User-Agent"] = event.context["user-agent"];
    options.headers["X-Forwarded-For"] = event.context["source-ip"];
    // if I don't have a content type I force it to application/json
    // Test invoke in the API Gateway console does not pass a value
    if (!options.headers["Content-Type"]) {
        options.headers["Content-Type"] = "application/json";
    }

    // build the query string
    if (event.params && event.params.querystring && Object.keys(event.params.querystring).length > 0) {
        var queryString = generateQueryString(event.params.querystring);

        if (queryString !== "") {
            options.path += "?" + queryString;
        }
    }

    // Define the callback that reads the response and generates a JSON output
    // for API Gateway. The JSON output is parsed by the mapping templates
    var responseCallback = function(response) {
        var responseString = '';

        // Another chunk of data has been received, so append it to responseString
        response.on('data', function(chunk) {
            responseString += chunk;
        });

        // The whole response has been received
        response.on('end', function() {
            // Parse response to json
            var jsonResponse = JSON.parse(responseString);

            var output = {
                status: response.statusCode,
                bodyJson: jsonResponse,
                headers: response.headers
            };

            // if the response was a 200 we can just pass the entire JSON back to
            // API Gateway for parsing. If the backend returned a non 200 status
            // then we return it as an error
            if (response.statusCode == 200) {
                callback(null, output);
            } else {
                // set the output JSON as a string inside the body property
                output.bodyJson = responseString;
                // stringify the whole thing again so that we can read it with
                // the $util.parseJson method in the mapping templates
                callback(JSON.stringify(output));
            }
        });
    };

    var req = http.request(options, responseCallback);

    // write the request body, if we received one
    if (event.bodyJson && event.bodyJson !== "") {
        req.write(JSON.stringify(event.bodyJson));
    }

    req.on('error', function(e) {
        console.log('problem with request: ' + e.message);
        callback(JSON.stringify({
            status: 500,
            bodyJson: JSON.stringify({ message: "Internal server error" })
        }));
    });

    req.end();
};

function generateQueryString(params) {
    var str = [];
    for (var p in params) {
        if (params.hasOwnProperty(p)) {
            str.push(encodeURIComponent(p) + "=" + encodeURIComponent(params[p]));
        }
    }
    return str.join("&");
}
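For reference, here is how the generateQueryString helper behaves on sample input (the helper is repeated so the snippet runs standalone; the parameter values are illustrative):

```javascript
// Build an URL-encoded query string from a map of parameters
function generateQueryString(params) {
    var str = [];
    for (var p in params) {
        if (params.hasOwnProperty(p)) {
            str.push(encodeURIComponent(p) + "=" + encodeURIComponent(params[p]));
        }
    }
    return str.join("&");
}

console.log(generateQueryString({ name: "John Doe", page: 2 }));
// name=John%20Doe&page=2
```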


You can use Lambda functions to proxy HTTP requests from API Gateway to an HTTP endpoint within a VPC without Internet access. This allows you to keep your EC2 instances and applications completely isolated from the internet while still exposing them via API Gateway. By using API Gateway to front your existing endpoints, you can configure authentication and authorization rules as well as throttling rules to limit the traffic that your backend receives.

If you have any questions or suggestions, please comment below.

Simply Serverless: Using AWS Lambda to Expose Custom Cookies with API Gateway

by Bryan Liston | in Amazon API Gateway, AWS Lambda

Simply Serverless

Welcome to a new series of quick and simple hacks, tips, tricks, and common use cases for AWS Lambda and Amazon API Gateway. As always, I’m listening to readers (@listonb), so if you have any questions, comments, or tips you’d like to see, let me know!

This is a guest post by Jim Warner from Survata.

This first tip describes how Survata uses Lambda to drop a new cookie on API Gateway requests. Learn more about how Survata is using this during Serverless Day at the San Francisco Loft on April 28th. Register for Serverless Day now.

Step 1: Return a cookie ID from Lambda

This walkthrough assumes you have gone through the Hello World API Gateway Getting Started Guide code.

Expand upon the “Hello World” example and update it as follows:

'use strict';
exports.handler = function(event, context) {
  var date = new Date();

  // Get Unix milliseconds at current time plus 365 days
  date.setTime(+date + (365 * 86400000)); // 24 * 60 * 60 * 1000

  // Generate a random cookie string
  var cookieVal = Math.random().toString(36).substring(7);
  var cookieString = "myCookie=" + cookieVal + "; domain=my.domain; expires=" + date.toGMTString() + ";";
  context.done(null, {"Cookie": cookieString});
};


This makes a random string and returns it in JSON format as a proper HTTP cookie string. The result from the Lambda function is as follows:

{"Cookie": "myCookie=t81e70kke29; domain=my.domain; expires=Wed, 19 Apr 2017 20:41:27 GMT;"}
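As an aside, the Math.random().toString(36).substring(7) one-liner works because toString(36) renders the random fraction as "0." followed by base 36 digits, and substring(7) drops the prefix plus the first few digits. A small sketch:

```javascript
var raw = Math.random().toString(36); // e.g. "0.t81e70kke29..."
var cookieVal = raw.substring(7);

// Whatever remains is composed solely of base 36 digits (0-9, a-z)
console.log(/^[a-z0-9]*$/.test(cookieVal)); // true
```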

Step 2: Set the cookie in API Gateway

In the API Gateway console, go to the GET Method page and choose Method Response. Expand the default 200 HTTP status, and choose Add Header. Add a new header called “Set-Cookie.”

On the GET Method page, choose Integration Response. Under the Header Mappings section of the default 200 HTTP status, choose the pencil icon to edit the “Set-Cookie” header. In the mapping value section, put:

integration.response.body.Cookie
Make sure to save the header by choosing the check icon!

For a real production deployment, use a body mapping template to return only the parts of the JSON that you want to expose (so the cookie data wouldn’t show here).

Deploying both the Lambda function and API Gateway gets you up and cookie-ing.

How to turn Node.js projects into AWS Lambda microservices easily with ClaudiaJS

by Bryan Liston | in Amazon API Gateway, AWS Lambda
This is a guest post by Gojko Adzic, creator of ClaudiaJS


While working on MindMup 2.0, we started moving parts of our API and back-end infrastructure from Heroku to AWS Lambda. The first Lambda function we created required a shell script of about 120 lines of AWS command-line calls to properly set up, and the second one had a similar number with just minor tweaks. Instead of duplicating this work for each service, we decided to create an open-source tool that can handle the deployment process for us.

Enter Claudia.js: an open-source deployment tool for Node.js microservices that makes getting started with AWS Lambda and Amazon API Gateway very easy for JavaScript developers.

Claudia takes care of AWS deployment workflows, simplifying and automating many error-prone tasks, so that you can focus on solving important business problems rather than worrying about infrastructure code. Claudia sets everything up the way JavaScript developers expect out of the box, and significantly shortens the learning curve required to get Node.js projects running inside Lambda.

Hello World

Here’s a quick ‘hello world’ example.

Create a directory, and initialize a new NPM project:

npm init

Next, create app.js with the following code:

var ApiBuilder = require('claudia-api-builder'),
  api = new ApiBuilder();
module.exports = api;

api.get('/hello', function () {
  return 'hello world';
});

Add the Claudia API Builder as a project dependency:

npm install claudia-api-builder --save

Finally, install Claudia.js in your global path:

npm install -g claudia

That’s pretty much it. You can now install your new microservice in AWS by running the following command:

claudia create --region us-east-1 --api-module app

In a few moments, Claudia will respond with the details of the newly-installed Lambda function and REST API.

{
  "lambda": {
    "role": "test-executor",
    "name": "test",
    "region": "us-east-1"
  },
  "api": {
    "id": "8x7uh8ho5k",
    "module": "app",
    "url": ""
  }
}

The result contains the root URL of your new API Gateway resource. Claudia automatically created an endpoint resource for /hello, so just add /hello to the URL, and try it out in a browser or from the console. You should see the ‘hello world’ response.

That’s it! Your first Claudia-deployed Lambda function and API Gateway endpoint is now live on AWS!

What happened in the background?

In the background, Claudia.js executed the following steps:

  • Created a copy of the project.
  • Packaged all the NPM dependencies.
  • Tested that the API is deployable.
  • Zipped up your application and deployed it to Lambda.
  • Created the correct IAM access privileges.
  • Configured an API Gateway endpoint with the /hello resource.
  • Linked the new resource to the previously-deployed Lambda function.
  • Installed the correct API Gateway transformation templates.

Finally, it saved the resulting configuration into a local file (claudia.json), so that you can easily update the function without remembering any of those details.

Try this next:

Install the superb module as a project dependency:

npm install superb --save

Add a new endpoint to the API by appending these lines to app.js:

api.get('/greet', function (request) {
  var superb = require('superb');
  return request.queryString.name + ' is ' + superb();
});

You can now update your existing deployed application by executing the following command:

claudia update

When the deployment completes, try out the new endpoint by adding /greet?name= followed by your name to the URL.

Benefits of using Claudia

Claudia significantly reduces the learning curve for deploying and managing serverless style applications, REST API, and event-driven microservices. Developers can use Lambda and API Gateway in a way that is similar to popular lightweight JavaScript web frameworks.

All query string arguments are immediately available to your function in the request.queryString object. HTTP form POST variables are in request.post, and any JSON, XML, or text content posted as the raw body text is in request.body.

Asynchronous processes are also easy: just return a Promise from the API endpoint handler, and Claudia waits until the promise resolves before responding to the caller. You can use any Promises/A+-compliant library, including the native promises supported out of the box by the new AWS Lambda Node.js 4.3.2 runtime.
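The promise flow can be sketched with plain Node.js, independent of Claudia (the handler name and delay are illustrative):

```javascript
// The endpoint handler returns a Promise; the framework waits for it to
// resolve and only then sends the response to the caller
function greetLater(name) {
    return new Promise(function(resolve) {
        setTimeout(function() {
            resolve('hello, ' + name);
        }, 10);
    });
}

greetLater('world').then(function(result) {
    console.log(result); // hello, world
});
```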

To make serverless-style applications easier to set up, Claudia automatically enables cross-origin resource sharing (CORS), so a client browser can call your new API directly, even from a different domain. By default, all errors trigger an HTTP 500 response code, so your API works well with most AJAX libraries. You can, of course, easily customize the API endpoints to return a different content type or HTTP response code, or to include additional headers. For more information, see the Claudia API Builder documentation.


Claudia helps people get started quickly and easily migrate existing self-hosted or third-party-hosted APIs into Lambda. Because it is not opinionated and does not require a particular structure or way of working, teams can easily start chopping off pieces of existing infrastructure and gradually moving them over. For more information, visit the git repository for Sample ClaudiaJS Projects.

Amazon API Gateway mapping improvements

by Stefano Buliani | in Amazon API Gateway

Yesterday, we announced the new Swagger import API. You may have also noticed a new first-time user experience in the API Gateway console that automatically creates a sample Pet Store API and guides you through API Gateway features. That is not all we’ve been doing:

Over the past few weeks, we’ve made mapping requests and responses easier. This post takes you through the new features we introduced and gives practical examples of how to use them.

Multiple 2xx responses

We heard from many of you that you want to return more than one 2xx response code from your API. You can now configure Amazon API Gateway to return multiple 2xx response codes, each with its own header and body mapping templates. For example, when creating resources, you can return 201 for “created” and 202 for “accepted”.

Context variables in parameter mapping

We have added the ability to reference context variables from the parameter mapping fields. For example, you can include the identity principal or the stage name from the context variable in a header to your HTTP backend. To send the principalId returned by a custom authorizer in an X-User-ID header to your HTTP backend, use this mapping expression:

context.authorizer.principalId
For more information, see the context variable in the Mapping Template Reference page of the documentation.

Access to raw request body

Mapping templates in API Gateway help you transform incoming requests and outgoing responses from your API’s back end. The $input variable in mapping templates enables you to read values from a JSON body and its properties. You can now also return the raw payload, whether it’s JSON, XML, or a string using the $input.body property.

For example, if you have configured your API to receive raw data and pass it to Amazon Kinesis using an AWS service proxy integration, you can use the body property to read the incoming body and the $util variable to encode it for an Amazon Kinesis stream.

{
  "Data" : "$util.base64Encode($input.body)",
  "PartitionKey" : "key",
  "StreamName" : "Stream"
}
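What $util.base64Encode does to the raw body can be sketched with Node.js buffers (the sample payload is illustrative):

```javascript
var rawBody = '<reading sensor="42"/>';

// Base64-encode the raw payload, as $util.base64Encode would
var encoded = Buffer.from(rawBody).toString('base64');

// Decoding restores the original bytes, so the transformation is lossless
var decoded = Buffer.from(encoded, 'base64').toString('utf8');
console.log(decoded === rawBody); // true
```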

JSON parse function

We have also added a parseJson() method to the $util object in mapping templates. The parseJson() method parses stringified JSON input into its object representation. You can manipulate this object representation in the mapping templates. For example, if you need to return an error from AWS Lambda, you can now return it like this:

exports.handler = function(event, context) {
    var myErrorObj = {
        errorType : "InternalFailure",
        errorCode : 9130,
        detailedMessage : "This is my error message",
        stackTrace : ["foo1", "foo2", "foo3"],
        data : {
            numbers : [1, 2, 3]
        }
    };
    context.fail(JSON.stringify(myErrorObj));
};

Then, you can use the parseJson() method in the mapping template to extract values from the error and return a meaningful message from your API, like this:

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
#set ($bodyObj = $util.parseJson($input.body))
{
  "type" : "$errorMessageObj.errorType",
  "code" : $errorMessageObj.errorCode,
  "message" : "$errorMessageObj.detailedMessage",
  "someData" : "$errorMessageObj.data.numbers[2]"
}

This will produce a response that looks like this:

{
  "type" : "InternalFailure",
  "code" : 9130,
  "message" : "This is my error message",
  "someData" : "3"
}


We continuously release new features and improvements to Amazon API Gateway. Your feedback is extremely important and guides our priorities. Keep sending us feedback on the API Gateway forum and on social media.

Indexing Amazon DynamoDB Content with Amazon Elasticsearch Service Using AWS Lambda

by Bryan Liston | in Amazon API Gateway, AWS Lambda

Stephan Hadinger, Sr. Mgr, Solutions Architecture

Mathieu Cadet, Account Representative

A lot of AWS customers have adopted Amazon DynamoDB for its predictable performance and seamless scalability. The main querying capabilities of DynamoDB are centered around lookups using a primary key. However, there are certain times where richer querying capabilities are required. Indexing the content of your DynamoDB tables with a search engine such as Elasticsearch would allow for full-text search.

In this post, we show how you can send changes to the content of your DynamoDB tables to an Amazon Elasticsearch Service (Amazon ES) cluster for indexing, using the DynamoDB Streams feature combined with AWS Lambda.


Architectural overview

Here’s a high-level overview of the architecture:

DynamoDB Streams to Elasticsearch bridge

We’ll cover the main steps required to put this bridge in place:

  1. Choosing the DynamoDB tables to index and enabling DynamoDB Streams on them.
  2. Creating an IAM role for accessing the Amazon ES cluster.
  3. Configuring and enabling the Lambda blueprint.


Choosing the DynamoDB table to index

In this post, you look at indexing the content of a product catalog in order to provide full-text search capabilities. You’ll index the content of a DynamoDB table called all_products, which is acting as the catalog of all products.

Here’s an example of an item stored in that table:

{
  "product_id": "B016JOMAEE",
  "name": "Serverless Single Page Apps: Fast, Scalable, and Available",
  "category": "ebook",
  "description": "AWS Lambda - A Guide to Serverless Microservices
                  takes a comprehensive look at developing
                  serverless workloads using the new
                  Amazon Web Services Lambda service.",
  "author": "Matthew Fuller",
  "price": 15.0,
  "rating": 4.8
}

Enabling DynamoDB Streams

In the DynamoDB console, enable the DynamoDB Streams functionality on the all_products table by selecting the table and choosing Manage Stream.

Enabling DynamoDB Streams

Multiple options are available for the stream. For this use case, you need new items to appear in the stream; choose either New image or New and old images. For more information, see Capturing Table Activity with DynamoDB Streams.

DynamoDB Streams Options

After the stream is set up, make a note of the stream ARN. You’ll need it later, when configuring the access permissions.

Finding a DynamoDB Stream ARN

Creating a new IAM role

The Lambda function needs read access to the DynamoDB stream just created. In addition, the function also requires access to the Amazon ES cluster to submit new records for indexing.

In the AWS Identity and Access Management (IAM) console, create a new role for the Lambda function and call it ddb-elasticsearch-bridge.

Creating new IAM role

As this role will be used by the Lambda function, choose AWS Lambda from the AWS Service Roles list.

Attaching policy to the role

On the following screens, choose the AWSLambdaBasicExecutionRole managed policy, which allows the Lambda function to send logs to Amazon CloudWatch Logs.

Configuring access to the Amazon ES cluster

First, you need a running Amazon ES cluster. In this example, create a search domain called inventory. After the domain has been created, note its ARN:

Attaching policy to the role

In the IAM console, select the ddb-elasticsearch-bridge role created earlier and add two inline policies to that role:

Attaching policy to the role

Here’s the policy to add to allow the Lambda code to push new documents to Amazon ES (replace the resource ARN with the ARN of your Amazon ES cluster):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "es:ESHttpPost"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:es:us-east-1:0123456789:domain/inventory/*"
        }
    ]
}
Important: you need to add /* to the resource ARN as depicted above.

Next, add a second policy for read access to the DynamoDB stream (replace the resource ARN with the ARN of your DynamoDB stream):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:DescribeStream",
                "dynamodb:GetRecords",
                "dynamodb:GetShardIterator",
                "dynamodb:ListStreams"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:dynamodb:us-east-1:0123456789:table/all_products/stream/*"
            ]
        }
    ]
}
Enabling the Lambda blueprint

When you log into the Lambda console and choose Create a Lambda Function, you are presented with a list of blueprints to use. Select the blueprint called dynamodb-to-elasticsearch.

dynamodb-to-elasticsearch blueprint

Next, select the DynamoDB table all_products as the event source:

Lambda event source

Then, customize the Lambda code to specify the Elasticsearch endpoint:

Customizing the blueprint

Finally, select the ddb-elasticsearch-bridge role created earlier to give the Lambda function the permissions required to interact with DynamoDB and the Amazon ES cluster:

Choosing a role

Testing the result

You’re all set!

After a few records have been added to your DynamoDB table, you can go back to the Amazon ES console and validate that a new index for your items has been automatically created:

Amazon ES indices

Playing with Kibana (Optional)

Elasticsearch is commonly used with Kibana for visual exploration of data.

To start querying the indexed data, create an index pattern in Kibana. Use the name of the DynamoDB table as an index pattern:

Kibana Index pattern

Kibana automatically determines the best type for each field:

Kibana Index pattern

Use a simple query to search the product catalog for all items in the category book containing the word aws in any field:

Kibana Index pattern

Other considerations

Indexing pre-existing content

The solution presented earlier is ideal to ensure that new data is indexed as soon it is added to the DynamoDB table. But what about pre-existing data stored in the table?

Luckily, the Lambda function used earlier can also be used to process data from an Amazon Kinesis stream, as long as the format of the data is similar to the DynamoDB Streams records.

Provided that you have an Amazon Kinesis stream set up as an additional input source for the Lambda code above, you can use the (very naive) sample Python 3 code below to read the entire content of a DynamoDB table and push it to an Amazon Kinesis stream called ddb-all-products for indexing in Amazon ES.

import json
import boto3
import boto3.dynamodb.types

# Load the service resources in the desired region.
# Note: AWS credentials should be passed as environment variables
# or through IAM roles.
dynamodb = boto3.resource('dynamodb', region_name="us-east-1")
kinesis = boto3.client('kinesis', region_name="us-east-1")

# Load the DynamoDB table.
ddb_table_name = "all_products"
ks_stream_name = "ddb-all-products"
table = dynamodb.Table(ddb_table_name)

# Get the primary keys.
ddb_keys_name = [a['AttributeName'] for a in table.attribute_definitions]

# Scan operations are limited to 1 MB at a time.
# Iterate until all records have been scanned.
response = None
while True:
    if not response:
        # Scan from the start.
        response = table.scan()
    else:
        # Scan from where you stopped previously.
        response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])

    for i in response["Items"]:
        # Get a dict of primary key(s).
        ddb_keys = {k: i[k] for k in i if k in ddb_keys_name}
        # Serialize Python Dictionaries into DynamoDB notation.
        ddb_data = boto3.dynamodb.types.TypeSerializer().serialize(i)["M"]
        ddb_keys = boto3.dynamodb.types.TypeSerializer().serialize(ddb_keys)["M"]
        # The record must contain "Keys" and "NewImage" attributes to be similar
        # to a DynamoDB Streams record. Additionally, you inject the name of
        # the source DynamoDB table in the record so you can use it as an index
        # for Amazon ES.
        record = {"Keys": ddb_keys, "NewImage": ddb_data, "SourceTable": ddb_table_name}
        # Convert the record to JSON.
        record = json.dumps(record)
        # Push the record to Amazon Kinesis.
        res = kinesis.put_record(
            StreamName=ks_stream_name,
            Data=record,
            PartitionKey=i[ddb_keys_name[0]])

    # Stop the loop if no additional records are
    # available.
    if 'LastEvaluatedKey' not in response:
        break

Note: In the code example above, you are passing the name of the source DynamoDB table as an extra record attribute SourceTable. The Lambda function uses that attribute to build the Amazon ES index name. Another approach for passing that information is tagging the Amazon Kinesis stream.
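To illustrate the idea, here is a hypothetical sketch (not the blueprint's actual code; the function name is invented) of how a bridge function could derive the Amazon ES index name from the injected SourceTable attribute, falling back to a default for records that lack it, and lowercasing the result because Elasticsearch index names must be lowercase:

```javascript
// Hypothetical helper: pick an index name from a stream record
function indexNameFor(record, defaultName) {
    return (record.SourceTable || defaultName).toLowerCase();
}

console.log(indexNameFor({ SourceTable: 'all_products' }, 'catalog')); // all_products
console.log(indexNameFor({}, 'catalog')); // catalog
```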

Now, create the Amazon Kinesis stream ddb-all-products, and then add permissions to the ddb-elasticsearch-bridge role in IAM to allow the Lambda function to read from the stream:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:DescribeStream",
                "kinesis:GetRecords",
                "kinesis:GetShardIterator",
                "kinesis:ListStreams"
            ],
            "Resource": [
                "arn:aws:kinesis:us-east-1:0123456789:stream/ddb-all-products"
            ]
        }
    ]
}

Finally, set the Amazon Kinesis stream as an additional input source to the Lambda function:

Amazon Kinesis input source

Neat tip: Doing a full re-index of the content this way will not create duplicate entries in Amazon ES.

Paying attention to attribute types

With DynamoDB, you can use different types for the same attribute on different records, but Amazon ES expects a given attribute to be of only one type. Similarly, changing the type of an existing attribute after it has been indexed in Amazon ES causes problems and some searches won’t work as expected.

In these cases, you must rebuild the Amazon ES index. For more information, see Reindexing Your Data in the Elasticsearch documentation.


In this post, you have seen how you can use AWS Lambda with DynamoDB to index your table content in Amazon ES as changes happen.

Because you are relying entirely on Lambda for the business logic, you don’t have to deal with servers at any point: everything is managed by the AWS platform in a highly available and scalable fashion. To learn more about Lambda and serverless infrastructures, see the Microservices without the Servers blog post.

Now that you have added full-text search to your DynamoDB table, you might be interested in exposing its content through a small REST API. For more information, see Using Amazon API Gateway as a proxy for DynamoDB.

Using Amazon API Gateway as a proxy for DynamoDB

by Stefano Buliani | on | in Amazon API Gateway | | Comments

Andrew Baird, AWS Solutions Architect

Amazon API Gateway has a feature that enables customers to create their own API definitions directly in front of an AWS service API. This tutorial will walk you through an example of doing so with Amazon DynamoDB.

Why use API Gateway as a proxy for AWS APIs?

Many AWS services provide APIs that applications depend on directly for their functionality. Examples include:

  • Amazon DynamoDB – An API-accessible NoSQL database.
  • Amazon Kinesis – Real-time ingestion of streaming data via API.
  • Amazon CloudWatch – API-driven metrics collection and retrieval.

If AWS already exposes internet-accessible APIs, why would you want to use API Gateway as a proxy for them? Why not allow applications to just directly depend on the AWS service API itself?

Here are a few great reasons to do so:

  1. You might want to enable your application to integrate with very specific functionality that an AWS service provides, without the need to manage access keys and secret keys that AWS APIs require.
  2. There may be application-specific restrictions you’d like to place on the API calls being made to AWS services that you would not be able to enforce if clients integrated with the AWS APIs directly.
  3. You may get additional value from using a different HTTP method than the one the AWS service uses. For example, you can create a GET method as a proxy in front of an AWS API that requires an HTTP POST, so that the response can be cached.
  4. You can accomplish the above things without having to introduce a server-side application component that you need to manage or that could introduce increased latency. Even a lightweight Lambda function that calls a single AWS service API is code that you do not need to create or maintain if you use API Gateway directly as an AWS service proxy.

Here, we will walk you through a hypothetical scenario that shows how to create an Amazon API Gateway AWS service proxy in front of Amazon DynamoDB.

The Scenario

You would like the ability to add a public Comments section to each page of your website. To achieve this, you’ll need to accept and store comments and you will need to retrieve all of the comments posted for a particular page so that the UI can display them.

We will show you how to implement this functionality by creating a single table in DynamoDB, and creating the two necessary APIs using the AWS service proxy feature of Amazon API Gateway.

Defining the APIs

The first step is to map out the APIs that you want to create. For both APIs, we’ve linked to the DynamoDB API documentation. Take note of how the API you define below differs in request/response details from the native DynamoDB APIs.

Post Comments

First, you need an API that accepts user comments and stores them in the DynamoDB table. Here’s the API definition you’ll use to implement this functionality:

Resource: /comments
HTTP Method: POST
HTTP Request Body:
{
  "pageId":   "example-page-id",
  "userName": "ExampleUserName",
  "message":  "This is an example comment to be added."
}

After you create it, this API becomes a proxy in front of the DynamoDB API PutItem.

Get Comments

Second, you need an API to retrieve all of the comments for a particular page. Use the following API definition:

Resource: /comments/{pageId}
HTTP Method: GET

The curly braces around {pageId} in the URI path definition indicate that pageId will be treated as a path variable within the URI.

This API will be a proxy in front of the DynamoDB API Query. Here, you will notice the benefit: your API uses the GET method, while the DynamoDB Query API requires an HTTP POST and does not include any cache headers in the response.

Creating the DynamoDB Table

First, navigate to the DynamoDB console and choose Create Table. Name the table Comments, with commentId as the primary key. Leave the rest of the settings at their defaults for this example, and choose Create.

After this table is populated with comments, you will want to retrieve them based on the page that they’ve been posted to. To do this, create a secondary index on an attribute called pageId. This secondary index enables you to query the table later for all comments posted to a particular page. When viewing your table, choose the Indexes tab and choose Create index.

When querying this table, you only want to retrieve the pieces of information that matter to the client: in this case, the pageId, the userName, and the message itself. Any other data you decide to store with each comment does not need to be retrieved from the table for the publicly accessible API. Type the following information into the form to capture this and choose Create index:
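If you prefer to script this step instead of using the console, the same index definition can be assembled as parameters for boto3's `update_table` call. This is a sketch based on the walkthrough above (index name `pageId-index`, projecting only `userName` and `message` beyond the keys); the capacity units are placeholders:

```python
def build_page_id_index_params(table_name='Comments'):
    """Parameters for adding the pageId-index global secondary index,
    projecting only the attributes the public API returns.
    Capacity units below are placeholder values."""
    return {
        'TableName': table_name,
        'AttributeDefinitions': [
            {'AttributeName': 'pageId', 'AttributeType': 'S'},
        ],
        'GlobalSecondaryIndexUpdates': [{
            'Create': {
                'IndexName': 'pageId-index',
                'KeySchema': [
                    {'AttributeName': 'pageId', 'KeyType': 'HASH'},
                ],
                # Table and index keys are always projected; only the two
                # non-key attributes the API returns are added.
                'Projection': {
                    'ProjectionType': 'INCLUDE',
                    'NonKeyAttributes': ['userName', 'message'],
                },
                'ProvisionedThroughput': {
                    'ReadCapacityUnits': 1,
                    'WriteCapacityUnits': 1,
                },
            }
        }],
    }
```

You could then run `boto3.client('dynamodb').update_table(**build_page_id_index_params())` to create the index.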

Creating the APIs

Now, using the AWS service proxy feature of Amazon API Gateway, we’ll demonstrate how to create each of the APIs you defined. Navigate to the API Gateway service console, and choose Create API. In API name, type CommentsApi and type a short description. Finally, choose Create API.

Now you’re ready to create the specific resources and methods for the new API.

Creating the Post Comments API

In the editor screen, choose Create Resource. To match the description of the Post Comments API above, provide the appropriate details and create the first API resource:

Now, with the resource created, set up what happens when the resource is called with the HTTP POST method. Choose Create Method and select POST from the drop down. Click the checkmark to save.

To map this API to the DynamoDB API needed, next to Integration type, choose Show Advanced and choose AWS Service Proxy.

Here, you’re presented with options that define which specific AWS service API will be executed when this API is called, and in which region. Fill out the information as shown, matching the DynamoDB table you created a moment ago. Before you proceed, create an AWS Identity and Access Management (IAM) role that has permission to call the DynamoDB API PutItem for the Comments table; this role must have a service trust relationship to API Gateway. For more information on IAM policies and roles, see the Overview of IAM Policies topic.

After inputting all of the information as shown, choose Save.

If you were to deploy this API right now, you would have a working service proxy API that only wraps the DynamoDB PutItem API. But, for the Post Comments API, you’d like the client to be able to use a more contextual JSON object structure. Also, you’d like to be sure that the DynamoDB API PutItem is called precisely the way you expect it to be called. This eliminates client-driven error responses and removes the possibility that the new API could be used to call another DynamoDB API or table that you do not intend to allow.

You accomplish this by creating a mapping template. This enables you to define the request structure that your API clients will use, and then transform those requests into the structure that the DynamoDB API PutItem requires.

From the Method Execution screen, choose Integration Request:

In the Integration Request screen expand the Mapping Templates section and choose Add mapping template. Under Content-Type, type application/json and then choose the check mark:

Next, choose the pencil icon next to Input passthrough and choose Mapping template from the dropdown. Now, you’ll be presented with a text box where you create the mapping template. For more information on creating mapping templates, see API Gateway Mapping Template Reference.

The mapping template will be as follows. We’ll walk through what’s important about it next:

    "TableName": "Comments",
    "Item": {
	"commentId": {
            "S": "$context.requestId"
        "pageId": {
            "S": "$input.path('$.pageId')"
        "userName": {
            "S": "$input.path('$.userName')"
        "message": {
            "S": "$input.path('$.message')"

This mapping template creates the JSON structure required by the DynamoDB PutItem API. The entire mapping template is static. The three input variables are referenced from the request JSON using the $input variable and each comment is stamped with a unique identifier. This unique identifier is the commentId and is extracted directly from the API request’s $context variable. This $context variable is set by the API Gateway service itself. To review other parameters that are available to a mapping template, see API Gateway Mapping Template Reference. You may decide that including information like sourceIp or other headers could be valuable to you.

With this mapping template, no matter how your API is called, the only variance from the DynamoDB PutItem API call will be the values of pageId, userName, and message. Clients of your API will not be able to dictate which DynamoDB table is being targeted (because “Comments” is statically listed), and they will not have any control over the object structure that is specified for each item (each input variable is explicitly declared a string to the PutItem API).
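To make the transformation concrete, here is a small Python analogue of the mapping template (a hypothetical helper for illustration, not part of API Gateway): given the client's JSON body and the gateway's request ID, it produces the PutItem payload, with the table name and attribute types fixed server-side:

```python
def map_post_comment(body, request_id, table_name='Comments'):
    """Python analogue of the PutItem mapping template: clients control
    only pageId, userName, and message; the table name, the item
    structure, and the commentId are fixed by the template."""
    return {
        'TableName': table_name,
        'Item': {
            'commentId': {'S': request_id},            # $context.requestId
            'pageId':    {'S': str(body['pageId'])},   # $input.path('$.pageId')
            'userName':  {'S': str(body['userName'])}, # $input.path('$.userName')
            'message':   {'S': str(body['message'])},  # $input.path('$.message')
        },
    }
```

Whatever extra fields a client sends are simply ignored, mirroring how the static template discards anything it does not reference.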

Back in the Method Execution pane, choose Test.

Create an example Request Body that matches the API definition documented above and then choose Test. For example, your request body could be:

  "pageId":   "breaking-news-story-01-18-2016",
  "userName": "Just Saying Thank You",
  "message":  "I really enjoyed this story!!"

Navigate to the DynamoDB console and view the Comments table to show that the request really was successfully processed:

Great! Try including a few more sample items in the table to further test the Get Comments API.

If you deployed this API, you would be all set with a public API that has the ability to post public comments and store them in DynamoDB. For some use cases you may only want to collect data through a single API like this: for example, when collecting customer and visitor feedback, or for a public voting or polling system. But for this use case, we’ll demonstrate how to create another API to retrieve records from a DynamoDB table as well. Many of the details are similar to the process above.

Creating the Get Comments API

Return to the Resources view, choose the /comments resource you created earlier and choose Create Resource, like before.

This time, include a request path parameter to represent the pageId of the comments being retrieved. Input the following information and then choose Create Resource:

In Resources, choose your new /{pageId} resource and choose Create Method, select GET from the drop down, and save your choice. Although your clients call this method with GET, the AWS service proxy integration will invoke DynamoDB with an HTTP POST, because all DynamoDB API operations, including Query, use POST.

In the method configuration screen choose Show advanced and then select AWS Service Proxy. Fill out the form to match the following. Make sure to use the appropriate AWS Region and IAM execution role; these should match what you previously created. Finally, choose Save.

Modify the Integration Request and create a new mapping template. This will transform the simple pageId path parameter on the GET request to the needed DynamoDB Query API, which requires an HTTP POST. Here is the mapping template:

    "TableName": "Comments",
    "IndexName": "pageId-index",
    "KeyConditionExpression": "pageId = :v1",
    "ExpressionAttributeValues": {
        ":v1": {
            "S": "$input.params('pageId')"

Now test your mapping template. Navigate to the Method Execution pane and choose the Test icon on the left. Provide one of the pageId values that you’ve inserted into your Comments table and choose Test.

You should see a response like the following; it is directly passing through the raw DynamoDB response:

Now you’re close! All you need to do before you deploy your API is to map the raw DynamoDB response to the similar JSON object structure that you defined on the Post Comment API.

This will work very similarly to the mapping template changes you already made. But you’ll configure this change on the Integration Response page of the console by editing the default mapping response’s mapping template.

Navigate to Integration Response and expand the 200 response code by choosing the arrow on the left. In the 200 response, expand the Mapping Templates section. In Content-Type choose application/json then choose the pencil icon next to Output Passthrough.

Now, create a mapping template that extracts the relevant pieces of the DynamoDB response and places them into a response structure that matches our use case:

#set($inputRoot = $input.path('$'))
{
    "comments": [
        #foreach($elem in $inputRoot.Items) {
            "commentId": "$elem.commentId.S",
            "userName": "$elem.userName.S",
            "message": "$elem.message.S"
        }#if($foreach.hasNext),#end
        #end
    ]
}
Now choose the check mark to save the mapping template, and choose Save to save this default integration response. Return to the Method Execution page and test your API again. You should now see a formatted response.
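The equivalent transformation, expressed as a small Python helper for illustration (the function name is ours, not an API Gateway API), takes a raw DynamoDB Query response and returns the client-facing structure:

```python
def map_query_response(dynamodb_response):
    """Python analogue of the response mapping template: keep only the
    client-facing fields from the raw DynamoDB Query response and drop
    the attribute-type wrappers ({"S": ...})."""
    return {
        'comments': [
            {
                'commentId': item['commentId']['S'],
                'userName':  item['userName']['S'],
                'message':   item['message']['S'],
            }
            for item in dynamodb_response.get('Items', [])
        ]
    }
```

Fields such as Count or ScannedCount in the raw response never reach the client, exactly as with the template.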

Now you have two working APIs that are ready to deploy! See our documentation to learn about how to deploy API stages.

But, before you deploy your API, here are some additional things to consider:

  • Authentication: you may want to require that users authenticate before they can leave comments. Amazon API Gateway can enforce IAM authentication for the APIs you create. To learn more, see Amazon API Gateway Access Permissions.
  • DynamoDB capacity: you may want to provision an appropriate amount of capacity to your Comments table so that your costs and performance reflect your needs.
  • Commenting features: Depending on how robust you’d like commenting to be on your site, you might like to introduce changes to the APIs described here. Examples are attributes that track replies or timestamp attributes.


Now you’ve got a fully functioning public API to post and retrieve public comments for your website. This API communicates directly with the Amazon DynamoDB API without you having to manage a single application component yourself!

Using Amazon API Gateway with microservices deployed on Amazon ECS

by Stefano Buliani | on | in Amazon API Gateway | | Comments

Rudy Krol, AWS Solutions Architect

One convenient way to run microservices is to deploy them as Docker containers. Docker containers are quick to provision, easily portable, and provide process isolation. Amazon EC2 Container Service (Amazon ECS) provides a highly scalable, high performance container management service. This service supports Docker containers and enables you to easily run microservices on a managed cluster of Amazon EC2 instances.

Microservices usually expose REST APIs for use in front ends, third-party applications, and other microservices. A best practice is to manage these APIs with an API gateway. This provides a unique entry point for all of your APIs and also eliminates the need to implement API-specific code for things like security, caching, throttling, and monitoring for each of your microservices. You can implement this pattern in a few minutes using Amazon API Gateway. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.

In this post, we’ll explain how to use Amazon API Gateway to expose APIs for microservices running on Amazon ECS by leveraging the HTTP proxy mode of Amazon API Gateway. Amazon API Gateway can make proxy calls to any publicly accessible endpoint; for example, an Elastic Load Balancing load balancer endpoint in front of a microservice that is deployed on Amazon ECS. The following diagram shows the high level architecture described in this article:

You will see how you can benefit from stage variables to dynamically set the endpoint value depending on the stage of the API deployment.

In the first part of this post, we will walk through the AWS Management Console to create the dev environment (ECS cluster, ELB load balancers, and API Gateway configuration). The second part explains how to automate the creation of a production environment with AWS CloudFormation and AWS CLI.

Creating a dev environment with the AWS Management Console

Let’s begin by provisioning a sample helloworld microservice using the Getting Started wizard.

Sign in to Amazon ECS console. If this is the first time you’re using the Amazon ECS console, you’ll see a welcome page. Otherwise, you’ll see the console home page and the Create Cluster button.

Step 1: Create a task definition

  1. In the Amazon ECS console, do one of the following:
  2. Optional: (depending on the AWS Region) Deselect the Store container images securely with Amazon ECR checkbox and choose Continue.
  3. For Task definition name, type ecsconsole-helloworld.
  4. For Container name, type helloworld.
  5. Choose Advanced options and type the following text in the Command field: /bin/sh -c "echo '{ \"hello\" : \"world\" }' > /usr/local/apache2/htdocs/index.html && httpd-foreground"
  6. Choose Update and then choose Next step

Step 2: Configure service

  1. For Service name, type ecsconsole-service-helloworld.
  2. For Desired number of tasks, type 2.
  3. In the Elastic load balancing section, for Container name: host port, choose helloworld:80.
  4. For Select IAM role for service, choose Create new role or use an existing ecsServiceRole if you already created the required role.
  5. Choose Next Step.

Step 3: Configure cluster

  1. For Cluster name, type dev.
  2. For Number of instances, type 2.
  3. For Select IAM role for service, choose Create new role or use an existing ecsInstanceRole if you already created the required role.
  4. Choose Review and Launch and then choose Launch Instance & Run Service.

At this stage, after a few minutes of pending process, the helloworld microservice will be running in the dev ECS cluster with an ELB load balancer in front of it. Make note of the DNS Name of the ELB load balancer for later use; you can find it in the Load Balancers section of the EC2 console.

Configuring API Gateway

Now, let’s configure API Gateway to expose the APIs of this microservice. Sign in to the API Gateway console. If this is your first time using the API Gateway console, you’ll see a welcome page. Otherwise, you’ll see the API Gateway console home page and the Create API button.

Step 1: Create an API

  1. In the API Gateway console, do one of the following:
    • If Get Started Now is displayed, choose it.
    • If Create API is displayed, choose it.
    • If neither is displayed, in the secondary navigation bar, choose the API Gateway console home button, and then choose Create API.
  2. For API name, type EcsDemoAPI.
  3. Choose Create API.

Step 2: Create Resources

  1. In the API Gateway console, choose the root resource (/), and then choose Create Resource.
  2. For Resource Name, type HelloWorld.
  3. For Resource Path, leave the default value of /helloworld.
  4. Choose Create Resource.

Step 3: Create GET Methods

  1. In the Resources pane, choose /helloworld, and then choose Create Method.
  2. For the HTTP method, choose GET, and then save your choice.

Step 4: Specify Method Settings

  1. In the Resources pane, in /helloworld, choose GET.
  2. In the Setup pane, for Integration type, choose HTTP Proxy.
  3. For HTTP method, choose GET.
  4. For Endpoint URL, type http://${stageVariables.helloworldElb}
  5. Choose Save.
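The ${stageVariables.helloworldElb} placeholder in the endpoint URL is resolved by API Gateway at request time from the variables of the invoked stage. As a local sketch of that substitution (this helper just mimics the behavior; it is not part of any AWS SDK):

```python
import re

def resolve_endpoint(template, stage_variables):
    """Expand ${stageVariables.name} placeholders in an endpoint URL
    the way API Gateway does for HTTP proxy integrations."""
    def repl(match):
        # Look up the variable name captured between the braces.
        return stage_variables[match.group(1)]
    return re.sub(r'\$\{stageVariables\.([A-Za-z0-9_]+)\}', repl, template)
```

With a dev stage variable `helloworldElb` set to your load balancer's DNS name, the template resolves to that ELB; the prod stage can point the same API at a different ELB.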

Step 5: Deploy the API

  1. In the Resources pane, choose Deploy API.
  2. For Deployment stage, choose New Stage.
  3. For Stage name, type dev.
  4. Choose Deploy.
  5. In the stage settings page, choose the Stage Variables tab.
  6. Choose Add Stage Variable, type helloworldElb for Name, type the DNS Name of the ELB in the Value field and then save.

Step 6: Test the API

  1. In the Stage Editor pane, next to Invoke URL, copy the URL to the clipboard. It should look something like this:
  2. Paste this URL in the address box of a new browser tab.
  3. Append /helloworld to the URL and validate. You should see the following JSON document: { "hello": "world" }

Automating prod environment creation

Now we’ll improve this setup by automating the creation of the prod environment. We use AWS CloudFormation to set up the prod ECS cluster, deploy the helloworld service, and create an ELB in front of the service. You can use the template with your preferred method:


Using AWS CLI

aws cloudformation create-stack --stack-name EcsHelloworldProd --template-url --parameters ParameterKey=AsgMaxSize,ParameterValue=2 ParameterKey=CreateElasticLoadBalancer,ParameterValue=true ParameterKey=EcsInstanceType,ParameterValue=t2.micro

Using AWS console
Launch the AWS CloudFormation stack with the Launch Stack button below and use these parameter values:

  • AsgMaxSize: 2
  • CreateElasticLoadBalancer: true
  • EcsInstanceType: t2.micro

Configuring API Gateway with AWS CLI

We’ll use the API Gateway configuration that we created earlier and simply add the prod stage.

Here are the commands to create the prod stage and configure the stage variable to point to the ELB load balancer:

#Retrieve API ID
API_ID=$(aws apigateway get-rest-apis --output text --query "items[?name=='EcsDemoAPI'].{ID:id}")

#Retrieve ELB DNS name from CloudFormation Stack outputs
ELB_DNS=$(aws cloudformation describe-stacks --stack-name EcsHelloworldProd --output text --query "Stacks[0].Outputs[?OutputKey=='EcsElbDnsName'].{DNS:OutputValue}")

#Create prod stage and set helloworldElb variable
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod --variables helloworldElb=$ELB_DNS

You can then test the API on the prod stage using this simple cURL command:

AWS_REGION=$(aws configure get region)
curl https://$API_ID.execute-api.$AWS_REGION.amazonaws.com/prod/helloworld

You should see { "hello" : "world" } as the result of the cURL request. If the result is an error message like {"message": "Internal server error"}, verify that you have healthy instances behind your ELB load balancer. It can take some time to pass the health checks, so you’ll have to wait for a minute before trying again.

From the stage settings page you also have the option to export the API configuration to a Swagger file, including the API Gateway extension. Exporting the API configuration as a Swagger file enables you to keep the definition in your source repository. You can then import it at any time, either by overwriting the existing API or by importing it as a brand new API. The API Gateway import tool helps you parse the Swagger definition and import it into the service.


In this post, we looked at how to use Amazon API Gateway to expose APIs for microservices deployed on Amazon ECS. The integration with the HTTP proxy mode pointing to ELB load balancers is a simple method to ensure the availability and scalability of your microservice architecture. With ELB load balancers, you don’t have to worry about how your containers are deployed on the cluster.

We also saw how stage variables help you connect your APIs on different ELB load balancers, depending on the stage where the API is deployed.

Introducing custom authorizers in Amazon API Gateway

by Stefano Buliani | on | in Amazon API Gateway | | Comments

Today Amazon API Gateway is launching custom request authorizers. With custom request authorizers, developers can authorize their APIs using bearer token authorization strategies, such as OAuth using an AWS Lambda function. For each incoming request, API Gateway verifies whether a custom authorizer is configured, and if so, API Gateway calls the Lambda function with the authorization token. You can use Lambda to implement various authorization strategies (e.g., JWT verification, OAuth provider callout). Custom authorizers must return AWS Identity and Access Management (IAM) policies. These policies are used to authorize the request. If the policy returned by the authorizer is valid, API Gateway caches the returned policy associated with the incoming token for up to 1 hour so that your Lambda function doesn’t need to be invoked again.

Configuring custom authorizers

You can configure custom authorizers from the API Gateway console or using the APIs. In the console, we have added a new section called custom authorizers inside your API.

An API can have multiple custom authorizers and each method within your API can use a different authorizer. For example, the POST method for the /login resource can use a different authorizer than the GET method for the /pets resource.

To configure an authorizer you must specify a unique name and select a Lambda function to act as the authorizer. You also need to indicate which field of the incoming request contains your bearer token. API Gateway will pass the value of the field to your Lambda authorizer. For example, in most cases your bearer token will be in the Authorization header; you can select this field using the method.request.header.Authorization mapping expression. Optionally, you can specify a regular expression to validate the incoming token before your authorizer is triggered and you can also specify a TTL for the policy cache.
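Conceptually, the token handling API Gateway performs before invoking your authorizer looks like the sketch below. The header name and validation regex are illustrative; API Gateway does this internally, so you never write this code yourself:

```python
import re

def extract_bearer_token(headers, validation=r'^Bearer [A-Za-z0-9\-_\.]+$'):
    """Sketch of API Gateway's pre-authorizer step: read the configured
    token source (here, the Authorization header) and reject tokens that
    fail the optional validation expression, so the Lambda authorizer is
    never invoked for obviously malformed requests."""
    token = headers.get('Authorization', '')
    if not re.match(validation, token):
        return None  # API Gateway returns 401 without calling the authorizer
    return token
```

Only when the token passes this check does API Gateway call your Lambda function (or serve a cached policy for that token).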

Once you have configured a custom authorizer, you can simply select it from the authorization dropdown in the method request page.

The authorizer function in AWS Lambda

API Gateway invokes the Lambda authorizer by passing in the Lambda event. The Lambda event includes the bearer token from the request and full ARN of the API method being invoked. The authorizer Lambda event looks like this:

    "authorizationToken":"<Incoming bearer token>",
    "methodArn":"arn:aws:execute-api:<Region id>:<Account id>:<API id>/<Stage>/<Method>/<Resource path>"

Your Lambda function must return a valid IAM policy. API Gateway uses this policy to make authorization decisions for the token. For example, if you use JWT tokens, you can use the Lambda function to open the token and then generate a policy based on the scopes included in the token. Later today we will publish authorizer Lambda blueprints for Node.js and Python that include a policy generator object. This sample function uses AWS Key Management Service (AWS KMS) to decrypt the signing key for the token, the nJwt library for Node.js to validate a token, and then the policy generator object included in the Lambda blueprint to generate and return a valid policy to Amazon API Gateway.

var nJwt = require('njwt');
var AWS = require('aws-sdk');
var signingKey = "CiCnRmG+t+ BASE 64 ENCODED ENCRYPTED SIGNING KEY Mk=";

exports.handler = function(event, context) {
  console.log('Client token: ' + event.authorizationToken);
  console.log('Method ARN: ' + event.methodArn);
  var kms = new AWS.KMS();

  var decryptionParams = {
    CiphertextBlob : new Buffer(signingKey, 'base64')
  };

  kms.decrypt(decryptionParams, function(err, data) {
    if (err) {
      console.log(err, err.stack);
      context.fail("Unable to load encryption key");
    } else {
      key = data.Plaintext;

      try {
        verifiedJwt = nJwt.verify(event.authorizationToken, key);

        // parse the ARN from the incoming event
        var apiOptions = {};
        var tmp = event.methodArn.split(':');
        var apiGatewayArnTmp = tmp[5].split('/');
        var awsAccountId = tmp[4];
        apiOptions.region = tmp[3];
        apiOptions.restApiId = apiGatewayArnTmp[0];
        apiOptions.stage = apiGatewayArnTmp[1];
        policy = new AuthPolicy(verifiedJwt.body.sub, awsAccountId, apiOptions);

        if (verifiedJwt.body.scope.indexOf("admins") > -1) {
          // admins are allowed to call all methods in the API
          policy.allowAllMethods();
        } else {
          policy.allowMethod(AuthPolicy.HttpVerb.GET, "*");
          policy.allowMethod(AuthPolicy.HttpVerb.POST, "/users/" + verifiedJwt.body.sub);
        }

        context.succeed(policy.build());
      } catch (ex) {
        console.log(ex, ex.stack);
        context.fail("Unauthorized");
      }
    }
  });
};

You can also generate a policy in your code instead of using the provided AuthPolicy object. Valid policies include the principal identifier associated with the token and a named IAM policy that can be cached and used to authorize future API calls with the same token. The principalId will be accessible in the mapping template.

  "principalId": "xxxxxxx", // the principal user identification associated with the token send by the client
  "policyDocument": { // example policy shown below, but this value is any valid policy
    "Version": "2012-10-17",
    "Statement": [
        "Effect": "Allow",
        "Action": [
        "Resource": [

To learn more about the possible options in a policy, see the public access permissions reference for API Gateway. All of the variables that are normally available in IAM policies are also available to custom authorizer policies. For example, you could restrict access using the ${aws:sourceIp} variable. To learn more, see the policy variables reference.
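If you generate the policy in your own code, a minimal helper can assemble the response structure. The sketch below is a stand-in for the blueprint's AuthPolicy object (the function name is ours; `execute-api:Invoke` is the IAM action API Gateway authorizes against):

```python
def build_auth_policy(principal_id, method_arns, effect='Allow'):
    """Assemble the custom authorizer response: the principal associated
    with the token plus an IAM policy that allows (or denies) invocation
    of the given API method ARNs."""
    return {
        'principalId': principal_id,
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': effect,
                'Action': ['execute-api:Invoke'],
                'Resource': list(method_arns),
            }],
        },
    }
```

Because API Gateway caches this document per token, the resource list should cover every method the caller may invoke during the TTL, not just the one that triggered the authorizer.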

Because policies are cached for a configured TTL, API Gateway only invokes your Lambda function the first time it sees a token; all of the calls that follow during the TTL period are authorized by API Gateway using the cached policy.


You can use custom authorizers in API Gateway to support any bearer token. This allows you to authorize access to your APIs using tokens from an OAuth flow or SAML assertions. Further, you can leverage all of the variables available to IAM policies without setting up your API to use IAM authorization.

Custom authorizers are available in the API Gateway console and APIs now, and authorizer Lambda blueprints will follow later today. Get in touch through the API Gateway forum if you have questions or feedback about custom authorizers.

Using API Gateway mapping templates to handle changes in your back-end APIs

by Stefano Buliani | on | in Amazon API Gateway | | Comments

Maitreya Ranganath, AWS Solutions Architect

Changes to APIs are always risky, especially if changes are made in ways that are not backward compatible. In this blog post, we show you how to use Amazon API Gateway mapping templates to isolate your API consumers from API changes. This enables your API consumers to migrate to new API versions on their own schedule.

For an example scenario, we start with a very simple Store Front API with one resource for orders and one GET method. For this example, the API target is implemented in AWS Lambda to keep things simple – but you can of course imagine the back end being your own endpoint.

The structure of the API V1 is:

Method:		GET
Path:		/orders
Query Parameters:
	start = timestamp
	end = timestamp

Output:
[
    {
        "orderId" : string,
        "orderTs" : string,
        "orderAmount" : number
    },
    ...
]

The initial version (V1) of the API was implemented when there were few orders per day. The API was not paginated; if the number of orders that match the query is larger than 5, an error returns. The API consumer must then submit a request with a smaller time range.

The API V1 is exposed through API Gateway and you have several consumers of this API in Production.

After you upgrade the back end, the API developers make a change to support pagination. This makes the API more scalable and allows the API consumers to handle large lists of orders by paging through them with a token. This is a good design change but it breaks backward compatibility. It introduces a challenge because you have a large base of API consumers using V1 and their code can’t handle the changed nesting structure of this response.

The structure of API V2 is:

Method:		GET
Path:		/orders
Query Parameters:
	start =	timestamp
	end =	timestamp
	token =	string (optional)

Output:
{
    "nextToken" : string,
    "orders" : [
        {
            "orderId" : string,
            "orderTs" : string,
            "orderAmount" : number
        },
        ...
    ]
}

Using mapping templates, you can isolate your API consumers from this change: your existing V1 API consumers will not be impacted when you publish V2 of the API in parallel. You want to let your consumers migrate to V2 on their own schedule.
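The isolation works because a response mapping template on the V1 API can flatten the paginated V2 payload back into the shape V1 consumers expect. As a quick illustration of that mapping in Python (a hypothetical helper mirroring what the template does):

```python
def v2_to_v1(v2_response):
    """Flatten the paginated V2 payload back into the V1 list-of-orders
    shape; the nextToken field is dropped because V1 clients cannot
    use it."""
    return [
        {
            'orderId': order['orderId'],
            'orderTs': order['orderTs'],
            'orderAmount': order['orderAmount'],
        }
        for order in v2_response.get('orders', [])
    ]
```

V1 consumers keep receiving a bare array of orders, while V2 consumers call the new API directly and page through results with the token.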

We’ll show you how to do that in this blog post. Let’s get started.

Deploying V1 of the API

To deploy V1 of the API, create a simple Lambda function and expose that through API Gateway:

  1. Sign in to the AWS Lambda console.
  2. Choose Create a Lambda function.
  3. In Step 1: Select blueprint, choose Skip; you’ll enter the details for the Lambda function manually.
  4. In Step 2: Configure function, use the following values:
    • In Name, type getOrders.
    • In Description, type Returns orders for a time-range.
    • In Runtime, choose Node.js.
    • For Code entry type, choose Edit code inline. Copy and paste the code snippet below into the code input box.
MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);
    start = Date.parse(decodeURIComponent(event.start));
    end = Date.parse(decodeURIComponent(event.end));
    if(isNaN(start)) {
        context.fail("Invalid parameter 'start'");
        return;
    }
    if(isNaN(end)) {
        context.fail("Invalid parameter 'end'");
        return;
    }

    duration = end - start;
    if(duration > 5 * MILISECONDS_DAY) {
        context.fail("Too many results, try your request with a shorter duration");
        return;
    }

    // Generate one synthetic order per day in the requested range
    orderList = [];
    count = 0;
    for(d = start; d < end; d += MILISECONDS_DAY) {
        order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        orderList.push(order);
        count += 1;
    }
    console.log('Generated', count, 'orders');
    context.succeed(orderList);
};
    • In Handler, leave the default value of index.handler.
    • In Role, choose Basic execution role or choose an existing role if you’ve created one for Lambda before.
    • In Advanced settings, leave the default values and choose Next.

Finally, review the settings in the next page and choose Create function.

Your Lambda function is now created. You can test it by sending a test event. Enter the following for your test event:

{
  "start": "2015-10-01T00:00:00Z",
  "end": "2015-10-04T00:00:00Z"
}

Check the execution result and log output to see the results of your test.

Next, choose the API endpoints tab and then choose Add API endpoint. In Add API endpoint, use the following values:

  • In API endpoint type, choose API Gateway
  • In API name, type StoreFront
  • In Resource name, type /orders
  • In Method, choose GET
  • In Deployment stage, use the default value of prod
  • In Security, choose Open to allow the API to be publicly accessed
  • Choose Submit to create the API

The API is created and the API endpoint URL is displayed for the Lambda function.

Next, switch to the API Gateway console and verify that the new API appears on the list of APIs. Choose StoreFront to view its details.

To view the method execution details, in the Resources pane, choose GET. Choose Integration Request to edit the method properties.

On the Integration Request details page, expand the Mapping Templates section and choose Add mapping template. In Content-Type, type application/json and choose the check mark to accept.

Choose the edit icon to the right of Input passthrough. From the drop down, choose Mapping template and copy and paste the mapping template text below into the Template input box. Choose the check mark to create the template.

#set($queryMap = $input.params().querystring)
{
#foreach( $key in $queryMap.keySet())
  "$key" : "$queryMap.get($key)"#if( $foreach.hasNext ),#end
#end
}
This step is needed because the Lambda function requires its input as a JSON document. The mapping template takes the query string parameters from the GET request and creates a JSON input document. Mapping templates use Apache Velocity, expose a number of utility functions, and give you access to all of the incoming request's data and context parameters. You can learn more from the mapping template reference page.
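To make the transformation concrete, here is a Node.js sketch of the same logic (illustrative only; API Gateway evaluates the Velocity template above, not JavaScript, and the function name here is hypothetical):

```javascript
// Illustrative equivalent of the Velocity mapping template: turn the
// query string parameters of the GET request into a JSON request body.
function mapQueryStringToJson(querystring) {
    var body = {};
    Object.keys(querystring).forEach(function (key) {
        body[key] = querystring[key];
    });
    return JSON.stringify(body);
}

console.log(mapQueryStringToJson({
    start: '2015-10-01T00:00:00Z',
    end: '2015-10-04T00:00:00Z'
}));
```

The resulting JSON document is what the Lambda function sees as its `event` object.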

Back on the GET method configuration page, in the left pane, choose the GET method and then open the Method Request settings. Expand the URL Query String Parameters section and choose Add query string. In Name, type start and choose the check mark to accept. Repeat the process to create a second parameter named end.

From the GET method configuration page, in the top left, choose Test to test your API. Type the following values for the query string parameters and then choose Test:

  • In start, type 2015-10-01T00:00:00Z
  • In end, type 2015-10-04T00:00:00Z

Verify that the response status is 200 and the response body contains a JSON response with 3 orders.

Now that your test is successful, you can deploy your changes to the production stage. In the Resources pane, choose Deploy API. In Deployment stage, choose prod. In Deployment description, type a description of the deployment, and then choose Deploy.

The prod Stage Editor page appears, displaying the Invoke URL. In the CloudWatch Settings section, choose Enable CloudWatch Logs so you can see logs and metrics from this stage. Keep in mind that CloudWatch logs are charged to your account separately from API Gateway.

You have now deployed an API that is backed by V1 of the Lambda function.

Testing V1 of the API

Now you’ll test V1 of the API with curl and confirm its behavior. First, copy the Invoke URL and add the query parameters ?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z and make a GET invocation using curl.

$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"
[
  {
    "orderId": "order-0",
    "orderTs": "2015-10-01T00:00:00.000Z",
    "orderAmount": 82
  },
  {
    "orderId": "order-1",
    "orderTs": "2015-10-02T00:00:00.000Z",
    "orderAmount": 3
  },
  {
    "orderId": "order-2",
    "orderTs": "2015-10-03T00:00:00.000Z",
    "orderAmount": 75
  }
]

This should output a JSON response with 3 orders. Next, check what happens if you use a longer time-range by changing the end timestamp to 2015-10-15T00:00:00Z:

$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"
{
  "errorMessage": "Too many results, try your request with a shorter duration"
}

You see that the API returns an error indicating the time range is too long. This is correct V1 API behavior, so you are all set.

Updating the Lambda Function to V2

Next, you will update the Lambda function code to V2. This simulates the scenario of the back end of your API changing in a manner that is not backward compatible.

Switch to the Lambda console and choose the getOrders function. In the code input box, copy and paste the code snippet below. Be sure to replace all of the existing V1 code with V2 code.

MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);
    start = Date.parse(decodeURIComponent(event.start));
    end = Date.parse(decodeURIComponent(event.end));
    token = NaN;
    if(event.token) {
        s = new Buffer(event.token, 'base64').toString();
        token = Date.parse(s);
    }

    if(isNaN(start)) {
        context.fail("Invalid parameter 'start'");
        return;
    }
    if(isNaN(end)) {
        context.fail("Invalid parameter 'end'");
        return;
    }
    // A valid token overrides 'start' to resume from the previous page
    if(!isNaN(token)) {
        start = token;
    }

    duration = end - start;
    if(duration <= 0) {
        context.fail("Invalid parameters 'end' must be greater than 'start'");
        return;
    }

    // Generate at most one page (5 orders) per request
    orderList = [];
    count = 0;
    console.log('start=', start, ' end=', end);
    for(d = start; d < end && count < 5; d += MILISECONDS_DAY) {
        order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        orderList.push(order);
        count += 1;
    }

    // If there are more results, return a token for the next page
    nextToken = null;
    if(d < end) {
        nextToken = new Buffer(new Date(d).toISOString()).toString('base64');
    }
    console.log('Generated', count, 'orders');

    result = {
        orders : orderList
    };
    if(nextToken) {
        result.nextToken = nextToken;
    }
    context.succeed(result);
};
Choose Save to save V2 of the code. Then choose Test. Note that the output structure is different in V2 and there is a second level of nesting in the JSON document. This represents the updated V2 output structure that is different from V1.

Next, repeat the curl tests from the previous section. First, do a request for a short time duration. Notice that the response structure is nested differently from V1 and this is a problem for our API consumers that expect V1 responses.

$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"
{
  "orders": [
    {
      "orderId": "order-0",
      "orderTs": "2015-10-01T00:00:00.000Z",
      "orderAmount": 8
    },
    {
      "orderId": "order-1",
      "orderTs": "2015-10-02T00:00:00.000Z",
      "orderAmount": 92
    },
    {
      "orderId": "order-2",
      "orderTs": "2015-10-03T00:00:00.000Z",
      "orderAmount": 84
    }
  ]
}
Now, repeat the request for a longer time range and you’ll see that instead of an error message, you now get the first page of information with 5 orders and a nextToken that will let you request the next page. This is the paginated behavior of V2 of the API.

$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"
{
  "orders": [
    {
      "orderId": "order-0",
      "orderTs": "2015-10-01T00:00:00.000Z",
      "orderAmount": 62
    },
    {
      "orderId": "order-1",
      "orderTs": "2015-10-02T00:00:00.000Z",
      "orderAmount": 59
    },
    {
      "orderId": "order-2",
      "orderTs": "2015-10-03T00:00:00.000Z",
      "orderAmount": 21
    },
    {
      "orderId": "order-3",
      "orderTs": "2015-10-04T00:00:00.000Z",
      "orderAmount": 95
    },
    {
      "orderId": "order-4",
      "orderTs": "2015-10-05T00:00:00.000Z",
      "orderAmount": 84
    }
  ],
  "nextToken": "MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa"
}
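The nextToken is not magic: as the V2 function shows, it is simply the base64-encoded ISO timestamp of the first order on the next page. You can decode the token above to see where the next page would start (a small Node.js sketch; a real consumer would pass this value back in the token query parameter):

```javascript
// Decode the nextToken returned by the paginated V2 API.
// It is a base64-encoded ISO-8601 timestamp for the next page's start.
var nextToken = 'MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa';
var nextStart = Buffer.from(nextToken, 'base64').toString();
console.log(nextStart); // 2015-10-06T00:00:00.000Z
```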

It is clear from these tests that V2 will break the current V1 consumers' code. Next, we show how to isolate your V1 consumers from this change using API Gateway mapping templates.

Cloning the API

Because you want both V1 and V2 of the API to be available simultaneously to your API consumers, you first clone the API to create a V2 API. You then modify the V1 API to make it behave as your V1 consumers expect.

Go back to the API Gateway console, and choose Create API. Configure the new API with the following values:

  • In API name, type StoreFrontV2
  • In Clone from API, choose StoreFront
  • In Description, type a description
  • Choose Create API to clone the StoreFront API as StoreFrontV2

Open the StoreFrontV2 API and choose the GET method of the /orders resource. Next, choose Integration Request. Choose the edit icon next to the getOrders Lambda function name.

Keep the name as getOrders and choose the check mark to accept. In the pop up, choose OK to allow the StoreFrontV2 to invoke the Lambda function.

Once you have granted API Gateway permissions to access your Lambda function, choose Deploy API. In Deployment stage, choose New stage. In Stage name, type prod, and then choose Deploy. Now you have a new StoreFrontV2 API that invokes the same Lambda function. Confirm that the API has V2 behavior by testing it with curl. Use the Invoke URL for the StoreFrontV2 API instead of the previously used Invoke URL.

Update the V1 of the API

Now you will use mapping templates to update the original StoreFront API to preserve V1 behavior. This enables existing consumers to continue to consume the API without having to make any changes to their code.

Navigate to the API Gateway console, choose the StoreFront API and open the GET method of the /orders resource. On the Method Execution details page, choose Integration Response.

Expand the default response mapping (HTTP status 200), and expand the Mapping Templates section. Choose Add Mapping Template.

In Content-type, type application/json and choose the check mark to accept. Choose the edit icon next to Output passthrough to edit the mapping templates. Select Mapping template from the drop down and copy and paste the mapping template below into the Template input box.

#set($nextToken = $input.path('$.nextToken'))
#if($nextToken && $nextToken.length() != 0)
{
  "errorMessage" : "Too many results, try your request with a shorter duration"
}
#else
$input.json('$.orders')
#end
Choose the check mark to accept and save. The mapping template transforms the V2 output from the Lambda function into the original V1 response. The mapping template also generates an error if the V2 response indicates that there are more results than can fit in one page. This emulates V1 behavior.
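The effect of the template can be expressed as a Node.js sketch (illustrative only; API Gateway evaluates the Velocity template, not JavaScript, and the function name here is hypothetical):

```javascript
// Illustrative equivalent of the Integration Response mapping: if the V2
// payload is paginated, surface the V1-style error; otherwise unwrap the
// orders array so the response matches the flat V1 shape.
function emulateV1(v2Response) {
    if (v2Response.nextToken && v2Response.nextToken.length !== 0) {
        return { errorMessage: "Too many results, try your request with a shorter duration" };
    }
    return v2Response.orders;
}

console.log(emulateV1({ orders: [{ orderId: 'order-0' }] }));
console.log(emulateV1({ orders: [], nextToken: 'abc' }));
```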

Finally, choose Save on the response mapping page. Deploy the StoreFront API, choosing prod as the stage for your changes.

Verify V1 behavior

Now that you have updated the original API to emulate V1 behavior, you can verify that using curl again. You will essentially repeat the tests from the earlier section. First, confirm that you have the Invoke URL for the original StoreFront API. You can always find the Invoke URL by looking at the stage details for the API.

Try a test with a short time range.

$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

    "orderId": "order-0",
    "orderTs": "2015-10-01T00:00:00.000Z",
    "orderAmount": 50
    "orderId": "order-1",
    "orderTs": "2015-10-02T00:00:00.000Z",
    "orderAmount": 16
    "orderId": "order-2",
    "orderTs": "2015-10-03T00:00:00.000Z",
    "orderAmount": 14

Try a test with a longer time range and note that the V1 behavior of returning an error is recovered.

$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

  "errorMessage": "Too many results, try your request with a shorter duration"

Congratulations, you have successfully used Amazon API Gateway mapping templates to expose both V1 and V2 versions of your API, allowing your API consumers to migrate to V2 on their own schedule.

Be sure to delete the two APIs and the AWS Lambda function that you created for this walkthrough to avoid being charged for their use.

Using API Gateway stage variables to manage Lambda functions

by Stefano Buliani | in Amazon API Gateway

Ed Lima, Cloud Support Engineer

There’s a new feature on Amazon API Gateway called stage variables. Stage variables act like environment variables and can be used to change the behavior of your API Gateway methods for each deployment stage; for example, making it possible to reach a different back end depending on which stage the API is running on. This blog post will demonstrate how to use stage variables with two different AWS Lambda functions.

For this example we will use the sample functions from the Lambda Walkthrough. Sign in to the AWS Management Console, open the Lambda console, and create the required functions (make sure you're using the appropriate IAM execution role).

GetHelloWorld:

console.log('Loading event');

exports.handler = function(event, context) {
  context.done(null, {"Hello":"World"});  // SUCCESS with message
};


GetHelloWithName:

console.log('Loading event');
exports.handler = function(event, context) {
  var name = ( === undefined ? 'No-Name' :;
  context.done(null, {"Hello":name}); // SUCCESS with message
};

In the API Gateway console, create a new API called LambdaVar:

In the root resource, create a new GET method. In Integration type for the new method, choose Lambda Function, then select your Lambda Region, and type ${stageVariables.lbfunc} in the Lambda Function field. This tells API Gateway to read the value for this field from a stage variable at runtime:

The console detects the stage variable and displays the Add Permission to Lambda Function message:

Next, you manually give permissions to your Lambda functions, using the AWS CLI. This enables API Gateway to execute the functions. The CLI command must be issued with credentials that have permission to call the “add-permission” action of the Lambda APIs. The output from the AWS CLI will contain the policy statement that was set on the Lambda function resource policies.

aws lambda add-permission --function-name arn:aws:lambda:us-west-2:XXXXXXXXXXXXX:function:GetHelloWithName --source-arn arn:aws:execute-api:us-west-2:XXXXXXXXXXXXX:y91j2l4bnd/*/GET/ --principal --statement-id 95486b16-7d8a-4aca-9322-5f883ab702a6 --action lambda:InvokeFunction

# expected output
{
    "Statement": "{\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:us-west-2:XXXXXXXXXXXX:y91j2l4bnd/*/GET/\"}},\"Action\":[\"lambda:InvokeFunction\"],\"Resource\":\"arn:aws:lambda:us-west-2:XXXXXXXXXXXX:function:GetHelloWithName\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"\"},\"Sid\":\"95486b16-7d8a-4aca-9322-5f883ab702a6\"}"
}

aws lambda add-permission --function-name arn:aws:lambda:us-west-2:XXXXXXXXXXXXX:function:GetHelloWorld --source-arn arn:aws:execute-api:us-west-2:XXXXXXXXXXXXX:y91j2l4bnd/*/GET/ --principal --statement-id 95486b16-7d8a-4aca-9322-5f883ab702a6 --action lambda:InvokeFunction

# expected output
{
    "Statement": "{\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:us-west-2:XXXXXXXXXXXXX:y91j2l4bnd/*/GET/\"}},\"Action\":[\"lambda:InvokeFunction\"],\"Resource\":\"arn:aws:lambda:us-west-2:XXXXXXXXXXXXX:function:GetHelloWorld\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"\"},\"Sid\":\"95486b16-7d8a-4aca-9322-5f883ab702a6\"}"
}

Back in the console, you can now create your first stage. Choose Deploy API. In Stage name, type dev. In Stage description, type a description for your new stage, and then choose Deploy.

After the API deploys, on the Stage Editor page, choose the Stage Variables tab and add the stage variable from your API configuration, lbfunc. As you can see from the screenshot, the value assigned to the new stage variable is the name of the Lambda function we want to invoke:

The second Lambda function, GetHelloWithName, can also receive a name parameter. You can configure the API to read the incoming parameter from the query string and pass it to the JSON body for the Lambda function by using mapping templates. To do this, go back to the GET method and choose Integration Request. Under Mapping Templates, add the following mapping template for the application/json content type:

{ "name": "$input.params('name')" }

To apply the change, deploy the API to a new stage called prod:

Next, set up the stage variable in the new deployment stage to point to the second Lambda function, GetHelloWithName:

Now you are ready to test!

The dev stage invoke URL directs you to the GetHelloWorld Lambda function:

The prod stage invoke URL with the appropriate query string directs you to the GetHelloWithName Lambda function and returns a value:

If you try to use the query string on the first stage, the query string will simply be ignored because the Lambda function is not configured to handle the parameter:

There it is: a nice way to optimize your Amazon API Gateway resources by using a single method with 2 different stages that use 2 different Lambda functions.

Alternatively, you can mix and match static names with stage variables in the integrations. For example, instead of having 2 different Lambda functions you can have a single Lambda function with multiple versions and aliases. Then, in the integration setup, you can simply use a stage variable to point to the correct alias. For instance, to use an alias of one of the Lambda functions above, type GetHelloWithName:${stageVariables.lambdaAlias} in the Lambda Function field of the integration:

In your stage, add the lambdaAlias stage variable accordingly:

This variable will refer to the Lambda alias of your function:

As you can see, the new stage variables feature enables you to dynamically access different back ends, using fewer configuration steps and resources/methods in your API Gateway. The variables add even more flexibility to stages when deploying your API, which can enable different use cases in your environments.