Category: AWS Lambda
Surviving the Zombie Apocalypse with Serverless Microservices
Run Apps without the Bite!
by: Kyle Somers – Associate Solutions Architect
Let’s face it, managing servers is a pain! Capacity management and scaling is even worse. Now imagine dedicating your time to SysOps during a zombie apocalypse — barricading the door from flesh eaters with one arm while patching an OS with the other.
This sounds like something straight out of a nightmare. Lucky for you, this doesn’t have to be the case. Over at AWS, we’re making it easier than ever to build and power apps at scale with powerful managed services, so you can focus on your core business – like surviving – while we handle the infrastructure management that helps you do so.
Join the AWS Lambda Signal Corps!
At AWS re:Invent in 2015, we piloted a workshop where participants worked in groups to build a serverless chat application for zombie apocalypse survivors, using Amazon S3, Amazon DynamoDB, Amazon API Gateway, and AWS Lambda. Participants learned about microservices design patterns and best practices. They then extended the functionality of the serverless chat application with various add-on functionalities – such as mobile SMS integration, and zombie motion detection – using additional services like Amazon SNS and Amazon Elasticsearch Service.
Given the widespread interest in serverless architectures and AWS Lambda among our customers, we've recognized the excitement around this subject. Therefore, we are happy to announce that we'll be taking this event on the road in the U.S. and abroad to recruit new developers for the AWS Lambda Signal Corps!
Help us save humanity! Learn More and Register Here!
Washington, DC | March 10 – Mission Accomplished!
San Francisco, CA @ AWS Loft | March 24 – Mission Accomplished!
New York City, NY @ AWS Loft | April 13 – Mission Accomplished!
London, England @ AWS Loft | April 25
San Francisco, CA @ AWS Loft | August 16
New York City, NY @ AWS Loft | August 18
If you’re unable to join us at one of these workshops, that’s OK! In this post, I’ll show you how our survivor chat application incorporates some important microservices design patterns and how you can power your apps in the same way using a serverless architecture.
What Are Serverless Architectures?
At AWS, we know that infrastructure management can be challenging. We also understand that customers prefer to focus on delivering value to their business and customers. There's a lot of undifferentiated heavy lifting in building and running applications, such as installing software, managing servers, coordinating patch schedules, and scaling to meet demand. Serverless architectures allow you to build and run applications and services without having to manage infrastructure. Your application still runs on servers, but all the server management is done for you by AWS. Serverless architectures can make it easier to build, manage, and scale applications in the cloud by eliminating much of the heavy lifting involved with server management.
Key Benefits of Serverless Architectures
- No Servers to Manage: There are no servers for you to provision and manage. All the server management is done for you by AWS.
- Increased Productivity: You can now fully focus your attention on building new features and apps because you are freed from the complexities of server management, allowing you to iterate faster and reduce your development time.
- Continuous Scaling: Your applications and services automatically scale up and down based on the size of the workload.
What Should I Expect to Learn at a Zombie Microservices Workshop?
The workshop content we developed is designed to demonstrate best practices for serverless architectures using AWS. In this post we’ll discuss the following topics:
- Which services are useful when designing a serverless application on AWS (see below!)
- Design considerations for messaging, data transformation, and business or app-tier logic when building serverless microservices.
- Best practices demonstrated in the design of our zombie survivor chat application.
- Next steps for you to get started building your own serverless microservices!
Several AWS services were used to design our zombie survivor chat application. Each of these services is managed and highly scalable. Let's take a quick look at which ones we incorporated into the architecture:
- AWS Lambda allows you to run your code without provisioning or managing servers. Just upload your code (currently Node.js, Python, or Java) and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. Lambda is used to power many use cases, such as application back ends, scheduled administrative tasks, and even big data workloads via integration with other AWS services such as Amazon S3, DynamoDB, Redshift, and Kinesis.
- Amazon Simple Storage Service (Amazon S3) is our object storage service, which provides developers and IT teams with secure, durable, and scalable storage in the cloud. S3 is used to support a wide variety of use cases and is easy to use with a simple interface for storing and retrieving any amount of data. In the case of our survivor chat application, it can even be used to host static websites with CORS and DNS support.
- Amazon API Gateway makes it easy to build RESTful APIs for your applications. API Gateway is scalable and simple to set up, allowing you to build integrations with back-end applications, including code running on AWS Lambda, while the service handles the scaling of your API requests.
- Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
Overview of the Zombie Survivor Chat App
The survivor chat application represents a completely serverless architecture that delivers a baseline chat application (written using AngularJS) to workshop participants, on which additional functionality can be built. In order to deliver this baseline chat application, an AWS CloudFormation template is provided to participants, which spins up the environment in their account. The following diagram represents a high-level architecture of the components that are launched automatically:
High-Level Architecture of Survivor Serverless Chat App
- Amazon S3 bucket is created to store the static web app contents of the chat application.
- AWS Lambda functions are created to serve as the back-end business logic tier for processing reads/writes of chat messages.
- API endpoints are created using API Gateway and mapped to Lambda functions. The API Gateway POST method points to a WriteMessages Lambda function (a minimal sketch of such a function follows this list). The GET method points to a GetMessages Lambda function.
- A DynamoDB messages table is provisioned to act as our data store for the messages from the chat application.
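To make the mapping concrete, here is a minimal sketch of what a WriteMessages-style function could look like. This is not the workshop's actual code; the table name, key schema, and message attributes are assumptions for illustration only:
'use strict';
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();
// Hypothetical handler behind the API Gateway POST method.
exports.handler = function(event, context, callback) {
    var params = {
        TableName: 'Messages',              // assumed table name
        Item: {
            channel: 'default',             // assumed partition key
            timestamp: Date.now(),          // assumed sort key
            name: event.name,               // survivor name from the request body
            message: event.message          // chat text from the request body
        }
    };
    docClient.put(params, function(err) {
        if (err) {
            callback(err);
        } else {
            callback(null, { status: 'message stored' });
        }
    });
};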
Serverless Survivor Chat App Hosted on Amazon S3
With the CloudFormation stack launched and the components built out, the end result is a fully functioning chat app hosted in S3, using API Gateway and Lambda to process requests, and DynamoDB as the persistence layer for our chat messages.
With this baseline app, participants join in teams to build out additional functionality, including the following:
- Integration of SMS/MMS via Twilio. Send messages to chat from SMS.
- Motion sensor detection of nearby zombies with Amazon SNS and Intel® Edison and Grove IoT Starter Kit. AWS provides a shared motion sensor for the workshop, and you consume its messages from SNS.
- Help-me panic button with IoT.
- Integration with Slack for messaging from another platform.
- Typing indicator to see which survivors are typing.
- Serverless analytics of chat messages using Amazon Elasticsearch Service (Amazon ES).
- Any other functionality participants can think of!
As a part of the workshop, AWS provides guidance for most of these tasks. With these add-ons completed, the architecture of the chat system begins to look quite a bit more sophisticated, as shown below:
Architecture of Survivor Chat with Additional Add-on Functionality
Architectural Tenets of the Serverless Survivor Chat
For the most part, the design patterns you'd find in a traditional server-based environment also apply in a serverless environment. No surprises there. With that said, it never hurts to revisit best practices while learning new ones. So let's review some key patterns we incorporated in our serverless application.
Decoupling Is Paramount
In the survivor chat application, Lambda functions serve as our business logic tier. Since users interact with Lambda at the function level, it serves you well to split up logic into separate functions as much as possible so you can scale the logic tier independently of the sources and destinations it serves.
As you’ll see in the architecture diagram in the above section, the application has separate Lambda functions for the chat service, the search service, the indicator service, etc. Decoupling is also incorporated through the use of API Gateway, which exposes our back-end logic via a unified RESTful interface. This model allows us to design our back-end logic with potentially different programming languages, systems, or communications channels, while keeping the requesting endpoints unaware of the implementation. Use this pattern and you won’t cry for help when you need to scale, update, add, or remove pieces of your environment.
Separate Your Data Stores
Treat each data store as an isolated application component of the service it supports. One common pitfall when following microservices architectures is to forget about the data layer. By keeping the data stores specific to the service they support, you can better manage the resources needed at the data layer specifically for that service. This is the true value in microservices.
In the survivor chat application, this practice is illustrated with the Activity and Messages DynamoDB tables. The activity indicator service has its own data store (Activity table) while the chat service has its own (Messages). These tables can scale independently along with their respective services. This scenario also represents a good example of statelessness: the typing indicator add-on keeps its state in DynamoDB via the Activity table, which tracks which users are typing, rather than inside the functions themselves. Remember, many of the benefits of microservices are lost if the components are still all glued together at the data layer in the end, creating a messy common denominator for scaling.
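As a rough illustration of this separation (not the workshop's code; the key names are assumptions), the typing indicator might persist its state with a small function that touches only the Activity table:
'use strict';
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();
// Record that a survivor is currently typing; the chat service never reads this table.
exports.handler = function(event, context, callback) {
    var params = {
        TableName: 'Activity',              // the indicator service's own data store
        Item: {
            user: event.name,               // assumed partition key
            lastTyped: Date.now()           // timestamp the GET side can filter on
        }
    };
    docClient.put(params, function(err) {
        callback(err, err ? null : { status: 'ok' });
    });
};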
Leverage Data Transformations up the Stack
When designing a service, data transformation and compatibility are big components. How will you handle inputs from many different clients, users, and systems for your service? Will you run different flavors of your environment to correspond with different incoming request standards? Absolutely not!
With API Gateway, data transformation becomes significantly easier through built-in models and mapping templates. With these features you can build data transformation and mapping logic into the API layer for requests and responses. This results in less work for you since API Gateway is a managed service. In our case, AWS Lambda and the survivor chat app require JSON, while Twilio likes XML for the SMS integration. This type of transformation can be offloaded to API Gateway, leaving you with a cleaner business tier and one less thing to design around!
Use API Gateway as your interface and Lambda as your common backend implementation. API Gateway uses Apache Velocity Template Language (VTL) and JSONPath for transformation logic. Of course, there is a trade-off to be considered, as a lot of transformation logic could be handled in your business-logic tier (Lambda). But, why manage that yourself in application code when you can transparently handle it in a fully managed service through API Gateway? Here are a few things to keep in mind when handling transformations using API Gateway and Lambda:
- Transform first; then call your common back-end logic.
- Use API Gateway VTL transformations first when possible.
- Use Lambda to preprocess data in ways that VTL can’t.
Using API Gateway VTL for Input/Output Data Transformations
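When a transformation is too involved for a mapping template, the "transform first, then call your common back-end logic" pattern can live in a thin Lambda wrapper instead. The sketch below is illustrative only; the normalizeRequest helper and the field names it inspects are assumptions, not part of the workshop code:
'use strict';
// Hypothetical normalizer: map differently shaped inputs (say, an SMS-style
// payload versus the web client's JSON) onto the one shape the core logic expects.
function normalizeRequest(event) {
    if (event.Body && event.From) {                 // assumed SMS-style fields
        return { name: event.From, message: event.Body };
    }
    return { name: event.name, message: event.message };
}
// Common back-end logic stays unaware of where the request came from.
function writeMessage(msg, callback) {
    // ... shared persistence logic would go here ...
    callback(null, { status: 'stored', message: msg });
}
exports.handler = function(event, context, callback) {
    writeMessage(normalizeRequest(event), callback);
};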
Security Through Service Isolation and Least Privilege
As a general recommendation when designing your services, always utilize least privilege and isolate components of your application to provide control over access. In the survivor chat application, a permissions-based model is used via AWS Identity and Access Management (IAM). IAM is integrated with every service on the AWS platform and lets services and applications assume roles with strict permission sets scoped to only the access they need. Along with access controls, you should implement audit and access logging to provide the best visibility into your microservices. This is made easy with Amazon CloudWatch Logs and AWS CloudTrail. CloudTrail enables auditing of API calls made on the platform, while CloudWatch Logs enables you to ship custom log data to AWS. Although our implementation of Amazon Elasticsearch Service in the survivor chat is used for analyzing chat messages, you can easily ship your log data to it and perform analytics on your application. You can incorporate security best practices in the following ways with the survivor chat application:
- Each Lambda function should have an IAM role to access only the resources it needs. For example, the GetMessages function can read from the Messages table while the WriteMessages function can write to it. But they cannot access the Activities table that is used to track who is typing for the indicator service.
- Each API Gateway endpoint must have IAM permissions to execute the Lambda function(s) it is tied to. This model ensures that Lambda is only executed from the principal that is allowed to execute it, in this case the API Gateway method that triggers the back-end function.
- DynamoDB requires read/write permissions via IAM, which limits anonymous database activity.
- Use AWS CloudTrail to audit API activity on the platform and among the various services. This provides traceability, especially to see who is invoking your Lambda functions.
- Design Lambda functions to publish meaningful outputs, as these are logged to CloudWatch Logs on your behalf.
FYI, in our application, we allow anonymous access to the chat API Gateway endpoints. We want to encourage all survivors to plug into the service without prior registration and start communicating. We've assumed zombies aren't intelligent enough to hack into our communication channels. Until the apocalypse, though, stick with API keys and signature-based authorization, which API Gateway supports!
Don’t Abandon Dev/Test
When developing with microservices, you can still leverage separate development and test environments as a part of the deployment lifecycle. AWS provides several features to help you continue building apps along the same trajectory as before, including these:
- Lambda function versioning and aliases: Use these features to version your functions based on the stages of deployment such as development, testing, staging, pre-production, etc., or to make changes to an existing Lambda function in production without downtime (see the short SDK sketch below).
- Lambda service blueprints: Lambda comes with dozens of blueprints to get you started with prewritten code that you can use as a skeleton, or a fully functioning solution, to complete your serverless back end. These include blueprints with hooks into Slack, S3, DynamoDB, and more.
- API Gateway deployment stages: Similar to Lambda versioning, this feature lets you configure separate API stages, along with unique stage variables and deployment versions within each stage. This allows you to test your API with the same or different back ends while it progresses through changes that you make at the API layer.
- Mock Integrations with API Gateway: Configure dummy responses that developers can use to test their code while the true implementation of your API is being developed. Mock integrations make it faster to iterate through the API portion of a development lifecycle by streamlining pieces that used to be very sequential/waterfall.
Using Mock Integrations with API Gateway
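As an aside, versions and aliases can be managed programmatically as well as from the console. The snippet below is a minimal sketch using the AWS SDK for Node.js; the function and alias names are placeholders:
'use strict';
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });
// Publish a new immutable version of the function, then point a
// 'staging' alias at it (names are placeholders).
lambda.publishVersion({ FunctionName: 'WriteMessages' }, function(err, version) {
    if (err) { return console.error(err); }
    lambda.createAlias({
        FunctionName: 'WriteMessages',
        Name: 'staging',
        FunctionVersion: version.Version
    }, function(aliasErr, alias) {
        if (aliasErr) { return console.error(aliasErr); }
        console.log('staging now points at version', alias.FunctionVersion);
    });
});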
Stay Tuned for Updates!
Now that you've got the necessary best practices to design your microservices, do you have what it takes to fight against the zombie horde? The serverless options we explored are ready for you to get started with and the survivors are counting on you!
Be sure to keep an eye on the AWS GitHub repo. Although I didn’t cover each component of the survivor chat app in this post, we’ll be deploying this workshop and code soon for you to launch on your own! Keep an eye out for Zombie Workshops coming to your city, or nominate your city for a workshop here.
For more information on how you can get started with serverless architectures on AWS, refer to the following resources:
Whitepaper – AWS Serverless Multi-Tier Architectures
Reference Architectures and Sample Code
*Special thanks to my colleagues Ben Snively, Curtis Bray, Dean Bryen, Warren Santner, and Aaron Kao at AWS. They were instrumental to our team developing the content referenced in this post.
Simply Serverless: Using AWS Lambda to Expose Custom Cookies with API Gateway
Simply Serverless
Welcome to a new series on quick and simple hacks, tips, tricks, and common use cases for AWS Lambda and Amazon API Gateway. As always, I'm listening to readers (@listonb), so if you have any questions, comments, or tips you'd like to see, let me know!
This is a guest post by Jim Warner from Survata.
This first tip describes how Survata uses Lambda to drop a new cookie on API Gateway requests. Learn more about how Survata is using this during Serverless Day at the San Francisco Loft on April 28th. Register for Serverless Day now.
Step 1: Return a cookie ID from Lambda
This walkthrough assumes you have gone through the Hello World API Gateway Getting Started Guide code.
Expand upon the “Hello World” example and update it as follows:
'use strict';
exports.handler = function(event, context) {
console.log({'Cookie': event['Cookie']}); // log any cookie passed in on the request
var date = new Date();
// Get Unix milliseconds at current time plus 365 days
date.setTime(+date + (365 * 86400000)); // 86400000 = 24 * 60 * 60 * 1000 ms per day
var cookieVal = Math.random().toString(36).substring(7); // Generate a random cookie string
var cookieString = "myCookie="+cookieVal+"; domain=my.domain; expires="+date.toGMTString()+";";
context.done(null, {"Cookie": cookieString});
};
This makes a random string and returns it in JSON format as a proper HTTP cookie string. The result from the Lambda function is as follows:
{"Cookie": "myCookie=t81e70kke29; domain=my.domain; expires=Wed, 19 Apr 2017 20:41:27 GMT;"}
Step 2: Set the cookie in API Gateway
In the API Gateway console, go to the GET Method page and choose Method Response. Expand the default 200 HTTP status, and choose Add Header. Add a new header called “Set-Cookie.”
On the GET Method page, choose Integration Response. Under the Header Mappings section of the default 200 HTTP status, choose the pencil icon to edit the “Set-Cookie” header. In the mapping value section, put:
integration.response.body.Cookie
Make sure to save the header by choosing the check icon!
For a real production deployment, use a body mapping template to return only the parts of the JSON that you want to expose (so the cookie data wouldn’t show here).
Deploying both the Lambda function and API Gateway gets you up and cookie-ing.
How to turn Node.js projects into AWS Lambda microservices easily with ClaudiaJS
This is a guest post by Gojko Adzic, creator of ClaudiaJS
While working on MindMup 2.0, we started moving parts of our API and back-end infrastructure from Heroku to AWS Lambda. The first Lambda function we created required a shell script of about 120 lines of AWS command-line calls to properly set up, and the second one had a similar number with just minor tweaks. Instead of duplicating this work for each service, we decided to create an open-source tool that can handle the deployment process for us.
Enter Claudia.js: an open-source deployment tool for Node.js microservices that makes getting started with AWS Lambda and Amazon API Gateway very easy for JavaScript developers.
Claudia takes care of AWS deployment workflows, simplifying and automating many error-prone tasks, so that you can focus on solving important business problems rather than worrying about infrastructure code. Claudia sets everything up the way JavaScript developers expect out of the box, and significantly shortens the learning curve required to get Node.js projects running inside Lambda.
Hello World
Here’s a quick ‘hello world’ example.
Create a directory and initialize a new NPM project:
npm init
Next, create app.js with the following code:
var ApiBuilder = require('claudia-api-builder'),
api = new ApiBuilder();
module.exports = api;
api.get('/hello', function () {
return 'hello world';
});
Add the Claudia API Builder as a project dependency:
npm install claudia-api-builder --save
Finally, install Claudia.js in your global path:
npm install -g claudia
That’s pretty much it. You can now install your new microservice in AWS by running the following command:
claudia create --region us-east-1 --api-module app
In a few moments, Claudia will respond with the details of the newly-installed Lambda function and REST API.
{
"lambda": {
"role": "test-executor",
"name": "test",
"region": "us-east-1"
},
"api": {
"id": "8x7uh8ho5k",
"module": "app",
"url": "https://8x7uh8ho5k.execute-api.us-east-1.amazonaws.com/latest"
}
}
The result contains the root URL of your new API Gateway resource. Claudia automatically created an endpoint resource for /hello, so just add /hello to the URL and try it out in a browser or from the console. You should see the 'hello world' response.
That’s it! Your first Claudia-deployed Lambda function and API Gateway endpoint is now live on AWS!
What happened in the background?
In the background, Claudia.js executed the following steps:
- Created a copy of the project.
- Packaged all the NPM dependencies.
- Tested that the API is deployable.
- Zipped up your application and deployed it to Lambda.
- Created the correct IAM access privileges.
- Configured an API Gateway endpoint with the /hello resource.
- Linked the new resource to the previously-deployed Lambda function.
- Installed the correct API Gateway transformation templates.
Finally, it saved the resulting configuration into a local file (claudia.json), so that you can easily update the function without remembering any of those details.
Try this next:
Install the superb module as a project dependency:
npm install superb --save
Add a new endpoint to the API by appending these lines to app.js:
api.get('/greet', function (request) {
var superb = require('superb');
return request.queryString.name + ' is ' + superb();
});
You can now update your existing deployed application by executing the following command:
claudia update
When the deployment completes, try out the new endpoint by adding /greet?name= followed by your name.
Benefits of using Claudia
Claudia significantly reduces the learning curve for deploying and managing serverless-style applications, REST APIs, and event-driven microservices. Developers can use Lambda and API Gateway in a way that is similar to popular lightweight JavaScript web frameworks.
All the query string arguments are immediately available to your function in the request.queryString object. HTTP form POST variables are in request.post, and any JSON, XML, or text content posted as the raw request body is in request.body.
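For example, a handler appended to the same app.js can pick these fields straight off the request object (the endpoint name here is made up):
api.post('/echo', function (request) {
    return {
        fromQueryString: request.queryString.name, // e.g. /echo?name=Bob
        fromFormPost: request.post,                // HTML form fields, if any
        rawOrParsedBody: request.body              // JSON, XML, or text body
    };
});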
Asynchronous processes are also easy; just return a Promise from the API endpoint handler, and Claudia waits until the promise resolves before responding to the caller. You can use any A+ promise-compliant library, including the promises supported out of the box by the new AWS Lambda Node.js 4.3.2 runtime.
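A minimal asynchronous endpoint might therefore look like this (the one-second delay is purely for illustration):
api.get('/later', function () {
    // Claudia responds to the caller only after this promise resolves.
    return new Promise(function (resolve) {
        setTimeout(function () {
            resolve('done waiting');
        }, 1000);
    });
});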
To make serverless-style applications easier to set up, Claudia automatically enables cross-origin resource sharing (CORS), so a client browser can call your new API directly, even from a different domain. By default, all errors trigger a 500 HTTP response code, so your API works well with most AJAX libraries. You can, of course, easily customize the API endpoints to return a different content type or HTTP response code, or include additional headers. For more information, see the Claudia API Builder documentation.
Conclusion
Claudia helps people get started quickly, and easily migrate existing, self-hosted or third-party-hosted APIs into Lambda. Because it’s not opinionated and does not require a particular structure or way of working, teams can easily start chopping pieces of existing infrastructure and gradually moving it over. For more information visit the git repository for Sample ClaudiaJS Projects.
Building Enterprise Level Web Applications on AWS Lambda with the DEEP Framework
This is a guest post by Eugene Istrati, the co-creator of the DEEP Framework, a full-stack web framework that enables developers to build cloud-native applications using microservices architecture.
From the beginning, Mitoc Group has been building web applications for enterprise customers. We are a small group of developers who are helping customers with their entire web development process, from conception through execution and down to maintenance. Being in the business of doing everything is very hard, and it would be impossible without using AWS foundational services, but we incrementally needed more. That is why we became early adopters of the serverless computing approach and developed an ecosystem called Digital Enterprise End-to-end Platform (DEEP) with AWS Lambda at the core.
In this post, we dive deeper into how DEEP is using AWS Lambda to empower developers to build cloud-native applications or platforms using microservices architecture. We will walk you through the process of identifying the front-end, back-end and data tiers required to build web applications with AWS Lambda at the core. We will focus on the structure of the AWS Lambda functions we use, as well as security, performance and benchmarking steps that we take to build enterprise-level web applications.
Enterprise-level web applications
Our approach to web development is full-stack and user-driven, focused on UI (the user interface) and UX (the user experience). Before going into the details, we'd like to emphasize the strategic (biased and opinionated) decisions we made early on:
- We don’t say “no” to customers; every problem is seriously evaluated and sometimes we offer options that involve our direct competitors.
- We are developers and we focus only on the application level; everything else (platform level and infrastructure level) must be managed by AWS.
- We focus 20% of our effort on solving 80% of the workload; everything must be automated and pushed to the service side rather than the client side.
To be honest and fair, it doesn’t work all the time as expected, but it does help us to learn fast and move quickly, sustainably and incrementally solving business problems through technical solutions that really matter. However, the definition of “really matter” differs from customer to customer, quite uniquely in some cases.
Nevertheless, what we have learned from our customers is that enterprise-level web applications must provide the following common expectations:
- Be secure — security enforced through managed identity and access services (e.g., AWS IAM, Amazon Cognito)
- Be compliant — governance-focused, audit-friendly service features with applicable compliance or audit standards
- Be reliable — service level agreements (e.g. Amazon S3, Amazon CloudFront)
- Be performant — studies show that page loads longer than 2s start impacting users' behavior
- Be pluggable — a successful enterprise ecosystem is mainly driven by fully integrated web applications inside organizations
- Be cost-efficient — benefit from the AWS Free Tier, and pay only for the services that you use, when you use them
- Be scalable — the serverless approach relies on abstracted services that are pre-scaled to AWS size, whatever that would be
Architecture
This post describes how we transformed a self-managed task management application (aka todo app) in minutes. The original version can be seen on www.todomvc.com and the original code can be downloaded from https://github.com/tastejs/todomvc/tree/master/examples/angularjs.
The architecture of every web application we build or transform, including the one described above, is similar to the reference architecture of the realtime voting application published recently by AWS on GitHub.
The todo app is written in AngularJS and deployed on Amazon S3, behind Amazon CloudFront (front-end). Task management is processed by AWS Lambda, optionally behind Amazon API Gateway (back-end). Task metadata is stored in Amazon DynamoDB (data tier). The transformed todo app, along with instructions on how to install and deploy this web application, is described in the Building Scalable Web Apps with AWS Lambda and Home-Grown Serverless blog post and the todo code is available on GitHub.
Let’s look at AWS Lambda functions and the value proposition they offer to us and our customers.
AWS Lambda functions
The goal of the todo app is to manage tasks in a self-service mode. End users can view tasks, create new tasks, mark or unmark a task as done, and clear completed tasks. From the UI point of view, that leads to four user interactions that require different back-end calls:
- web service that retrieves tasks
- web service that creates tasks
- web service that deletes tasks
- web service that updates tasks
A simple reordering of the above identified back-end services calls leads to basic CRUD (create, retrieve, update, delete) operations on the Task data object. These are the simple logical steps that we take to identify the front-end, back-end, and data tiers of (drums beating, trumpets playing) our approach to microservices, which we prefer to call microapplications.
Therefore, coming back to AWS Lambda, we have written four small Node.js functions that are context-bounded and self-sustained (each microservice corresponds to the above identified back-end web service):
Microservice that retrieves tasks
'use strict';
import DeepFramework from 'deep-framework';
export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
/**
* @param {Array} args
*/
constructor(...args) {
super(...args);
}
/**
* @param request
*/
handle(request) {
let taskId = request.getParam('Id');
if (taskId) {
this.retrieveTask(taskId, (task) => {
return this.createResponse(task).send();
});
} else {
this.retrieveAllTasks((result) => {
return this.createResponse(result).send();
});
}
}
/**
* @param {Function} callback
*/
retrieveAllTasks(callback) {
let TaskModel = this.kernel.get('db').get('Task');
TaskModel.findAll((err, task) => {
if (err) {
throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
}
return callback(task.Items);
});
}
/**
* @param {String} taskId
* @param {Function} callback
*/
retrieveTask(taskId, callback) {
let TaskModel = this.kernel.get('db').get('Task');
TaskModel.findOneById(taskId, (err, task) => {
if (err) {
throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
}
return callback(task ? task.get() : null);
});
}
}
Microservice that creates a task
'use strict';
import DeepFramework from 'deep-framework';
export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
/**
* @param {Array} args
*/
constructor(...args) {
super(...args);
}
/**
* @param request
*/
handle(request) {
let TaskModel = this.kernel.get('db').get('Task');
TaskModel.createItem(request.data, (err, task) => {
if (err) {
throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
}
return this.createResponse(task.get()).send();
});
}
}
Microservice that updates a task
'use strict';
import DeepFramework from 'deep-framework';
export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
/**
* @param {Array} args
*/
constructor(...args) {
super(...args);
}
/**
* @param request
*/
handle(request) {
let taskId = request.getParam('Id');
if (typeof taskId !== 'string') {
throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
}
let TaskModel = this.kernel.get('db').get('Task');
TaskModel.updateItem(taskId, request.data, (err, task) => {
if (err) {
throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
}
return this.createResponse(task.get()).send();
});
}
}
Microservice that deletes a task
'use strict';
import DeepFramework from 'deep-framework';
export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
/**
* @param {Array} args
*/
constructor(...args) {
super(...args);
}
/**
* @param request
*/
handle(request) {
let taskId = request.getParam('Id');
if (typeof taskId !== 'string') {
throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
}
let TaskModel = this.kernel.get('db').get('Task');
TaskModel.deleteById(taskId, (err) => {
if (err) {
throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
}
return this.createResponse({}).send();
});
}
}
Each of the above files, along with its related dependencies, is compressed into a .zip file and uploaded to AWS Lambda. If you're new to this process, we strongly recommend following the How to Create, Upload and Invoke an AWS Lambda function tutorial.
Back to the four small Node.js functions, you can see that we have adopted ES6 (aka ES2015) as our coding standard. And we are importing deep-framework in every function. What is this framework anyway and why are we using it everywhere?
Full-stack web framework
Step back for a minute. Building and uploading AWS Lambda functions to the service is very simple and straightforward, but now imagine that you need to manage 100–150 web services to access a web page, multiplied by hundreds or thousands of web pages.
We believe that the only way to achieve this kind of flexibility and scale is automation and code reuse. These principles led us to build and open source DEEP Framework — a full-stack web framework that abstracts web services and web applications from specific cloud services — and DEEP CLI (aka deepify) — a development tool-chain that abstracts package management and associated development operations.
Therefore, to make sure that the process of managing AWS Lambda functions is streamlined and automated, we consistently include two more files in each uploaded .zip:
DEEP microservice bootstrap
'use strict';
import DeepFramework from 'deep-framework';
import Handler from './Handler';
export default DeepFramework.LambdaHandler(Handler);
DEEP microservice package metadata (for npm)
{
"name": "deep-todo-task-create",
"version": "0.0.1",
"description": "Create a new todo task",
"scripts": {
"postinstall": "npm run compile",
"compile": "deepify compile-es6 `pwd`"
},
"dependencies": {
"deep-framework": "^1.8.x"
},
"preferGlobal": false,
"private": true,
"analyze": true
}
Having these three files (Handler.es6, bootstrap.es6, and package.json) in each Lambda function doesn’t mean that your final .zip file will be that small. Actually, a lot of additional operations happen before the .zip file is created. To name a few:
- AWS Lambda performs better when the uploaded codebase is smaller. Because we provide both local development capabilities and one-step push to production, our process optimizes resources before deploying to AWS.
- ES6 is not supported by the Node.js v0.10.x runtime that we use in AWS Lambda (it is, however, available in the Node.js 4.3 runtime), so we compile .es6 files into ES5-compliant .js files using Babel.
- Dependencies that are defined in package.json are automatically pulled and fine-tuned for Node.js v0.10.x to provide the best performance possible.
Putting everything together
First, you need the following pre-requisites:
- AWS account (Create an Amazon Web Services Account)
- AWS CLI (Configure AWS Command Line Interface)
- Git v2+ (Get Started — Installing Git)
- Java / JRE v6+ (JDK 8 and JRE 8 Installation Start Here)
- Node.js v4+ (Install nvm and use the latest Node v4)
Note: Don’t use sudo to install nvm. Otherwise, you’ll have to fix npm permissions.
Second, install the DEEP CLI with the following command:
npm install deepify -g
Next, deploy the todo app using deepify:
deepify install github://MitocGroup/deep-microservices-todo-app ~/deep-todo-app
deepify server ~/deep-todo-app
deepify deploy ~/deep-todo-app
Note: When the deepify server command is finished, you can open http://localhost:8000 in your browser and enjoy the todo app running locally.
Cleaning up
There are at least half a dozen services and several dozen resources created during deepify deploy. If only there were a simple command that would clean up everything when we're done. We thought of that and created deepify undeploy to address this need. When you are done using the todo app and want to remove the web app-related resources, execute the following:
deepify undeploy ~/deep-todo-app
As you can see, we empower developers to build hassle-free, cloud-native applications or platforms using microservices architecture and serverless computing.
And what about security?
Security
One of the biggest value propositions on AWS is out-of-the-box security and compliance. The beauty of the cloud-native approach is that security comes by design (in other words, it won’t work otherwise). We take full advantage of that shared responsibility model and enforce security in every layer.
End users benefit from IAM best practices through streamlined implementations of least privilege access, delegated roles instead of credentials, and integration with logging and monitoring services (e.g., AWS CloudTrail, Amazon CloudWatch, and Amazon Elasticsearch Service + Kibana). For example, developers and end users of the todo app didn’t need to explicitly define any security roles (it was done by deepify deploy), but they can rest assured that only their instance of todo app will be using their infrastructure, platform, and application resources.
The following are two security roles (back-end and front-end) that have been seamlessly generated and enforced in each layer:
IAM role that allows back-end invocation of AWS Lambda function (e.g. DeepProdTodoCreate1234abcd) in web application AWS account (e.g. 123456789000)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["lambda:InvokeFunction"],
"Resource": ["arn:aws:lambda:us-east-1:123456789000:function:DeepProdTodoCreate1234abcd*"]
}
]
}
DEEP role that allows front-end resource (e.g deep.todo:task) to execute action (e.g. deep.todo:task:create)
{
"Version": "2015-10-07",
"Statement": [
{
"Effect": "Allow",
"Action": ["deep.todo:task:create"],
"Resource": ["deep.todo:task"]
}
]
}
Benchmarking
We have been continuously benchmarking AWS Lambda for various use cases in our microapplications. After a couple of repetitive situations doing similar analysis, we decided to build the benchmarking as another microapplication and re-use the ecosystem to include it automatically where we needed it. You can find the open-source code for the benchmarking microapplication on GitHub:
In particular, for the todo app, we performed various benchmarking analyses on AWS Lambda by tweaking different parameters of a specific function (e.g., function size and memory size) and measuring the resulting billed time and cost. We would like to share the results with you:
Benchmarking for todo app
Req No | Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Start time | Stop time | Front-end Call (ms) | Back-end Call (ms) | Billed Time (ms) | Billed Cost ($) |
---|---|---|---|---|---|---|---|---|---|
1 | 1.1 | 128 | 34 | 20:15.8 | 20:16.2 | 359 | 200.47 | 300 | 0.000000624 |
2 | 1.1 | 128 | 34 | 20:17.8 | 20:18.2 | 381 | 202.45 | 300 | 0.000000624 |
3 | 1.1 | 128 | 34 | 20:19.9 | 20:20.3 | 406 | 192.52 | 200 | 0.000000416 |
4 | 1.1 | 128 | 34 | 20:21.9 | 20:22.2 | 306 | 152.19 | 200 | 0.000000416 |
5 | 1.1 | 128 | 34 | 20:23.9 | 20:24.2 | 333 | 175.01 | 200 | 0.000000416 |
6 | 1.1 | 128 | 34 | 20:25.9 | 20:26.3 | 431 | 278.03 | 300 | 0.000000624 |
7 | 1.1 | 128 | 34 | 20:27.9 | 20:28.2 | 323 | 170.97 | 200 | 0.000000416 |
8 | 1.1 | 128 | 34 | 20:29.9 | 20:30.2 | 327 | 160.24 | 200 | 0.000000416 |
9 | 1.1 | 128 | 34 | 20:31.9 | 20:32.4 | 556 | 225.25 | 300 | 0.000000624 |
10 | 1.1 | 128 | 35 | 20:33.9 | 20:34.2 | 333 | 179.59 | 200 | 0.000000416 |
Average | | | | | | 375.50 | 193.67 | | Total: 0.000004992 |
Performance
Speaking of performance, we find AWS Lambda mature enough to power large-scale web applications. The key is to build the functions as small as possible, focusing on a simple rule of one function to achieve only one task. Over time, these functions might grow in size; therefore, we always keep an eye on them and re-factor / split into the lowest possible logical denominator (smallest task).
Using the benchmarking tool, we ran multiple scenarios on the same function from the todo app:
Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Avg Front-end (ms) | Avg Back-end (ms) | Total Calls (#) | Total Billed (ms) | Total Billed ($/1B)* |
---|---|---|---|---|---|---|---|
1.1 | 128 | 34-35 | 375.50 | 193.67 | 10 | 2,400 | 4,992 |
1.1 | 256 | 34-37 | 399.40 | 153.25 | 10 | 2,000 | 8,340 |
1.1 | 512 | 33-35 | 341.60 | 134.32 | 10 | 1,800 | 15,012 |
1.1 | 128 | 34-49 | 405.57 | 223.82 | 100 | 27,300 | 56,784 |
1.1 | 256 | 28-48 | 354.75 | 177.91 | 100 | 23,800 | 99,246 |
1.1 | 512 | 32-47 | 345.92 | 163.17 | 100 | 23,100 | 192,654 |
55.8 | 128 | 49-50 | 543.00 | 284.03 | 10 | 3,400 | 7,072 |
55.8 | 256 | 49-50 | 339.80 | 153.13 | 10 | 2,100 | 8,757 |
55.8 | 512 | 49-50 | 342.60 | 141.02 | 10 | 2,000 | 16,680 |
55.8 | 128 | 83-87 | 416.10 | 220.91 | 100 | 26,900 | 55,952 |
55.8 | 256 | 50-71 | 377.69 | 194.22 | 100 | 25,600 | 106,752 |
55.8 | 512 | 57-81 | 353.46 | 174.65 | 100 | 23,300 | 194,322 |
Based on performance data, we have learned some pretty cool stuff:
- The smaller the function is, the better it performs; on the other hand, if more memory is allocated, the size of the function matters less and less.
- Memory size is not directly proportional to billable costs; developers can decide the memory size based on performance requirements combined with associated costs.
- The key to better performance is continuous load, thanks to container reuse in AWS Lambda.
Conclusion
In this post, we presented a small web application that is built with AWS Lambda at the core. We walked you through the process of identifying the front-end, back-end, and data tiers required to build the todo app. You can fork the example code repository as a starting point for your own web applications.
If you have questions or suggestions, please leave a comment below.
Node.js 4.3.2 Runtime Now Available on Lambda
We are happy to announce that you may now develop your AWS Lambda functions using the Node.js 4.3.2 runtime. You can start using this new runtime version today by specifying a runtime parameter value of "nodejs4.3" when creating or updating functions. We will continue to support creating new Lambda functions on Node.js 0.10. However, starting in October 2016, you will no longer be able to create functions using Node.js 0.10, given the upcoming end of life for that runtime. Here's a quick primer on what's changed between the two versions:
New Node features
You can now leverage features in the V8 JavaScript engine such as ES6 support, block scoping, promises, and arrow functions, to name a few. For more information, see the Expressive ES6 features that shine in Node.js 4.0 post by Ryan Paul @ RethinkDB.
Backward compatible
Nothing about your existing functions running under Node.js 0.10 will change; they will continue to operate and function as expected. You may also port your existing Node.js 0.10 functions over to Node.js 4.3.2 by simply updating the runtime, and they will continue to work as written. You will, however, need to take into account any native modules you may have compiled against 0.10 before making this move. Be sure to review the API changes between Node.js 0.10 and Node.js 4 to see if there are other changes that affect your code.
Node callbacks
In the Node.js 0.10 programming model, Lambda required an explicit context method call (done(), succeed(), fail()) to exit the function. context.succeed, context.done, and context.fail, however, are more than just bookkeeping: they cause the request to return after the current task completes and freeze the process immediately, even if other tasks remain in the Node.js event loop. Generally, that's not what you want if those tasks represent incomplete callbacks.
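For reference, a Node.js 0.10-style handler ends its execution explicitly through the context object, along these lines:
exports.myHandler = function(event, context) {
    // Node.js 0.10 model: terminate explicitly via the context object.
    context.succeed("I'm running Node 0.10!");
};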
The Node.js 4.3.2 programming model improves on this by adding an optional callback parameter to the handler. The callback parameter can be used to specify error or return values for the function execution. You specify the optional callback parameter when defining your function handler, as below:
exports.myHandler = (event, context, callback) => callback(null, "I'm running Node4!");
By default, the callback waits for all the tasks in the Node.js event loop to complete, just as it would if you ran the function locally. If you choose not to use the callback parameter in your code, then AWS Lambda implicitly calls it with a return value of null. You can still use the context methods to terminate the function, but the callback approach of waiting for all tasks to complete is more idiomatic to how Node.js behaves in general. The context parameter will also continue to exist and provides your handler with the runtime information of the Lambda function that is executing.
If you want to simulate the same behavior as a context method, you now have the ability to access the callbackWaitsForEmptyEventLoop setting via the context object. This property is useful to modify the default behavior of the callback. You can set this property to false to request AWS Lambda to freeze the process after the callback is called. For more information about this new functionality, see Lambda Function Handler.
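For example, the following handler returns as soon as the callback is called, without waiting for the event loop to drain:
exports.myHandler = (event, context, callback) => {
    // Freeze the process right after the callback instead of waiting
    // for any remaining tasks in the Node.js event loop.
    context.callbackWaitsForEmptyEventLoop = false;
    callback(null, "Returning without draining the event loop");
};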
Remember, the existing Node.js 0.10 programming model does not support the new callback functionality that specifically exists in the new 4.3.2 runtime. If you continue to use 0.10, you will still need to take advantage of the context object to specify return values of your function.
For more information, see the AWS Lambda Developer Guide.
Hope you enjoy,
-Bryan
Have feedback? I’m always listening @listonb
Indexing Amazon DynamoDB Content with Amazon Elasticsearch Service Using AWS Lambda
A lot of AWS customers have adopted Amazon DynamoDB for its predictable performance and seamless scalability. The main querying capabilities of DynamoDB are centered around lookups using a primary key. However, there are certain times where richer querying capabilities are required. Indexing the content of your DynamoDB tables with a search engine such as Elasticsearch would allow for full-text search.
In this post, we show how you can send changes to the content of your DynamoDB tables to an Amazon Elasticsearch Service (Amazon ES) cluster for indexing, using the DynamoDB Streams feature combined with AWS Lambda.
Architectural overview
Here’s a high-level overview of the architecture:
We'll cover the main steps required to put this bridge in place (a rough sketch of the resulting Lambda function's shape follows this list):
- Choosing the DynamoDB tables to index and enabling DynamoDB Streams on them.
- Creating an IAM role for accessing the Amazon ES cluster.
- Configuring and enabling the Lambda blueprint.
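Before walking through the console steps, here is a rough sketch of the shape of such a bridge function. This is not the Lambda blueprint's actual code; the indexDocument and removeDocument helpers below are hypothetical stand-ins for the signed HTTP calls the blueprint makes to Amazon ES:
'use strict';
var AWS = require('aws-sdk');
// Hypothetical stand-ins for signed requests to the Amazon ES domain.
function indexDocument(keys, item) { console.log('index', keys, item); }
function removeDocument(keys) { console.log('remove', keys); }
exports.handler = function(event, context, callback) {
    event.Records.forEach(function(record) {
        var keys = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.Keys);
        if (record.eventName === 'REMOVE') {
            // The item was deleted from the table; remove it from the index.
            removeDocument(keys);
        } else {
            // INSERT or MODIFY: index the new image of the item.
            var item = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);
            indexDocument(keys, item);
        }
    });
    callback(null, 'Processed ' + event.Records.length + ' records');
};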
Choosing the DynamoDB table to index
In this post, you look at indexing the content of a product catalog in order to provide full-text search capabilities. You'll index the content of a DynamoDB table called all_products, which is acting as the catalog of all products.
Here’s an example of an item stored in that table:
{
"product_id": "B016JOMAEE",
"name": "Serverless Single Page Apps: Fast, Scalable, and Available",
"category": "ebook",
"description": "AWS Lambda - A Guide to Serverless Microservices
takes a comprehensive look at developing
serverless workloads using the new
Amazon Web Services Lambda service.",
"author": "Matthew Fuller",
"price": 15.0,
"rating": 4.8
}
Enabling DynamoDB Streams
In the DynamoDB console, enable the DynamoDB Streams functionality on the all_products table by selecting the table and choosing Manage Stream.
Multiple options are available for the stream. For this use case, you need new items to appear in the stream; choose either New image or New and old images. For more information, see Capturing Table Activity with DynamoDB Streams.
After the stream is set up, make a note of the stream ARN. You'll need that information later, when configuring the access permissions.
Creating a new IAM role
The Lambda function needs read access to the DynamoDB stream just created. In addition, the function also requires access to the Amazon ES cluster to submit new records for indexing.
In the AWS Identity and Access Management (IAM) console, create a new role for the Lambda function and call it ddb-elasticsearch-bridge.
As this role will be used by the Lambda function, choose AWS Lambda from the AWS Service Roles list.
On the following screens, choose the AWSLambdaBasicExecutionRole managed policy, which allows the Lambda function to send logs to Amazon CloudWatch Logs.
Configuring access to the Amazon ES cluster
First, you need a running Amazon ES cluster. In this example, create a search domain called inventory. After the domain has been created, note its ARN:
In the IAM console, select the ddb-elasticsearch-bridge role created earlier and add two inline policies to that role:
Here’s the policy to add to allow the Lambda code to push new documents to Amazon ES (replace the resource ARN with the ARN of your Amazon ES cluster):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"es:ESHttpPost"
],
"Effect": "Allow",
"Resource": "arn:aws:es:us-east-1:0123456789:domain/inventory/*"
}
]
}
Important: you need to add /* to the resource ARN as depicted above.
Next, add a second policy for read access to the DynamoDB stream (replace the resource ARN with the ARN of your DynamoDB stream):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dynamodb:DescribeStream",
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:ListStreams"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dynamodb:us-east-1:0123456789:table/all_products/stream/2016-02-16T23:13:07.600"
]
}
]
}
Enabling the Lambda blueprint
When you log into the Lambda console and choose Create a Lambda Function, you are presented with a list of blueprints to use. Select the blueprint called dynamodb-to-elasticsearch.
Next, select the DynamoDB table all_products as the event source:
Then, customize the Lambda code to specify the Elasticsearch endpoint:
Finally, select the ddb-elasticsearch-bridge role created earlier to give the Lambda function the permissions required to interact with DynamoDB and the Amazon ES cluster:
Testing the result
You’re all set!
After a few records have been added to your DynamoDB table, you can go back to the Amazon ES console and validate that a new index for your items has been automatically created:
Playing with Kibana (Optional)
Elasticsearch is commonly used with Kibana for visual exploration of data.
To start querying the indexed data, create an index pattern in Kibana. Use the name of the DynamoDB table as an index pattern:
Kibana automatically determines the best type for each field:
Use a simple query to search the product catalog for all items in the category book containing the word aws in any field:
Other considerations
Indexing pre-existing content
The solution presented earlier is ideal for ensuring that new data is indexed as soon as it is added to the DynamoDB table. But what about pre-existing data stored in the table?
Luckily, the Lambda function used earlier can also be used to process data from an Amazon Kinesis stream, as long as the format of the data is similar to the DynamoDB Streams records.
Provided that you have an Amazon Kinesis stream set up as an additional input source for the Lambda code above, you can use the (very naive) sample Python 3 code below to read the entire content of a DynamoDB table and push it to an Amazon Kinesis stream called ddb-all-products for indexing in Amazon ES.
import json
import boto3
import boto3.dynamodb.types
# Load the service resources in the desired region.
# Note: AWS credentials should be passed as environment variables
# or through IAM roles.
dynamodb = boto3.resource('dynamodb', region_name="us-east-1")
kinesis = boto3.client('kinesis', region_name="us-east-1")
# Load the DynamoDB table.
ddb_table_name = "all_products"
ks_stream_name = "ddb-all-products"
table = dynamodb.Table(ddb_table_name)
# Get the primary keys.
ddb_keys_name = [a['AttributeName'] for a in table.attribute_definitions]
# Scan operations are limited to 1 MB at a time.
# Iterate until all records have been scanned.
response = None
while True:
if not response:
# Scan from the start.
response = table.scan()
else:
# Scan from where you stopped previously.
response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])
for i in response["Items"]:
# Get a dict of primary key(s).
ddb_keys = {k: i[k] for k in i if k in ddb_keys_name}
# Serialize Python Dictionaries into DynamoDB notation.
ddb_data = boto3.dynamodb.types.TypeSerializer().serialize(i)["M"]
ddb_keys = boto3.dynamodb.types.TypeSerializer().serialize(ddb_keys)["M"]
# The record must contain "Keys" and "NewImage" attributes to be similar
# to a DynamoDB Streams record. Additionally, you inject the name of
# the source DynamoDB table in the record so you can use it as an index
# for Amazon ES.
record = {"Keys": ddb_keys, "NewImage": ddb_data, "SourceTable": ddb_table_name}
# Convert the record to JSON.
record = json.dumps(record)
# Push the record to Amazon Kinesis.
res = kinesis.put_record(
StreamName=ks_stream_name,
Data=record,
PartitionKey=i["product_id"])
print(res)
# Stop the loop if no additional records are
# available.
if 'LastEvaluatedKey' not in response:
break
Note: In the code example above, you are passing the name of the source DynamoDB table as an extra record attribute, SourceTable. The Lambda function uses that attribute to build the Amazon ES index name. Another approach for passing that information is tagging the Amazon Kinesis stream.
Now, create the Amazon Kinesis stream ddb-all-products and then add permissions to the ddb-elasticsearch-bridge role in IAM to allow the Lambda function to read from the stream:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kinesis:Get*",
"kinesis:DescribeStream"
],
"Resource": [
"arn:aws:kinesis:us-east-1:0123456789:stream/ddb-all-products"
]
}
]
}
Finally, set the Amazon Kinesis stream as an additional input source to the Lambda function:
Neat tip: Doing a full re-index of the content this way will not create duplicate entries in Amazon ES.
Paying attention to attribute types
With DynamoDB, you can use different types for the same attribute on different records, but Amazon ES expects a given attribute to be of only one type. Similarly, changing the type of an existing attribute after it has been indexed in Amazon ES causes problems and some searches won’t work as expected.
In these cases, you must rebuild the Amazon ES index. For more information, see Reindexing Your Data in the Elasticsearch documentation.
Conclusion
In this post, you have seen how you can use AWS Lambda with DynamoDB to index your table content in Amazon ES as changes happen.
Because you are relying entirely on Lambda for the business logic, you don’t have to deal with servers at any point: everything is managed by the AWS platform in a highly available and scalable fashion. To learn more about Lambda and serverless infrastructures, see the Microservices without the Servers blog post.
Now that you have added full-text search to your DynamoDB table, you might be interested in exposing its content through a small REST API. For more information, see Using Amazon API Gateway as a proxy for DynamoDB.
Building a Dynamic DNS for Route 53 using CloudWatch Events and Lambda
Introduction
Dynamic registration of resource records is useful when you have instances that are not behind a load balancer that you would like to address by a host name and domain suffix of your choosing, rather than the default <region>.compute.internal or ec2.internal.
In this post, we explore how you can use CloudWatch Events and Lambda to create a Dynamic DNS for Route 53. Besides creating A records, this solution also lets you create CNAME records as aliases, for when you want to address a server by a “friendly” or alternate name. Although this is antithetical to treating instances as disposable resources, there are still a lot of shops that find this useful.
Using CloudWatch and Lambda to respond to infrastructure changes in real-time
With the advent of CloudWatch Events in January 2016, you can now get near real-time information when an AWS resource changes its state, including when instances are launched or terminated. When you combine this with the power of Amazon Route 53 and AWS Lambda, you can create a system that closely mimics the behavior of Dynamic DNS.
For example, when a newly-launched instance changes its state from pending to running, an event can be sent to a Lambda function that creates a resource record in the appropriate Route 53 hosted zone. Similarly, when instances are stopped or terminated, Lambda can automatically remove resource records from Route 53.
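For reference, the EC2 instance state-change event delivered to the function looks roughly like the following (the IDs, account number, and timestamp here are placeholders):
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2016-02-29T14:55:30Z",
  "region": "us-east-1",
  "resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"],
  "detail": {
    "instance-id": "i-abcd1111",
    "state": "running"
  }
}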
The example provided in this post works precisely this way. It uses information from a CloudWatch event to gather information about the instance, such as its public and private DNS name, its public and private IP address, the VPC ID of the VPC that the instance was launched in, its tags, and so on. It then uses this information to create A, PTR, and CNAME records in the appropriate Route 53 public or private hosted zone. The solution persists data about the instances in an Amazon DynamoDB table so it can remove resource records when instances are stopped or terminated.
Route 53 Hosted zones
Route 53 offers the convenience of domain name services without having to build a globally distributed highly reliable DNS infrastructure. It allows instances within your VPC to resolve the names of resources that run within your AWS environment. It also lets clients on the Internet resolve names of your public-facing resources. This is accomplished by querying resource record sets that reside within a Route 53 public or private hosted zone.
A private hosted zone is basically a container that holds information about how you want to route traffic for a domain and its subdomains within one or more VPCs, whereas a public hosted zone is a container that holds information about how you want to route traffic from the Internet.
Choosing between VPC DNS or Route 53 Private Hosted Zones
Admittedly, you can use VPC DNS for internal name resolution instead of Route 53 private hosted zones. Although it doesn’t dynamically create resource records, VPC DNS will provide name resolution for all the hosts within a VPC’s CIDR range.
Unless you create a DHCP option set with a custom domain name and disable hostnames at the VPC, you can’t change the domain suffix; all instances are either assigned the ec2.internal or <region>.compute.internal domain suffix. You can’t create aliases or other resource record types with VPC DNS either.
Private hosted zones help you overcome these challenges by allowing you to create different resource record types with a custom domain suffix. Moreover, with Route 53 you can create a subdomain for your current DNS namespace or you can migrate an existing subdomain to Route 53. By using these options, you can create a contiguous DNS namespace between your on-premises environment and AWS.
So, while VPC DNS can provide basic name resolution for your VPC, Route 53 private hosted zones offer richer functionality by comparison. Route 53 also has a programmable API that can be used to automate the creation and removal of record sets and hosted zones, which we're going to leverage later in this post.
Route 53 doesn’t offer support for dynamic registration of resource record sets for public or private hosted zones. This can pose challenges when an Auto Scaling event occurs and the instances are not behind a load balancer. A common workaround is to use an automation framework like Chef, Puppet, Ansible, or Salt to create resource records, or by adding instance user data to the launch profile of the Auto Scaling group. The drawbacks to these approaches are that:
1) automation frameworks typically require you to manage additional infrastructure.
2) instance user data doesn’t handle the removal of resource records when the instance is terminated.
This was the motivation for creating a serverless architecture that dynamically creates and removes resource records from Route 53 as EC2 instances are created and destroyed.
DDNS/Lambda example
Make sure that you have the latest version of the AWS CLI installed locally. For more information, see Getting Set Up with the AWS Command Line Interface.
For this example, create a new VPC configured with a private and public subnet, using Scenario 2: VPC with Public and Private Subnets (NAT) from the Amazon VPC User Guide. Ensure that the VPC has the DNS resolution and DNS hostnames options set to yes.
After the VPC is created, you can proceed to the next steps.
Step 1 – Create an IAM role for the Lambda function
In this step, you use the AWS Command Line Interface (AWS CLI) to create the Identity and Access Management (IAM) role that the Lambda function assumes when the function is invoked. You need to create an IAM policy with the required permissions and then attach this policy to the role.
Download the ddns-policy.json and ddns-trust.json files from the AWS Labs GitHub repo.
ddns-policy.json
The policy includes ec2:Describe permission, required for the function to obtain the EC2 instance’s attributes, including the private IP address, public IP address, and DNS hostname. The policy also includes DynamoDB and Route 53 full access, required for the function to create the DynamoDB table and to update the Route 53 DNS records. The policy also allows the function to create log groups and log events.
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" }, { "Effect": "Allow", "Action": [ "dynamodb:*" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "route53:*" ], "Resource": [ "*" ] }] }
ddns-trust.json
The ddns-trust.json file contains the trust policy that grants the Lambda service permission to assume the role.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
Create the policy using the policy document in the ddns-policy.json file. You need to replace <LOCAL PATH> with your local path to the ddns-policy.json file. The output of the aws iam create-policy command includes the Amazon Resource Name (ARN). Save the ARN, since you will need it for future steps.
aws iam create-policy --policy-name ddns-lambda-policy --policy-document file://<LOCAL PATH>/ddns-policy.json
Create the ddns-lambda-role IAM role using the trust policy in the ddns-trust.json file. You need to replace <LOCAL PATH> with your local path to the ddns-trust.json file. The output of the aws iam create-role command includes the ARN associated with the role that you created. Save this ARN, since you will need it when you create the Lambda function in the next section.
aws iam create-role --role-name ddns-lambda-role --assume-role-policy-document file://<LOCAL PATH>/ddns-trust.json
Attach the policy to the role. Use the policy ARN returned by the aws iam create-policy command for the --policy-arn input parameter.
aws iam attach-role-policy --role-name ddns-lambda-role --policy-arn <enter-your-policy-arn-here>
Step 2 – Create the Lambda function
The Lambda function uses modules included in the Python 2.7 Standard Library and the AWS SDK for Python module (boto3), which is preinstalled as part of the Lambda service. As such, you do not need to create a deployment package for this example.
The function code performs the following:
- Checks to see whether the “DDNS” table exists in DynamoDB and creates the table if it does not. This table is used to keep a record of instances that have been created along with their attributes. It’s necessary to persist the instance attributes in a table because once an EC2 instance is terminated, its attributes are no longer available to be queried via the EC2 API. Instead, they must be fetched from the table.
- Queries the event data to determine the instance’s state. If the state is “running”, the function queries the EC2 API for the data it will need to update DNS. If the state is anything else, e.g. “stopped” or “terminated”, it will retrieve the necessary information from the “DDNS” DynamoDB table.
- Verifies that “DNS resolution” and “DNS hostnames” are enabled for the VPC, as these are required in order to use Route 53 for private name resolution. The function then checks whether a reverse lookup zone for the instance already exists. If it does, it checks to see whether the reverse lookup zone is associated with the instance’s VPC. If it isn’t, it creates the association. This association is necessary in order for the VPC to use Route 53 zone for private name resolution.
- Checks the EC2 instance’s tags for the CNAME and ZONE tags. If the ZONE tag is found, the function creates A and PTR records in the specified zone. If the CNAME tag is found, the function creates a CNAME record in the specified zone.
- Verifies whether there’s a DHCP option set assigned to the VPC. If there is, it uses the value of the domain name to create resource records in the appropriate Route 53 private hosted zone. The function also checks to see whether there’s an association between the instance’s VPC and the private hosted zone. If there isn’t, it creates it.
- Deletes the required DNS resource records if the state of the EC2 instance changes to “shutting down” or “stopped”.
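To make the record-management part of the behavior described above concrete, here is a minimal sketch (not the actual union.py code) of how an A record can be created or updated with boto3; the hosted zone ID, record name, and IP address are placeholders:
import boto3

route53 = boto3.client('route53')

def upsert_a_record(zone_id, fqdn, ip_address, ttl=300):
    """Create or update an A record in the given Route 53 hosted zone."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,                # e.g. 'Z1EXAMPLE' (placeholder)
        ChangeBatch={
            'Comment': 'Dynamic DNS update from Lambda',
            'Changes': [{
                'Action': 'UPSERT',          # creates the record or overwrites it
                'ResourceRecordSet': {
                    'Name': fqdn,            # e.g. 'web01.ddnslambda.com.'
                    'Type': 'A',
                    'TTL': ttl,
                    'ResourceRecords': [{'Value': ip_address}]
                }
            }]
        }
    )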
Use the AWS CLI to create the Lambda function:
- Download the union.py.zip file from the AWS Labs GitHub repo.
- Execute the following command to create the function. Note that you need to update the command to use the ARN of the role that you created earlier, as well as the local path to the union.py.zip file containing the Python code for the Lambda function.
aws lambda create-function --function-name ddns_lambda --runtime python2.7 --role <enter-your-role-arn-here> --handler union.lambda_handler --timeout 30 --zip-file fileb://<LOCAL PATH>/union.py.zip
- The output of the command returns the ARN of the newly-created function. Save this ARN, as you will need it in the next section.
Step 3 – Create the CloudWatch Events Rule
In this step, you create the CloudWatch Events rule that triggers the Lambda function whenever CloudWatch detects a change to the state of an EC2 instance. You configure the rule to fire when any EC2 instance state changes to “running”, “shutting down”, or “stopped”. Use the aws events put-rule command to create the rule and set the Lambda function as the execution target:
aws events put-rule --event-pattern "{\"source\":[\"aws.ec2\"],\"detail-type\":[\"EC2 Instance State-change Notification\"],\"detail\":{\"state\":[\"running\",\"shutting-down\",\"stopped\"]}}" --state ENABLED --name ec2_lambda_ddns_rule
The output of the command returns the ARN to the newly created CloudWatch Events rule, named ec2_lambda_ddns_rule. Save the ARN, as you will need it to associate the rule with the Lambda function and to set the appropriate Lambda permissions.
Next, set the target of the rule to be the Lambda function. Note that the --targets input parameter requires that you include a unique identifier for the Id target. You also need to update the command to use the ARN of the Lambda function that you created previously.
aws events put-targets --rule ec2_lambda_ddns_rule --targets Id=id123456789012,Arn=<enter-your-lambda-function-arn-here>
Next, you add the permissions required for the CloudWatch Events rule to execute the Lambda function. Note that you need to provide a unique value for the --statement-id input parameter. You also need to provide the ARN of the CloudWatch Events rule you created earlier.
aws lambda add-permission --function-name ddns_lambda --statement-id 45 --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn <enter-your-cloudwatch-events-rule-arn-here>
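Optionally, you can confirm that the rule and target are wired up before moving on; this verification step is an addition to the original walkthrough:
aws events list-targets-by-rule --rule ec2_lambda_ddns_rule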
Step 4 – Create the private hosted zone in Route 53
To create the private hosted zone in Route 53, follow the steps outlined in Creating a Private Hosted Zone.
Step 5 – Create a DHCP options set and associate it with the VPC
In this step, you create a new DHCP options set and set the domain to be that of your private hosted zone.
- Follow the steps outlined in Creating a DHCP Options Set to create a new set of DHCP options.
- In the Create DHCP options set dialog box, give the new options set a name, set Domain name to the name of the private hosted zone that you created in Route 53, and set Domain name servers to “AmazonProvidedDNS”. Choose Yes, Create.
- Next, follow the steps outlined in Changing the Set of DHCP Options a VPC Uses to update the VPC to use the newly-created DHCP options set.
Step 6 – Launching the EC2 instance and validating results
In this step, you launch an EC2 instance and verify that the function executed successfully.
As mentioned previously, the Lambda function looks for the ZONE or CNAME tags associated with the EC2 instance. If you specify these tags when you launch the instance, you should include the trailing dot. In this example, the ZONE tag would be set to “ddnslambda.com.” and the CNAME tag could be set to “test.ddnslambda.com.”.
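If you prefer to tag an instance from the command line instead of the console, a minimal sketch looks like the following (the instance ID is a placeholder):
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=ZONE,Value=ddnslambda.com. Key=CNAME,Value=test.ddnslambda.com.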
Because you updated the DHCP options set in this example, the Lambda function uses the specified zone when it creates the Route 53 DNS resource records. You can use the ZONE tag to override this behavior if you want the function to update a different hosted zone.
In this example, you launch an EC2 instance into the private subnet of the VPC. Because you updated the domain value of the DHCP options set to be that of the private hosted zone, the Lambda function creates the DNS resource records in the Route 53 zone file.
Launching the EC2 instance
- Follow the steps to launch an EC2 instance outlined in Launching an Instance.
- In Step 3: Configure Instance Details, for Network, select the VPC. For Subnet, select the private subnet. Choose Review and Launch.
- (Optional) If you would like to update a different private hosted zone than the one you associated with the VPC, specify the ZONE tag in this step. You can also specify the CNAME tag if you would like the function to create a CNAME resource record in the associated zone.
On the Step 7: Review Instance Launch page, choose Edit tags.
On the Step 5: Tag Instance page, enter the key and value, then choose Review and Launch.
Complete the launch of the instance and wait until the instance state changes to “running”. Then, continue to the next step.
Validating results
In this step, you verify that your Lambda function successfully updated the Route 53 resource records.
- Log in to the Route 53 console.
- In the left navigation pane, choose Hosted Zones to view the list of private and public zones currently configured in Route 53.
- Select the hosted zone that you created in step 4 to view the zone file.
- Verify that the resource records were created.
- Now that you’ve verified that the Lambda function successfully updated the Route 53 resource records in the zone file, stop the EC2 instance and verify that the records are removed by the function.
- Log in to the EC2 console.
- Choose Instances in the left navigation pane.
- Select the EC2 instance you launched earlier and choose Stop.
- Follow Steps 1 – 3 to view the DNS resource records in the Route 53 zone.
- Verify that the records have been removed from the zone file by the Lambda function.
Conclusion
Now that you’ve seen how you can combine various AWS services to automate the creation and removal of Route 53 resource records, we hope you are inspired to create your own solutions. CloudWatch Events is a powerful tool because it allows you to respond to events in real-time, such as when an instance changes its state. When used with Lambda, you can create highly scalable serverless infrastructures that react instantly to infrastructure changes.
To learn more about CloudWatch Events, see Using CloudWatch Events in the Amazon CloudWatch Developer Guide. To learn more about Lambda and serverless infrastructures, see the AWS Lambda Developer Guide and the “Microservices without the Servers” blog post.
We’ve open-sourced the code in this example in the AWS Labs GitHub repo and can’t wait to see your feedback and your ideas about how to improve the solution.
Cloudmicro for AWS: Speeding up serverless development at The Coca‑Cola Company
We have a guest blog post today from our friend Patrick Brandt at The Coca‑Cola Company. Patrick and his team have open-sourced an innovative use of Docker containers to encourage rapid local development and testing for applications that use AWS Lambda and Amazon DynamoDB.
Using Cloudmicro to build AWS Lambda and DynamoDB applications on your laptop
My team at The Coca‑Cola Company recently began work on a proximity-marketing platform using AWS Lambda and DynamoDB. We’re gathering beacon sighting events via API Gateway, layering in additional data with a Lambda function, and then storing these events in DynamoDB.
In an effort to shorten the development cycle-time of building and deploying Lambda functions, we created a local runtime of Lambda and DynamoDB using Docker containers. Running our Lambda functions locally in containers removed the overhead of having to deploy code to debug it, greatly increasing the speed at which we could build and tweak new features. I’ve since launched an open-source organization called Cloudmicro with the mission of assembling Docker-ized versions of AWS services to encourage rapid development and easy experimentation.
Getting started with Cloudmicro
The Cloudmicro project I’m working with is a local runtime for Python-based Lambda functions that integrate with DynamoDB: https://github.com/Cloudmicro/lambda-dynamodb-local. The only prerequisite for this project is that you have Docker installed and running on your local environment.
Cloning the lambda-dynamodb-local project and running the hello function
In these examples, you run commands using Docker on a Mac. The instructions for running Docker commands using Windows are slightly different and can be found in the project Readme.
Run the following commands in your terminal window to clone the lambda-dynamodb-local project and execute the hello Lambda function:
> git clone https://github.com/Cloudmicro/lambda-dynamodb-local.git
> cd lambda-dynamodb-local
> docker-compose up -d
> docker-compose run --rm -e FUNCTION_NAME=hello lambda-python
Your output will look like this:
executing hello function locally:
[root - INFO - 2016-02-29 14:55:30,382] Event: {u'first_name': u'Umberto', u'last_name': u'Boccioni'}
[root - INFO - 2016-02-29 14:55:30,382] START RequestId: 11a94c54-d0fe-4a87-83de-661692edc440
[root - INFO - 2016-02-29 14:55:30,382] END RequestId: 11a94c54-d0fe-4a87-83de-661692edc440
[root - INFO - 2016-02-29 14:55:30,382] RESULT: {'message': 'Hello Umberto Boccioni!'}
[root - INFO - 2016-02-29 14:55:30,382] REPORT RequestId: 11a94c54-d0fe-4a87-83de-661692edc440 Duration: 0.11 ms
The output is identical to what you would see if you had run this same function using AWS.
Understanding how the hello function runs locally
We’ll look at three files and the docker-compose command to understand how the hello function executes with its test event.
The docker-compose.yml file
The docker-compose.yml file defines three docker-compose services:
lambda-python:
  build: .
  container_name: python-lambda-local
  volumes:
    - ./:/usr/src
  links:
    - dynamodb
  working_dir: /usr/src
dynamodb:
  container_name: dynamodb-local
  image: modli/dynamodb
  expose:
    - "8000"
init:
  image: node:latest
  container_name: init-local
  environment:
    - DYNAMODB_ENDPOINT=http://dynamodb:8000
  volumes:
    - ./db_gen:/db_gen
  links:
    - dynamodb
  working_dir: /db_gen
  command: /bin/bash init.sh
- lambda-python contains the local version of the Python-based Lambda runtime that executes the Lambda function handler in lambda_functions/hello/hello.py.
- dynamodb contains an instance of the dynamodb-local application (a fully functional version of DynamoDB).
- init contains an application that initializes the dynamodb service with any number of DynamoDB tables and optional sample data for those tables.
The hello function only uses the lambda-python service. You’ll look at an example that uses dynamodb and init a little later.
The lambda_functions/hello/hello.py file
The hello function code is identical to the Lambda code found in the AWS documentation for Python handler functions:
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def hello_handler(event, context):
    message = 'Hello {} {}!'.format(event['first_name'], event['last_name'])
    return { 'message': message }
Like the hello function, your Lambda functions will live in a subdirectory of lambda_functions. The pattern you’ll follow is lambda_functions/{function name}/{function name}.py and the function handler in your Python file will be named {function name}_handler.
You can also include a requirements.txt file in your function directory that will include any external Python library dependencies required by your Lambda function.
The local_events/hello.json file
The test event for the hello function has two fields:
{ "first_name": "Umberto", "last_name": "Boccioni" }
All test events live in the local_events directory. By convention, the file names for each test event must match the name of the corresponding Lambda function in the lambda_functions directory.
The docker-compose command
Running the docker-compose command will instantiate containers for all of the services outlined in the docker-compose.yml file and execute the hello function.
docker-compose run --rm -e FUNCTION_NAME=hello lambda-python
- The docker-compose run command will bring up the lambda-python service and the dynamodb linked service defined in the docker-compose.yml file.
- The --rm argument instructs docker-compose to destroy the container running the Lambda function once the function is complete.
- The -e FUNCTION_NAME=hello argument defines an environment variable that the Lambda function container uses to run a specific function in the lambda_functions directory (-e FUNCTION_NAME=hello will run the hello function).
Using DynamoDB
Now we’ll look at how you use the init service to create DynamoDB tables and seed them with sample data. Then we’ll tie it all together and create a Lambda function that reads data from a table in the DynamoDB container.
Creating tables and populating them with data
The init service uses two subdirectories in the db_gen directory to set up the tables in the container created by the dynamodb service:
- db_gen/tables/ contains JSON files that define each DynamoDB table.
- db_gen/table_data/ contains optional JSON files that define a list of items to be inserted into each table.
The file names in db_gen/table_data/ must match those in db_gen/tables/ in order to load tables with data.
You’ll need to follow a couple of steps to allow the init service to automatically create your DynamoDB tables and populate them with sample data. In this example, you’ll be creating a table that stores English words.
- Add a file named “words.json” to db_gen/tables.
{ "AttributeDefinitions": [ { "AttributeName": "language_code", "AttributeType": "S" }, { "AttributeName": "word", "AttributeType": "S" } ], "GlobalSecondaryIndexes": [ { "IndexName": "language_code-index", "Projection": { "ProjectionType": "ALL" }, "ProvisionedThroughput": { "WriteCapacityUnits": 5, "ReadCapacityUnits": 5 }, "KeySchema": [ { "KeyType": "HASH", "AttributeName": "language_code" } ] } ], "ProvisionedThroughput": { "WriteCapacityUnits": 5, "ReadCapacityUnits": 5 }, "TableName": "words", "KeySchema": [ { "KeyType": "HASH", "AttributeName": "word" } ] }
- Add a file named “words.json” to db_gen/table_data.
[{"word":"a","langauge_code":"en"}, {"word":"aah","langauge_code":"en"}, {"word":"aahed","langauge_code":"en"}, {"word":"aahing","langauge_code":"en"}, {"word":"aahs","langauge_code":"en"}]
Your DynamoDB database can be re-created with the init service by running this command:
docker-compose run --rm init
This will rebuild your DynamoDB container with your table definitions and table data.
You can use the describe-table command in the AWS CLI as a handy way to create your DynamoDB table definitions. First, use the AWS console to create a DynamoDB table within your AWS account, and then use the describe-table command to return a JSON representation of that table. If you use this shortcut, be aware that you'll need to massage the CLI response: remove the outer "Table" field and move the JSON it contains up one level in the hierarchy. There are also several read-only fields that you need to remove from the response before it can be used to create your DynamoDB table. You can use the validation errors returned by running the following command as your guide for the fields that need to be removed:
docker-compose run --rm init
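As a hedged illustration of that massaging step, the following sketch unwraps the "Table" field and drops a few of the read-only fields that commonly cause validation errors. The input file name is a placeholder, and the exact set of fields to remove is an assumption; treat the init service's validation errors as the authority.
import json

# Read the raw output of: aws dynamodb describe-table --table-name words
with open("describe-table-output.json") as f:
    raw = json.load(f)

# Move the contents of the "Table" field up one level.
table = raw["Table"]

# Drop read-only, server-generated fields (assumed list; adjust as needed).
for field in ("TableStatus", "CreationDateTime", "TableSizeBytes",
              "ItemCount", "TableArn"):
    table.pop(field, None)

table["ProvisionedThroughput"].pop("NumberOfDecreasesToday", None)

for gsi in table.get("GlobalSecondaryIndexes", []):
    for field in ("IndexStatus", "IndexSizeBytes", "ItemCount", "IndexArn"):
        gsi.pop(field, None)
    gsi["ProvisionedThroughput"].pop("NumberOfDecreasesToday", None)

# Write the cleaned definition where the init service expects it.
with open("db_gen/tables/words.json", "w") as f:
    json.dump(table, f, indent=4)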
Retrieving data from DynamoDB
Now you’re going to write a Lambda function that scans the words table in DynamoDB and returns the output.
- Create a getWords Lambda function in lambda_functions/getWords/getWords.py.
from lambda_utils import *
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

@import_config
def getWords_handler(event, context, config):
    dynamodb = dynamodb_connect(config)
    words_table = dynamodb.Table(config.Dynamodb.wordsTable)
    words = words_table.scan()
    return words["Items"]
- Create local_events/getWords.json and add an empty JSON object.
{}
- Ensure that the table name is referenced in config/docker-config.py.
class Dynamodb:
    wordsTable = "words"
    endpoint = "http://dynamodb:8000"

class Session:
    region = "us-east-1"
    access_key = "Temp"
    secret_key = "Temp"
- Now you can run your new function and see the results of a word table scan.
docker-compose run --rm -e FUNCTION_NAME=getWords lambda-python
You may have noticed the @import_config decorator applied to the Lambda function handler in the prior example. This is a utility that imports configuration information from the config directory and injects it into the function handler parameter list. You should update the config/docker-config.py file with DynamoDB table names and then reference these table names via the config parameter in your Lambda function handler.
This configuration pattern is not specific to Lambda functions run with Cloudmicro; it is an example of a general approach to environment awareness in Python-based Lambda functions that I've outlined in a Gist.
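For readers curious what such a decorator might look like, here is a minimal, purely illustrative sketch; the real implementation lives in the project's lambda_utils module and will differ. The module naming and the environment variable used to pick a config are assumptions.
import functools
import importlib
import os

def import_config(handler):
    """Inject a config module into the handler's parameter list."""
    @functools.wraps(handler)
    def wrapper(event, context):
        # Pick a config module by environment; the naming scheme is assumed.
        env = os.environ.get("CONFIG_ENV", "docker")
        config = importlib.import_module("config.{}_config".format(env))
        return handler(event, context, config)
    return wrapper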
Call for contributors
The goal of Cloudmicro for AWS is to re-create the AWS cloud on your laptop for rapid development of cloud applications. The lambda-dynamodb-local project is just the start of a much larger vision for an ecosystem of interconnected Docker-ized AWS components.
Here are some milestones:
- Support Lambda function invocation from other Docker-ized Lambda functions.
- Add a Docker-ized S3 service.
- Create Yeoman generators to easily scaffold Cloudmicro services.
Supporting these capabilities will require re-architecting the current lambda-dynamodb-local project into a system that provides more robust coordination between containers. I’m hoping to enlist brilliant developers like you to support the cause and build something that many people will find useful.
Fork the lambda-dynamodb-local project, or find me on GitHub and let me know how you’d like to help out.
Getting Started with JAWS on Amazon Web Services
Nick Corbett, AWS Professional Services, Big Data Consultant
Amazon API Gateway and AWS Lambda empower developers to deliver a microservice architecture without managing infrastructure. Building scalable, secure, and durable applications has never been easier. However, managing the deployment of a large project is not always easy. A global app, deployed across multiple AWS regions in multiple environments, will accumulate API Gateway resources, AWS Lambda functions, AWS Identity and Access Management (IAM) roles, and other AWS resources. As your project grows, so will the number of resources. The need to coordinate and organize your efforts quickly becomes apparent.
In this post I will introduce JAWS, an open source application framework that you can use to develop massively scalable and complex apps using API Gateway and AWS Lambda whilst helping you manage your codebase and deployments. I will show you how to build a simple microservice that you can use to manage users for a sample application. You will build CRUD methods to support the management of users and persist details in Amazon DynamoDB.
To get started, you’ll first need to install Node.js. Once you’ve done that, you can install JAWS using npm, the Node.js package manager, from a command prompt (note that on some systems you may need to run this command as a super user):
npm install jaws-framework -g
Now that JAWS is installed, you are ready to create your first project. Navigate to the directory where you want to create your project and type:
jaws project create
JAWS will walk you through the process of creating a project. When prompted, enter the following information:
- Project Name: Specify “userManagement” as the value. Camel case is recommended here. JAWS uses AWS CloudFormation to deploy your project, and some items use the project name. CloudFormation tokenizes the project name with hyphens, so it is best to avoid adding any more.
- Project Domain: Use any domain you own. It is important to make this unique for your project. The project domain is used as part of the name for a new Amazon Simple Storage Service (Amazon S3) bucket. This Amazon S3 bucket is used to deploy your solution.
- Email Address for CloudWatch Alarms: Your email address.
- Stage: Specify “dev”. A stage is an environment, such as dev, UAT, or production. Each region can have multiple stages. You can easily add more stages after the project is created.
- Region: Any AWS region. The AWS region in which you will deploy your solution. You can add other regions after the project is created. Regardless of the region you pick, API Gateway will create a global Amazon CloudFront distribution for your project to provide your users with the lowest possible latency for their API requests.
- Profile: Your AWS profile. JAWS uses a profile in your AWS Command Line Interface credentials file (in ~/.aws/credentials) to make API calls. If you have multiple profiles defined, you can select the one to use.
As it creates your project, the framework builds and runs a CloudFormation script containing some shared resources that are needed to support your project, such as IAM roles and the Amazon S3 bucket named after the project domain.
After this is complete, you are ready to create your first AWS Module (awsm). An awsm is how JAWS describes your microservice; it includes references to both your API Gateway endpoints and AWS Lambda functions. To create a module, navigate to the userManagement project directory that JAWS created and type:
jaws module create users create
This creates a new endpoint (users) with a method behind it for creating a new user (create). The following folders and files are created in the aws_modules directory of your project:
The create directory that JAWS made contains four files:
- awsm.json: Contains configuration for the API Gateway endpoint and the AWS Lambda function.
- index.js: Contains the code you write to implement the method.
- handler.js: Contains a thin wrapper that integrates your code with AWS Lambda.
- event.json: Defines the event that is used when your code is tested with the jaws run command.
JAWS creates a thin wrapper around your code (in handler.js) to integrate with AWS Lambda. This means that you can develop and test your code (in index.js) before deploying to AWS Lambda. To demonstrate this, go to the index.js file in the create directory and update the code to:
// Export for Lambda Handler
module.exports.run = function(event, context, cb) {
  return cb(null, action(event));
};

// Your code
var action = function(event) {
  return {
    message: 'You have created user ' + event.username
  };
};
Next, edit the event.json file to read:
{ "username" : "Nick" }
Finally, from the create directory, type the following command:
jaws run
The JAWS framework uses the event that is defined in event.json to test your code. The following message is returned:
JAWS: {"message":"You have created user Nick"}
The run command is good for simple testing but as your project grows, a unit test framework, such as Mocha, is recommended.
As well as developing an AWS Lambda function to implement your business logic, you also need to configure a REST endpoint in API Gateway. In our sample application, users are created using the following URL:
/users called with POST
Go into awsm.json in the create directory and find the apiGateway section. Update the Path to users and the Method to POST. This indicates to JAWS that it should create a users resource in API gateway with a POST method that is integrated to your AWS Lambda code. There are other settings in the awsm.json file to control how your project is deployed, although there is no need to change anything else at the moment.
You are now ready to deploy the first iteration of your project to AWS. At the command line, type:
jaws dash
Use the arrow keys and enter to highlight both the endpoint and AWS Lambda function in yellow before navigating to deploy selected and pressing enter. Your code is then packaged, using Browserify and Minify to improve run-time efficiency, and zipped. This package is then uploaded to an S3 bucket.
For each project, JAWS maintains two CloudFormation stacks. The first stack, containing shared resources, was deployed when you made the project. JAWS now creates a second stack that contains your new AWS Lambda function (the code in the S3 bucket is used as a source). Any additional AWS Lambda functions that you write are added to this stack. After this stack is deployed, JAWS creates the API Gateway resources and methods.
You are now ready to test the deployment. Go to the AWS Management Console and open the Amazon API Gateway console. Click the userManagement-dev API and then click the POST method that JAWS created for the users resource. Click Test to test the function and use the JSON object from event.json in the request body. If everything has worked, you will see a response:
{ "message": "You have created user Nick" }
You can view more detailed logging from your AWS Lambda function in Amazon CloudWatch Logs.
The next step is to add a similar stub for the GET function. This is accessed by the following URL:
/users/ called with GET
To create the endpoint and method, go to your command line and, from the project directory, type:
jaws module create users get
JAWS creates another sub-directory in your AWSM for the new method. Go into awsm.json in the get subdirectory and update the apiGateway section of the file by making the following changes to the cloudformation section:
"Path": "users/{username}" "Method": "GET" "RequestTemplates": { "application/json": "{\"username\": \"$input.params('username') \"}" }
This change indicates to JAWS that the AWS Lambda function will be invoked when the path users/{username} is called. It also specifies the format of the JSON event sent to the AWS Lambda function. For example, if the url users/Anna is called with a GET verb, then your AWS Lambda function is called with the following event:
{ "username": "Anna" }
Go into index.js for the GET method and change the code to the following:
// Export For Lambda Handler
module.exports.run = function(event, context, cb) {
  return cb(null, action(event));
};

var action = function(event) {
  return {
    message: "User requested: " + event.username
  };
};
You are then ready to deploy your project again (using the jaws dash command). When you test this method in the API Gateway console, you are asked to supply a value for the username path parameter.
You’ve now created stub functions for the CREATE and GET methods. Hopefully you can see how this process can be used to build the UPDATE and DELETE methods and complete the set of CRUD functions. It’s now time to replace your stub code with something more meaningful.
Before replacing your stub code, you need a data store for your users. Open the resources-cf.json file in the cloudformation folder for your stage and region. This file contains the shared-resources CloudFormation stack that JAWS deployed when you created your project. Add the following to the Resources section:
"myDynamoDBTable" : { "Type" : "AWS::DynamoDB::Table", "Properties" : { "AttributeDefinitions": [ { "AttributeName": "username", "AttributeType": "S" } ], "KeySchema": [ { "AttributeName": "username", "KeyType": "HASH" } ], "ProvisionedThroughput": { "ReadCapacityUnits": "5", "WriteCapacityUnits": "5" }, "TableName": { "Fn::Join": [ "-", [ "users", { "Ref": "aaDataModelStage" } ] ] } } }
In addition to creating the DynamoDB table, you also need to update the IAM role used by your AWS Lambda functions so they have permission to use the service. Find the IAMPolicyLambda policy in the same file and add the following extra statement to the policy document:
{ "Effect": "Allow", "Resource": "*", "Action": [ "dynamodb:*Item", "dynamodb:Query", "dynamodb:Scan" ] }
You can deploy changes to the resources for your stage by running the following JAWS command:
jaws deploy resources dev
JAWS updates the CloudFormation stack for your resources, creating a DynamoDB table (named users-dev) and updating the IAM role used by your AWS Lambda functions. The only remaining task is to inject this dependency into your AWS Lambda function, so it can find the location to write the data. You can use the recently released API Gateway stage variables for this, or you can use an environment variable in JAWS.
Environment variables are set for each stage and region, allowing you to run the same code across multiple regions and environments and configure the code at runtime. In this case, for example, you can have a different users table for dev and production.
Open the awsm.json file for the GET function and modify the envVars section at the top of the file:
"lambda": { "envVars": [ "TABLE_NAME", "JAWS_STAGE" ],
This indicates to the framework to include environment variables called TABLE_NAME and JAWS_STAGE in the deployment package for your AWS Lambda function. Your code can access these as environment variables of the runtime:
// Export For Lambda Handler
module.exports.run = function(event, context, cb) {
  return cb(null, action(event));
};

// Your Code
var action = function(event) {
  const tableName = process.env.TABLE_NAME + "-" + process.env.JAWS_STAGE;
  return {
    message: "User requested: " + event.username,
    table: tableName
  };
};
The final task is to set the environment variable for the stage and region. To do this, use the following JAWS command:
jaws env set dev <region> TABLE_NAME users
JAWS maintains a file in S3 that contains the environment variables. You can now deploy your project again using the jaws dash command. Before packaging your code, JAWS pulls down the environment variables from S3 and includes them in your distribution.
At runtime, your code builds the name for the DynamoDB table by combining TABLE_NAME and the JAWS_STAGE environment variable. The JAWS_STAGE environment variable is maintained by the framework. If you test your method now in API Gateway, you should see a result similar to this:
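The original post shows a console screenshot at this point; based on the code above, the response should look something like the following (the username reflects whatever you pass in the test request, and the table name combines TABLE_NAME with the dev stage):
{
  "message": "User requested: Nick",
  "table": "users-dev"
}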
It’s now a simple task to update the logic of your AWS Lambda functions to read and write from the DynamoDB table.
After you finish building your sample app, you can tidy up by deleting the two CloudFormation stacks that JAWS created. You also need to manually delete the userManagement-dev API from API Gateway.
Summary
In this post I have shown you how to get started with JAWS, a framework for building microservices using API Gateway and AWS Lambda. As you build your project you can focus on producing exciting functionality since all of the infrastructure you need is fully managed. You can start to build your microservice application, leaving the heavy lifting to AWS and the organization of your project to JAWS.
If you have any questions or suggestions, please leave a comment below.
AWS Lambda sessions at re:Invent 2015 – Wrap up
Vyom Nagrani, Sr. Product Manager, AWS Lambda
Announcements
AWS Lambda announced four new features at re:Invent 2015
- Support for Python functions
- Increased function duration from 60 seconds to 300 seconds
- Function Versioning & Aliasing
- Scheduled functions (Cron) – Console only
You can read the details for these announcements here.
Breakout sessions
We had listed the AWS Lambda breakout sessions at re:Invent’15 earlier. Here is an easy reference to all videos and slide decks:
- CMP301 – AWS Lambda and the Serverless Cloud [Video] [Slides]
- ARC308 – The Serverless Company Using AWS Lambda: Streamlining Architecture with AWS [Video] [Slides]
- MBL302 – Building Scalable, Serverless Mobile and Internet of Things Back Ends [Video] [Slides]
- BDT307 – Zero Infrastructure, Real-Time Data Collection, and Analytics [Video] [Slides]
- DEV203 – Using Amazon API Gateway with AWS Lambda to Build Secure and Scalable APIs [Video] [Slides]
- GAM401 – Build a Serverless Mobile Game with Amazon Cognito, Lambda, and DynamoDB [Video] [Slides]
- ARC201 – Microservices Architecture for Digital Platforms with AWS Lambda, Amazon CloudFront and Amazon DynamoDB [Video] [Slides]
- CMP403 – AWS Lambda: Simplifying Big Data Workloads [Video] [Slides]
- CMP407 – Lambda as Cron: Scheduling Invocations in AWS Lambda [Video] [Slides]
- DVO209 – JAWS: The Monstrously Scalable Serverless Framework – AWS Lambda, Amazon API Gateway, and More! [Video] [Slides]
Partners
We have a variety of partners that provide software that integrates with AWS Lambda. We announced a partnership with Algorithmia and Twilio as Code Library Partners for AWS Lambda. We also have new blueprints available for these partner integrations on the Lambda console. This adds to our existing integration with partners like CloudBees, Codeship, Zapier, and Splunk.
What’s next
AWS Lambda turns 1 today! AWS Lambda was announced on November 13th 2014 at re:Invent’14 as part of the Day 2 Keynote. A year later, we are as excited to be enabling new serverless architectures for building applications in the cloud. Our team is heads down working on the VPC functionality we pre-announced at re:Invent. We are only a few weeks away from enabling access to resources in a private VPC from your Lambda functions. We also plan to enable the API and CLI support for AWS Lambda scheduled functions.
Moving forward, we plan to add more language support for AWS Lambda, and expand availability to more AWS regions. We have a large number of other feature additions to Lambda on our roadmap for 2016, so stay tuned to the AWS Compute Blog and AWS Lambda Forums for updates!