

Chalice – 1.0.0 GA Release

by John Carlyle | in Python

We’re excited to announce the 1.0.0 GA (Generally Available) release of Chalice!

Chalice is an open source serverless microframework that lets you create and maintain application backends using AWS resources such as AWS Lambda and Amazon API Gateway.

Chalice 1.0.0 is now generally available and ready for production use. If you want to give it a try, you can install it from PyPI with pip as follows.

pip install --upgrade chalice

We follow Semantic Versioning, and are dedicated to maintaining backwards compatibility for each major version.

Getting started with Chalice

You can get started with Chalice and deploy a fully functional API in just a few minutes by following our getting started guide.

You can find the complete documentation at readthedocs.

Notable Chalice features

Chalice provides many features to help build serverless applications on AWS. Here we provide an overview of a select few.

Building an API backend

The core of Chalice is the ability to annotate a Python function with a simple decorator that lets Chalice deploy the function to AWS Lambda and link it to a route in Amazon API Gateway. The following is a fully functional Chalice application with a single linked route.

from chalice import Chalice

app = Chalice(app_name="helloworld")

@app.route("/")
def hello_world():
    return {"hello": "world"}

This application can be deployed easily by running the command chalice deploy. Chalice takes care of all the machinery around bundling up the application code, deploying that code to Lambda, setting up an API Gateway Rest API with all the routes specified in your code, and linking up the Rest API to the Lambda function. Chalice will print out something like the following while it deploys the application.

Initial creation of lambda function.
Creating role
Creating deployment package.
Initiating first time deployment...
Deploying to: dev
https://.execute-api.us-west-2.amazonaws.com/api/

Once the deployment is complete, you can send requests to the endpoint printed at the end.

$ curl https://.execute-api.us-west-2.amazonaws.com/api/
{"hello": "world"}

Dependency packaging

App packaging can be difficult in the Python world. Chalice will try to download or build all of your project requirements that are specified in a special requirements.txt file and add them to the code bundle that is uploaded to Lambda. Chalice will also try to build and deploy dependencies that have C extensions.
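
For example, a requirements.txt at the top level of your project might look like the following (the package names and versions here are purely illustrative); chalice deploy bundles these dependencies into the package it uploads to Lambda.

requests==2.18.4
pillow==4.3.0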

Pure Lambda functions

Pure Lambda functions enable you to deploy functions that are not tied to API Gateway. This is useful if you want to take advantage of the Chalice deployment and packaging features, but don’t need to invoke the function through a REST API.

@app.lambda_function()
def custom_lambda_function(event, context):
    # Anything you want here.
    return {}

Scheduled events

Scheduled events let you mark a handler function to be called on some time interval using Amazon CloudWatch Events. It’s easy to add a scheduled job to be run using a Chalice scheduled event handler, as follows:

from chalice import Rate

@app.schedule(Rate(12, unit=Rate.HOURS))
def handler(event):
    backup_logs()

Automatic policy generation

Automatic policy generation means that Chalice can scan your code for AWS calls, and generate an IAM policy with the minimal set of permissions your Lambda function needs to run. You can disable this feature and provide your own IAM policy if you need more control over exactly what permissions the Lambda function should have.
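
As a sketch of how you might opt out of automatic policy generation (the autogen_policy key shown here reflects the Chalice config format at the time of writing; verify it against the current docs), your .chalice/config.json could look like the following, with your handwritten policy placed in .chalice/policy-dev.json:

{
  "version": "2.0",
  "app_name": "helloworld",
  "stages": {
    "dev": {
      "autogen_policy": false
    }
  }
}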

Authorizers

Chalice can handle a lot of common authorization workflows for you by providing hooks into both IAM authorization and Amazon Cognito user pools. If you want more control over your authorization methods, you can use a custom authorizer. This lets you call a Lambda function that runs custom code from your Chalice application to determine whether a request is authorized.
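
Here is a minimal sketch of a built-in authorizer registered with the @app.authorizer() decorator (the route, principal, and token check are hypothetical; a real authorizer would validate a JWT or session token):

from chalice import Chalice, AuthResponse

app = Chalice(app_name='authdemo')

@app.authorizer()
def demo_authorizer(auth_request):
    # Hypothetical check: treat the literal token 'allow' as authorized.
    if auth_request.token == 'allow':
        return AuthResponse(routes=['/'], principal_id='demo-user')
    return AuthResponse(routes=[], principal_id='demo-user')

@app.route('/', authorizer=demo_authorizer)
def index():
    return {'hello': 'world'}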

Continuous integration

You can use Chalice to build a continuous integration pipeline. This works by creating an AWS CodeCommit repository for your project code to live in, and an AWS CodePipeline that watches the repository for changes and kicks off a build in AWS CodeBuild whenever there are changes. You can configure the AWS CodeBuild stage to run tests on your code, and automatically deploy code to production if the tests all pass. The pipeline resources are all created and managed with an AWS CloudFormation template that Chalice will generate for you.
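
At the time of writing, this is exposed through the chalice generate-pipeline command, which emits a CloudFormation template you then deploy with the AWS CLI. A rough sketch (the stack name is just an example; check the docs for the exact invocation in your version):

$ chalice generate-pipeline pipeline.json
$ aws cloudformation deploy --template-file pipeline.json \
    --stack-name chalice-pipeline-stack \
    --capabilities CAPABILITY_IAM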

Upgrade notes

If you’re already a Chalice user, there are a few changes to be aware of when upgrading to version 1.0.0.

Parameters to route handling functions are now keyword arguments instead of positional arguments. In the following code, captured parts of the URI will be assigned to the argument with a matching name, rather than in the order they appear.

@app.route('/user/{first_name}/{last_name}')
def name_builder(last_name, first_name):
    return '%s %s' % (first_name, last_name)

This means that code in which the variable names don’t match the URI will now be broken. For example, the following code will not work because the parameter name and the URI capture group don’t match.

@app.route('/user/{user_id}')
def get_user(user_number):
    return get_user(user_number)

Support for a plain policy.json file has been removed. The policy file must now be suffixed with the stage name, for example, policy-dev.json.

Let us know what you think

We would love to hear your feedback. Feel free to leave comments or suggestions on our GitHub page.

Chalice Version 1.0.0b1 Is Now Available

by James Saryerwinnie | in Python

We’ve just released Chalice version 1.0.0b1, the first preview release of Chalice 1.0.0. Since our last post that showcased the 0.9.0 release we’ve added numerous features we’re excited to share with you.

  • Support for built-in authorizers. In earlier versions of Chalice, you could integrate a custom authorizer with your Chalice application. However, you had to manage the AWS Lambda function separately from your Chalice app. You can now use Chalice to manage the Lambda function used for your custom authorizer. When you define a built-in authorizer in your Chalice application, the chalice deploy command will manage both the Lambda function used for your API handler and the Lambda function used for your authorizer. You register an authorizer function with Chalice by using the @app.authorizer() decorator. Our user guide walks through an example of using built-in authorizers in Chalice.
  • Support for binary Python packages. Whenever possible, Chalice now automatically tries to download binary packages. This allows you to use Python packages that require C extensions, provided they have a manylinux1 wheel available. As a result, Python packages such as numpy, psycopg2, and Pillow automatically work with Chalice. See 3rd Party Packages in our user guide for more information.
  • Support for scheduled events. Scheduled events has been one of the most requested features of Chalice. In version 1.0.0b1 of Chalice, you can now register a function to be called on a regular schedule. This is powered by Amazon CloudWatch Events. To create a scheduled event, you use the @app.schedule() decorator on any function in your application. Chalice takes care of creating the additional Lambda function, creating the necessary CloudWatch Events rules and targets, and adding the appropriate permissions to the Lambda function policy. See Event Sources in our user guide for more information on using scheduled events in Chalice.
  • Support for pure AWS Lambda functions. The @app.route(), @app.authorizer(), and @app.schedule() decorators not only create Lambda functions for you, they also offer a higher level of abstraction over a standard Lambda function. However, there are times when you just need a pure Lambda function with no additional levels of abstraction. Chalice now supports this with the @app.lambda_function() decorator. By using this decorator, you can still leverage all of Chalice’s deployment capabilities including automatic policy generation, deployment packaging for your requirements.txt file, stage support, etc. See pure Lambda functions in our user guide for more details.

If you’d like to try out this preview version of Chalice 1.0.0, you have two options when using pip:

  • You can specify the --pre flag: pip install --upgrade --pre chalice.
  • You can specify a version range that references this preview release: pip install "chalice>=1.0.0b1,<2.0.0" (quoting the specifier keeps your shell from interpreting the > and < characters). This also installs any future 1.0.0 preview releases of Chalice.

We’d love to hear any feedback you have about Chalice. Try out these new features today and let us know what you think. You can chat with us on our Gitter channel and file feature requests and issues on our GitHub repo. We look forward to hearing from you.

Chalice Version 0.9.0 is Now Available

by James Saryerwinnie | in Python

The latest preview version of Chalice, our microframework for Python serverless application development, is now available. This release includes a couple of commonly requested features: support for binary request and response bodies, and the ability to configure the memory size of your Lambda function.

To demonstrate these features, let’s walk through a sample Chalice app.

Thumbnail Generator

In this sample app, we create a view function that accepts a .png file as input, and generates a thumbnail of the image as output. This requires accepting binary content in the request body and returning binary content in the response body.

First, we install Chalice and create our initial project structure.

$ virtualenv /tmp/venv
$ source /tmp/venv/bin/activate
$ pip install chalice
$ chalice new-project thumbnail-generator
$ cd thumbnail-generator

Next, we write a view function that expects an image as the request body. It then produces a thumbnail of that image and sends it as a response. Here’s the code to do this.

from io import BytesIO
from chalice import Chalice, Response

from PIL import Image

app = Chalice(app_name='thumbnail-generator')


@app.route('/thumbnail', content_types=['image/png'], methods=['POST'])
def thumbnail():
    im = Image.open(BytesIO(app.current_request.raw_body))
    im.thumbnail((150, 150))
    out = BytesIO()
    im.save(out, 'PNG')
    return Response(out.getvalue(),
                    status_code=200,
                    headers={'Content-Type': 'image/png'})

To properly handle incoming binary content, we need two elements. First, we must declare the content type we’re expecting as input using the content_types argument in the route() call. By specifying content_types=['image/png'], we state that requests must set the Content-Type header to image/png. Second, to access the raw binary request body, we must use the app.current_request.raw_body attribute. This is the raw binary data that we will load with Pillow, the Python imaging library we’ll use to generate thumbnails.

Once we’ve used Pillow to generate a thumbnail, we must return the generated image as binary content. To do so, we use the chalice.Response class. To return binary content, we again need two elements.

First, the response body we pass to the Response object must be of type bytes(). Second, we must specify a Content-Type header that is configured as a binary content type. By default, Chalice automatically configures a common set of content types as binary, so in most cases you only have to set the appropriate content type. You can see the list of default binary content types in the Chalice API documentation (http://chalice.readthedocs.io/en/latest/api.html#default_binary_types). If a binary content type you want to use isn’t in this list by default, you can change the binary content type list by modifying the app.api.binary_types list.
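
For instance, a minimal sketch of adding an extra binary content type (the added type here is just an example):

app = Chalice(app_name='thumbnail-generator')
# Treat octet-stream bodies as binary, in addition to the defaults.
app.api.binary_types.append('application/octet-stream')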

Before deploying, let’s test this locally. First, we install the Pillow library:

$ pip install Pillow

Next, we run chalice local and test our app using a local .png file we saved in /tmp.

$ chalice local
Serving on localhost:8000

$  curl -H 'Accept: image/png' -H 'Content-Type: image/png' \
    --data-binary @/tmp/lion.png \
    http://localhost:8000/thumbnail > /tmp/thumbnail.png

You can open /tmp/thumbnail.png and verify that a thumbnail was created.

In this example, we must set the Accept header to indicate that we want binary content of type image/png returned in the response.

Deploying Our App

Now that we tested the app locally, let’s deploy it to API Gateway and Lambda.

We use a couple of additional Chalice features to deploy our application. First, we configure the amount of memory to allocate to our Lambda function. This feature was just released in Chalice version 0.9.0. We want to increase the memory size of our Lambda function to 1GB. To do this, we update our .chalice/config.json file with the following lambda_memory_size entry:

$ cat .chalice/config.json
{
  "lambda_memory_size": 1024,
  "stages": {
    "dev": {
      "api_gateway_stage": "dev"
    }
  },
  "version": "2.0",
  "app_name": "thumbnail-generator"
}

We increase the amount of memory so that we have more CPU power available for our thumbnail generator. In the Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory.

Next, we need to install Pillow. Pillow contains C extensions that must be compatible with the platform where you run the Lambda function.

Create a vendor/ directory at the root of your application, as follows.

$ mkdir vendor

Next, download the Pillow wheel file for Linux:

$ cd vendor
$ pip download --platform manylinux1_x86_64 \
    --python-version 27 --implementation cp \
    --abi cp27mu --only-binary=:all: Pillow
$ unzip Pillow*
$ rm *.whl
$ cd ..

If you get an error about being unable to find the olefile package, you can safely ignore it. You can read more about how the vendor/ directory works in our documentation.
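
After unpacking the wheel, the project should look roughly like the following (Pillow ships its code in a top-level PIL/ package inside the wheel):

thumbnail-generator/
├── app.py
├── requirements.txt
└── vendor/
    └── PIL/
        ├── Image.py
        └── ...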

Now we can deploy the app using the chalice deploy command.

$ chalice deploy
Initial creation of lambda function.
Creating role
Creating deployment package.
Initiating first time deployment...
Deploying to: dev
https://abcd.execute-api.us-west-2.amazonaws.com/dev/

Let’s test it. We can use the same curl command we used previously, but point it at our newly created remote endpoint.

$ curl -H 'Accept: image/png' -H 'Content-Type: image/png' \
    --data-binary @/tmp/lion.png \
    https://y5d036o2mf.execute-api.us-west-2.amazonaws.com/dev/thumbnail \
    > /tmp/thumbnail2.png

You can open /tmp/thumbnail2.png and verify that Chalice correctly generated a thumbnail.

Try out the latest version of Chalice today and let us know what you think. You can chat with us on our Gitter channel and file feature requests on our GitHub repo. We look forward to your feedback and suggestions.

Build and Deploy a Serverless REST API in Minutes Using Chalice

by Leah Rivers | in Python

Chalice is a serverless microframework that makes it simple for you to use AWS Lambda and Amazon API Gateway to build serverless apps. We’ve improved Chalice based on community feedback from GitHub, and we’re eager for you to take our latest version for a spin. Hopefully, you’ll find Chalice a fast and effective way to build serverless apps.

To help you get started with Chalice, here’s a quick five-step review:

   Step 1: Install Chalice
   Step 2: Configure credentials
   Step 3: Create a project
   Step 4: Deploy your API
   Step 5: You’re done launching a simple API. Consider adding something to your app!

Let’s dig in.

Step 1: Install Chalice.
To install Chalice, you need Python 2.7 or 3.6, the versions Lambda supports. We recommend using a virtual environment, as follows.
(If you haven’t installed Chalice before, you can do that with pip install chalice.)

 $ pip install virtualenv
 $ virtualenv ~/.virtualenvs/chalice-demo
 $ source ~/.virtualenvs/chalice-demo/bin/activate

Step 2: Add credentials if you haven’t previously configured boto3 or the AWS CLI.
(If you’re already running boto3 or the AWS CLI, you’re all good. Move on to Step 3.)

If this is your first time configuring credentials for AWS, use the following.

 $ mkdir ~/.aws
 $ cat >> ~/.aws/config
 [default]
 aws_access_key_id=YOUR_ACCESS_KEY_HERE
 aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
 region=YOUR_REGION (such as us-west-2, us-west-1, etc)

For more information on all the supported methods for configuring credentials, see the boto3 docs.
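
Alternatively, boto3 and the AWS CLI also pick up credentials from environment variables, so the following works just as well (the values here are placeholders):

 $ export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_HERE
 $ export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
 $ export AWS_DEFAULT_REGION=us-west-2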

Step 3: Create a project using the chalice command line.
Use the new-project command to create a sample app that defines a single view.

 $ chalice new-project helloworld
 $ cd helloworld

Take a moment to check out what you’ve created. In app.py, you’ve created a sample app that defines a single view, /, that when called will return the JSON body {"hello": "world"}.
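
The generated app.py looks roughly like this (the function name may vary slightly between Chalice versions):

from chalice import Chalice

app = Chalice(app_name='helloworld')

@app.route('/')
def index():
    return {'hello': 'world'}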

Step 4: Deploy your App.
Alright, double-check that you’re still in your project directory – you’re ready to deploy!
From the command line, run chalice deploy.

 $ chalice deploy
 ...
 Initiating first time deployment...
 https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/

You now have an API up and running using API Gateway and Lambda.

 $ curl https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/
 {"hello": "world"}

Step 5: Add something to your app!
From this point, there’s a bunch of stuff you can do, including adding URL parameters, adding routing, or customizing the HTTP response. Find tutorials and examples here.
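
For example, adding a route with a URL parameter is just another decorated function (a minimal sketch):

@app.route('/hello/{name}')
def hello_name(name):
    return {'hello': name}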

Have fun!

Using Python and Amazon SQS FIFO Queues to Preserve Message Sequencing

by Tara Van Unen | in Python

Thanks to Alexandre Pinhel, Solutions Architect from our team for writing this post!

Amazon SQS is a managed message queuing service that makes it simple to decouple application components. We recently announced an entirely new queue type, SQS FIFO (first-in, first-out) queues with exactly-once processing and deduplication. SQS FIFO queues are now available in the US East (Ohio) and US West (Oregon) regions, with more regions to follow. This new type of queue lets you use Amazon SQS for systems that depend on receiving messages in exact order, and exactly once, such as financial services and e-commerce applications. For example, FIFO queues help ensure mobile banking transactions are processed in the correct sequence, and that inventory updates for online retail sites are processed in the right order. In this post, we show how to use FIFO queues to preserve message sequencing with Python.

FIFO queues complement our existing SQS standard queues, which offer higher throughput, best-effort ordering, and at-least-once delivery. The following diagram compares the features of standard queues vs. FIFO queues. The same API functions apply to both types of queues.

The following use case provides an example of how you can now use SQS FIFO queues to exchange sequence-sensitive information. For more information about developing applications using Amazon SQS, see the Amazon SQS Developer Guide.

SQS FIFO Queues Example

In the capital markets industry, some of the most common patterns for exchanging messages with partners and customers are based on messaging technologies with two types of scenarios:

  1. Communication channels between two messaging managers (one sender channel and one receiver channel). Each messaging manager hosts the local queue and has an alias to the remote queue hosted on the other side (an MQ manager). The messages sent from an MQ manager are not stored locally. The receiving MQ manager stores the messages for the client applications of the named queues.
  2. A single messaging manager that hosts all the queues and that has the associated responsibility for message exchange and backup.

You can use Amazon SQS to decouple the components of an application so that these components can run independently, as expected in a messaging use case. The following diagram shows a sample architecture using an SQS queue with processing servers.


To preserve the order of messages, we use FIFO queues. These queues help ensure that trades are received in the correct order, and a book event is received before an update event or a cancel event.

Important: The name of a FIFO queue must end with the .fifo suffix.

The following diagram shows a financial use case, where Amazon SQS FIFO queues are used with different processing servers based on the type of messages being managed.

 

 

In FIFO queues, Amazon SQS also provides content-based deduplication. Content-based deduplication allows SQS to distinguish the contents of one message from the contents of another message using the message body. This helps eliminate duplicates in referential systems such as those that manage pricing.

In the following example, we simulate the two parts of a capital market exchange. In the first part, we simulate the application sending the trade status and sending messages to the queue named Trade Status. (In Amazon SQS, the queue will be named TradeStatus.fifo.) The application regularly sends trade status received during the trade lifecycle in the queue (for example, trade received, trade checked, trade confirmed, and so on). In the second part, we simulate a client application that gets the trade status to update an internal website or to send status update notifications to other tools. The script stops after the message is read.

To accomplish this, you can use the following two Python code examples. This example is using boto3, the AWS SDK for Python.

This first script sends an XML message to a queue named TradeStatus.fifo, and the second script receives the message from the same queue. Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages programmatically using the Amazon SQS API. You can manage messages larger than 256 KB by using the SQS Extended Client Library for Java, which uses Amazon S3 to store larger payloads.

For queue creation, please see the Amazon SQS Developer guide.
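
Alternatively, the queue can be created programmatically; the following is a minimal boto3 sketch, where the FifoQueue and ContentBasedDeduplication attributes enable the FIFO ordering and deduplication behavior described above.

import boto3

sqs = boto3.client('sqs')

# Create a FIFO queue with content-based deduplication enabled.
response = sqs.create_queue(
    QueueName='TradeStatus.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true'
    }
)
print(response['QueueUrl'])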

Name: TradeStatus.fifo

URL: https://sqs.us-west-2.amazonaws.com/12345678/TradeStatus.fifo

The scripts below are written in Python 2.

import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Get the queue
queue = sqs.get_queue_by_name(QueueName='TradeStatus.fifo')

try:
    # raw_input exists in Python 2; fall back to input() under Python 3
    userInput = raw_input("Please enter file name: ")
except NameError:
    userInput = input("Please enter file name: ")

with open(userInput, 'r') as myfile:
    data=myfile.read()

# MessageGroupId is required for FIFO queues; no MessageDeduplicationId is
# needed here because the queue uses content-based deduplication.
response = queue.send_message(
    MessageBody=data,
    MessageGroupId='messageGroup1'
)

# The response is NOT a resource, but gives you a message ID and MD5
print(response.get('MessageId'))
print(response.get('MD5OfMessageBody'))

The following Python code receives the message from the TradeStatus.fifo queue and deletes the message when it’s received. Afterward, the message is no longer available.

import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Get the queue
queue = sqs.get_queue_by_name(QueueName='TradeStatus.fifo')

# Process messages by printing out body
for message in queue.receive_messages():
    # Print out the body of the message
    print('Hello, {0}'.format(message.body))

    # Let the queue know that the message is processed
    message.delete()

Note: With the boto3 resource API, you need only the name of the queue; get_queue_by_name resolves the queue URL for you.

More Resources

In this post, we showed how you can use Amazon SQS FIFO queues to exchange data between distributed systems that depend on receiving messages in exact order, and exactly once. You can get started with SQS FIFO queues using just three simple commands. For more information, see the Amazon SQS Developer Guide.

Chalice Version 0.6.0 is Now Available

by James Saryerwinnie | in Python

The latest preview version of Chalice, our microframework for Python serverless application development, now includes a couple of commonly requested features:

  • Customizing the HTTP response. A new Response class, chalice.Response, enables you to customize the HTTP response by specifying the status code, body, and a mapping of HTTP headers to return. The tutorial in the chalice documentation shows how to use this new functionality to return a non-JSON response to the user.
  • Vendoring binary packages. You can create a top-level vendor/ directory in your application source directory. This vendor directory is automatically included as part of the AWS Lambda deployment package when you deploy your application. You can use this feature for any private Python packages that can’t be specified in your requirements.txt file, as well as any binary content that includes Python packages with C extensions. For more information, see the packaging docs.

Let’s look at the first feature in more detail.

Customizing the HTTP Response

The following example shows a view function that returns a plain text response to the user.

from chalice import Chalice, Response

app = Chalice(app_name='helloworld')

@app.route('/')
def hello_world():
    return Response(
        status_code=200,
        body='hello world',
        headers={'Content-Type': 'text/plain'})

The existing default behavior of returning a JSON response is still preserved. To return a JSON response, you can just return the equivalent Python value directly from your view function.

from chalice import Chalice, Response

app = Chalice(app_name='helloworld')

@app.route('/')
def hello_world():
    return {'hello': 'world'}

You can also use the chalice.Response class to return HTTP redirects to users. In this view function, we accept a URL in the request body and generate a redirect to that URL:

from chalice import Chalice, Response

app = Chalice(app_name='redirect')

@app.route('/redirect', content_types=['text/plain'])
def hello_world():
    url = app.current_request.raw_body.strip()
    return Response(
        status_code=301,
        body='',
        headers={'Location': url})

See the 0.6.0 upgrade notes for more information.

Try out the latest version of Chalice today and let us know what you think. You can chat with us on our gitter channel and file feature requests on our github repo. We look forward to your feedback and suggestions.

Chalice 0.4 & 0.5 Deliver Local Testing and Multifile Application Capabilities for Python Serverless Application Development

by Leah Rivers | in Python

We’re continuing to add features to Chalice, a preview release of our microframework for Python serverless application development using AWS Lambda and Amazon API Gateway. Chalice is designed to make it simple and fast for Python developers to create REST APIs built in a serverless framework.

In our latest releases, we’ve added initial versions for a couple of the most commonly requested features:

  1. Save time by testing APIs locally before deploying to Amazon API Gateway. In this first version of local testing support for Chalice, we’ve delivered a local HTTP server you can use to test and debug a local version of your Python app. This enables you to avoid the work of deploying to API Gateway before you validate APIs.
  2. Build more complex applications with initial support for multifile Python apps. Chalice 0.4 enables Python developers to maintain their preferred best practices and coding styles for applications that would not normally be contained within a single file, and to include files of other types as part of the deployment package. This improves on our earlier Chalice releases, where deployment packages were limited to the app.py file.

We’ve also improved existing capabilities that make it easier to build and manage your serverless apps.

  1. More configurable logging with improved readability. We’ve added the ability to configure logging for the app object, where previously logging was configured on the root logger. This update enables you to configure logging levels and log format, and eliminates some duplicate log entries seen in previous versions of Chalice.
  2. Improved ability to retrieve your app’s Amazon API Gateway URL. We’ve included a chalice url command that enables you to programmatically retrieve the URL of your API; in previous versions, this was a manual process (see the example below).
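
For example, after deploying an app you could retrieve its endpoint like this (the output shown is illustrative):

$ chalice url
https://qxea58oupc.execute-api.us-west-2.amazonaws.com/dev/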

Our releases continue to be focused on feedback and requests from the developer community. Want to learn more? Here are a few suggestions.

Try building a serverless application with Chalice. Chalice is available on PyPI (pip install chalice) and GitHub (https://github.com/awslabs/chalice – check out the README for tutorials). It’s published as a preview project and is not yet recommended for production APIs. You can also see our original Chalice blog post where we introduced a preview release of Chalice.

Stay tuned for new capabilities to be released. You can check out the working list of features for our upcoming release here: https://github.com/awslabs/chalice/blob/master/CHANGELOG.rst#next-release-tbd

Let us know what you think. We look forward to your feedback and suggestions. Feel free to leave comments here or come talk to us on GitHub.

Planning to attend AWS re:Invent? Come check out our re:Invent session focused on Chalice, where we will present new features and go through demos, such as how to deploy a REST API in less than 30 seconds. You can add this session to your re:Invent schedule here, or sign up for the re:Invent live stream.

Preview the Python Serverless Microframework for AWS

by Peter Moon | in Python

Serverless computing is one of the most talked-about subjects among AWS customers. The AWS serverless offerings, AWS Lambda and Amazon API Gateway, make it possible for developers to create and run API applications with built-in, virtually unlimited scalability without managing any servers. Today the AWS Developer Tools team is excited to announce the preview of the Python Serverless Microframework for AWS.

This three-minute video shows how quickly you can start building serverless APIs using the framework and its command-line tool, chalice.

In just 45 seconds, I created a new Hello World project, inspected its code file (app.py), deployed it to a public API endpoint, and using curl, made a successful HTTP GET request to the endpoint. Because our goal is to minimize the time it takes to get started, we hope you’ll enjoy the simple and fast experience offered by the new microframework.

In the next minute of the video, I added a new API feature to the app.py file, redeployed the API, and then verified that it works as expected.

If you’ve noticed the programming model feels familiar, that’s because it’s based on the one used by Flask, a popular Python microframework praised by the Python community for its simplicity and ease of use. We believe adopting a similarly succinct and intuitive style will help Python developers build serverless APIs as quickly as possible.

In the last part of the video, you’ll see how the framework makes it easy to consume AWS Lambda’s built-in logging feature available through Amazon CloudWatch Logs. Using the chalice logs and chalice deploy commands together, you can iterate quickly over test-diagnose-fix-deploy cycles in a live environment. The chalice deploy command can optionally take a deployment stage name, and you can deploy different versions of your code to different stages. Using this feature, you can leave your production stage intact while modifying your development stage. Then you can deploy to the production stage when the changes are ready to go out.
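
A rough sketch of that cycle (the stage name is just an example, and the exact argument form may differ across Chalice versions):

$ chalice deploy prod    # deploy the current code to a separate "prod" stage
$ chalice logs           # pull recent Lambda log output from CloudWatch Logs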

The Python Serverless Microframework for AWS is available on PyPI (pip install chalice) and GitHub (https://github.com/awslabs/chalice). It is published as a preview project and is not yet recommended for production APIs.

We look forward to your feedback and suggestions. Feel free to leave comments here or come talk to us on GitHub!

How to Analyze AWS Config Snapshots with ElasticSearch and Kibana

by Vladimir Budilov | in Python

Introduction
In this blog post, I will walk you through a turn-key solution that includes one of our most recently released services, AWS Config. This solution shows how to automate the ingestion of your AWS Config snapshots into the ElasticSearch/Logstash/Kibana (ELK) stack for searching and mapping your AWS environments. Using this functionality, you can do free-form searches, such as “How many EC2 instances are tagged PROD?” or “How many EC2 instances are currently connected to this master security group?”

Prerequisites
In this post, I assume that you have an ELK stack up and running. (Although the “L” isn’t really required, the ELK acronym has stuck, so I’ll continue to use it.)

Here are some ways to get the ELK stack up and running:

  1. You can use our Amazon ElasticSearch Service, which provides the two main components you’ll be using: ElasticSearch and Kibana.
  2. Take a look at this excellent post by Logz.io. It provides step-by-step instructions for installing the ELK stack on an EC2 instance.
  3. You can install Docker locally or create an Amazon EC2 Container Service (Amazon ECS) cluster and then install the ELK Docker image. Follow the instructions here.

You can download the Python app referenced in this post from https://github.com/awslabs/aws-config-to-elasticsearch

Why AWS Config?
AWS Config provides a detailed view of the configuration of your AWS resources and their relationships to other resources. For example, you can find out which resources are set up in your default VPC or which Availability Zone has the most EC2 instances. AWS Config also captures the history of configuration changes made to these resources and allows you to look them up through an API. The service allows you to create one-time snapshots or turn on configuration recording, which provides change snapshots and notifications.

Why ELK?
ElasticSearch and Kibana are some of the most popular free, open-source solutions out there to analyze and visualize data. ElasticSearch, which is built on the Lucene search engine, allows for schema-less data ingestion and querying. It provides out-of-the-box data analysis queries and filters, such as data aggregates and term counts. Kibana is the visualization and searching UI that opens up the ElasticSearch data to the regular user.

The Solution
I’ve created a Python app that automates the process of getting AWS Config data from your AWS account to ELK. In short, it asks AWS Config to take a snapshot in each region in which you have the service enabled; waits until the snapshot is uploaded to the configured Amazon S3 bucket; copies the snapshot from the S3 bucket; parses the snapshot (which is just a huge JSON blob); and ingests the JSON array elements into ELK.
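
The snapshot request itself boils down to a single AWS Config API call per region; a simplified sketch of that step, assuming the default delivery channel name, looks like this:

import boto3

config = boto3.client('config', region_name='us-east-1')

# Ask AWS Config to deliver a one-time snapshot to the configured S3 bucket.
response = config.deliver_config_snapshot(deliveryChannelName='default')
print(response['configSnapshotId'])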

Running the Script
You have a couple of options when you run the app. You can specify the region that you want to export and load by including -r and the region name as shown:

./esingest.py -d localhost:9200 -r us-east-1

Or you can simply include the destination (which is required). The app will loop over all of the regions. The following output is an example of what you would see if you don’t specify the region:

./esingest.py -d localhost:9200

figure-1-app-output

Working with Kibana
Now that you have ingested the data into ElasticSearch, you need to use Kibana to index the data. The first time you open Kibana, the Settings page will be displayed. Use this page to configure the searchable index. For simplicity’s sake, under Index name or pattern, type *, and for Time-field name, choose snapshotTimeIso. You can use any date field from the drop-down list, such as resourceCreationTime:

figure-2-kibana-configuration

This will index all of your ElasticSearch indices and use the snapshotTimeIso as the time-series field. You will have duplicates if you run esingest without deleting the current ELK indices, but you will be able to include the snapshot time in your search queries to get time-based results.

Now that we have indexed the data in Kibana, let’s do some searching. Choose the Discover tab and change the time filter by clicking the text in the upper-right corner:

figure-3-kibana-discover

For now, choose Last 5 years, and then minimize the Time Filter section.

For our first search, type resourceType: "aws::ec2::instance" in the text field. You will see all of your EC2 instances in the search results. The time graph shows when they were added to ElasticSearch. Because I ran esingest just once, there’s only one Config snapshot loaded, and only one timestamp will show up.

figure-4-kibana-search-instances

There are many other search queries you can use. Kibana supports the Lucene query syntax, so see this tutorial for examples and ideas.

As you can see, the time filter shows when the data was ingested into ElasticSearch. You might have duplicates here, so you can narrow the results by specifying the instance ID and the exact snapshot time (input: resourceType: "*Instance*" AND "sg-a6f641c0*").

figure-5-search-instances-and-securitygroup

Kibana Visualize Functionality
In addition to search functionality, Kibana provides a way to visualize search results and create search slices. Let’s look at some real-world use cases that I’ve encountered while talking to customers. Click the Visualize tab, choose Pie Chart, and start exploring!

What’s my EC2 distribution between Availability Zones?
Input: resourceType: "aws::ec2::Instance"

figure-6-kibana-visuralize-instances

Let’s create a sub-aggregation and add the tags that are assigned to those EC2 instances:

Input: resourceType: "aws::ec2::Instance"

figure-7-kibana-visualize-instances

Which AMIs were used to create your EC2 instances, and when were they created?
Input: *

figure-8-kibana-visualize-instances-and-regions

How many instances use a security group that you have set up?
Input: "sg-a6f641c0*"

figure-9-kibana-visualize-instances-and-sg

Conclusion
AWS Config is a useful tool for understanding what’s running in your AWS account. The combination of ELK and AWS Config offers AWS admins a lot of advantages that are worth exploring.

Serverless Service Discovery: Part 4: Registrar

by Magnus Bjorkman | in Python

In this, the last part of our serverless service discovery series, we will show how to register and look up a new service. To do that, we will add one more component: an AWS Lambda registrar agent that registers the service with the discovery service and monitors its health.

AWS Lambda Registrar Agent

In Docker, it is common to have container agents that add functionality to your Docker deployment. We will borrow from this concept and build a Lambda registrar agent that will manage the registration and monitoring of a service.


import json
import logging

import boto3
import botocore.exceptions

# signed_post (used in lambda_handler below) is the SigV4-signed HTTP helper
# built earlier in this series.

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def component_status(lambda_functions, rest_api_id):
    """Checking component status of REST API."""
    any_existing = False
    any_gone = False
    client = boto3.client('lambda')
    for lambda_function in lambda_functions:
        try:
            logger.info("checking Lambda: %s" % (lambda_function,))
            client.get_function_configuration(
                            FunctionName=lambda_function)
            any_existing = True
        except botocore.exceptions.ClientError:
            any_gone = True

    client = boto3.client('apigateway')
    try:
        logger.info("checking Rest API: %s" % (rest_api_id,))
        client.get_rest_api(restApiId=rest_api_id)
        any_existing = True
    except botocore.exceptions.ClientError:
        any_gone = True

    if (not any_existing):
        return "service_removed"
    elif (any_gone):
        return "unhealthy"
    else:
        return "healthy"


def lambda_handler(event, context):
    """Lambda hander for agent service registration."""
    with open('tmp/service_properties.json') as json_data:
        service_properties = json.load(json_data)

    logger.info("service_name: %s" % (service_properties['service_name'],))
    logger.info("service_version: %s" % (service_properties['service_version'],))

    status = component_status(service_properties['lambda_functions'],
                              service_properties['rest_api_id'])

    register_request = {
            "service_name": service_properties['service_name'],
            "service_version": service_properties['service_version'],
            "endpoint_url": service_properties['endpoint_url'],
            "ttl": "300"
            }
    if (status == 'healthy'):
        logger.info('registering healthy service')

        register_request["status"] = 'healthy'

        response = signed_post(
          service_properties['discovery_service_endpoint']+"/catalog/register",
          "us-east-1",
          "execute-api",
          json.dumps(register_request))


    elif (status == 'unhealthy'):
        logger.info('registering unhealthy service')

        register_request["status"] = 'unhealthy'

        response = signed_post(
          service_properties['discovery_service_endpoint']+"/catalog/register",
          "us-east-1",
          "execute-api",
          json.dumps(register_request))

    else:
        logger.info('removing service and registrar')

        deregister_request = {
            "service_name": service_properties['service_name'],
            "service_version": service_properties['service_version']
            }

        response = signed_post(
            service_properties['discovery_service_endpoint'] +
            "/catalog/deregister",
            "us-east-1",
            "execute-api",
            json.dumps(deregister_request))

        client = boto3.client('lambda')
        client.delete_function(
                 FunctionName=service_properties['registrar_name'])

The Lambda registrar agent is packaged with a property file that defines the Lambda functions and Amazon API Gateway deployment that are part of the service. The registrar agent uses the component_status function to inspect the state of those parts and takes action, depending on what it discovers:

  • If all of the parts are there, the service is considered healthy. The register function is called with the service information and a healthy status.
  • If only some of the parts are there, the service is considered unhealthy. The register function is called with the service information and an unhealthy status.
  • If none of the parts are there, the service is considered to have been removed. The deregister function is called, and the Lambda agent will delete itself because it is no longer needed.

Subsequent register function calls will overwrite the information, so as the health status of our services changes, we can call the function repeatedly. In fact, when we deploy the agent with our Hello World service, we will show how to put the Lambda registrar agent on a five-minute schedule to continuously monitor our service.

Deploy the Hello World Service with the Lambda Agent

We will first implement our simple Hello World Lambda function:


def lambda_handler(api_parameters, context):
    """Hello World Lambda function."""
    return {
            "message": "Hello "+api_parameters['name']
            }

We will create a Swagger file for the service:


{
  "swagger": "2.0",
  "info": {
    "title": "helloworld_service",
    "version": "1.0.0"
  },
  "basePath": "/v1",
  "schemes": ["https"],
  "consumes": ["application/json"],
  "produces": ["application/json"],
  "paths": {
    "/helloworld/{name}": {
      "parameters": [{
        "name": "name",
        "in": "path",
        "description": "The name to say hello to.",
        "required": true,
        "type": "string"
      }],
      "get": {
        "responses": {
          "200": {
            "description": "Hello World message"
          }
        },
        "x-amazon-apigateway-integration": {
          "type": "aws",
          "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$helloworld_serviceARN$/invocations",
          "httpMethod": "POST",
          "requestTemplates": {
            "application/json": "{\"name\": \"$input.params('name')\"}"
          },
          "responses": {
            "default": {
              "statusCode": "200",
              "schema": {
                "$ref": "#/definitions/HelloWorldModel"
              }
            }
          }
        }
      }
    }
  },
  "definitions": {
    "HelloWorldModel": {
      "type": "object",
      "properties": {
        "message": {
          "type": "string"
        }
      },
      "required": ["message"]
    }
  }
}

Now we are ready to pull everything we have done in this blog series together: we will deploy this service with a Lambda registrar agent that registers and deregisters it with our serverless discovery service. First, we need to add the requests Python module to the directory we are deploying from, because our Lambda registrar agent depends on it.


pip install requests -t /path/to/project-dir

Second, we deploy the Hello World service and the Lambda registrar agent:


ACCOUNT_NUMBER = "<your AWS account number>"

######################################
# Deploy Hello World Service
######################################
create_deployment_package("/tmp/helloworld.zip", ["helloworld_service.py"])
hello_world_arn = create_lambda_function(
                       "/tmp/helloworld.zip",
                       "helloworld_service",
                       "arn:aws:iam::"+ACCOUNT_NUMBER+":role/lambda_s3",
                       "helloworld_service.lambda_handler",
                       "Hello World service.",
                       ACCOUNT_NUMBER)
replace_instances_in_file("swagger.json",
                          "/tmp/swagger_with_arn.json",
                          "$helloworld_serviceARN$",
                          hello_world_arn)
api_id = create_api("/tmp/swagger_with_arn.json")
rest_api_id, stage, endpoint_url = deploy_api(api_id, "/tmp/swagger_with_arn.json", "dev")

######################################
# Deploy Lambda Registrar Agent
######################################
with open('/tmp/service_properties.json',
          'w') as outfile:
    json.dump(
      {
       "lambda_functions": ["helloworld_service"],
       "rest_api_id": rest_api_id,
       "stage": stage,
       "endpoint_url": endpoint_url,
       "service_name": "helloworld",
       "service_version": "1.0",
       "discovery_service_endpoint":
       "https://1vvw0qvh4i.execute-api.us-east-1.amazonaws.com/dev",
       "registrar_name": "registrar_"+rest_api_id
       }, outfile)

create_deployment_package("/tmp/helloworld_registrar.zip",
                          ["registrar.py", "/tmp/service_properties.json",
                           "requests"])
registrar_arn = create_lambda_function(
                       "/tmp/helloworld_registrar.zip",
                       "registrar_"+rest_api_id,
                       "arn:aws:iam::"+ACCOUNT_NUMBER+":role/lambda_s3",
                       "registrar.lambda_handler",
                       "Registrar for Hello World service.",
                       ACCOUNT_NUMBER)

After we have deployed the Hello World service, we create a JSON file (service_properties.json) with some of the outputs from that deployment. This JSON file is packaged with the Lambda registrar agent.

Both the service and the agent are now deployed, but nothing is triggering the agent to execute. We will use the following to create a five-minute monitoring schedule using CloudWatch events:


client = boto3.client('events')
response = client.put_rule(
    Name="registrar_"+rest_api_id,
    ScheduleExpression='rate(5 minutes)',
    State='ENABLED'
)
rule_arn = response['RuleArn']

lambda_client = boto3.client('lambda')
response = lambda_client.add_permission(
        FunctionName=registrar_arn,
        StatementId="registrar_"+rest_api_id,
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn
    )

response = client.put_targets(
    Rule="registrar_"+rest_api_id,
    Targets=[
        {
            'Id': "registrar_"+rest_api_id,
            'Arn': registrar_arn
        },
    ]
)

Now we have deployed a service that is being continuously updated in the discovery service. We can use it like this:


############################
# 1. Do service lookup
############################
request_url="https://yourrestapiid.execute-api.us-east-1.amazonaws.com/"\
            "dev/catalog/helloworld/1.0"
response = requests.get(request_url)
json_response = json.loads(response.content)


############################
# 2. Use the service
############################
request_url=("%s/helloworld/Magnus" % (json_response['endpoint_url'],))

response = requests.get(request_url)
json_response = json.loads(response.content)
logger.info("Message: %s" % (json_response['message'],))

We should get the following output:


INFO:root:Message: Hello Magnus

Summary

We have implemented a fairly simple but functional discovery service without provisioning any servers or containers. We can build on this by adding more advanced monitoring, circuit breakers, caching, additional protocols for discovery, etc. By providing a stable host name for our discovery service (instead of the one generated by API Gateway), we can make that a central part of our microservices architecture.

We showed how to use Amazon API Gateway and AWS Lambda to build a discovery service using Python, but the approach is general. It should work for other services you want to build. The examples provided for creating and updating the services can be enhanced and integrated into any CI/CD platforms to create a fully automated deployment pipeline.