AWS Developer Blog

AWS Toolkit for Eclipse: Serverless Applications

I’m glad to announce that the AWS Lambda plugin in the AWS Toolkit for Eclipse now supports serverless application development for Java. A serverless application (also called a Lambda-based application) is composed of functions triggered by events. In this post, I provide two examples that show you how to use the Eclipse IDE to create and deploy a serverless application quickly.

Install the AWS Toolkit for Eclipse

To install the latest AWS Toolkit for Eclipse, go to this page and follow the instructions at the top right. To use this feature, you need to install the AWS Toolkit for Eclipse Core, the AWS CloudFormation Tool, and the AWS Lambda Plugin. The following figure shows where you can select these three components in the installation wizard. To complete the installation, review and accept the license, and then restart Eclipse.

InstallServerless

Create a Serverless Project

To create a serverless project, click the AWS icon in the toolbar and choose New AWS Serverless Project…. The following wizard opens. You can also create a new serverless project in the usual way: choose File, New, Other, AWS, and then choose AWS Serverless Java Project. As you can see in the following figure, the toolkit provides two blueprints for you to start with: article and hello-world.

  • article – This is a simple serverless application that helps manage articles. It consists of two Lambda functions triggered by API events: GetArticle, which retrieves articles for the front end, and PutArticle, which stores articles in the backend. The blueprint also uses an Amazon S3 bucket to store article content and an Amazon DynamoDB table to store article metadata.
  • hello-world – This blueprint project includes only a simple standalone Lambda function, HelloWorld, which is not triggered by any event and not bound to any resource. It takes in a String and returns it with the prefix “Hello”; if an empty String is provided, it returns “Hello World”. (See the sketch after this list.)
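
For reference, a handler with the behavior described for the hello-world blueprint looks roughly like the following (a minimal sketch; the package name and exact shape of the generated code may differ):

package com.serverless.demo.function;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Returns the input prefixed with "Hello", or "Hello World" when the input is empty.
public class HelloWorld implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String input, Context context) {
        if (input == null || input.isEmpty()) {
            return "Hello World";
        }
        return "Hello " + input;
    }
}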

NewServerlessProject

You can also create a serverless project from a serverless template by choosing Select a Serverless template file and then importing the template file. This template file is a simplified version of the SAM (AWS Serverless Application Model) file that a serverless application uses to define its resource stack. The following snippet is from the article blueprint’s template and defines the Lambda function GetArticle. Unlike a full SAM file, you don’t need to provide the CodeUri and Runtime properties, and you only need to provide the class name for the Handler property instead of the fully qualified class name. When you import a template file, the AWS Toolkit for Eclipse generates all the Lambda function hooks and the Lambda proxy integration models used as the API event input and output for the Lambda functions.

{
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "com.serverless.demo.GetArticle",
    "Runtime": "Java",
    "CodeUri": "s3://serverless-bucket/get-article.zip",
    "Policies": [
      "AmazonDynamoDBReadOnlyAccess",
      "AmazonS3ReadOnlyAccess"
    ],
    ...
}

The following figure shows the startup view after you create the article blueprint project. As you can see from the project structure, the AWS Toolkit for Eclipse puts all the Lambda functions defined in the template into a function package, and all the required models into a model package. You can check the serverless.template file for a closer look at this project. As mentioned earlier, this is a simplified version of a SAM file, which is derived from the AWS CloudFormation template format. See the README.html page for the next steps.

articleStartup

Deploy a Serverless Project

If the serverless project is created from a blueprint, you can deploy it directly to AWS. Notice that the article blueprint creates an S3 bucket and a DynamoDB table for the Lambda functions to use. You can open the serverless.template file and customize the resource names in the Parameters section, as shown in the following snippet.

"Parameters" : {
    "ArticleBucketName" : {
        "Type" : "String",
        "Default" : "serverless-blueprint-article-bucket",
        "Description" : "Name of S3 bucket used to store the article content.",
        "MinLength" : "0"
    },
    "ArticleTableName" : {
        "Type" : "String",
        "Default" : "serverless-blueprint-article-table",
        "Description" : "Name of DynamoDB table used to store the article metadata.",
        "MinLength" : "0"
      },
      ...
}

To deploy this project to AWS, right-click the project name in the explorer view, choose Amazon Web Services, and then choose Deploy Serverless Project. Alternatively, right-click in the editor of any Lambda function file, choose AWS Lambda, and then choose Deploy Serverless Project. You will see the following wizard. Choose the S3 bucket, type the CloudFormation stack name, and then choose Finish. The AWS Toolkit for Eclipse generates the fat JAR file for the underlying Lambda functions and uploads it to the S3 bucket you chose. It also updates the serverless.template file in memory to be a real SAM file and uploads it to the S3 bucket. AWS CloudFormation reads this file to create the stack.

DeployArticle

While the AWS CloudFormation stack is being created, a Stack Editor view opens to show the current status of the stack. This view is automatically refreshed every five seconds, but you can also refresh it manually by clicking the refresh icon at the top right of the view. When the status reaches CREATE_COMPLETE, you will see a link to the right of the Output label in the top section. This link is the Prod stage endpoint of the API Gateway API created by this serverless project.

DeploymentStackEditor

Test a Serverless Project

After successfully deploying the article project, you can test the two APIs by calling the Prod stage endpoint with browser tools or command line tools.

  • Using the curl command line tool:
    $ curl --data "This is an article!" https://s5cvlouqwe.execute-api.us-west-2.amazonaws.com/Prod?id=1
    Successfully inserted article 1
    $ curl -X GET https://s5cvlouqwe.execute-api.us-west-2.amazonaws.com/Prod?id=1
    This is an article!
  • Using the Simple REST Client plugin in Chrome. You can also use this plugin to send a POST request to the endpoint.

We’d like to know what you think of the workflow for developing serverless applications with the AWS Toolkit for Eclipse. Please let us know if there are other features you want to see in this toolkit. We appreciate your comments.

Introducing Support for Java SDK Generation in Amazon API Gateway

by Andrew Shore

We are excited to announce support for generating a Java SDK for services fronted by Amazon API Gateway. The generated Java SDKs are compatible with Java 8 and later. Generated SDKs have first-class support for API keys, custom or AWS Identity and Access Management (IAM) authentication, automatic and configurable retries, exception handling, and more. In this blog post, we’ll walk through how to create a sample API, generate a Java SDK from that API, and explore various features of the generated SDK. This post assumes you have some familiarity with API Gateway concepts.

Create an Example API

To start, let’s create a sample API by using the API Gateway console. Navigate to the API Gateway console and select your preferred region. Choose Create API, and then choose the Example API option. Choose Import to create the example API.

create-example-api

The example API is pretty simple. It consists of four operations.

  1. A GET on the API root resource that returns HTML describing the API.
  2. A GET on the /pets resource that returns a list of Pets.
  3. A POST on the /pets resource that creates a new Pet.
  4. A GET on the /pets/{petId} resource that returns a specific Pet by ID.

Deploy the API

Next, you’ll deploy the API to a stage.

Under Actions, choose Deploy API, name the stage test, and then choose Deploy.

deploy-example-api

After you deploy the API, on the SDK Generation tab, choose Java as the platform. For Service Name, type PetStore. For Java Package Name, type com.petstore.client. Leave the other fields empty. Choose Generate SDK, and then download and unzip the SDK package.

generate-java-sdk

There are several configuration options available for the Java platform. Before proceeding, let’s go over them.

Service Name – Used to name the Java Interface you’ll use to make calls to your API.

Java Package Name – The name of the package your generated SDK code will be placed under, typically based on your organization.

The following optional parameters are used when publishing the SDK to a remote repository, like Maven Central.

Java Build System – The build system to configure for the generated SDK, either maven or gradle. The default is maven.

Java Group ID – Typically identifies your organization. Defaults to Java Package Name if not provided.

Java Artifact ID – Identifies the library or product. Defaults to Service Name if not provided.

Java Artifact Version – Version identifier for the published SDK. Defaults to 1.0-SNAPSHOT if not provided.

Compile Client

Navigate to the location where you unzipped the SDK package. If you’ve been following the example, the package will be set up as a Maven project. Ensure Maven and a JDK are installed correctly, and run the following command to install the client package into your local Maven repository. This makes it available for other local projects to use.

mvn install

Set Up an Application

Next, you’ll set up an application that depends on the client package you previously installed. Because the client requires Java 8 or later, any application that depends on the client must also be built with Java 8. Here, you’ll use a simple Maven Archetype to generate an empty Java 8 project.

mvn archetype:generate -B -DarchetypeGroupId=pl.org.miki -DarchetypeArtifactId=java8-quickstart-archetype -DarchetypeVersion=1.0.0 \
    -DgroupId=com.petstore.app \
    -DartifactId=petstore-app \
    -Dversion=1.0 \
    -Dpackage=com.petstore.app

Navigate to the newly created project and open the pom.xml file. Add the following snippet to the <dependencies>…</dependencies> section of the XML file. If you changed any of the SDK export parameters in the console, use those values instead.

<dependency>
    <groupId>com.petstore.client</groupId>
    <artifactId>PetStore</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Create a file src/main/java/com/petstore/app/AppMain.java with the following contents.

package com.petstore.app;

import com.petstore.client.*;
import com.petstore.client.model.*;
import com.amazonaws.opensdk.*;
import com.amazonaws.opensdk.config.*;

public class AppMain {

    public static void main(String[] args) {

    }
}

Build the application to ensure everything is configured correctly.

mvn install

To run the application, you can use the following Maven command. (As you make changes, be sure to rerun mvn install before running the application.)

mvn exec:java -Dexec.mainClass="com.petstore.app.AppMain"

Exploring the SDK

Creating the Client

The first thing you need to do is construct an instance of the client. You can use the client builder obtained from a static factory method on the client interface. All configuration methods on the builder are optional (except for authorization-related configuration). In the following code, you obtain an instance of the builder, override some of the configuration, and construct a client. The following settings are for demonstration only, and are not necessarily the recommended settings for creating service clients.

PetStore client = PetStore.builder()
        .timeoutConfiguration(new TimeoutConfiguration()
                                      .httpRequestTimeout(20_000)
                                      .totalExecutionTimeout(30_000))
        .connectionConfiguration(new ConnectionConfiguration()
                                      .maxConnections(100)
                                      .connectionMaxIdleMillis(120))
        .build();

The builder exposes a ton of useful configuration methods for timeouts, connection management, proxy settings, custom endpoints, and authorization. Consult the Javadocs for full details on what is configurable.

Making API Calls

Once you’ve built a client, you’re ready to make an API call.

Call the GET /pets API to list the current pets. The following code prints out each pet to STDOUT. For each API in the service, a method is generated on the client interface. That method’s name will be based on a combination of the HTTP method and resource path, although this can be overridden (more on that later in this post).

client.getPets(new GetPetsRequest())
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

The GET /pets operation exposes a query parameter named type that can be used to filter the pets that are returned. You can set modeled query parameters and headers on the request object.

client.getPets(new GetPetsRequest().type("dog"))
        .getPets()
        .forEach(p -> System.out.printf("Dog: %s\n", p));

Let’s try creating a Pet and inspecting the result from the service. Here you call the POST /pets operation, supplying information about the new Pet. The CreatePetResult contains the unmarshalled service response (as modeled in the Method Response) and additional HTTP-level metadata that’s available via the sdkResponseMetadata() method.

final CreatePetResult result = client.createPet(
        new CreatePetRequest().newPet(new NewPet()
                                              .type(PetType.Bird)
                                              .price(123.45)));
System.out.printf("Response message: %s \n", result.getNewPetResponse().getMessage());
System.out.println(result.sdkResponseMetadata().header("Content-Type"));
System.out.println(result.sdkResponseMetadata().requestId());
System.out.println(result.sdkResponseMetadata().httpStatusCode());

The GET /pets/{petId} operation uses a path placeholder to get a specific Pet, identified by its ID. When making a call with the SDK, all you need to do is supply the ID. The SDK handles the rest.

GetPetResult pet = client.getPet(new GetPetRequest().petId("1"));
System.out.printf("Pet by ID: %s\n", pet);

Overriding Configuration at the Request Level

In addition to the client-level configuration you supply when creating the client (by using the client builder), you can also override certain configurations at the request level. This “request config” is scoped only to calls made with that request object, and takes precedence over any configuration in the client.

client.getPets(new GetPetsRequest()
                       .sdkRequestConfig(SdkRequestConfig.builder()
                                                 .httpRequestTimeout(1000).build()))
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

You can also set custom headers or query parameters via the request config. This is useful for adding headers or query parameters that are not modeled by your API. The parameters are scoped to calls made with that request object.

client.getPets(new GetPetsRequest()
                       .sdkRequestConfig(SdkRequestConfig.builder()
                                                 .customHeader("x-my-custom-header", "foo")
                                                 .customQueryParam("MyCustomQueryParam", "bar")
                                                 .build()))
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

Naming Operations

It’s possible to override the default names given to operations through the API Gateway console or during an import from a Swagger file. Let’s rename the GetPet operation (GET /pets/{petId}) to GetPetById by using the console. First, navigate to the GET method on the /pets/{petId} resource.

change-operation-name

Choose Method Request, and then expand the SDK Settings section.

sdk-settings

Edit the Operation Name field and enter GetPetById. Save the change and deploy the API to the stage you created previously. Regenerate a Java SDK, and it should have the updated naming for that operation.

GetPetByIdResult pet = client.getPetById(new GetPetByIdRequest().petId("1"));
System.out.printf("Pet by ID: %s\n", pet);

If you are importing an API from a Swagger file, you can customize the operation name by using the operationId field. The following snippet is from the example API, and shows how the operationId field is used.

...
    "/pets/{petId}": {
      "get": {
        "tags": [
          "pets"
        ],
        "summary": "Info for a specific pet",
        "operationId": "GetPet",
        "produces": [
          "application/json"
        ],
...

Final Thoughts

This post highlights how to generate the Java SDK of an API in API Gateway, and how to call the API using the SDK in an application. For more information about how to build the SDK package, initiate a client with other configuration properties, make raw requests, configure authorization, handle exceptions, and configure retry behavior, see the README.html file in the uncompressed SDK project folder.

Amazon CloudWatch Logs and .NET Logging Frameworks

by Norm Johanson

You can use Amazon CloudWatch Logs to monitor, store, and access your application’s logs. To get log data into CloudWatch Logs, you can use an AWS SDK or install the CloudWatch Log agent to monitor certain log folders. Today, we’ve made it even easier to use CloudWatch Logs with .NET applications by integrating CloudWatch Logs with several popular .NET logging frameworks.

The supported .NET logging frameworks are NLog, Log4net, and the new built-in ASP.NET Core logging framework. For each framework, all you need to do is add the appropriate NuGet package, add CloudWatch Logs as an output source, and then use your logging library as you normally would.

For example, to use CloudWatch Logs with a .NET application that uses NLog, add the AWS.Logger.NLog NuGet package, and then add the AWS target to your NLog.config file. Here is an example of an NLog.config file that enables both CloudWatch Logs and the console as output for the log messages.


<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      throwExceptions="true">
  <targets>
    <target name="aws" type="AWSTarget" logGroup="NLog.ConfigExample" region="us-east-1"/>
    <target name="logfile" xsi:type="Console" layout="${callsite} ${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="logfile,aws" />
  </rules>
</nlog>
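
With the target configured, your application code uses NLog as usual; no AWS-specific calls are required. A minimal sketch:

using NLog;

public class Program
{
    // Messages written through NLog are routed to every configured target,
    // including the AWSTarget that writes to the "NLog.ConfigExample" log group.
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public static void Main(string[] args)
    {
        Logger.Info("Application starting up");
        Logger.Error("Something unexpected happened");
    }
}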

After performing these steps, when you run your application, the log messages written with NLog are sent to CloudWatch Logs. You can then view your application’s log messages in near real time in the CloudWatch Logs console. You can also set up metrics and alarms in the CloudWatch Logs console based on your application’s log messages.

These logging plugins are all built on top of the AWS SDK for .NET, and use the same behavior used by the SDK to find AWS credentials. The credentials used by the logging plugins must have the following permissions to access CloudWatch Logs.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}

The AWS .NET logging plugins are a new open source project on GitHub. All of the plugins are there, including samples and instructions on how to configure CloudWatch Logs for each of the supported .NET logging frameworks.

For any comments or issues for the new libraries, open an issue in the GitHub repository.

AWS Serverless Applications in Visual Studio

by Norm Johanson

In the last post, I talked about the AWS Lambda Project template. The other new project template we added to Visual Studio is the AWS Serverless Application. This is our AWS Toolkit for Visual Studio implementation of the new AWS Serverless Application Model. Using this project type, you can develop a collection of AWS Lambda functions and deploy them with any necessary AWS resources as a whole application, using AWS CloudFormation to orchestrate the deployment.

To demonstrate this, let’s create a new AWS Serverless Application and name it Blogger.

serverless-new-project

As in the AWS Lambda Project, we can then choose a blueprint to help get started. For this post, we’re going to use the Blog API using DynamoDB blueprint.

serverless-blueprints

The Project Files

Now let’s take a look at the files in our serverless application.

Blog.cs

This is a simple class used to represent the blog items that are stored in Amazon DynamoDB.

Functions.cs

This file defines all the C# functions we want to expose as Lambda functions. There are four functions defined to manage a blog platform:

  • GetBlogsAsync – gets a list of all the blogs.
  • GetBlogAsync – gets a single blog identified by either the query parameter Id or the ID added to the URL resource path.
  • AddBlogAsync – adds a blog to the DynamoDB table.
  • RemoveBlogAsync – removes a blog from the DynamoDB table.

Each of these functions accepts an APIGatewayProxyRequest object and returns an APIGatewayProxyResponse. That is because these Lambda functions will be exposed as an HTTP API using Amazon API Gateway. The APIGatewayProxyRequest contains all the information representing the HTTP request. In the GetBlogAsync operation, you can see how we can find the ID of the blog from the resource path or query string.


public async Task<APIGatewayProxyResponse> GetBlogAsync(APIGatewayProxyRequest request, ILambdaContext context)
{
    string blogId = null;
    if (request.PathParameters != null && request.PathParameters.ContainsKey(ID_QUERY_STRING_NAME))
        blogId = request.PathParameters[ID_QUERY_STRING_NAME];
    else if (request.QueryStringParameters != null && request.QueryStringParameters.ContainsKey(ID_QUERY_STRING_NAME))
        blogId = request?.QueryStringParameters[ID_QUERY_STRING_NAME];

    ...
}

In the default constructor for this class, we can also see how the name of the DynamoDB table storing our blogs is passed in as an environment variable. This environment variable is set when Lambda deploys our function.


public Functions()
{
    // Check to see if a table name was passed in through environment variables and, if so,
    // add the table mapping
    var tableName = System.Environment.GetEnvironmentVariable(TABLENAME_ENVIRONMENT_VARIABLE_LOOKUP);
    if(!string.IsNullOrEmpty(tableName))
    {
        AWSConfigsDynamoDB.Context.TypeMappings[typeof(Blog)] = new Amazon.Util.TypeMapping(typeof(Blog), tableName);
    }

    var config = new DynamoDBContextConfig { Conversion = DynamoDBEntryConversion.V2 };
    this.DDBContext = new DynamoDBContext(new AmazonDynamoDBClient(), config);
}

serverless.template

This file is the AWS CloudFormation template used to deploy the four functions. The parameters for the template enable us to set the name of the DynamoDB table, and choose whether we want CloudFormation to create the table or to assume the table is already created.

The template defines four resources of type AWS::Serverless::Function. This is a special meta resource defined as part of the AWS Serverless Application Model specification. The specification is a transform that is applied to the template as part of the CloudFormation deployment. The transform expands the meta resource type into the more concrete resources, like AWS::Lambda::Function and AWS::IAM::Role. The transform is declared at the top of the template file, as follows.


{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",

  ...
  
 }

Now let’s take a look at the GetBlogs declaration in the template, which is very similar to the other function declarations.


"GetBlogs" : {
  "Type" : "AWS::Serverless::Function",
  "Properties": {
    "Handler": "Blogger::Blogger.Functions::GetBlogsAsync",
    "Runtime": "dotnetcore1.0",
    "CodeUri": "",
    "Description": "Function to get a list of blogs",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": null,
    "Policies": [ "AWSLambdaFullAccess" ],
    "Environment" : {
      "Variables" : {
        "BlogTable" : { "Fn::If" : ["CreateBlogTable", {"Ref":"BlogTable"}, { "Ref" : "BlogTableName" } ] }
      }
    },
    "Events": {
      "PutResource": {
        "Type": "Api",
        "Properties": {
          "Path": "/",
          "Method": "GET"
        }
      }
    }
  }
}

You can see a lot of the fields here are very similar to what we saw when we did a Lambda project deployment. In the Environment property, notice how the name of the DynamoDB table is being passed in as an environment variable. The CodeUri property tells CloudFormation where in Amazon S3 your application bundle is stored. Leave this property blank because the toolkit will fill it in during deployment, after it uploads the application bundle to S3 (it won’t change the template file on disk when it does so).

The Events section is where we can define the HTTP bindings for our Lambda function. This takes care of all the API Gateway setup we need to do for our function. You can also set up other types of event sources in this section.

template-addeventsource
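
For reference, an additional event source can be declared alongside the API event in the same Events section. The following sketch subscribes the function to object-created events from an S3 bucket defined elsewhere in the template (the ImageBucket resource and the NewImage event name are illustrative):

"Events": {
  "NewImage": {
    "Type": "S3",
    "Properties": {
      "Bucket": { "Ref": "ImageBucket" },
      "Events": "s3:ObjectCreated:*"
    }
  }
}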

One of the great benefits of using CloudFormation to manage the deployment is we can also add and configure any other AWS resources necessary for our application in the template, and let CloudFormation take care of creating and deleting the resources.

template-addresources

Deploying

We deploy our serverless application in the same way we deployed the Lambda project previously: right-click the project and choose Publish to AWS Lambda.

serverless-publishmenu

This launches the deployment wizard, but this time it’s quite a bit simpler. Because all the Lambda configuration was done in the serverless.template file, all we need to supply are the following:

  • The name of our CloudFormation stack, which will be the container for all the resources declared in the template.
  • The S3 bucket to upload our application bundle to.

These should exist in the same AWS Region.

serverless-first-page

Because the serverless template has parameters, an additional page is displayed in the wizard where we specify the values for the parameters. We can leave the BlogTableName property blank and let CloudFormation generate a unique name for the table. We do need to set ShouldCreateTable to true so that CloudFormation will create the table. To use an existing table, enter the table name and set the ShouldCreateTable parameter to false. We can leave the other fields at their default values and choose Publish.

serverless-second-page

Once the publish step is complete, the CloudFormation stack view is displayed in AWS Explorer. This view shows the progress of the creation of all the resources declared in our serverless template.

serverless-stack-view

When the stack creation is complete, the root URL for the API Gateway API is displayed on the page. If we click that link, it returns an empty JSON array because we haven’t added any blogs to our table. To add blogs to our table, we need to make an HTTP PUT request to this URL, passing in a JSON document that represents the blog. We can do that in code or in any number of tools. I’ll use the Postman tool, which is a Chrome browser extension, but you can use any tool you like. In this tool, I’ll set the URL and change the method to PUT. On the Body tab, I’ll put in some sample content. When we make the HTTP call, you can see that we get back the ID of the new blog.

procman-addpost
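
If you prefer the command line, an equivalent request can be made with curl (a sketch; the URL is a placeholder, and the JSON fields should match the properties of the Blog class in your project):

$ curl -X PUT --data '{"Name":"My first post","Content":"Hello from the Blogger sample"}' https://your-api-id.execute-api.us-east-1.amazonaws.com/Prod/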

Now if we go back to the API root URL in the browser, you can see we are getting back the blog we just posted.

get-post

Conclusion

Using the AWS Serverless Application template, you can manage a collection of Lambda functions and the application’s other AWS resources. Also, with the new AWS Serverless Application Model specification, we can use a simplified syntax to declare our serverless application in the CloudFormation template. If you have any questions or suggestions for blueprints, feel free to reach out to us on our .NET Lambda GitHub repository.

Upgrading from Version 2 to Version 3 of the AWS SDK for Ruby

by Trevor Rowe

Recently we announced the modularization of the AWS SDK for Ruby. This blog post focuses on how to upgrade your application to use the new service-specific gems.

This blog post is divided into sections based on how you depend on the AWS SDK for Ruby today. Find the section below that describes how you load the SDK, and it will guide you through upgrading. Because version 3 is backward compatible with version 2, you should not need to make additional changes beyond those described below.

Bundler: gem ‘aws-sdk’, ‘~> 2’

Congratulations! You are following recommended best practices for how to depend on the SDK today. The simplest path to upgrade is to change the version from 2 to 3.

#gem 'aws-sdk', '~> 2'
gem 'aws-sdk', '~> 3'

See the section about using service specific gems below.

Bundler: gem ‘aws-sdk’ (without version)

It is not recommended to depend on the SDK without a major version constraint. Fortunately, version 3 is backward compatible with version 2, so running bundle update will work, but consider yourself lucky! You should add a version constraint to protect against future major version changes:

#gem 'aws-sdk'
gem 'aws-sdk', '~> 3'

See the section about using service specific gems below.

Bundler: gem ‘aws-sdk-core’, ‘~> 2’

The aws-sdk-core gem changes significantly from version 2 to 3. In version 3, the core gem no longer defines any services. It contains only shared utilities, such as credential providers and serializers. To upgrade, you must make two changes:

  • Change your bundler dependency
  • Change your ruby require statements

In bundler, replace aws-sdk-core with aws-sdk:

#gem 'aws-sdk-core', '~> 2'
gem 'aws-sdk', '~> 3'

In code, replace any require statements on aws-sdk-core with aws-sdk.

#require 'aws-sdk-core'
require 'aws-sdk'

See the section about using service specific gems below.

Bundler: gem ‘aws-sdk-core’ (without version)

If you happen to run bundle update before changing your Gemfile, your application will break. Version 3 of the aws-sdk-core gem no longer defines service modules and clients. It is a shared dependency of the 75+ service gems. To upgrade, you must make two changes:

  • Change your bundler dependency
  • Change your ruby require statements

In bundler, replace aws-sdk-core with aws-sdk:

#gem 'aws-sdk-core'
gem 'aws-sdk', '~> 3'

In code, replace any require statements on aws-sdk-core with aws-sdk.

#require 'aws-sdk-core'
require 'aws-sdk'

See the section about using service specific gems below.

Bundler: gem ‘aws-sdk-resources’ (with or without version)

In version 3, the aws-sdk-resources gem has been removed and will not receive any further updates. Each service gem contains both the client interface and the resource interfaces. To upgrade, you must make two changes:

  • Change your bundler dependency
  • Change your ruby require statements

In bundler, replace aws-sdk-resources with aws-sdk:

#gem 'aws-sdk-resources', '~> 2'
gem 'aws-sdk', '~> 3'

In code, replace any require statements on aws-sdk-resources with aws-sdk.

#require 'aws-sdk-resources'
require 'aws-sdk'

See the section about using service specific gems below.

Using the Service Specific Gems

Each of the sections above suggested using version 3 of the aws-sdk gem. This works and is the shortest path to upgrading. It will, however, install 75+ service-specific gems. You may choose to replace your dependency on the aws-sdk gem with service-specific gems.

If my application depends on Amazon DynamoDB and Amazon S3, I could make the following changes:

In my Gemfile:

#gem 'aws-sdk', '~> 3'
gem 'aws-sdk-dynamodb', '~> 1'
gem 'aws-sdk-s3', '~> 1'

In my code:

#require 'aws-sdk'
require 'aws-sdk-s3'
require 'aws-sdk-dynamodb'
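
After switching to the service-specific gems, client and resource usage is unchanged from version 2. For example (a minimal sketch; the region and table name are placeholders):

require 'aws-sdk-s3'
require 'aws-sdk-dynamodb'

# Clients behave exactly as they did in version 2.
s3 = Aws::S3::Client.new(region: 'us-west-2')
s3.list_buckets.buckets.each { |bucket| puts bucket.name }

dynamodb = Aws::DynamoDB::Client.new(region: 'us-west-2')
puts dynamodb.describe_table(table_name: 'articles').table.item_count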

If you are a library maintainer and you depend on the AWS SDK for Ruby, you should use service-specific gems. Do not force your users to install every AWS service gem if you depend on only one.

Conclusion

Upgrading should be very simple. If you encounter any backward-incompatible changes, please open a GitHub issue. The modularized SDK will be in preview for a short period so that we can catch these issues before it goes GA. You can also reach us in the Gitter channel.

AWS SDK for Ruby Modularization (Version 3)

by Trevor Rowe

Version 3 of the AWS SDK for Ruby is available now as a preview release. This version modularizes the monolithic SDK into service specific gems. Aside from gem packaging differences, version 3 interfaces are backwards compatible with version 2.

You can install individual gems like so:

$ gem install aws-sdk-s3 --version 1.0.0.rc1

You can install everything using the aws-sdk gem:

$ gem install aws-sdk --version 3.0.0.rc1

For a complete list of supported services and gems, check out the project README.

Motivation

Modularization allows us to make some long-requested changes to the SDK, changes that were not reasonable when we shipped a single monolithic gem with 75+ services. The primary motivating factors for breaking up the monolith include:

  • To provide better versioning information. When 75+ services share a single gem, it is difficult to communicate when a change is meaningful to a user. We can now use strict semantic versioning for each gem.
  • To improve our ability to deliver AWS API updates continuously. The number of new services and updates has been significantly increasing the frequency with which we update. We want to avoid situations where one update is blocked by an unrelated service. We can now continuously deliver updates per service gem.
  • To remove the use of Ruby `autoload` statements. When you require a service gem, such as aws-sdk-ec2, all of the code is loaded and ready to use. This should eliminate a large number of thread safety issues that users encounter due to the use of autoload.
  • To replace a large amount of the dynamic runtime with code generation. This lets users reason better about what the code is doing, get better stack traces, and see improved performance.

What Has Changed?

Our intent for the modularization is to keep SDK interfaces backward compatible. You may need to modify your gem dependency on the AWS SDK for Ruby. The aws-sdk and aws-sdk-core gems have been bumped to version 3.0 to protect users from package-level changes.

* Every service has a gem, such as aws-sdk-s3.
* The aws-sdk-core gem now only contains shared utilities.
* The aws-sdk-resources gem is obsolete. Service gems contain both client and resource interfaces.
* The aws-sdk gem now has a dependency on 75+ service gems.

Here is a diagram showing the dependencies of the aws-sdk gem across its major versions.

gem-diagram

Why Bump to Version 3?

The version 2 aws-sdk-core gem includes code that defines 75+ service modules and shared utilities. It is important to prevent a service-specific gem, such as aws-sdk-s3, and the core gem from both defining the same interfaces.

While we have worked hard to ensure full backward compatibility in the service interfaces, a small number of private internal interfaces have been removed or changed. For users who have relied on these undocumented interfaces, the major version bump prevents unexpected breaks from a routine gem update. Some of these changes include:

  • Removed the internal runtime methods Aws.add_service and Aws.service_added. These methods were used by the runtime to detect when a service was autoloaded.
  • Removed the internal Aws::Signers module and the various signature classes therein. These classes were marked with @api private. They are now available as separate gems (see the sketch after this list):
    • aws-sigv4
    • aws-sigv2
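
If your code relied on those internals, the standalone aws-sigv4 gem provides an equivalent signer. A minimal sketch (the credentials and URL are placeholders):

require 'aws-sigv4'

signer = Aws::Sigv4::Signer.new(
  service: 's3',
  region: 'us-west-2',
  access_key_id: 'AKIDEXAMPLE',
  secret_access_key: 'SECRETEXAMPLE'
)

# Sign a request and inspect the generated headers.
signature = signer.sign_request(
  http_method: 'GET',
  url: 'https://examplebucket.s3.amazonaws.com/example.txt'
)
puts signature.headers['authorization']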

Migrating Code From Version 2 to Version 3

Migrating should be very simple. If you depend on aws-sdk, then you do not need to change anything. If you depend on aws-sdk-resources or aws-sdk-core, replace these with a dependency on one of the following:

* aws-sdk ~> 3.0
* Service specific gems, such as aws-sdk-s3 ~> 1.0

You will also need to replace your require statements. You should no longer call require "aws-sdk-resources" or require "aws-sdk-core". A follow-up blog post provides detailed instructions on upgrading.

Questions?

Join us in our Gitter channel with your questions and feedback. The modularized SDK is currently published as preview gems (rc1). We would love for you to try things out and share feedback before they go GA.

Using the AWS Lambda Project in Visual Studio

by Norm Johanson

Last week we launched C# and .NET Core support for AWS Lambda. That release provided updated tooling for Visual Studio to help you get started writing your AWS Lambda functions and deploy them right from Visual Studio. In this post, we describe how to create, deploy, and test an AWS Lambda project.

Creating a Lambda Project

To get started writing Lambda functions in Visual Studio, you first need to create an AWS Lambda project. You can do this by using the Visual Studio 2015 New Project wizard. Under the Visual C# templates, there is a new category called AWS Lambda. You can choose between two types of project, AWS Lambda Project and AWS Serverless Application, and you also have the option to add a test project. In this post, we’ll focus on the AWS Lambda project and save AWS Serverless Application for a separate post. To begin, choose AWS Lambda Project with Tests (.NET Core), name the project ImageRekognition, and then choose OK.

lambda-new-project

On the next page, you choose the blueprint you want to get started with. Blueprints provide starting code to help you write your Lambda functions. For this example, choose the Detect Image Labels blueprint. This blueprint provides code for listening to Amazon S3 events and uses the newly released Amazon Rekognition service to detect labels and then add them to the S3 object as tags.

lambda-blueprints

When the project is complete, you will have a solution with two projects, as shown: the source project that contains your Lambda function code that will be deployed to AWS Lambda, and a test project using xUnit for testing your function locally.

lambda-solution-explorer

You might notice when you first create your projects that Visual Studio does not find all the NuGet references. This happens because these blueprints require dependencies that must be retrieved from NuGet. When new projects are created, Visual Studio only pulls in local references and not remote references from NuGet. You can fix this easily by right-clicking your references and choosing Restore Packages.

Lambda Function Source

Now let’s open the Function.cs file and look at the code that came with the blueprint. The first bit of code is the assembly attribute that is added to the top of the file.

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

By default, Lambda accepts only input parameters and return types of type System.IO.Stream. To use typed classes for input parameters and return types, we have to register a serializer. This assembly attribute is registering the Lambda JSON serializer, which uses Newtonsoft.Json to convert the streams to typed classes. The serializer can be set at the assembly or method level.

The class has two constructors. The first is a default constructor that is used when Lambda invokes your function. This constructor creates the S3 and Rekognition service clients, and will get the AWS credentials for these clients from the IAM role we’ll assign to the function when we deploy it. The AWS Region for the clients will be set to the region your Lambda function is running in. In this blueprint, we only want to add tags to our S3 object if the Rekognition service has a minimum level of confidence about the label. This constructor will check the environment variable MinConfidence to determine the acceptable confidence level. We can set this environment variable when we deploy the Lambda function.

public Function()
{
    this.S3Client = new AmazonS3Client();
    this.RekognitionClient = new AmazonRekognitionClient();

    var environmentMinConfidence = System.Environment.GetEnvironmentVariable(MIN_CONFIDENCE_ENVIRONMENT_VARIABLE_NAME);
    if(!string.IsNullOrWhiteSpace(environmentMinConfidence))
    {
        float value;
        if(float.TryParse(environmentMinConfidence, out value))
        {
            this.MinConfidence = value;
            Console.WriteLine($"Setting minimum confidence to {this.MinConfidence}");
        }
        else
        {
            Console.WriteLine($"Failed to parse value {environmentMinConfidence} for minimum confidence. Reverting back to default of {this.MinConfidence}");
        }
    }
    else
    {
        Console.WriteLine($"Using default minimum confidence of {this.MinConfidence}");
    }
}

We can use the second constructor for testing. Our test project configures its own S3 and Rekognition clients and passes them in.

public Function(IAmazonS3 s3Client, IAmazonRekognition rekognitionClient, float minConfidence)
{
    this.S3Client = s3Client;
    this.RekognitionClient = rekognitionClient;
    this.MinConfidence = minConfidence;
}

FunctionHandler is the method Lambda will call after it constructs the instance. Notice that the input parameter is of type S3Event and not a Stream. We can do this because of our registered serializer. The S3Event contains all the information about the event triggered in S3. The function loops through all the S3 objects that were part of the event and tells Rekognition to detect labels. After the labels are detected, they are added as tags to the S3 object.

public async Task FunctionHandler(S3Event input, ILambdaContext context)
{
    foreach(var record in input.Records)
    {
        if(!SupportedImageTypes.Contains(Path.GetExtension(record.S3.Object.Key)))
        {
            Console.WriteLine($"Object {record.S3.Bucket.Name}:{record.S3.Object.Key} is not a supported image type");
            continue;
        }

        Console.WriteLine($"Looking for labels in image {record.S3.Bucket.Name}:{record.S3.Object.Key}");
        var detectResponses = await this.RekognitionClient.DetectLabelsAsync(new DetectLabelsRequest
        {
            MinConfidence = MinConfidence,
            Image = new Image
            {
                S3Object = new Amazon.Rekognition.Model.S3Object
                {
                    Bucket = record.S3.Bucket.Name,
                    Name = record.S3.Object.Key
                }
            }
        });

        var tags = new List<Tag>();
        foreach(var label in detectResponses.Labels)
        {
            if(tags.Count < 10)
            {
                Console.WriteLine($"\tFound Label {label.Name} with confidence {label.Confidence}");
                tags.Add(new Tag { Key = label.Name, Value = label.Confidence.ToString() });
            }
            else
            {
                Console.WriteLine($"\tSkipped label {label.Name} with confidence {label.Confidence} because maximum number of tags reached");
            }
        }

        await this.S3Client.PutObjectTaggingAsync(new PutObjectTaggingRequest
        {
            BucketName = record.S3.Bucket.Name,
            Key = record.S3.Object.Key,
            Tagging = new Tagging
            {
                TagSet = tags
            }
        });
    }
    return;
}

Notice that the code contains calls to Console.WriteLine(). When the function is being run in AWS Lambda, all calls to Console.WriteLine() will redirect to Amazon CloudWatch Logs.

Default Settings File

Another file that was created with the blueprint is aws-lambda-tools-defaults.json. This file contains default values that the blueprint has set to help prepopulate some of the fields in the deployment wizard. It is also helpful in setting command line options with our integration with the new .NET Core CLI. We’ll dive deeper into the CLI integration in a later post, but to get started using it, navigate to the function’s project directory and type dotnet lambda help.

{
  "Information" : [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",

    "dotnet lambda help",

    "All the command line options for the Lambda command can be specified in this file."
  ],

  "profile":"",
  "region" : "",
  "configuration" : "Release",
  "framework" : "netcoreapp1.0",
  "function-runtime":"dotnetcore1.0",
  "function-memory-size" : 256,
  "function-timeout" : 30,
  "function-handler" : "ImageRekognition::ImageRekognition.Function::FunctionHandler"
}

An important field to understand is the function-handler. This indicates to Lambda the method to call in our code in response to our function being invoked. The format of this field is <assembly-name>::<full-type-name>::<method-name>. Be sure to include the namespace with the type name.

Deploying the Function

To get started deploying the function, right-click the Lambda project and then choose Publish to AWS Lambda. This starts the deployment wizard. Notice that many of the fields are already set. These values came from the aws-lambda-tools-defaults.json file described earlier. We do need to enter a function name. For this example, let’s name it ImageRekognition, and then choose Next.

lambda-deployment-wizard-page1

On the next page, we need to select an IAM role that gives permission for our code to access S3 and Rekognition. To keep this post short, let’s select the Power User managed policy; the tools create a role for us based on this policy. Note that support for creating a role from the Power User managed policy was added in version 1.11.1.0 of the toolkit.

Finally, we set the environment variable MinConfidence to 60, and then choose Publish.

lambda-deployment-wizard-page2

This launches the deployment process, which builds and packages the Lambda project and then creates the Lambda function. Once publishing is complete, the Function view in the AWS Explorer window is displayed. From here, we can invoke a test function, view CloudWatch Logs for the function, and configure event sources.

lambda-function-view

With our function deployed, we need to configure S3 to send its events to our new function. We do this by going to the event source tab and choosing Add. Then, we choose Amazon S3 and choose the bucket we want to connect to our Lambda function. The bucket must be in the same region as the Lambda function.

Testing the Function

Now that the function is deployed and an S3 bucket is configured as an event source for it, open the S3 bucket browser from the AWS Explorer for the bucket we selected and upload some images.

When the upload is complete, we can confirm that our function ran by looking at the logs from our function view. Or, we can right-click the images in the bucket browser and select Properties. In the Properties dialog box on the Tags tab, we can view the tags that were applied to our object.

lambda-object-properties

Conclusion

We hope this post gives you a good understanding of how our tooling inside Visual Studio works for developing and creating Lambda functions. We’ll be adding more blueprints over time to help you get started using other AWS services with Lambda. The blueprints are hosted in our new Lambda .NET GitHub repository. If you have any suggestions for new blueprints, open an issue and let us know.

Automating the Deployment of Encrypted Web Services with the AWS SDK for PHP

by Joseph Fontes

Having worked in the web hosting space, one of the areas I find so fun about AWS is the ease of automating tasks that have historically been quite disjointed.  The process of supporting a customer request to register a domain, create or update DNS entries, configure the load balancer, deploy servers, etc., had me working across a multitude of systems, interfaces, and APIs.  Now, with the release of AWS Certificate Manager (ACM) in addition to existing AWS services, AWS provides all the tools and capabilities needed to support the provisioning of these services within customer accounts.

In this three-part series of posts, we’ll review how to use the AWS SDK for PHP to automate web service deployment, domain registration, DNS administration, and SSL certificate generation and assignment.  Using the examples outlined in these posts, as well as other features and functions of the AWS SDK for PHP, you’ll learn how to programmatically create a process for automatically purchasing a domain and then deploying an HTTPS (SSL-secured) web service on AWS by using either Amazon EC2 or AWS Elastic Beanstalk.

The examples in this post focus on using Amazon Route 53 to automate the registration of domain names and DNS administration.  Next, we’ll showcase how to use ACM to create and manage SSL certificates.  In subsequent posts, we will show how to automate the setup of encrypted HTTPS web services with the new domain and newly created certificates on Elastic Beanstalk.  Then, we’ll show how to automate deployments to EC2 and Elastic Load Balancing.  Once complete, we’ll have two web application stacks.  We’ll run the www.dev-null.link site from Elastic Beanstalk, and use EC2 and Elastic Load Balancing to run the second web application stack.  The following diagrams illustrate the final designs.


Amazon Route 53 Domain Registration

The first task in building the web infrastructure is to identify and register an available domain name.  We can use the AWS SDK for PHP to check domain name availability.  We will use a method called checkDomainAvailability, which is part of the Route 53 Domains client.  We can automate the process of testing domains until we have a name that meets our application’s needs and is also available for registration.  The example below loops through an array of domain names, listing their current status for registration.

$route53Client = $sdk->createRoute53Domains();

$domainNames = [ "test.com", "dev.com", "dev-null.link", "null38.link" ];

foreach($domainNames as $domainNameElement) {
        $route53CheckDomainAvailData = [ 'DomainName' => $domainNameElement ];
        $route53CheckDomainResults = $route53Client->checkDomainAvailability($route53CheckDomainAvailData);
        print "Domain $domainNameElement is ".$route53CheckDomainResults['Availability']."\n";
}

You can view the results of the check below.

There are two domain names available for registration.  In this example, we’ll register the domain dev-null.link.  This name contains the “.link” top-level domain (TLD) and the “dev-null” second-level domain.  Now, register the domain by using the registerDomain method.  The registration has several required fields that we need to complete.  These requirements are specific to each top-level domain.  For this example, we can use the following data (provided in this Github Gist):
$route53DomainRegData = [
    'AdminContact' => [
        'AddressLine1' => $address1,
        'AddressLine2' => $address2,
        'City' => $city,
        'ContactType' => 'PERSON',
        'CountryCode' => 'US',
.....

$route53Client = $sdk->createRoute53Domains();
$route53CreateDomRes = $route53Client->registerDomain($route53DomainRegData);

print_r($route53CreateDomRes);

Notice that the PhoneNumber data element must be in the format of  “+1.1231231212” to be valid.

We can now register the domain as follows.
[user@dev1 scripts]# php aws-route53-register-domain.php 
...
            [statusCode] => 200 
            [effectiveUri] => https://route53domains.us-east-1.amazonaws.com 
...

While we wait for the registration process to finish, we can check on the status of the domain.  First, use the listOperations method to print the list of current operations, and then pass the operation ID to the getOperationDetail method.  Let’s look at all of the pending operations with the following code.

$route53ListOperationsResults = $route53Client->listOperations();
print_r($route53ListOperationsResults);

$route53OperDetails = [ 'OperationId' => $operationId ];
$route53OperResults = $route53Client->getOperationDetail($route53OperDetails);
print_r($route53OperResults);

Result

[user@dev1 scripts]#
[user@dev1 scripts]# php aws-route53-list-operations.php
…
[Status] => IN_PROGRESS
                    [Type] => REGISTER_DOMAIN
…

AWS Certificate Manager

With ACM, we no longer have to worry about certificate expirations, securing the certificate private keys, copying self-signed CA certificates to clients, making sure servers all have the right certificate, or even the cost of a managed SSL certificate.  AWS provides managed SSL certificates at no cost.  Also, AWS handles the responsibility of renewing the certificate and placing it on the devices used to terminate SSL connections.  ACM can be used across managed AWS services such as ELB, Amazon CloudFront, and Elastic Beanstalk.

ACM in Action

Let’s go through how to create multiple certificates to secure connections to different websites.  We must first request a new certificate with the corresponding public and private keys.  The requestCertificate method automates this process.

The following example shows how to generate the certificate for our first domain.

$acmClient = $sdk->createAcm();

$acmRequestCertData = [ 'DomainName' => "www.dev-null.link",
    'DomainValidationOptions' => [
        [
            'DomainName' => "www.dev-null.link",
            'ValidationDomain' => "dev-null.link",
        ],
    ],
    'IdempotencyToken' => 'TOKENSTRINGDEVNULL01',
    'SubjectAlternativeNames' => ['dev-null.link', 'images.dev-null.link'],
];

$acmRequestCertResults = $acmClient->requestCertificate($acmRequestCertData);
print_r($acmRequestCertResults);

Result

[user@dev1 scripts]# php aws-acm-requestCertificate.php
...
    [CertificateArn] => arn:aws:acm:us-east-1:ACCOUNTID:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    [@metadata] => Array
        (
            [statusCode] => 200
...
 

SubjectAlternativeNames identifies other DNS entries that the certificate should cover.  In this example, they are all subdomains of dev-null.link but can also include other domain names that could be used synonymously with our requested domain.  You can repeat this call for any additional certificates you need.  Be sure to update the value of IdempotencyToken for each certificate request created.  You should save the value of CertificateArn because you’ll need to use it later.

We want to secure the primary hostname www.dev-null.link.  ACM requires validation for the domain from an email address that is tied to the registration information.  Domain validation requests are sent to multiple locations.  These validation emails are sent to domain email addresses in the following order: admin@dev-null.link, administrator@dev-null.link, hostmaster@dev-null.link, postmaster@dev-null.link, and webmaster@dev-null.link.  In addition, a validation request is also sent to the email contacts for the Administrative, Technical, and Domain Registrant.  The following figure shows a copy of the received email.

When you click the link in the email, you’re taken to the page shown below.

Next, click I Approve.  A confirmation page appears that you can save for your records.

Let’s now use the listCertificates and describeCertificate methods to show all of the certificates we’ve generated.

$acmListCertResults = $acmClient->listCertificates();
print_r($acmListCertResults);

Result

[user@dev1 scripts]# php aws-acm-list.php
...
    [CertificateSummaryList] => Array
...
                    [CertificateArn] => arn:aws:acm:us-east-1:ACCOUNTID:certificate/CERTIFICATE-ID
                    [DomainName] => www.dev-null.link
...
                    [CertificateArn] => arn:aws:acm:us-east-1:ACCOUNTID:certificate/CERTIFICATE-ID
                    [DomainName] => api.dev-null.link
...
 
You can view details about a certificate by calling the describeCertificate method with the CertificateArn received from the previous listCertificates call.

$certificateArn = $acmListCertResults['CertificateSummaryList'][0]['CertificateArn'];

$acmDescribeCertData = [ 'CertificateArn' => $certificateArn ];
$acmDescribeCertResults = $acmClient->describeCertificate($acmDescribeCertData);

print_r($acmDescribeCertResults);

You can view the full output here; abbreviated output is shown below.

[Certificate] => Array
    (
        [CertificateArn] => CERTIFICATE-ARN
        [DomainName] => www.dev-null.link
        [SubjectAlternativeNames] => Array
        (
                [0] => www.dev-null.link
                [1] => dev-null.link
                [2] => images.dev-null.link
        )
...

Finally, view the full certificate with the certificate chain.

$acmGetCertificateData = [ 'CertificateArn' => $certificateArn ];

$acmGetCertificateResults = $acmClient->getCertificate($acmGetCertificateData);

print_r($acmGetCertificateResults);

Result

[Certificate] => -----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----

 [CertificateChain] => -----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----
    [@metadata] => Array
        (
            [statusCode] => 200
…
This call returns the public certificate and the chain of trust leading to the public CA that signed it, which allows web browsers to trust the website being visited.

To get more information about the certificate and the request process, we can use the describeCertificate method.  As shown earlier, it takes the CertificateArn value as input and returns information about the referenced certificate, including the list of email addresses that received validation emails, the key algorithm used, the certificate creation and expiration dates, and the full list of host names covered by the certificate.  ACM also provides methods for resending the validation email and deleting certificates, as sketched below.
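
The following minimal sketch shows those two calls, reusing the $certificateArn value retrieved earlier; the domain values simply mirror the ones used in this post, and a certificate can only be deleted when it is not in use by another AWS resource.

$acmResendData = [
    'CertificateArn'   => $certificateArn,
    // The domain being validated and the domain whose mailboxes receive the email.
    'Domain'           => 'www.dev-null.link',
    'ValidationDomain' => 'dev-null.link',
];
$acmClient->resendValidationEmail($acmResendData);

// Remove a certificate that is no longer needed.
$acmDeleteData = [ 'CertificateArn' => $certificateArn ];
$acmClient->deleteCertificate($acmDeleteData);

These calls correspond to the ACM ResendValidationEmail and DeleteCertificate API operations.
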
Now that we’ve covered the setup, configuration, and deployment of ACM and DNS, we have a registered domain available for use.  In the next post, we’ll review how to use the domain, dev-null.link, for our website.  We’ll deploy an HTTPS website secured with SSL/TLS by using the AWS SDK with ELB, Amazon EC2, and Elastic Beanstalk.  We’ll also cover creating and deleting Amazon Route 53 resource records, and how to assign our ACM certificate to newly created load balancers.

 

 

Amazon S3 Encryption Client Now Available for C++ Developers

by Jonathan Henson | in C++

My colleague, Conor Campbell, has great news for C++ developers who need to store sensitive information in Amazon S3.

— Jonathan

Many customers have asked for an Amazon S3 Encryption Client that is compatible with the existing Java client, and today we are delighted to provide it. You can now use the AWS SDK for C++ to securely retrieve and store objects, using encryption and decryption, to and from Amazon S3.

The Amazon S3 Encryption Client encrypts your S3 objects using envelope encryption with a master key that you supply and a generated content encryption key. The content encryption key is used to encrypt the body of the S3 object, while the master key is used to encrypt the content encryption key. There are two different types of encryption materials representing the master key that you can use:

  • Simple Encryption Materials. This mode uses AES Key Wrap to encrypt and decrypt the content encryption key.
  • KMS Encryption Materials. This mode uses an AWS KMS customer master key (CMK) to encrypt and decrypt the content encryption key.

In addition, we have provided the ability to specify the crypto mode that controls the encryption for your S3 objects. Currently, we support three crypto modes:

  • Encryption Only. Uses AES-CBC.
  • Authenticated Encryption. Uses AES-GCM and allows AES-CTR for Range-Get operations.
  • Strict Authenticated Encryption. This is the most secure option. It uses AES-GCM and does not allow Range-Get operations because we cannot ensure cryptographic integrity protections for the data without verifying the entire authentication tag.

Users can also choose where to store the encryption metadata for the object: either in the metadata of the S3 object or in a separate instruction file. The encryption information includes the following:

  • Encrypted content encryption key
  • Initialization vector
  • Crypto tag length
  • Materials description
  • Content encryption key algorithm
  • Key wrap algorithm

The client handles all of the encryption and decryption details under the hood. All you need to upload an object to or download an object from S3 is a simple PUT or GET operation. When you call a GET operation, the client automatically detects which crypto mode and storage method were used, so you don’t have to work out the appropriate crypto configuration for decryption yourself.

Here are a few examples.


#include <aws/core/Aws.h>
#include <aws/core/auth/AWSCredentialsProviderChain.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <aws/s3-encryption/S3EncryptionClient.h>
#include <aws/s3-encryption/CryptoConfiguration.h>
#include <aws/s3-encryption/materials/SimpleEncryptionMaterials.h>
#include <cassert>
#include <fstream>
#include <iostream>

using namespace Aws::S3;
using namespace Aws::S3::Model;
using namespace Aws::S3Encryption;
using namespace Aws::S3Encryption::Materials;

static const char* const KEY = "s3_encryption_cpp_sample_key";
static const char* const BUCKET = "s3-encryption-cpp-bucket";
static const char* const FILE_NAME = "./localFile";

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
		auto masterKey = Aws::Utils::Crypto::SymmetricCipher::GenerateKey();
		auto simpleMaterials = Aws::MakeShared<SimpleEncryptionMaterials>("s3Encryption", masterKey);

		CryptoConfiguration cryptoConfiguration(StorageMethod::METADATA, CryptoMode::AUTHENTICATED_ENCRYPTION);

		auto credentials = Aws::MakeShared<Aws::Auth::DefaultAWSCredentialsProviderChain>("s3Encryption");

		//construct S3 encryption client
		S3EncryptionClient encryptionClient(simpleMaterials, cryptoConfiguration, credentials);

		auto textFile = Aws::MakeShared<Aws::FStream>("s3Encryption", FILE_NAME, std::ios_base::in);
		assert(textFile->is_open());

		//put an encrypted object to S3
		PutObjectRequest putObjectRequest;
		putObjectRequest.WithBucket(BUCKET)
			.WithKey(KEY).SetBody(textFile);

		auto putObjectOutcome = encryptionClient.PutObject(putObjectRequest);

		if (putObjectOutcome.IsSuccess())
		{
			std::cout << "Put object succeeded" << std::endl;
		}
		else
		{
			std::cout << "Error while putting Object " << putObjectOutcome.GetError().GetExceptionName() <<
				" " << putObjectOutcome.GetError().GetMessage() << std::endl;
		}

		//get an encrypted object from S3
		GetObjectRequest getRequest;
		getRequest.WithBucket(BUCKET)
			.WithKey(KEY);

		auto getObjectOutcome = encryptionClient.GetObject(getRequest);
		if (getObjectOutcome.IsSuccess())
		{
			std::cout << "Successfully retrieved object from s3 with value: " << std::endl;
			std::cout << getObjectOutcome.GetResult().GetBody().rdbuf() << std::endl << std::endl;
		}
		else
		{
			std::cout << "Error while getting object " << getObjectOutcome.GetError().GetExceptionName() <<
				" " << getObjectOutcome.GetError().GetMessage() << std::endl;
		}
	}
    Aws::ShutdownAPI(options);
}

In the previous example, we set up the Amazon S3 Encryption Client with simple encryption materials, a crypto configuration, and the default AWS credentials provider chain. The crypto configuration uses the object metadata as the storage method and specifies authenticated encryption mode. The client encrypts the body of the S3 object with AES-GCM and uses the master key provided in the simple encryption materials to encrypt the content encryption key with AES Key Wrap. The client then PUTs a text file to S3, where it is stored encrypted. When a GET operation is performed on the object, it is decrypted using the encryption information stored in the metadata, and the original text file is returned in the body of the S3 object.

Now, what if we wanted to store our encryption information in a separate instruction file object? And what if we wanted to use AWS KMS for our master key? Maybe we even want to increase the level of security by using Strict Authenticated Encryption instead? Well, that’s an easy switch, as you can see here.


#include <aws/core/Aws.h>
#include <aws/core/auth/AWSCredentialsProviderChain.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <aws/s3-encryption/S3EncryptionClient.h>
#include <aws/s3-encryption/CryptoConfiguration.h>
#include <aws/s3-encryption/materials/KMSEncryptionMaterials.h>
#include <iostream>

using namespace Aws::S3;
using namespace Aws::S3::Model;
using namespace Aws::S3Encryption;
using namespace Aws::S3Encryption::Materials;

static const char* const KEY = "s3_encryption_cpp_sample_key";
static const char* const BUCKET = "s3-encryption-cpp-sample-bucket";
static const char* const CUSTOMER_MASTER_KEY_ID = "arn:some_customer_master_key_id";

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
		auto kmsMaterials = Aws::MakeShared<KMSEncryptionMaterials>("s3Encryption", CUSTOMER_MASTER_KEY_ID);

		CryptoConfiguration cryptoConfiguration(StorageMethod::INSTRUCTION_FILE, CryptoMode::STRICT_AUTHENTICATED_ENCRYPTION);

		auto credentials = Aws::MakeShared<Aws::Auth::DefaultAWSCredentialsProviderChain>("s3Encryption");

		//construct S3 encryption client
		S3EncryptionClient encryptionClient(kmsMaterials, cryptoConfiguration, credentials);

		auto requestStream = Aws::MakeShared<Aws::StringStream>("s3Encryption");
		*requestStream << "Hello from the S3 Encryption Client!";

		//put an encrypted object to S3
		PutObjectRequest putObjectRequest;
		putObjectRequest.WithBucket(BUCKET)
			.WithKey(KEY).SetBody(requestStream);

		auto putObjectOutcome = encryptionClient.PutObject(putObjectRequest);

		if (putObjectOutcome.IsSuccess())
		{
			std::cout << "Put object succeeded" << std::endl;
		}
		else
		{
			std::cout << "Error while putting Object " << putObjectOutcome.GetError().GetExceptionName() <<
				" " << putObjectOutcome.GetError().GetMessage() << std::endl;
		}

		//get an encrypted object from S3
		GetObjectRequest getRequest;
		getRequest.WithBucket(BUCKET)
			.WithKey(KEY);

		auto getObjectOutcome = encryptionClient.GetObject(getRequest);
		if (getObjectOutcome.IsSuccess())
		{
			std::cout << "Successfully retrieved object from s3 with value: " << std::endl;
			std::cout << getObjectOutcome.GetResult().GetBody().rdbuf() << std::endl << std::endl;
		}
		else
		{
			std::cout << "Error while getting object " << getObjectOutcome.GetError().GetExceptionName() <<
				" " << getObjectOutcome.GetError().GetMessage() << std::endl;
		}
    }
    Aws::ShutdownAPI(options);
}

A few caveats:

  • We have not implemented Range-Get operations in Encryption Only mode. Although this is possible, Encryption Only is a legacy mode and we encourage you to use Authenticated Encryption instead. However, if you need Range-Get operations in this legacy mode, please let us know and we will work on providing them.
  • Currently, Apple does not support AES-GCM with 256-bit keys. As a result, Authenticated Encryption and Strict Authenticated Encryption PUT operations do not work with the default Apple build configuration. You can, however, still download objects that were uploaded using Authenticated Encryption, because we can use AES-CTR mode for the download. Alternatively, you can build the SDK with OpenSSL for full functionality. As soon as Apple makes AES-GCM available in CommonCrypto, we will add more support.
  • We have not yet implemented Upload Part; we will be working on that next. In the meantime, if you use the TransferManager interface with the Amazon S3 Encryption Client, be sure to set the minimum part size to a value larger than your largest object so that multipart uploads are never triggered.

The documentation for Amazon S3 Encryption Client can be found here.

This package is now available in NuGet.

We hope you enjoy the Amazon S3 Encryption Client. Please leave your feedback on GitHub and feel free to submit pull requests or feature requests.

Chalice 0.4 & 0.5 Deliver Local Testing and Multifile Application Capabilities for Python Serverless Application Development

by Leah Rivers | in Python

We’re continuing to add features to Chalice, a preview release of our microframework for Python serverless application development using AWS Lambda and Amazon API Gateway. Chalice is designed to make it simple and fast for Python developers to create REST APIs built in a serverless framework.

In our latest releases, we’ve added initial versions for a couple of the most commonly requested features:

  1. Save time by testing APIs locally before deploying to Amazon API Gateway. In this first version of local testing support for Chalice, we’ve delivered a local HTTP server you can use to test and debug a local version of your Python app. This lets you validate your APIs without first deploying to API Gateway.
  2. Build more complex applications with initial support for multifile Python apps. Chalice 0.4 enables Python developers to keep their preferred best practices and coding styles for applications that wouldn’t normally fit in a single file, and to include files of other types in the deployment package. This improves on earlier Chalice releases, in which deployment packages were limited to the app.py file.

We’ve also improved existing capabilities that make it easier to build and manage your serverless apps.

  1. More configurable logging with improved readability. We’ve added the ability to configure logging for the app object, where previously logging was configured on the root logger. This update enables you to configure logging levels and log format, and eliminates some duplicate log entries seen in previous versions of Chalice.
  2. Improved ability to retrieve your app’s Amazon API Gateway URL. We’ve added a chalice url command that lets you programmatically retrieve the URL of your API; in previous versions this was a manual process.

Our releases continue to be focused on feedback and requests from the developer community. Want to learn more? Here are a few suggestions.

Try building a serverless application with Chalice. Chalice is available on PyPI (pip install chalice) and GitHub (https://github.com/awslabs/chalice – check out the README for tutorials). It’s published as a preview project and is not yet recommended for production APIs. You can also see our original Chalice blog post where we introduced a preview release of Chalice.

Stay tuned for new capabilities to be released. You can check out the working list of features for our upcoming release here: https://github.com/awslabs/chalice/blob/master/CHANGELOG.rst#next-release-tbd

Let us know what you think. We look forward to your feedback and suggestions. Feel free to leave comments here or come talk to us on GitHub.

Planning to attend AWS re:Invent? Come check out our re:Invent session focused on Chalice, where we’ll present new features and walk through demos, such as deploying a REST API in less than 30 seconds. You can add this session to your re:Invent schedule here, or sign up for the re:Invent live stream.