CHANGELOG for the AWS SDK for Java

by Dongie Agnir

We are happy to announce that beginning with version 1.11.82, the source and ZIP distributions of the AWS SDK for Java now include a CHANGELOG.md file that lists the most notable changes for each release.

In the past, changes for each release of the AWS SDK for Java were published to the AWS Release Notes website, but this approach had some drawbacks. Customers wishing to view the set of changes for multiple versions on the website needed to run a search for each version they were interested in. Many customers acquire the source code through our GitHub repository, so viewing the release notes meant potentially opening a browser and navigating away from the code itself. Finally, although rare, sometimes there’s a delay between the release of a new version of the SDK and the availability of the release notes.

By implementing a changelog file, we hope to address these problems in a way that is simple and consistent with many other open source software projects, including other AWS SDKs like JavaScript and .NET. New changes are always prepended to the changelog file in a consistent format, so viewing the changes for multiple versions is now a breeze. The changelog is made available with the source and ZIP distributions, enabling customers to quickly access changes without opening a browser. As an added bonus, because it’s a simple text file, the changes up to the current version can easily be made available for viewing offline. Finally, the file is always updated along with the SDK source, so the list of changes is available as soon as the source code is available.

We hope that with this change, customers will find it easier than ever to keep up to date with the exciting changes being introduced in the AWS SDK for Java. As always, please let us know what you think in the comments below.

AWS Step Functions Fluent Java API

by Andrew Shore

AWS Step Functions, a new service that launched at re:Invent 2016, makes it easier to build complex, distributed applications in the cloud. Using this service, you can create state machines that can connect microservices and activities into a visual workflow. State machines support branching, parallel execution, retry/error handling, synchronization (via Wait states), and task execution (via AWS Lambda or an AWS Step Functions Activity).

The Step Functions console provides excellent support for visualizing and debugging a workflow and for creating state machine descriptions. State machines are described in a JSON document, as described in detail here. Although the console has a great editor for building these documents visually, you might want to write state machines in your IDE via a native Java API. Today, we’re launching a fluent builder API to create state machines in a readable, compact way. This new API is included in the AWS SDK for Java.

 

To get started, create a new Maven project and declare a dependency on the aws-java-sdk-stepfunctions client.

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-stepfunctions</artifactId>
    <version>1.11.86</version>
</dependency>

Let’s take a look at some examples. We’ll go through each blueprint available in the console and translate that to the Java API.

Hello World

The following is a JSON representation of a simple state machine that consists of a single task state. The task calls out to a Lambda function (identified by ARN), passing the input of the state machine to the function. When the function completes successfully, the state machine terminates with the same output as the function.
JSON

{
  "Comment" : "A Hello World example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Let’s rewrite this simple state machine using the new Java API and transform it to JSON. Be sure you include the static import for the fluent API methods.


package com.example;

import static com.amazonaws.services.stepfunctions.builder.StepFunctionBuilder.*;
import com.amazonaws.services.stepfunctions.builder.ErrorCodes;
import com.amazonaws.services.stepfunctions.builder.StateMachine;

public class StepFunctionsDemo {

    public static void main(String[] args) {
        final StateMachine stateMachine = stateMachine()
                .comment("A Hello World example of the Amazon States Language using an AWS Lambda Function")
                .startAt("Hello World")
                .state("Hello World", taskState()
                        .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                        .transition(end()))
                .build();
        System.out.println(stateMachine.toPrettyJson());
    }
}

Let’s take a closer look at the previous example. The first method you call when constructing a state machine is stateMachine(). This returns a mutable StateMachine.Builder that you can use to configure all properties of the state machine. Here, we add a comment describing the purpose of the state machine, indicate the initial state via the startAt() method, and define that state via the state() method. Each state machine must have at least one state and must have a valid path to a terminal state (that is, a state that causes the state machine to end). In this example, we have a single Task state (configured via the taskState() method) that also serves as the terminal state via the End transition (configured by transition(end())).

Once you’ve configured the state machine to your liking, call the build() method on the StateMachine.Builder to produce an immutable StateMachine object. This object can then be transformed into JSON (see toJson() and toPrettyJson()) or passed directly to the CreateStateMachine API in the Java SDK (see below).

The following code creates the state machine defined previously via the service client. The withDefinition() method can take either the raw JSON definition or a StateMachine object. For more information about getting started with the Java SDK, see our AWS Java Developer Guide.

final AWSStepFunctions client = AWSStepFunctionsClientBuilder.defaultClient();
client.createStateMachine(new CreateStateMachineRequest()
        .withName("Hello World State Machine")
        .withRoleArn("arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME")
        .withDefinition(stateMachine));

 

Wait State

The following state machine demonstrates various uses of the Wait state type, which can be used to wait for a given amount of time or until a specific time. Wait states can dynamically wait based on input using the TimestampPath and SecondsPath properties, which are JSON reference paths to a timestamp or an integer, respectively. The Next property identifies the state to transition to after the wait is complete.
JSON

{
  "Comment" : "An example of the Amazon States Language using wait states",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Wait Using Seconds",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Wait Using Seconds" : {
      "Seconds" : 10,
      "Next" : "Wait Using Timestamp",
      "Type" : "Wait"
    },
    "Wait Using Timestamp" : {
      "Timestamp" : "2017-01-16T19:18:55.103Z",
      "Next" : "Wait Using Timestamp Path",
      "Type" : "Wait"
    },
    "Wait Using Timestamp Path" : {
      "TimestampPath" : "$.expirydate",
      "Next" : "Wait Using Seconds Path",
      "Type" : "Wait"
    },
    "Wait Using Seconds Path" : {
      "SecondsPath" : "$.expiryseconds",
      "Next" : "Final State",
      "Type" : "Wait"
    },
    "Final State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Again, we call the stateMachine() method to begin constructing the state machine. Our starting state is a Task state that transitions to the Wait Using Seconds state. The Wait Using Seconds state is configured to wait for 10 seconds before proceeding to the Wait Using Timestamp state. Notice that we use the waitState() method to obtain an instance of WaitState.Builder, which we then use to configure the state. The waitFor() method can accept different types of wait strategies (Seconds, SecondsPath, Timestamp, TimestampPath). Each strategy has a corresponding method in the fluent API (seconds, secondsPath, timestamp, and timestampPath, respectively). Both the SecondsPath and TimestampPath strategies require a valid JsonPath that references data in the input to the state; that input is then used to determine how long to wait.

final Date waitUsingTimestamp =
        Date.from(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(15).toInstant(ZoneOffset.UTC));
final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using wait states")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Wait Using Seconds")))
        .state("Wait Using Seconds", waitState()
                .waitFor(seconds(10))
                .transition(next("Wait Using Timestamp")))
        .state("Wait Using Timestamp", waitState()
                .waitFor(timestamp(waitUsingTimestamp))
                .transition(next("Wait Using Timestamp Path")))
        .state("Wait Using Timestamp Path", waitState()
                .waitFor(timestampPath("$.expirydate"))
                .transition(next("Wait Using Seconds Path")))
        .state("Wait Using Seconds Path", waitState()
                .waitFor(secondsPath("$.expiryseconds"))
                .transition(next("Final State")))
        .state("Final State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();
System.out.println(stateMachine.toPrettyJson());

Retry Failure

Retriers are a mechanism for retrying certain types of states on a given set of error codes. They define the condition on which to retry (via ErrorEquals), the backoff behavior, and the maximum number of retry attempts. At the time of this post, they can be used only with Task states and Parallel states. In the following state machine, the Task state has three retriers. The first retrier retries a custom error code named HandledError that might be thrown from the Lambda function. The initial delay before the first retry attempt is one second (as defined by IntervalSeconds), and the maximum number of retry attempts is five. The BackoffRate determines the delay for each subsequent retry; for example, the delays for the first retrier would be 1, 2, 4, 8 seconds, and so on. The second retrier uses a predefined error code, States.TaskFailed, which matches whenever the task fails for any reason. A full list of predefined error codes can be found here. Finally, the last retrier uses the special error code States.ALL to retry on everything else. If you use the States.ALL error code, it must appear in the last retrier and must be the only code present in that retrier’s ErrorEquals.
JSON

{
  "Comment" : "A Retry example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Retry" : [ {
        "ErrorEquals" : [ "HandledError" ],
        "IntervalSeconds" : 1,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.TaskFailed" ],
        "IntervalSeconds" : 30,
        "MaxAttempts" : 2,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.ALL" ],
        "IntervalSeconds" : 5,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      } ],
      "Type" : "Task"
    }
  }
}

Java API

Let’s see what the previous example looks like in the Java API. Here we use the retrier() method to configure a Retrier.Builder. The errorEquals() method can take one or more error codes that indicate what this retrier handles. The second retrier uses a constant defined in the ErrorCodes class, which contains all predefined error codes supported by the States language. The last retrier uses a special method, retryOnAllErrors(), to indicate the retrier handles any other errors. This is equivalent to errorEquals("States.ALL") but is easier to read and easier to remember. Again, the “retry all” retrier must be last or a validation exception will be thrown.

final StateMachine stateMachine = stateMachine()
        .comment("A Retry example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .retrier(retrier()
                                 .errorEquals("HandledError")
                                 .intervalSeconds(1)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .intervalSeconds(30)
                                 .maxAttempts(2)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .retryOnAllErrors()
                                 .intervalSeconds(5)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
        )
        .build();

System.out.println(stateMachine.toPrettyJson());

Catch Failure

Catchers are a similar error-handling mechanism. Like retriers, they are defined to handle certain error codes that can be thrown from a state. Catchers define a state transition that occurs when an error matches the ErrorEquals list; the target state can then handle the recovery steps needed for that particular failure scenario. Much like retriers, ErrorEquals can contain one or more error codes (either custom or predefined). States.ALL is a special catch-all code that, if present, must appear in the last Catcher.
JSON

{
  "Comment" : "A Catch example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Catch" : [ {
        "Next" : "Custom Error Fallback",
        "ErrorEquals" : [ "HandledError" ]
      }, {
        "Next" : "Reserved Type Fallback",
        "ErrorEquals" : [ "States.TaskFailed" ]
      }, {
        "Next" : "Catch All Fallback",
        "ErrorEquals" : [ "States.ALL" ]
      } ],
      "Type" : "Task"
    },
    "Custom Error Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a custom lambda function exception",
      "Type" : "Pass"
    },
    "Reserved Type Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    },
    "Catch All Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    }
  }
}

Java API

To configure a catcher, first call the catcher() method to obtain a Catcher.Builder. The first catcher handles the custom error code HandledError and transitions to the Custom Error Fallback state. The second handles the predefined States.TaskFailed error code and transitions to the Reserved Type Fallback state. Finally, the last catcher handles all remaining errors and transitions to the Catch All Fallback state. As with retriers, there is a special method, catchAll(), that configures the catcher to handle all error codes. Using catchAll() is preferred over errorEquals("States.ALL").

final StateMachine stateMachine = stateMachine()
        .comment("A Catch example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .catcher(catcher()
                                 .errorEquals("HandledError")
                                 .transition(next("Custom Error Fallback")))
                .catcher(catcher()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .transition(next("Reserved Type Fallback")))
                .catcher(catcher()
                                 .catchAll()
                                 .transition(next("Catch All Fallback"))))
        .state("Custom Error Fallback", passState()
                .result("\"This is a fallback from a custom lambda function exception\"")
                .transition(end()))
        .state("Reserved Type Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .state("Catch All Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

Parallel State

You can use a Parallel state to concurrently execute multiple branches. Branches are themselves pseudo state machines and can contain multiple states (and even nested Parallel states). The Parallel state waits until all branches have terminated successfully before transitioning to the next state. Parallel states support retriers and catchers in the event that execution of a branch fails.
JSON

{
  "Comment": "An example of the Amazon States Language using a parallel state to execute two branches at the same time.",
  "StartAt": "Parallel",
  "States": {
    "Parallel": {
      "Type": "Parallel",
      "Next": "Final State",
      "Branches": [
        {
          "StartAt": "Wait 20s",
          "States": {
            "Wait 20s": {
              "Type": "Wait",
              "Seconds": 20,
              "End": true
            }
          }
        },
        {
          "StartAt": "Pass",
          "States": {
            "Pass": {
              "Type": "Pass",
              "Next": "Wait 10s"
            },
            "Wait 10s": {
              "Type": "Wait",
              "Seconds": 10,
              "End": true
            }
          }
        }
      ]
    },
    "Final State": {
      "Type": "Pass",
      "End": true
    }
  }
}

Java API

To create a Parallel state in the Java API, call the parallelState() method to obtain an instance of ParallelState.Builder. Next, you can add branches of execution via the branch() method. Each branch must specify StartAt (the name of the branch’s initial state) and contain at least one state.

final StateMachine stateMachine = stateMachine()
        .comment(
                "An example of the Amazon States Language using a parallel state to execute two branches at the same time.")
        .startAt("Parallel")
        .state("Parallel", parallelState()
                .transition(next("Final State"))
                .branch(branch()
                                .startAt("Wait 20s")
                                .state("Wait 20s", waitState()
                                        .waitFor(seconds(20))
                                        .transition(end())))
                .branch(branch()
                                .startAt("Pass")
                                .state("Pass", passState()
                                        .transition(next("Wait 10s")))
                                .state("Wait 10s", waitState()
                                        .waitFor(seconds(10))
                                        .transition(end()))))
        .state("Final State", passState()
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

Choice State

A Choice state adds branching logic to a state machine. It consists of one or more choices and, optionally, a default state transition if no choice matches. Each choice rule represents a condition and a transition to enact if that condition evaluates to true. Choice conditions can be simple (StringEquals, NumericLessThan, and so on) or composite conditions built with And, Or, and Not.

In the following example, we have a choice state with two choices, both using the NumericEquals condition, and a default transition if neither choice rule matches.
JSON

{
  "Comment" : "An example of the Amazon States Language using a choice state.",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Choice State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Choice State" : {
      "Default" : "Default State",
      "Choices" : [ {
        "Variable" : "$.foo",
        "NumericEquals" : 1,
        "Next" : "First Match State"
      }, {
        "Variable" : "$.foo",
        "NumericEquals" : 2,
        "Next" : "Second Match State"
      } ],
      "Type" : "Choice"
    },
    "First Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch",
      "Type" : "Task"
    },
    "Second Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch",
      "Type" : "Task"
    },
    "Default State" : {
      "Cause" : "No Matches!",
      "Type" : "Fail"
    },
    "Next State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API

To add a Choice state to your state machine, use the choiceState() method to obtain an instance of ChoiceState.Builder. You can add choice rules via the choice() method on the builder. For simple conditions, there are several overloads for each comparison operator (LTE, LT, EQ, GT, GTE) and data type (String, Numeric, Timestamp, Boolean). In this example, we’re using the eq() method whose first argument is a JsonPath expression referencing the input data to which the condition applies. The second argument differs depending on the type of data you are comparing against; here we’re using an integer for a numeric comparison. Each choice rule must have a transition that occurs if the condition evaluates to true.

final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using a choice state.")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Choice State")))
        .state("Choice State", choiceState()
                .choice(choice()
                                .transition(next("First Match State"))
                                .condition(eq("$.foo", 1)))
                .choice(choice()
                                .transition(next("Second Match State"))
                                .condition(eq("$.foo", 2)))
                .defaultStateName("Default State"))
        .state("First Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch")
                .transition(next("Next State")))
        .state("Second Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch")
                .transition(next("Next State")))
        .state("Default State", failState()
                .cause("No Matches!"))
        .state("Next State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());
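The example above uses only simple conditions. For composite conditions, the fluent API also provides combinators that mirror And, Or, and Not. The following is a minimal sketch, under the assumption that and(), gte(), and lt() are available as static methods on StepFunctionBuilder; the state names here are illustrative and not part of the example API above.

final StateMachine rangeExample = stateMachine()
        .comment("A sketch of a choice state with a composite condition.")
        .startAt("Range Check")
        .state("Range Check", choiceState()
                .choice(choice()
                        .transition(next("In Range"))
                        // Matches when 1 <= $.foo < 10; and(), gte(), lt() assumed from the fluent API
                        .condition(and(gte("$.foo", 1), lt("$.foo", 10))))
                .defaultStateName("Out of Range"))
        .state("In Range", passState()
                .transition(end()))
        .state("Out of Range", failState()
                .cause("Value out of range"))
        .build();
System.out.println(rangeExample.toPrettyJson());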

You can find more references and tools for building state machines in the Step Functions documentation, and post your questions and feedback to the Step Functions Developers Forum.

AWS Toolkit for Eclipse: Serverless Applications

I’m glad to announce that the AWS Lambda plugin in the AWS Toolkit for Eclipse now supports serverless application development for Java. A serverless application (also called a Lambda-based application) is composed of functions triggered by events. In this blog post, I provide two examples to show you how to leverage the Eclipse IDE to create and deploy a serverless application quickly.

Install the AWS Toolkit for Eclipse

To install the latest AWS Toolkit for Eclipse, go to this page and follow the instructions at the top right of the page. You should install the AWS Toolkit for Eclipse Core, AWS CloudFormation Tool, and AWS Lambda Plugin to use this feature. The following figure shows where you can choose these three components in the installation wizard. To complete the installation, you need to review and accept the license and restart Eclipse.

Installation wizard showing the AWS Toolkit for Eclipse Core, AWS CloudFormation Tool, and AWS Lambda Plugin components

Create a Serverless Project

To create a serverless project, click the AWS button on the toolbar and choose New AWS Serverless Project…. The following wizard opens. You can also create a new serverless project in the usual way: choose File, New, Other, AWS, and then choose AWS Serverless Java Project. As you can see in the following figure, the toolkit provides two blueprints to start with: article and hello-world.

  • article – This is a simple serverless application that helps manage articles. It consists of two Lambda functions triggered by API events: GetArticle and PutArticle, which store articles in the backend service and retrieve them for the front end. This blueprint also leverages an Amazon S3 bucket for storing article content and an Amazon DynamoDB table for storing article metadata.
  • hello-world – This blueprint project only includes a simple standalone Lambda function, HelloWorld, which is not triggered by any event and not bound to any resource. It simply takes in a String and outputs it with the prefix “Hello”. If an empty String is provided, it outputs “Hello World”. (A sketch of this behavior follows the list.)
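For illustration, the hello-world behavior described above corresponds roughly to the following handler sketch. This is a hypothetical example (the package and class layout are assumed); the code the blueprint actually generates may differ.

package com.example.demo;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical sketch of the hello-world blueprint behavior described above.
public class HelloWorld implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        // An empty input produces "Hello World"; otherwise prefix the input with "Hello".
        if (input == null || input.isEmpty()) {
            return "Hello World";
        }
        return "Hello " + input;
    }
}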

The New AWS Serverless Project wizard showing the article and hello-world blueprints

You can also create a serverless project from a serverless template by choosing Select a Serverless template file and then importing the template file. This template file is a simplified version of the SAM (AWS Serverless Application Model) file that a serverless application uses to define its resource stack. The following snippet is from the article blueprint template and defines the Lambda function GetArticle. Unlike a real SAM file, you don’t need to provide the CodeUri and Runtime properties, and you only need to provide the class name for the Handler property instead of the fully qualified class name. When you import a template file, the AWS Toolkit for Eclipse generates all the Lambda function hooks and the Lambda proxy integration models used as the API event input and output for the Lambda functions.

{
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "com.serverless.demo.GetArticle",
    "Runtime": "Java",
    "CodeUri": "s3://serverless-bucket/get-article.zip",
    "Policies": [
      "AmazonDynamoDBReadOnlyAccess",
      "AmazonS3ReadOnlyAccess"
    ],
    ...
  }
}

The following figure shows the startup view after you create the article blueprint project. As you can see from the project structure, the AWS Toolkit for Eclipse puts all the Lambda functions defined in the template into a function package, and all the required models into a model package. You can check the serverless.template file for a closer look at this project. As we mentioned earlier, this is a simplified version of a SAM file, which is derived from the AWS CloudFormation template format. See the README.html page for the next steps.

Startup view of the article blueprint project

Deploy a Serverless Project

If the serverless project is created from a blueprint, you can deploy it directly to AWS. Notice that the article blueprint creates an S3 bucket and a DynamoDB table for use by the Lambda functions. You can open the serverless.template file and customize the resource names in the Parameters section, as shown in the following snippet.

"Parameters" : {
    "ArticleBucketName" : {
        "Type" : "String",
        "Default" : "serverless-blueprint-article-bucket",
        "Description" : "Name of S3 bucket used to store the article content.",
        "MinLength" : "0"
    },
    "ArticleTableName" : {
        "Type" : "String",
        "Default" : "serverless-blueprint-article-table",
        "Description" : "Name of DynamoDB table used to store the article metadata.",
        "MinLength" : "0"
      },
      ...
}

To deploy this project to AWS, right-click the project name in the explorer view, choose Amazon Web Services, and then choose Deploy Serverless Project. Or, right-click any Lambda function file in the workspace, choose AWS Lambda, and then choose Deploy Serverless Project. You will see the following wizard. Choose the S3 bucket, type the CloudFormation stack name, and then choose Finish. The AWS Toolkit for Eclipse generates the fat JAR file for the underlying Lambda functions and uploads it to the S3 bucket you chose. It also updates the in-memory serverless.template file into a real SAM file and uploads it to the S3 bucket; AWS CloudFormation reads this file to create the stack.

The Deploy Serverless Project wizard

While the AWS CloudFormation stack is being created, a Stack Editor view is shown to indicate the current status of the stack. This page is refreshed automatically every five seconds, but you can also refresh it manually by clicking the refresh icon at the top right of the view. When the status reaches CREATE_COMPLETE, a link appears to the right of the Output label in the top section. This link is the Prod stage endpoint of the API Gateway API created by this serverless project.

Stack Editor view showing the deployment status

Test a Serverless Project

After successfully deploying the article project, you can test the two APIs by hitting the API Prod endpoint through browser tools or command line tools.

  • Using the curl command line tool:
    $ curl --data "This is an article!" https://s5cvlouqwe.execute-api.us-west-2.amazonaws.com/Prod?id=1
    Successfully inserted article 1
    $ curl -X GET https://s5cvlouqwe.execute-api.us-west-2.amazonaws.com/Prod?id=1
    This is an article!
  • Using the Simple REST Client plugin in Chrome. You can also use this plugin to send a POST request to the endpoint.

We’d like to know what you think of the workflow for developing serverless applications with the AWS Toolkit for Eclipse. Please let us know if there are other features you want to see in this toolkit. We appreciate your comments.

Introducing Support for Java SDK Generation in Amazon API Gateway

by Andrew Shore

We are excited to announce support for generating a Java SDK for services fronted by Amazon API Gateway. The generated Java SDKs are compatible with Java 8 and later. Generated SDKs have first-class support for API keys, custom or AWS Identity and Access Management (IAM) authentication, automatic and configurable retries, exception handling, and more. In this blog post, we’ll walk through how to create a sample API, generate a Java SDK from that API, and explore various features of the generated SDK. This post assumes you have some familiarity with API Gateway concepts.

Create an Example API

To start, let’s create a sample API by using the API Gateway console. Navigate to the API Gateway console and select your preferred region. Choose Create API, and then choose the Example API option. Choose Import to create the example API.

Creating the example API in the API Gateway console

The example API is pretty simple. It consists of four operations.

  1. A GET on the API root resource that returns HTML describing the API.
  2. A GET on the /pets resource that returns a list of Pets.
  3. A POST on the /pets resource that creates a new Pet.
  4. A GET on the /pets/{petId} resource that returns a specific Pet by ID.

Deploy the API

Next, you’ll deploy the API to a stage.

Under Actions, choose Deploy API, name the stage test, and then choose Deploy.

Deploying the example API to the test stage

After you deploy the API, on the SDK Generation tab, choose Java as the platform. For Service Name, type PetStore. For Java Package Name, type com.petstore.client. Leave the other fields empty. Choose Generate SDK, and then download and unzip the SDK package.

Generating the Java SDK on the SDK Generation tab

There are several configuration options available for the Java platform. Before proceeding, let’s go over them.

Service Name – Used to name the Java Interface you’ll use to make calls to your API.

Java Package Name – The name of the package your generated SDK code will be placed under. This is typically based on your organization’s name.

The following optional parameters are used when publishing the SDK to a remote repository, like Maven Central.

Java Build System – The build system to configure for the generated SDK, either maven or gradle. The default is maven.

Java Group ID – Typically identifies your organization. Defaults to Java Package Name if not provided.

Java Artifact ID – Identifies the library or product. Defaults to Service Name if not provided.

Java Artifact Version – Version identifier for the published SDK. Defaults to 1.0-SNAPSHOT if not provided.

Compile Client

Navigate to the location where you unzipped the SDK package. If you’ve been following the example, the package will be set up as a Maven project. Ensure that Maven and a JDK are installed correctly, and run the following command to install the client package into your local Maven repository. This makes it available for other local projects to use.

mvn install

Set Up an Application

Next, you’ll set up an application that depends on the client package you previously installed. Because the client requires Java 8 or later, any application that depends on the client must also be built with Java 8. Here, you’ll use a simple Maven Archetype to generate an empty Java 8 project.

mvn archetype:generate -B -DarchetypeGroupId=pl.org.miki -DarchetypeArtifactId=java8-quickstart-archetype -DarchetypeVersion=1.0.0 \
    -DgroupId=com.petstore.app \
    -DartifactId=petstore-app \
    -Dversion=1.0 \
    -Dpackage=com.petstore.app

Navigate to the newly created project and open the pom.xml file. Add the following snippet to the <dependencies>…</dependencies> section of the XML file. If you changed any of the SDK export parameters in the console, use those values instead.

<dependency>
    <groupId>com.petstore.client</groupId>
    <artifactId>PetStore</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Create a file src/main/java/com/petstore/app/AppMain.java with the following contents.

package com.petstore.app;

import com.petstore.client.*;
import com.petstore.client.model.*;
import com.amazonaws.opensdk.*;
import com.amazonaws.opensdk.config.*;

public class AppMain {

    public static void main(String[] args) {

    }
}

Build the application to ensure everything is configured correctly.

mvn install

To run the application, you can use the following Maven command. (As you make changes, be sure to rerun mvn install before running the application.)

mvn exec:java -Dexec.mainClass="com.petstore.app.AppMain"

Exploring the SDK

Creating the Client

The first thing you need to do is construct an instance of the client. You can use the client builder obtained from a static factory method on the client interface. All configuration methods on the builder are optional (except for authorization-related configuration). In the following code, you obtain an instance of the builder, override some of the configuration, and construct a client. The following settings are for demonstration only, and are not necessarily the recommended settings for creating service clients.

PetStore client = PetStore.builder()
        .timeoutConfiguration(new TimeoutConfiguration()
                .httpRequestTimeout(20_000)
                .totalExecutionTimeout(30_000))
        .connectionConfiguration(new ConnectionConfiguration()
                .maxConnections(100)
                .connectionMaxIdleMillis(120))
        .build();

The builder exposes a ton of useful configuration methods for timeouts, connection management, proxy settings, custom endpoints, and authorization. Consult the Javadocs for full details on what is configurable.

Making API Calls

Once you’ve built a client, you’re ready to make an API call.

Call the GET /pets API to list the current pets. The following code prints out each pet to STDOUT. For each API in the service, a method is generated on the client interface. That method’s name will be based on a combination of the HTTP method and resource path, although this can be overridden (more on that later in this post).

client.getPets(new GetPetsRequest())
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

The GET /pets operation exposes a query parameter named type that can be used to filter the pets that are returned. You can set modeled query parameters and headers on the request object.

client.getPets(new GetPetsRequest().type("dog"))
        .getPets()
        .forEach(p -> System.out.printf("Dog: %s\n", p));

Let’s try creating a Pet and inspecting the result from the service. Here you call the POST /pets operation, supplying information about the new Pet. The CreatePetResult contains the unmarshalled service response (as modeled in the Method Response) and additional HTTP-level metadata that’s available via the sdkResponseMetadata() method.

final CreatePetResult result = client.createPet(
        new CreatePetRequest().newPet(new NewPet()
                                              .type(PetType.Bird)
                                              .price(123.45)));
System.out.printf("Response message: %s \n", result.getNewPetResponse().getMessage());
System.out.println(result.sdkResponseMetadata().header("Content-Type"));
System.out.println(result.sdkResponseMetadata().requestId());
System.out.println(result.sdkResponseMetadata().httpStatusCode());

The GET /pets/{petId} operation uses a path placeholder to get a specific Pet, identified by its ID. When making a call with the SDK, all you need to do is supply the ID. The SDK handles the rest.

GetPetResult pet = client.getPet(new GetPetRequest().petId("1"));
System.out.printf("Pet by ID: %s\n", pet);

Overriding Configuration at the Request Level

In addition to the client-level configuration you supply when creating the client (by using the client builder), you can also override certain configurations at the request level. This “request config” is scoped only to calls made with that request object, and takes precedence over any configuration in the client.

client.getPets(new GetPetsRequest()
                       .sdkRequestConfig(SdkRequestConfig.builder()
                                                 .httpRequestTimeout(1000).build()))
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

You can also set custom headers or query parameters via the request config. This is useful for adding headers or query parameters that are not modeled by your API. The parameters are scoped to calls made with that request object.

client.getPets(new GetPetsRequest()
                       .sdkRequestConfig(SdkRequestConfig.builder()
                                                 .customHeader("x-my-custom-header", "foo")
                                                 .customQueryParam("MyCustomQueryParam", "bar")
                                                 .build()))
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

Naming Operations

It’s possible to override the default names given to operations through the API Gateway console or during an import from a Swagger file. Let’s rename the GetPet operation (GET /pets/{petId}) to GetPetById by using the console. First, navigate to the GET method on the /pets/{petId} resource.

The GET method on the /pets/{petId} resource in the API Gateway console

Choose Method Request, and then expand the SDK Settings section.

The SDK Settings section under Method Request

Edit the Operation Name field and enter GetPetById. Save the change and deploy the API to the stage you created previously. Regenerate a Java SDK, and it should have the updated naming for that operation.

GetPetByIdResult pet = client.getPetById(new GetPetByIdRequest().petId("1"));
System.out.printf("Pet by ID: %s\n", pet);

If you are importing an API from a Swagger file, you can customize the operation name by using the operationId field. The following snippet is from the example API, and shows how the operationId field is used.

...
    "/pets/{petId}": {
      "get": {
        "tags": [
          "pets"
        ],
        "summary": "Info for a specific pet",
        "operationId": "GetPet",
        "produces": [
          "application/json"
        ],
...

Final Thoughts

This post highlights how to generate the Java SDK of an API in API Gateway, and how to call the API using the SDK in an application. For more information about how to build the SDK package, initiate a client with other configuration properties, make raw requests, configure authorization, handle exceptions, and configure retry behavior, see the README.html file in the uncompressed SDK project folder.

AWS Toolkit for Eclipse: VPC Configuration for an AWS Elastic Beanstalk Environment

I’m glad to announce that the AWS Elastic Beanstalk plugin in the AWS Toolkit for Eclipse now supports configuring a VPC for an Elastic Beanstalk environment. If you’re new to the AWS Toolkit for Eclipse, see the User Guide for a basic introduction and setup guidance. If you’re new to the AWS Elastic Beanstalk plugin, see AWS Elastic Beanstalk and Eclipse Integration to learn how to manage your application and environment within Eclipse. If you’re not familiar with VPC configurations for Elastic Beanstalk environments, see Using Elastic Beanstalk with Amazon VPC.

The following screenshots show the plugin New Server wizard pages for VPC configuration. If you’ve used the Elastic Beanstalk console, these user interfaces should be familiar. On the Configure Application and Environment page, you can choose Select a VPC to use when creating your environment to open the VPC Configuration page. Otherwise, when you click Next, the Permissions page opens directly.


Configure Application and Environment Page

On the VPC Configuration page, you can set up the VPC configuration, such as subnets, security groups, and so on. You must select at least one subnet for a valid configuration. ELB visibility is disabled unless you chose Load Balanced Web Server Environment as the Environment type on the previous Configure Application and Environment page. For more information about the options and how to configure them for your VPC needs, see Using Elastic Beanstalk with Amazon VPC.

VPC Configuration Page

As you can see, it’s easy to configure a VPC for your Elastic Beanstalk environment using the AWS Toolkit for Eclipse plugin. Please let us know if there are other features you want to see in this toolkit. We appreciate your comments.

Waiters in the AWS SDK for Java

by Meghana Lokesh Byaramadu

We’re pleased to announce the addition of the waiters feature in the AWS SDK for Java (take a look at the release notes). Waiters make it easier to wait for a resource to transition into a desired state, which is a very common task when you’re working with services that are eventually consistent (such as Amazon DynamoDB) or have a lead time for creating resources (such as Amazon EC2). Before waiters, it was difficult to come up with the polling logic to determine whether a particular resource had transitioned into a desired state. Now, with waiters, you can simply abstract that polling logic into a single API call.

Polling without Waiters

For example, let’s say you wanted to create a DynamoDB table and access it soon after it’s created to add an item into it. There’s a chance that if the table isn’t created already, a ResourceNotFoundException error will be thrown. In this scenario, you have to poll until the table becomes active and ready for use.

// Create an AmazonDynamoDB client
AmazonDynamoDB client = AmazonDynamoDBClientBuilder
        .standard()
        .withRegion(Regions.US_WEST_2)
        .build();

// Create a table
client.createTable(new CreateTableRequest().withTableName(tableName)
        .withKeySchema(new KeySchemaElement().withKeyType(KeyType.HASH)
                .withAttributeName("hashKey"))
        .withAttributeDefinitions(new AttributeDefinition()
                .withAttributeType(ScalarAttributeType.S)
                .withAttributeName("hashKey"))
        .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L)));

Without waiters, polling would look like this.

// Poll up to 5 times for the table to become active
int attempts = 0;
while (attempts < 5) {
    try {
        DescribeTableRequest request = new DescribeTableRequest(tableName);
        DescribeTableResult result = client.describeTable(request);
        String status = result.getTable().getTableStatus();
        if (status.equals("ACTIVE")) {
            break;
        }
    } catch (ResourceNotFoundException e) {
        // Table does not exist yet; keep polling
    }
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
    attempts++;
}

Polling with Waiters

Waiters make it easier to abstract out the polling logic into a simple API call. Let’s take a look at how you can create and use waiters to more easily determine whether a DynamoDB table is successfully created and ready to use for further transactions.

// Create a waiter to wait on successful creation of the table
Waiter<DescribeTableRequest> waiter = client.waiters().tableExists();
try {
    waiter.run(new WaiterParameters<>(new DescribeTableRequest(tableName)));
}
catch (WaiterUnrecoverableException e) {
    // Explicit short circuit when the resource transitions into
    // an undesired state
}
catch (WaiterTimedOutException e) {
    // Failed to transition into desired state even after polling
}
catch (AmazonDynamoDBException e) {
    // Unexpected service exception
}

For more details, see AmazonDynamoDBWaiters.

Async Waiters

We also offer an async variant of waiters that returns a Future object that promises to hold the result of the computation after it’s done. An async waiter requires a callback interface that is invoked after the Future object is fulfilled. The callback provides a way to carry out other tasks, depending on whether the resource entered the desired state (onWaitSuccess) or not (onWaitFailure).

To use an async waiter, you must call an async variant of run.

Future<Void> future = client.waiters()
        .tableExists()
        .runAsync(new WaiterParameters<DescribeTableRequest>()
                          .withRequest(new DescribeTableRequest(tableName)),
                  new WaiterHandler<DescribeTableRequest>() {
                      @Override
                      public void onWaitSuccess(DescribeTableRequest request) {
                          System.out.println("Table creation success!!!!!");
                      }

                      @Override
                      public void onWaitFailure(Exception e) {
                          e.printStackTrace();
                      }
                  });

future.get(5, TimeUnit.MINUTES);

To learn more, see Waiters.

We are excited about this new addition to the SDK! Let us know what you think in the comments section below.

Throttled Retries Now Enabled by Default

by Kyle Thomson

Back in March (1.10.59), the AWS SDK for Java introduced throttled retries, an opt-in feature that could be enabled in the SDK ClientConfiguration to retry failed service requests. Typically, client-side retries are used to avoid unnecessarily surfacing exceptions caused by transient network or service issues. However, when there are longer-running issues (for example, a network or service outage) these retries are less useful. With throttled retries enabled, service calls can fail fast rather than retrying pointlessly.

After testing this code for the past five months, we’ve turned on throttled retries by default.

If you use Amazon CloudWatch to collect AWS SDK metrics (see this post for details), you’ll be pleased to know that there’s a new metric that tracks when retries are throttled. Look for the ThrottledRetryCount metric in the CloudWatch console.
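Note that SDK metric collection has to be enabled before ThrottledRetryCount (or any other SDK metric) is published to CloudWatch. The linked post covers the details; one way to turn it on programmatically is sketched below, assuming credentials and region are already configured for the default metric collector.

import com.amazonaws.metrics.AwsSdkMetrics;

public class EnableSdkMetrics {
    public static void main(String[] args) {
        // Start the SDK's default CloudWatch metric collection for this JVM.
        AwsSdkMetrics.enableDefaultMetrics();
    }
}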

Of course, this remains an option you can configure. If you don’t want to use throttled retries, you can disable the feature through the ClientConfiguration option like so:

ClientConfiguration config = new ClientConfiguration().withThrottledRetries(false);
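The resulting ClientConfiguration is then passed to whichever service client you build. A minimal sketch, using Amazon S3 purely as an example client:

// Disabling throttled retries on a client built with a client builder.
// Amazon S3 is used here only as an example service.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(new ClientConfiguration().withThrottledRetries(false))
        .build();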

Feel free to leave questions or feedback in the comments.

DevOps Meets Security: Security Testing Your AWS Application: Part III – Continuous Testing

by Marcilio Mendonca

This is part III of a blog post series in which we do a deep dive on automated security testing for AWS applications. In part I, we discussed how AWS Java developers can create security unit tests to verify the correctness of their AWS applications by testing individual units of code in isolation. In part II, we went one step further and showed how developers can create integration tests that, unlike unit tests, interact with real software components and AWS resources. In this last post in the series, we’ll walk you through how to incorporate the provided security tests into a CI/CD pipeline (created in AWS CodePipeline) to automate security verification when new changes are pushed into the code repository.

Security Tests

In part I and part II of this post, we created a suite of unit and integration tests for a simple S3 wrapper Java class. Unit tests focused on testing the class in isolation by using mock objects instead of real Amazon S3 objects and resources. In addition, integration tests were created to complement unit tests and provide an additional layer of verification that uses real objects and resources like S3 buckets, objects, and versions. In this last post in the series (part III), we’ll show how the unit and integration security tests can be incorporated into a CI/CD pipeline to automatically verify the security behavior of code being pushed through the pipeline.

Incorporating Security Tests into a CI/CD Pipeline

Setting Up

Git and CodeCommit 

Follow the steps in the Integrating AWS CodeCommit with Jenkins blog post to install Git and create an AWS CodeCommit repo. Download the source code and push it to the AWS CodeCommit repo you created.

Jenkins and plugins on EC2

Follow the steps in the Building Continuous Deployment on AWS with AWS CodePipeline, Jenkins and AWS Elastic Beanstalk blog post to install and configure Jenkins on Amazon EC2. Make sure you install the AWS CodePipeline Jenkins plugin to enable AWS CodePipeline and Jenkins integration. In addition, create three Jenkins Maven jobs by following the steps described in the “Create a Jenkins Build Job” section of that blog post, but use the following values for each job instead.

SecTestsOnAWS (maven project)

  • AWS Region: choose an AWS region
  • Source Code Management Category: Build
  • Source Code Management Provider: SecTestsBuildProvider
  • Build Goals and options: package -DskipUnitTests=true -DskipIntegrationTests=true
  • Post-build Actions (AWS CodePipeline Publisher), Output Location: target/

SecUnitTestsOnAWS (maven project)

  • AWS Region: choose an AWS region
  • Source Code Management Category: Test
  • Source Code Management Provider: SecUnitTestsProvider
  • Build Goals and options: verify -DskipIntegrationTests=true
  • Post-build Actions (AWS CodePipeline Publisher), Output Location: target/

SecIntegTestsOnAWS (maven project)

  • AWS Region: choose an AWS region
  • Source Code Management Category: Test
  • Source Code Management Provider: SecIntegTestsProvider
  • Build Goals and options: verify -DskipUnitTests=true
  • Post-build Actions (AWS CodePipeline Publisher), Output Location: target/

Make sure you pick an AWS region where AWS CodePipeline is available.

Here’s an example of the configuration options in the Jenkins UI for project SecTestsOnAWS:

Setting up Jenkins to build S3ArtifactManager using Maven

AWS CodePipeline

In the AWS CodePipeline console, create a pipeline with three stages, as shown here.

AWS CodePipeline CI/CD pipeline with security unit/integration tests actions

Stage #1: Source

  • Choose AWS CodeCommit as your source provider and enter your repo and branch names where indicated.

Stage #2: Build

Create a build action with the following parameters:

  • Action category: Build
  • Action name: Build
  • Build provider: SecTestsBuildProvider (must match the corresponding Jenkins entry in project SecTestsOnAWS)
  • Project name: SecTestsOnAWS
  • Input Artifact #1: MyApp
  • Output Artifact #1: MyAppBuild

Stage #3: Security-Tests

Create two pipeline actions as follows:

Action #1: Unit-Tests

  • Action category: Test
  • Action name: Unit-Tests
  • Build provider: SecUnitTestsProvider (must match the corresponding Jenkins entry in project SecUnitTestsOnAWS)
  • Project name: SecUnitTestsOnAWS
  • Input Artifact #1: MyApp
  • Output Artifact #1: MyUnitTestedBuild

Action #2: Integration-Tests

  • Action category: Test
  • Action name: Integration-Tests
  • Build provider: SecIntegTestsProvider (must match the corresponding Jenkins entry in project SecIntegTestsOnAWS)
  • Project name: SecIntegTestsOnAWS
  • Input Artifact #1: MyApp
  • Output Artifact #1: MyIntegTestedBuild

We are not adding a pipeline stage/action for application deployment because we have built a software component (S3ArtifactManager), not a full-fledged application. However, we encourage the reader to create a simple web or standalone application that uses the S3ArtifactManager class and then add a deployment action to the pipeline targeting an AWS Elastic Beanstalk environment, as described in this blog post.

Triggering the Pipeline

After the pipeline has been created, choose the Release Change button and watch the pipeline build the S3ArtifactManager component.

If you are looking for a more hands-on experience in writing security tests, we suggest that you extend the S3ArtifactManager API to allow clients to retrieve versioned objects from an S3 bucket (for example, getObject(String bucketName, String key, String versionId)) and write security tests for the new API. 
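As a starting point, such an extension might look like the following sketch. This is a hypothetical method (not part of the S3ArtifactManager class discussed in this series) that assumes the same s3 client field used by upload().

// Hypothetical extension of S3ArtifactManager: retrieve a specific version of an object.
public S3Object getObject(String bucketName, String key, String versionId)
   throws AmazonServiceException, AmazonClientException {
   GetObjectRequest request = new GetObjectRequest(bucketName, key, versionId);
   return s3.getObject(request);
}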

Final Remarks

In this last post of the series, we showed how to automate the building and testing of our S3ArtifactManager component by creating a pipeline using AWS CodePipeline, AWS CodeCommit, and Jenkins. As a result, any code changes pushed to the repo are now automatically verified by the pipeline and rejected if security tests fail.

We hope you found this series helpful. Feel free to leave your feedback in the comments.

Happy security testing!

DevOps Meets Security: Security Testing Your AWS Application: Part II – Integration Testing

by Marcilio Mendonca

This is part II of a blog post series in which we do a deep dive on automated security testing for AWS applications. In part I, we discussed how AWS Java developers can create security unit tests to verify the correctness of their AWS applications by testing individual units of code in isolation. In this post, we go one step further and show how developers can create integration tests that, unlike unit tests, interact with real software components and AWS resources. In part III of this series, we’ll walk through how to incorporate the provided security tests into a CI/CD pipeline (created in AWS CodePipeline) to enforce security verification when new code is pushed into the code repository. Even though we focus on security, the tests provided can be easily generalized to other domains.

S3 Artifact Manager

In part I of this post, we introduced a simple S3 wrapper component built to illustrate the security tests discussed in this post series. The wrapper, represented by a Java class named S3ArtifactManager (full source code can be accessed here), uses AWS SDK for Java APIs to provide a more secure way to store objects in Amazon S3.

Here we illustrate the method upload() implemented as part of class S3ArtifactManager that can be used to securely upload objects to an S3 bucket. For details about this method, see part I of this series. In the next section, we’ll write integration tests for method upload().

public String upload(String s3Bucket, String s3Key, File file) 
   throws AmazonServiceException, AmazonClientException {
   if (!s3.doesBucketExist(s3Bucket)) {
      s3.createBucket(s3Bucket);
   }

   // enable bucket versioning
   SetBucketVersioningConfigurationRequest configRequest = 
      new SetBucketVersioningConfigurationRequest(s3Bucket, 
         new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED));
   s3.setBucketVersioningConfiguration(configRequest);

   // enable server-side encryption (SSE-S3)
   PutObjectRequest request = new PutObjectRequest(s3Bucket, s3Key, file);
   ObjectMetadata objectMetadata = new ObjectMetadata();
   objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
   request.setMetadata(objectMetadata);

   // upload object to S3
   PutObjectResult putObjectResult = s3.putObject(request);

   return putObjectResult.getVersionId();
}

Security Integration Tests

Security integration tests complement security unit tests by making use of real objects and resources instead of mocking behavior. The integration tests use a real S3 client object and perform security verifications by inspecting real S3 resources, such as S3 buckets, objects, and versions. For each unit test discussed in part I of this series, we have created a corresponding integration test using JUnit, a popular Java test framework.

Verifying bucket versioning enablement on S3 buckets

The first security integration test verifies that upon calling method upload(), versioning is enabled for existing buckets. We use a real S3 client object configured with local IAM credentials and AWS Region settings. In the code that follows, prior to uploading an object, we create an unversioned S3 bucket by calling the s3Client object directly. Our assumption is that the bucket will be versioned later when method upload() is called. The test then verifies that the versionId returned for the object exists in the bucket (that is, the state of the bucket has changed as expected after we called method upload()).

Bucket versioning is asserted by checking the bucket’s current versioning configuration against the expected value (BucketVersioningConfiguration.ENABLED). If this security assertion fails, it means the bucket is not versioned. The test should fail and, as in the case of unit tests, block the CI/CD pipeline until developers can figure out the problem.

After each test is performed, the bucket and all versions of all objects are deleted (see the @After annotation in the source code).

Important: Each security integration test is fully self-contained and will leave S3 in the state it was in before the test was run. To make this possible, we append UUIDs to bucket and object names. This also allows us to run integration tests in parallel because each test operates in a different bucket.
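
For illustration, the setup and teardown could look roughly like the following sketch (the field names, UUID prefixes, and client wiring are assumptions; see the sample project for the actual code):

@Before
public void setUp() {
   // unique names keep each test self-contained and safe to run in parallel
   s3Bucket = "sec-it-bucket-" + UUID.randomUUID();
   s3Key = "sec-it-key-" + UUID.randomUUID();

   // real S3 client that picks up local credentials and Region configuration
   s3Client = AmazonS3ClientBuilder.defaultClient();
   s3ArtifactManager = new S3ArtifactManager(s3Client); // assumes constructor injection
}

@After
public void tearDown() {
   // delete every version of every object, then the bucket itself
   // (pagination omitted; each test creates only a handful of versions)
   VersionListing versions = s3Client.listVersions(s3Bucket, null);
   for (S3VersionSummary summary : versions.getVersionSummaries()) {
      s3Client.deleteVersion(s3Bucket, summary.getKey(), summary.getVersionId());
   }
   s3Client.deleteBucket(s3Bucket);
}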

There is a similar test to check bucket versioning enablement of newly created buckets. See the source code for details.

@Test
public void testUploadWillEnableVersioningOnExistingS3Bucket() 
   throws Exception {   
   
   // bucket must exist prior to uploading object for this test
   s3Client.createBucket(s3Bucket);
   
   // call object under test
   String versionId = s3ArtifactManager.upload(s3Bucket, s3Key, 
      createSampleFile(s3Key));
   
   // assert object's version exists in the bucket
   S3Object s3Object = s3Client.getObject(
      new GetObjectRequest(s3Bucket, s3Key));
   assertEquals("Uploaded S3 object's versionId does not match expected value",
      versionId, s3Object.getObjectMetadata().getVersionId());
 
   // assert bucket versioning is enabled
   BucketVersioningConfiguration bucketConfig = s3Client
      .getBucketVersioningConfiguration(s3Bucket);
   assertEquals(BucketVersioningConfiguration.ENABLED, 
      bucketConfig.getStatus());
}

Verifying server-side encryption of uploaded S3 objects

The last security integration test verifies that an object uploaded to S3 is encrypted using SSE-S3. The test calls method upload() to upload a new object, retrieves the object’s metadata, and checks that AES256 is set as the encryption algorithm for SSE-S3, a simple but effective verification. If this assertion fails, it means SSE-S3 has not been used to encrypt the S3 object, which invalidates our initial assumptions.

@Test
public void testUploadAddsSSE_S3EncryptedObjectToBucket() 
   throws Exception {
   
   // call object under test
   s3ArtifactManager.upload(s3Bucket, s3Key, createSampleFile(s3Key));
   
   // verify uploaded object is encrypted (SSE-S3)
   ObjectMetadata s3ObjectMetadata = s3Client.getObjectMetadata(
      new GetObjectMetadataRequest(s3Bucket, s3Key));
   assertEquals("Object has not been encrypted using SSE-S3 (AES256 encryption algorithm)",
      AES256, s3ObjectMetadata.getSSEAlgorithm());
}

Running the Security Tests Locally

Setting Up

Follow these steps on your local workstation to run the integration tests:

  • Install the AWS Command Line Interface (AWS CLI)
  • Configure the AWS CLI tool by running aws configure

    • Make sure your AWS credentials under ~/.aws/credentials allow the user running the tests to create S3 buckets and upload S3 objects in your account
    • Make sure you specify a valid AWS Region in the ~/.aws/config file (for example, us-east-1); the tests pick up these settings automatically, as shown in the sketch after this list
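
With these settings in place, the integration tests can construct a real S3 client that picks up the default credentials and Region automatically. A minimal sketch (the sample project may wire its client differently):

// reads credentials from ~/.aws/credentials and the Region from ~/.aws/config
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

// hypothetical wiring: assumes S3ArtifactManager takes the client as a constructor argument
S3ArtifactManager s3ArtifactManager = new S3ArtifactManager(s3Client);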

 

Running the Integration Tests

You can use Maven to run the provided security tests locally.

  • Navigate to the root directory where you installed the source code. (This is where the pom.xml file resides.)
  • Type mvn verify -DskipUnitTests=true to run the security integration tests.

Expected output:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Concurrency config is parallel='none', perCoreThreadCount=true, threadCount=2, useUnlimitedThreads=false

Running com.amazonaws.samples.s3.artifactmanager.integrationtests.S3ArtifactManagerIntegrationTest

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.802 sec

You’ll see in the output that all three security integration tests passed. Integration tests take longer to run (7.8 seconds) than the unit tests (0.15 seconds) covered in part I. We recommend that these two types of tests be run separately, as we enforced here, and that they have different triggers, as we’ll discuss in part III of this series.

Final Remarks

In this blog post, we discussed how AWS Java developers can create integration tests that verify the integrated behavior of software components in their AWS applications. We used real S3 objects and resources (bucket, objects, versions) and triggered test execution using Maven.

In the third part of this series, we’ll walk through how the provided security tests can be incorporated into a CI/CD pipeline created in AWS CodePipeline to enforce security verification whenever new code changes are pushed into the code repository.

 

DevOps Meets Security: Security Testing Your AWS Application: Part I – Unit Testing

by Marcilio Mendonca | on | in Java | Permalink | Comments |  Share

The adoption of DevOps practices allows organizations to be agile while deploying high-quality software to customers on a regular basis. The CI/CD pipeline is an important component of the DevOps model. It automates critical verification tasks, making fully automated software deployments possible. Security tests are critical to the CI/CD pipeline. These tests verify whether the code conforms to the security specifications. For example, a security unit test could be used to enforce that a given software component must use server-side encryption to upload objects to an Amazon S3 bucket. Similarly, a security integration test could be applied to verify that the same software component always enables S3 bucket versioning.

In this three-part blog post series, we’ll do a deep dive on automated security testing for AWS applications. In this first post, we’ll discuss how AWS Java developers can create security unit tests to verify the correctness of their AWS applications by testing individual units of code in isolation. In part II, we’ll go one step further and show how developers can create integration tests that, unlike unit tests, interact with real software components and AWS resources. Finally, in part III, we’ll walk through how the provided security tests can be incorporated into a CI/CD pipeline (created through AWS CodePipeline) to enforce security verification whenever new code changes are pushed into the code repository. Even though we focus on security, the tests provided can be easily generalized to other domains.

S3 Artifact Manager

We start by introducing a simple S3 wrapper component built to illustrate the security tests discussed in this series. The wrapper, represented by a Java class named S3ArtifactManager (full source code can be accessed here), uses AWS SDK for Java APIs to provide a more secure way to store objects in S3.

Here we show an excerpt of class S3ArtifactManager that describes a method called upload() that can be used to securely upload objects to an S3 bucket. The method uses S3 bucket versioning to make sure each new upload of the same object preserves all previous versions of that object. A versionId is returned to clients each time an object (or a new version of it) is stored in the bucket so that specific versions can be retrieved later. Versioning is enabled by building a SetBucketVersioningConfigurationRequest object that takes a BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED) instance as a parameter and by passing that request to s3.setBucketVersioningConfiguration().

In addition, method upload() uses the server-side encryption with Amazon S3-managed encryption keys (SSE-S3) feature to enforce that objects stored in the bucket are encrypted. We simply create an ObjectMetadata instance, set AES-256 as the encryption algorithm via objectMetadata.setSSEAlgorithm(), and attach the metadata object to the PutObjectRequest instance used to store the S3 object. Finally, the object is uploaded to S3 and its versionId is returned to the client.

public String upload(String s3Bucket, String s3Key, File file) 
   throws AmazonServiceException, AmazonClientException {
   if (!s3.doesBucketExist(s3Bucket)) {
      s3.createBucket(s3Bucket);
   }

   // enable bucket versioning
   SetBucketVersioningConfigurationRequest configRequest = 
      new SetBucketVersioningConfigurationRequest(s3Bucket, 
         new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED));
   s3.setBucketVersioningConfiguration(configRequest);

   // enable server-side encryption (SSE-S3)
   PutObjectRequest request = new PutObjectRequest(s3Bucket, s3Key, file);
   ObjectMetadata objectMetadata = new ObjectMetadata();
   objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
   request.setMetadata(objectMetadata);

   // upload object to S3
   PutObjectResult putObjectResult = s3.putObject(request);

   return putObjectResult.getVersionId();
}

Because security is key in the cloud, security components like our S3ArtifactManager might interest individuals and organizations responsible for meeting security compliance requirements (for example, PCI). In this context, developers and other users of such components must be confident that the security functionality provided behaves as expected. A bug in the component (for example, an object that is stored unencrypted or overwrites a previous version) can be disastrous. In addition, users must remain confident as new versions of the component are released. How can confidence be achieved continuously?

It turns out that DevOps practices improve confidence. In a traditional software development approach, coding the logic of the method upload() and running a few manual tests might be enough, but in a DevOps setting, this is not acceptable. DevOps practices require mechanisms that automatically verify code behavior to be in place. In fact, these mechanisms are just as important as the code’s main logic. Which mechanisms are we talking about? Unit and integration tests!

In the next section, we’ll discuss how unit tests can be leveraged to verify the security behavior of our S3ArtifactManager wrapper. In parts II and III of this series, we’ll dive deep into integration tests and CI/CD automation, respectively.

Security Unit Tests

Next, we’ll create a suite of security unit tests to verify the behavior of our upload() method. We’ll leverage two popular test frameworks in Java named JUnit and Mockito to code the unit tests.

The primary purpose of unit tests is to test a unit of code in isolation. Here we define unit as the Java class under test (in our case, the S3ArtifactManager class). In order to isolate the class under test, we mock all other objects used in the class, such as the S3 client object. Mocking means that our unit tests will not interact with a real S3 resource and will not upload objects into an S3 bucket. Instead, we use a mock object with predefined behavior.
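
As a rough sketch, the test fixture might be wired as follows (the constants, field values, and constructor injection of the client are assumptions inferred from the tests below; the sample project may differ):

import static org.mockito.Matchers.any;
import static org.mockito.Mockito.when;

import java.io.File;

import org.junit.Before;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

@RunWith(MockitoJUnitRunner.class)
public class S3ArtifactManagerUnitTest {

   private static final String VERSION_ID = "test-version-id"; // assumed test constant
   private static final String AES256 = "AES256";              // assumed test constant

   @Mock
   private AmazonS3 s3Client; // mocked S3 client

   private S3ArtifactManager s3ArtifactManager;
   private final String s3Bucket = "unit-test-bucket";
   private final String s3Key = "unit-test-key";
   private final File file = new File("sample.txt");

   @Before
   public void setUp() {
      // assumes the class under test accepts the (mocked) client via its constructor
      s3ArtifactManager = new S3ArtifactManager(s3Client);

      // stub putObject() so that upload() returns a known versionId
      PutObjectResult putObjectResult = new PutObjectResult();
      putObjectResult.setVersionId(VERSION_ID);
      when(s3Client.putObject(any(PutObjectRequest.class))).thenReturn(putObjectResult);
   }

   // ... the security unit tests discussed in this post go here
}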

Verifying bucket versioning enablement on S3 buckets

The first security unit test is named testUploadWillEnableVersioningOnExistingS3Bucket. It verifies that method upload() enables bucket versioning on an existing bucket upon uploading an object to that bucket. Note that we are using a Mockito mock instead of a real object to represent the S3 client instance. For this reason, we need to specify the behavior of the mock for the functionality used by method upload(): we use Mockito’s when statement to return true when s3Client.doesBucketExist() is called, because this is the condition we want to test. Method upload() is then called using test values for the S3 bucket, key, and file parameters.

@Test
public void testUploadWillEnableVersioningOnExistingS3Bucket() {
   
   // set Mock behavior
   when(s3Client.doesBucketExist(s3Bucket)).thenReturn(true); 
   
   // call object under test
   String versionId = s3ArtifactManager.upload(s3Bucket, s3Key, file);
   
   // assert versionID is the expected value
   assertEquals("VersionId returned is incorrect", 
      VERSION_ID, versionId);
   
   // assert that a new bucket has NOT been created
   verify(s3Client, never()).createBucket(s3Bucket);
   
   // capture the SetBucketVersioningConfigurationRequest object
   ArgumentCaptor<SetBucketVersioningConfigurationRequest> 
      bucketVerConfigRequestCaptor = ArgumentCaptor.forClass(
         SetBucketVersioningConfigurationRequest.class);
   verify(s3Client).setBucketVersioningConfiguration(
      bucketVerConfigRequestCaptor.capture());
   
   // assert versioning is set on the bucket
   SetBucketVersioningConfigurationRequest bucketVerConfigRequest = 
      bucketVerConfigRequestCaptor.getValue();
   assertEquals("Versioning of S3 bucket could not be verified",
      BucketVersioningConfiguration.ENABLED,
      bucketVerConfigRequest.getVersioningConfiguration().getStatus());
}

The first verification checks that the versionId value returned matches the constant value expected by the test. We then verify that s3Client.createBucket() has never been called, because the bucket already exists (as stubbed with Mockito’s when statement). These are standard verifications, not related to security.

Next, we verify the security behavior. We use Mockito’s argument captor feature to capture the parameter passed to setBucketVersioningConfiguration(), which is a real object. We then check whether bucket versioning is enabled in that object by comparing the captured value with the constant BucketVersioningConfiguration.ENABLED. If this security verification fails, it means that versioning was not correctly configured. In this scenario, because a critical security assertion could not be verified, the CI/CD pipeline should be blocked until the code is fixed.

We also created a security unit test to verify bucket versioning enablement for newly created buckets. We’ve omitted the code for brevity, but you can download the full source here. This test is similar to the one we just discussed; the main differences are that the doesBucketExist() stub now returns false and that the test verifies the createBucket API was called exactly once (verify(s3Client, times(1)).createBucket(s3Bucket)).
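
Based on that description, the omitted test might look roughly like this (the method name and assertion messages are assumptions):

@Test
public void testUploadWillEnableVersioningOnNewlyCreatedS3Bucket() {

   // the bucket does not exist yet, so upload() is expected to create it
   when(s3Client.doesBucketExist(s3Bucket)).thenReturn(false);

   // call object under test
   String versionId = s3ArtifactManager.upload(s3Bucket, s3Key, file);

   // assert versionId is the expected value
   assertEquals("VersionId returned is incorrect", VERSION_ID, versionId);

   // assert that a new bucket has been created exactly once
   verify(s3Client, times(1)).createBucket(s3Bucket);

   // assert versioning is enabled on the newly created bucket
   ArgumentCaptor<SetBucketVersioningConfigurationRequest> bucketVerConfigRequestCaptor =
      ArgumentCaptor.forClass(SetBucketVersioningConfigurationRequest.class);
   verify(s3Client).setBucketVersioningConfiguration(
      bucketVerConfigRequestCaptor.capture());
   assertEquals("Versioning of S3 bucket could not be verified",
      BucketVersioningConfiguration.ENABLED,
      bucketVerConfigRequestCaptor.getValue().getVersioningConfiguration().getStatus());
}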

Verifying server-side encryption of uploaded S3 objects

The second security unit test verifies that uploaded S3 objects use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). The verification once again uses Mockito’s argument captor to capture the request object passed to s3Client.putObject(). The captured request is used in two ways: first, to verify that no customer-provided key was set (because upload() is expected to use SSE-S3), and then to assert that the request’s metadata is not null and returns AES256 as the encryption algorithm, the value expected for SSE-S3 encryption. Once again, if this security verification fails, the CI/CD pipeline should be blocked until the SSE-S3 code implementation is fixed and verified.

@Test
public void testUploadAddsSSE_S3EncryptedObjectToBucket() {
   
   // call object under test
   s3ArtifactManager.upload(s3Bucket, s3Key, file);
   
   // capture putObjectRequest object
   ArgumentCaptor<PutObjectRequest> putObjectRequestCaptor = 
      ArgumentCaptor.forClass(PutObjectRequest.class);
   verify(s3Client).putObject(putObjectRequestCaptor.capture());
   PutObjectRequest putObjectRequest = 
      putObjectRequestCaptor.getValue();
   
   // assert that there's no customer key provided as 
   // we're expecting SSE-S3
   assertNull("A customer key was incorrectly used (SSE-C). SSE-S3 encryption expected instead.",
      putObjectRequest.getSSECustomerKey());
   
   // assert that the SSE-S3 'AES256' algorithm was set as part of 
   // the request's metadata 
   assertNotNull("PutObjectRequest's metadata object must be non-null and enforce SSE-S3 encryption",
      putObjectRequest.getMetadata());
   assertEquals("Object has not been encrypted using SSE-S3 (AES256 encryption algorithm)",
      AES256, putObjectRequest.getMetadata().getSSEAlgorithm());
}

Running the Security Tests Locally

Setting Up

Because the unit tests rely entirely on mocks, no AWS credentials or resources are needed; all you need on your local workstation is the source code, a JDK, and Maven.

Running the Unit Tests

You can use Maven to run the provided security unit tests locally.

  • Navigate to the root directory where you installed the source code. (This is where the pom.xml file resides.)
  • Type mvn verify -DskipIntegrationTests=true to run the security unit tests.

Expected output:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Running com.amazonaws.samples.s3.artifactmanager.unittests.S3ArtifactManagerUnitTest

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.155 sec

You’ll see in the output that all three security unit tests passed. That is, the individual units of code tested are behaving as expected in isolation.

Final Remarks

In the first part of this series, we discussed how AWS Java developers can create unit tests that verify the behavior of individual software components in their AWS applications. We used mocks to replace actual objects (for example, an S3 client) and used Maven to trigger test execution.

In the second part of this series, we’ll discuss integration tests that will use real AWS objects and resources.