AWS Toolkit for Eclipse: Support for AWS CodeCommit and AWS CodeStar

by Zhaoxi Zhang

I am pleased to announce that the AWS Toolkit for Eclipse now supports AWS CodeCommit and AWS CodeStar. This means you can create, view, clone, and delete your AWS CodeCommit repositories in the AWS Toolkit for Eclipse. You can also import existing projects under your AWS CodeStar account directly into the Eclipse IDE.

Git Credentials Configuration

We recommend that you use Git credentials with HTTPS to connect to your AWS CodeCommit repositories. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit.

In the new version of the AWS Toolkit for Eclipse, you will see an entry for AWS CodeCommit on the Eclipse Preferences page, shown here. (To install the AWS Toolkit for Eclipse, follow the instructions on the AWS Toolkit for Eclipse page.) On this page, you can configure the Git credentials for your AWS accounts. For more information, see Create Git Credentials for HTTPS Connections to AWS CodeCommit. You can type the newly generated user name and password into the text fields, or import the CSV file generated from the IAM console directly into Eclipse.

AWS CodeCommit Explorer

An entry for AWS CodeCommit also appears in AWS Explorer, as shown here. To open this view, click the drop-down box next to the AWS icon in the toolbar, and select Show AWS Explorer View. You can create, view, clone, and delete repositories in this view.

  • Create a Repository
    To create a repository, right-click AWS CodeCommit and then select Create Repository, as shown here. Type the repository name and an optional description in the Create Repository dialog box. The newly created repository will appear under AWS CodeCommit.

    Figure: AWS CodeCommit Explorer View

    Figure: Create Repository Dialog Box

  • View a Repository
    To view a repository, double-click the repository name in AWS Explorer. This will open the repository editor where you can see the metadata for the repository, as shown here. The repository editor also shows the latest 10 commits for the selected branch. To refresh the repository editor, click the refresh icon on the top-right corner of the page.
  • Clone a Repository
To clone a repository, click the Check out button in the repository editor, or right-click the repository name in AWS Explorer and select Clone Repository. If you haven’t configured Git credentials for your current AWS account in Eclipse, a dialog box will prompt you to configure them.


    After you have configured your Git credentials, you will see the following pages for selecting a branch and a local destination. These pages have the same look and feel as EGit, the Eclipse plugin for managing your projects with Git. For more information, see the EGit Tutorial.

    Figure: Branch Selection Page

    Figure: Destination Page

  • Delete a Repository
    To delete a repository from AWS CodeCommit, right-click the repository name and select Delete Repository. When the following dialog box is displayed, type the repository name to confirm the deletion.

AWS CodeStar Project Checkout

You can use the AWS Toolkit for Eclipse to check out AWS CodeStar projects and edit them in the Eclipse IDE. To import your AWS CodeStar projects to Eclipse, click the drop-down box next to the AWS icon in the toolbar, and select Import AWS CodeStar Project. You will see all your AWS CodeStar projects under the selected account and region.

The plugin for AWS CodeStar finds all the AWS CodeCommit repositories that are linked to the selected project. From the Select repository drop-down list, choose the repository, and then click Next. You can also configure the Git credentials on this page if they have not been configured for the selected account.

Resources

For information about AWS CodeCommit, see the AWS CodeCommit documentation. For information about AWS CodeStar, see the AWS CodeStar documentation.

Conclusion

We hope you find these new features useful. If you have questions or other feedback about using the AWS Toolkit for Eclipse, feel free to leave it in the comments.

AWS Toolkit for Eclipse: VPC Configuration Enhancement for AWS Elastic Beanstalk Environments

by Zhaoxi Zhang

From the blog post VPC Configuration for an AWS Elastic Beanstalk Environment, you learned how to deploy your web application to AWS Elastic Beanstalk by using the AWS Toolkit for Eclipse. In this blog post, I’m happy to announce that you can now configure Elastic Load Balancing (ELB) subnets and Amazon EC2 subnets separately. The following screenshots show that the experience in the AWS Toolkit for Eclipse is consistent with that in the Elastic Beanstalk console.


Figure: VPC Configuration in the AWS Elastic Beanstalk Console

Figure: VPC Configuration in the AWS Toolkit for Eclipse

Notice that the ELB subnet configuration is enabled only when the environment type is Load Balanced Web Server Environment (see the following screenshot for the type selection). Please read through Using Elastic Beanstalk with Amazon VPC to be sure you understand all the VPC parameters. Inappropriate parameter combinations can cause deployment failures. Follow the rules below when you create an AWS Elastic Beanstalk environment:

  • You must select at least one subnet for EC2 and for ELB.
  • You must select at least one ELB subnet in each Availability Zone where there is an EC2 subnet, and vice versa.
  • You may select only one EC2 subnet per Availability Zone.
  • When one subnet is used for both EC2 and ELB, select the Associate Public IP Address check box unless you have set up a NAT instance to route traffic from the Internet to your ELB subnet.

Figure: Application and Environment Configuration

Client Constructors Now Deprecated in the AWS SDK for Java

by Kyle Thomson

A couple of weeks ago you might have noticed that the 1.11.84 version of the AWS SDK for Java included several deprecations – the most notable being the deprecation of the client constructors.

Historically, you’ve been able to create a service client as shown here.

AmazonSNS sns = new AmazonSNSClient();

This mechanism is now deprecated in favor of using one of the builders to create the client as shown here.

AmazonSNS sns = AmazonSNSClient.builder().build();

The client builders (described in detail in this post) are superior to the basic constructors in the following ways.

Immutable

Clients created via the builder are immutable. The region/endpoint (and other data) can’t be changed. Therefore, clients are safe to reuse across multiple threads.
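
Because of this immutability, a single client instance can be built once and shared freely. The following is a minimal sketch of one client being reused across a thread pool; the region and topic ARN are placeholders, not values from this post.

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClient;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedClientDemo {

    // Built once; its region, endpoint, and credentials can't change afterwards.
    private static final AmazonSNS SNS = AmazonSNSClient.builder()
            .withRegion("us-west-1")
            .build();

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int n = i;
            // Safe: every task shares the same immutable client.
            pool.submit(() -> SNS.publish(
                    "arn:aws:sns:us-west-1:ACCOUNT_ID:my-topic", "message " + n));
        }
        pool.shutdown();
    }
}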

Explicit Region

At build time, the AWS SDK for Java can validate that a client has all the required information to function correctly, namely, a region. A client created via the builders must have a region that is defined either explicitly (i.e., by calling withRegion) or as part of the DefaultAwsRegionProviderChain. If the builder can’t determine the region for a client, an SdkClientException is thrown. Region is an important concept when communicating with services in AWS. It not only determines where your request will go, but also how it is signed. Requiring a region means the SDK can behave predictably without depending on hidden defaults.
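
For example, a client built with no region available anywhere fails fast rather than falling back to a hidden default. The following fragment is a sketch; whether the exception is actually thrown depends on your environment’s region configuration.

try {
    AmazonSNS sns = AmazonSNSClient.builder().build();
} catch (SdkClientException e) {
    // Thrown when neither withRegion(...) nor the DefaultAwsRegionProviderChain
    // (environment variables, shared config file, EC2 instance metadata) supplies a region.
}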

Cleaner

Using the builder allows a client to be constructed in a single statement using method chaining.

AmazonSNS sns = AmazonSNSClient.builder()
        .withRegion("us-west-1")
        .withClientConfiguration(cfg)
        .withCredentials(creds)
        .build();

The deprecated constructors are no longer created for new service clients. They will be removed from existing clients in a future major version bump (although they’ll remain in all future releases of the 1.x family of the AWS SDK for Java).

AWS Toolkit for Eclipse: Support for Creating Maven Projects for AWS, Lambda, and Serverless Applications

by Zhaoxi Zhang

I’m glad to announce that you can now use the AWS Toolkit for Eclipse to create Maven projects for AWS, Lambda, and serverless applications. If you’re new to using the AWS Toolkit for Eclipse to create a Lambda application, see the Lambda plugin for more information. If you’re not familiar with serverless applications, see the Serverless Application Model for more information. If you have been using the AWS Toolkit for Eclipse, you’ll notice the extra Maven configuration panel in the user interface where you can create a new AWS, Lambda, or serverless application (see the following screenshots).

The AWS Toolkit for Eclipse no longer downloads the archived AWS SDK for Java ZIP file automatically and puts it on the class path for your AWS application. Instead, it manages the dependencies using Maven: it checks the remote Maven repository for the latest AWS SDK for Java version and downloads it automatically if it isn’t already installed in your local Maven repository. This means that when a new version of the AWS SDK for Java is released, it can take a while to download it before you can create the new application.

Create a Maven Project for an AWS application

In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Project. You’ll see the following page, where you can configure the AWS SDK for Java samples you want to include in your application.

Figure: New AWS Project wizard

Here is the structure of the newly created AWS application Java project. You can edit the pom.xml file later to meet your needs to build, test, and deploy your application with Maven.

Figure: Structure of the AWS application Java project
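
For example, one way to pin the SDK version that the project builds against is to add the SDK’s Maven BOM to pom.xml. The following is a sketch; the version shown is illustrative.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-bom</artifactId>
      <version>1.11.86</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>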

Create a Maven Project for a Lambda Application

Similar to how you create a new AWS application project, you can create a new AWS Lambda project. In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Lambda Java Project.

Figure: New AWS Lambda Java Project wizard

Here is the structure of the newly created AWS Lambda Java project.

Figure: Structure of the AWS Lambda Java project

Create a Maven Project for a Serverless Application

To create a new serverless application, choose the AWS icon drop-down button, and then choose New AWS Serverless Project. The following screenshot shows the status of the project creation while Maven downloads the application dependencies.

Figure: Creating the serverless project

Here is the structure of the newly created serverless application Java project.

Figure: Structure of the serverless application Java project

Build a Serverless Application Locally with Maven

You can also use the Maven command line in a terminal to build and test the project you just created, as shown in the following screenshot and the example commands that follow.

Figure: Building with the Maven command line
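
For example, from the project root you might run the following commands to run the unit tests and then produce the deployable artifact (the exact goals depend on how the generated pom.xml is configured):

mvn test
mvn clean package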

Please let us know what you think of the new Maven support in the AWS Toolkit for Eclipse. We appreciate your comments.

Java SDK Bundled Dependency

by Kyle Thomson

The AWS SDK for Java depends on a handful of third-party libraries, most notably Jackson for JSON processing and the Apache HTTP client for communication over the wire. For most customers, resolving these as part of their standard Maven dependency resolution is perfectly fine; Maven automatically pulls in the required versions, or uses existing versions if they’re already specified in the project.

However, the AWS SDK for Java requires certain minimum versions of these libraries to function properly, and some customers are unable to change the versions of the third-party libraries they use. Maybe another dependency requires a specific version, or there are breaking changes between third-party versions that large portions of the code base rely on. Whatever the case may be, these version conflicts can create problems when you try to use the AWS SDK for Java.

We’re pleased to introduce the AWS SDK for Java bundle dependency. This new module, which you can include in your Maven project, contains all the SDK clients for all services and all of the third-party libraries in a single JAR. The third-party libraries are “relocated” to new package names to avoid class conflicts with different versions of the same libraries on a project’s classpath. To use this version of the SDK, simply include the following Maven dependency in your project.

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>${aws.sdk.version}</version>
</dependency>

Of course, because we relocated the third-party libraries, they’re no longer available under their original import names unless the project explicitly adds those libraries as dependencies. For example, if a project relied on the AWS SDK for Java bringing in the Joda Time library, when the project switches to the bundle dependency it also needs to add a specific dependency for Joda Time, as shown in the snippet that follows.
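
The explicit Joda Time dependency might look like the following (the version is illustrative; use the version your project needs):

<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.9.7</version>
</dependency>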

The relocated classes are intended for internal use only by the AWS SDK; we strongly recommend that you do not refer to classes under com.amazonaws.thirdparty.* in your own code. The third-party libraries included in the bundled dependency and moved to the com.amazonaws.thirdparty.* package include Jackson, the Apache HTTP client, and Joda Time; after relocation, org.joda.time.DateTime, for example, ships in the bundle as com.amazonaws.thirdparty.joda.time.DateTime.

Because the bundle dependency includes all of the dependent libraries, it’s a larger binary to pull down when dependencies are resolved (about 50 MB at the time of this writing, and this will increase with the introduction of each new service and each new third-party library). In addition, if a project explicitly imports one of the third-party libraries that the SDK includes, classes will be duplicated (albeit in different packages), which increases the memory requirements of an application. For these reasons, we recommend that you use the bundled dependency only if you need to.

If a project has the combination of a version clash and a limited total project size (for example, AWS Lambda limits the package size to 50 MB), the bundled dependency might not be the right solution. Instead, you can build your own version of the AWS SDK for Java from the open source code on GitHub. For example, if you need to resolve a conflict only for the Joda Time library, you can include a build configuration like the following in your Maven project:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <configuration>
    <artifactSet>
      <includes>
        <include>joda-time:joda-time</include>
        <include>com.amazonaws:*</include>
      </includes>
    </artifactSet>
    <relocations>
      <relocation>
        <pattern>org.joda</pattern>
        <shadedPattern>com.amazonaws.thirdparty.joda</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>

Although this means you need to build your own version of the SDK and install it into your own repository, it gives you great flexibility for the third-party libraries and/or services you want to include. Check out the Maven Shade Plugin for more details about how it works.

We hope this new module is useful for projects where there’s a dependency clash. As always, please leave your comments or feedback below!

CHANGELOG for the AWS SDK for Java

by Dongie Agnir

We are happy to announce that beginning with version 1.11.82, the source and ZIP distributions of the AWS SDK for Java now include a CHANGELOG.md file that lists the most notable changes for each release.

In the past, changes for each release of the AWS SDK for Java were published to the AWS Release Notes website, but this approach had some drawbacks. Customers wishing to view the set of changes for multiple versions on the website needed to run a search for each version they were interested in. Many customers acquire the source code through our GitHub repository, so viewing the release notes meant potentially opening a browser and navigating away from the code itself. Finally, although rare, sometimes there’s a delay between the release of a new version of the SDK and the availability of the release notes.

By implementing a changelog file, we hope to address these problems in a way that is simple and consistent with many other open source software projects, including other AWS SDKs like JavaScript and .NET. New changes are always prepended to the changelog file in a consistent format, so viewing the changes for multiple versions is now a breeze. The changelog is made available with the source and ZIP distributions, enabling customers to quickly access changes without opening a browser. As an added bonus, because it’s a simple text file, the changes up to the current version can easily be made available for viewing offline. Finally, the file is always updated along with the SDK source, so the list of changes is available as soon as the source code is available.

We hope that with this change, customers will find it easier than ever to keep up to date with the exciting changes being introduced in the AWS SDK for Java. As always, please let us know what you think in the comments below.

AWS Step Functions Fluent Java API

by Andrew Shore

AWS Step Functions, a new service that launched at re:Invent 2016, makes it easier to build complex, distributed applications in the cloud. Using this service, you can create state machines that can connect microservices and activities into a visual workflow. State machines support branching, parallel execution, retry/error handling, synchronization (via Wait states), and task execution (via AWS Lambda or an AWS Step Functions Activity).

The Step Functions console provides excellent support for visualizing and debugging a workflow and for creating state machine descriptions. State machines are described in a JSON document, as described in detail here. Although the console has a great editor for building these documents visually, you might want to write state machines in your IDE via a native Java API. Today, we’re launching a fluent builder API to create state machines in a readable, compact way. This new API is included in the AWS SDK for Java.


To get started, create a new Maven project and declare a dependency on the aws-java-sdk-stepfunctions client.

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-stepfunctions</artifactId>
    <version>1.11.86</version>
</dependency>

Let’s take a look at some examples. We’ll go through each blueprint available in the console and translate that to the Java API.

Hello World

The following is a JSON representation of a simple state machine that consists of a single task state. The task calls out to a Lambda function (identified by ARN), passing the input of the state machine to the function. When the function completes successfully, the state machine terminates with the same output as the function.
JSON

{
  "Comment" : "A Hello World example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Let’s rewrite this simple state machine using the new Java API and transform it to JSON. Be sure you include the static import for the fluent API methods.


package com.example;

import static com.amazonaws.services.stepfunctions.builder.StepFunctionBuilder.*;

import com.amazonaws.services.stepfunctions.builder.StateMachine;

public class StepFunctionsDemo {

    public static void main(String[] args) {
        final StateMachine stateMachine = stateMachine()
                .comment("A Hello World example of the Amazon States Language using an AWS Lambda Function")
                .startAt("Hello World")
                .state("Hello World", taskState()
                        .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                        .transition(end()))
                .build();
        System.out.println(stateMachine.toPrettyJson());
    }
}

Let’s take a closer look at the previous example. The very first method you will always call when constructing a state machine is stateMachine(). This returns a mutable StateMachine.Builder that you can use to configure all properties of the state machine. Here, we’re adding a comment describing the purpose of the state machine, indicating the initial state via the startAt() method, and defining that state via the state() method. Each state machine must have at least one state in it and must have a valid path to a terminal state (that is, a state that causes the state machine to end). In this example, we have a single TaskState (configured via the taskState() method) that also serves as the terminal state via the End transition (configured by transition(end())).

Once you configure a state machine to your liking, you can call the build() method on the StateMachine.Builder to produce an immutable StateMachine object. This object can then be transformed into JSON (see toJson() and toPrettyJson()), or it can be passed directly to the CreateStateMachine API in the Java SDK (see below).

The following code creates the state machine defined previously via the service client. The withDefinition() method can take either the raw JSON or a StateMachine object. For more information about getting started with the Java SDK, see our AWS Java Developer Guide.

final AWSStepFunctions client = AWSStepFunctionsClientBuilder.defaultClient();
client.createStateMachine(new CreateStateMachineRequest()
        .withName("Hello World State Machine")
        .withRoleArn("arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME")
        .withDefinition(stateMachine));


Wait State

The following state machine demonstrates various uses of the Wait state type, which can be used to wait for a given amount of time or until a specific time. Wait states can dynamically wait based on input using the TimestampPath and SecondsPath properties, which are JSON reference paths to a timestamp or an integer, respectively. The Next property identifies the state to transition to after the wait is complete.
JSON

{
  "Comment" : "An example of the Amazon States Language using wait states",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Wait Using Seconds",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Wait Using Seconds" : {
      "Seconds" : 10,
      "Next" : "Wait Using Timestamp",
      "Type" : "Wait"
    },
    "Wait Using Timestamp" : {
      "Timestamp" : "2017-01-16T19:18:55.103Z",
      "Next" : "Wait Using Timestamp Path",
      "Type" : "Wait"
    },
    "Wait Using Timestamp Path" : {
      "TimestampPath" : "$.expirydate",
      "Next" : "Wait Using Seconds Path",
      "Type" : "Wait"
    },
    "Wait Using Seconds Path" : {
      "SecondsPath" : "$.expiryseconds",
      "Next" : "Final State",
      "Type" : "Wait"
    },
    "Final State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Again, we call the stateMachine() method to begin constructing the state machine. Our start-at state is a Task state that transitions to the Wait Using Seconds state. The Wait Using Seconds state is configured to wait for 10 seconds before proceeding to the Wait Using Timestamp state. Notice that we use the waitState() method to obtain an instance of WaitState.Builder, which we then use to configure the state. The waitFor() method can accept different types of wait strategies (Seconds, SecondsPath, Timestamp, TimestampPath). Each strategy has a corresponding method in the fluent API (seconds, secondsPath, timestamp, and timestampPath, respectively). Both the SecondsPath and TimestampPath strategies require a valid JsonPath that references data in the input to the state; this input is then used to determine how long to wait.

final Date waitUsingTimestamp =
        Date.from(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(15).toInstant(ZoneOffset.UTC));
final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using wait states")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Wait Using Seconds")))
        .state("Wait Using Seconds", waitState()
                .waitFor(seconds(10))
                .transition(next("Wait Using Timestamp")))
        .state("Wait Using Timestamp", waitState()
                .waitFor(timestamp(waitUsingTimestamp))
                .transition(next("Wait Using Timestamp Path")))
        .state("Wait Using Timestamp Path", waitState()
                .waitFor(timestampPath("$.expirydate"))
                .transition(next("Wait Using Seconds Path")))
        .state("Wait Using Seconds Path", waitState()
                .waitFor(secondsPath("$.expiryseconds"))
                .transition(next("Final State")))
        .state("Final State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();
System.out.println(stateMachine.toPrettyJson());

Retry Failure

Retriers are a mechanism to retry certain types of states on a given set of error codes. They define both the condition on which to retry (via ErrorEquals) and the backoff behavior and maximum number of retry attempts. At the time of this post, they can be used only with Task states and Parallel states. In the following state machine, the Task state has three retriers. The first retrier retries a custom error code named HandledError that might be thrown from the Lambda function. The initial delay of the first retry attempt is one second (as defined by IntervalSeconds), and the maximum number of retry attempts is set at five. The BackoffRate determines the delay of each subsequent retry; for example, the delays for the first retrier would be 1, 2, 4, and 8 seconds. The second retrier uses a predefined error code, States.TaskFailed, which matches whenever the task fails (for whatever reason). A full list of predefined error codes can be found here. Finally, the last retrier uses the special error code States.ALL to retry on everything else. If you use the States.ALL error code, it must appear in the last retrier and must be the only code present in ErrorEquals.
JSON

{
  "Comment" : "A Retry example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Retry" : [ {
        "ErrorEquals" : [ "HandledError" ],
        "IntervalSeconds" : 1,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.TaskFailed" ],
        "IntervalSeconds" : 30,
        "MaxAttempts" : 2,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.ALL" ],
        "IntervalSeconds" : 5,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      } ],
      "Type" : "Task"
    }
  }
}

Java API

Let’s see what the previous example looks like in the Java API. Here we use the retrier() method to configure a Retrier.Builder. The errorEquals() method can take one or more error codes that indicate what this retrier handles. The second retrier uses a constant defined in the ErrorCodes class, which contains all predefined error codes supported by the Amazon States Language. The last retrier uses a special method, retryOnAllErrors(), to indicate that the retrier handles any other errors. This is equivalent to errorEquals("States.ALL") but is easier to read and remember. Again, the “retry all” retrier must be last, or a validation exception will be thrown.

final StateMachine stateMachine = stateMachine()
        .comment("A Retry example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .retrier(retrier()
                                 .errorEquals("HandledError")
                                 .intervalSeconds(1)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .intervalSeconds(30)
                                 .maxAttempts(2)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .retryOnAllErrors()
                                 .intervalSeconds(5)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
        )
        .build();

System.out.println(stateMachine.toPrettyJson());

Catch Failure

Catchers are a similar error-handling mechanism. Like retriers, they can be defined to handle certain error codes that can be thrown from a state. Catchers define a state transition that occurs when an error matches the ErrorEquals list. The transition state can handle the recovery steps needed for that particular failure scenario. Much like retriers, ErrorEquals can contain one or more error codes (either custom or predefined). States.ALL is a special catch-all that, if present, must be in the last catcher.
JSON

{
  "Comment" : "A Catch example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Catch" : [ {
        "Next" : "Custom Error Fallback",
        "ErrorEquals" : [ "HandledError" ]
      }, {
        "Next" : "Reserved Type Fallback",
        "ErrorEquals" : [ "States.TaskFailed" ]
      }, {
        "Next" : "Catch All Fallback",
        "ErrorEquals" : [ "States.ALL" ]
      } ],
      "Type" : "Task"
    },
    "Custom Error Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a custom lambda function exception",
      "Type" : "Pass"
    },
    "Reserved Type Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    },
    "Catch All Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    }
  }
}

Java API

To configure a catcher, first call the catcher() method to obtain a Catcher.Builder. The first catcher handles the custom error code HandledError and transitions to the Custom Error Fallback state. The second handles the predefined States.TaskFailed error code and transitions to the Reserved Type Fallback state. Finally, the last catcher handles all remaining errors and transitions to the Catch All Fallback state. As with retriers, there is a special method, catchAll(), that configures the catcher to handle all error codes. Use of catchAll() is preferred over errorEquals("States.ALL").

final StateMachine stateMachine = stateMachine()
        .comment("A Catch example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .catcher(catcher()
                                 .errorEquals("HandledError")
                                 .transition(next("Custom Error Fallback")))
                .catcher(catcher()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .transition(next("Reserved Type Fallback")))
                .catcher(catcher()
                                 .catchAll()
                                 .transition(next("Catch All Fallback"))))
        .state("Custom Error Fallback", passState()
                .result("\"This is a fallback from a custom lambda function exception\"")
                .transition(end()))
        .state("Reserved Type Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .state("Catch All Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

Parallel State

You can use a Parallel state to concurrently execute multiple branches. Branches are themselves pseudo state machines and can contain multiple states (and even nested Parallel states). The Parallel state waits until all branches have terminated successfully before transitioning to the next state. Parallel states support retriers and catchers in the event that execution of a branch fails.
JSON

{
  "Comment": "An example of the Amazon States Language using a parallel state to execute two branches at the same time.",
  "StartAt": "Parallel",
  "States": {
    "Parallel": {
      "Type": "Parallel",
      "Next": "Final State",
      "Branches": [
        {
          "StartAt": "Wait 20s",
          "States": {
            "Wait 20s": {
              "Type": "Wait",
              "Seconds": 20,
              "End": true
            }
          }
        },
        {
          "StartAt": "Pass",
          "States": {
            "Pass": {
              "Type": "Pass",
              "Next": "Wait 10s"
            },
            "Wait 10s": {
              "Type": "Wait",
              "Seconds": 10,
              "End": true
            }
          }
        }
      ]
    },
    "Final State": {
      "Type": "Pass",
      "End": true
    }
  }
}

Java API

To create a Parallel state in the Java API, call the parallelState() method to obtain an instance of ParallelState.Builder. Next, you can add branches of execution via the branch() method. Each branch must specify StartAt (the name of the branch’s initial state) and contain at least one state.

final StateMachine stateMachine = stateMachine()
        .comment(
                "An example of the Amazon States Language using a parallel state to execute two branches at the same time.")
        .startAt("Parallel")
        .state("Parallel", parallelState()
                .transition(next("Final State"))
                .branch(branch()
                                .startAt("Wait 20s")
                                .state("Wait 20s", waitState()
                                        .waitFor(seconds(20))
                                        .transition(end())))
                .branch(branch()
                                .startAt("Pass")
                                .state("Pass", passState()
                                        .transition(next("Wait 10s")))
                                .state("Wait 10s", waitState()
                                        .waitFor(seconds(10))
                                        .transition(end()))))
        .state("Final State", passState()
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());

Choice State

A Choice state adds branching logic to a state machine. It consists of one or more choices and, optionally, a default state transition if no choice matches. Each choice rule represents a condition and a transition to enact if that condition evaluates to true. Choice conditions can be simple (StringEquals, NumericLessThan, etc.) or composite conditions built with And, Or, and Not.

In the following example, we have a choice state with two choices, both using the NumericEquals condition, and a default transition if neither choice rule matches.
JSON

{
  "Comment" : "An example of the Amazon States Language using a choice state.",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Choice State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Choice State" : {
      "Default" : "Default State",
      "Choices" : [ {
        "Variable" : "$.foo",
        "NumericEquals" : 1,
        "Next" : "First Match State"
      }, {
        "Variable" : "$.foo",
        "NumericEquals" : 2,
        "Next" : "Second Match State"
      } ],
      "Type" : "Choice"
    },
    "First Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch",
      "Type" : "Task"
    },
    "Second Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch",
      "Type" : "Task"
    },
    "Default State" : {
      "Cause" : "No Matches!",
      "Type" : "Fail"
    },
    "Next State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API

To add a Choice state to your state machine, use the choiceState() method to obtain an instance of ChoiceState.Builder. You can add choice rules via the choice() method on the builder. For simple conditions, there are several overloads for each comparison operator (LTE, LT, EQ, GT, GTE) and data types (String, Numeric, Timestamp, Boolean). In this example, we’re using the eq() method that takes a string as the first argument, which is the JsonPath expression referencing the input data to apply the condition to. The second argument will differ depending on the type of data you are comparing against. Here we’re using an integer for numeric comparison. Each choice rule must have a transition that should occur if the condition evaluates to true.

final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using a choice state.")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Choice State")))
        .state("Choice State", choiceState()
                .choice(choice()
                                .transition(next("First Match State"))
                                .condition(eq("$.foo", 1)))
                .choice(choice()
                                .transition(next("Second Match State"))
                                .condition(eq("$.foo", 2)))
                .defaultStateName("Default State"))
        .state("First Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch")
                .transition(next("Next State")))
        .state("Second Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch")
                .transition(next("Next State")))
        .state("Default State", failState()
                .cause("No Matches!"))
        .state("Next State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());
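
The composite conditions mentioned earlier are built with the and(), or(), and not() combinators. The following fragment is a sketch (it assumes the same static imports from StepFunctionBuilder, and "In Range State" is a hypothetical state name) of a choice rule that matches when $.foo is strictly between 1 and 10:

.state("Choice State", choiceState()
        .choice(choice()
                .transition(next("In Range State"))
                // A composite condition: both numeric comparisons must hold.
                .condition(and(gt("$.foo", 1),
                               lt("$.foo", 10))))
        .defaultStateName("Default State"))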

You can find more references and tools for building state machines in the Step Functions documentation, and post your questions and feedback to the Step Functions Developers Forum.

AWS Toolkit for Eclipse: Serverless Applications

by Zhaoxi Zhang

I’m glad to announce that the AWS Lambda plugin in the AWS Toolkit for Eclipse now supports serverless application development for Java. A serverless application (also called a Lambda-based application) is composed of functions triggered by events. In this blog post, I provide two examples to show you how to leverage the Eclipse IDE to create and deploy a serverless application quickly.

Install the AWS Toolkit for Eclipse

To install the latest AWS Toolkit for Eclipse, go to this page and follow the instructions at the top right of the page. You should install the AWS Toolkit for Eclipse Core, AWS CloudFormation Tool, and AWS Lambda Plugin to use this feature. The following figure shows where you can choose these three components in the installation wizard. To complete the installation, you need to review and accept the license and restart Eclipse.

InstallServerless

Create a Serverless Project

To create a serverless project, click the AWS icon in the toolbar and choose New AWS Serverless Project…. The following wizard opens. You can also create a new serverless project in the usual way: choose File, New, Other, AWS, and then choose AWS Serverless Java Project. As you can see in the following figure, the toolkit provides two blueprints for you to start with: article and hello-world.

  • article – This is a simple serverless application that helps manage articles. It consists of two Lambda functions triggered by API events: GetArticle and PutArticle, which store articles in the backend service and retrieve them for the front end. This blueprint also leverages an Amazon S3 bucket for storing article content and an Amazon DynamoDB table for storing article metadata.
  • hello-world – This blueprint project includes only a simple, standalone Lambda function, HelloWorld, which is not triggered by any event and not bound to any resource. It simply takes in a String and outputs it with the prefix “Hello”. If an empty String is provided, it outputs “Hello World”. (A sketch of this behavior follows the list.)
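
A handler with the behavior described for hello-world might look like the following minimal sketch (illustrative only; the blueprint generates its own code):

public class HelloWorld {

    // Takes a String and returns it prefixed with "Hello".
    public String handleRequest(String input) {
        if (input == null || input.isEmpty()) {
            return "Hello World";
        }
        return "Hello " + input;
    }
}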

Figure: New AWS Serverless Project wizard

You can also create a serverless project from a serverless template by choosing Select a Serverless template file and then importing the template file. This template file is a simplified version of the SAM (AWS Serverless Application Model) file that a serverless application uses to define its resource stack. The following snippet, from the article blueprint template, defines the Lambda function GetArticle. Unlike in the real SAM file, you don’t need to provide the CodeUri and Runtime properties, and you only need to provide the class name for the Handler property instead of the fully qualified class name. When you import a template file, the AWS Toolkit for Eclipse generates all the Lambda function hooks and the Lambda Proxy Integration models used as the API event input and output for the Lambda functions.

{
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "com.serverless.demo.GetArticle",
    "Runtime": "Java",
    "CodeUri": "s3://serverless-bucket/get-article.zip",
    "Policies": [
      "AmazonDynamoDBReadOnlyAccess",
      "AmazonS3ReadOnlyAccess"
    ],
    ...
}
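
For illustration, a function hook generated for GetArticle has roughly the following shape. This is a sketch: the package names and the ServerlessInput/ServerlessOutput proxy-integration models mirror what the toolkit generates, but the real generated code differs in detail.

package com.serverless.demo.function;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.serverless.demo.model.ServerlessInput;
import com.serverless.demo.model.ServerlessOutput;

// Sketch of a generated Lambda function hook for the GetArticle function.
public class GetArticle implements RequestHandler<ServerlessInput, ServerlessOutput> {

    @Override
    public ServerlessOutput handleRequest(ServerlessInput input, Context context) {
        ServerlessOutput output = new ServerlessOutput();
        output.setStatusCode(200);
        output.setBody("TODO: fetch the article content from Amazon S3 and its metadata from DynamoDB");
        return output;
    }
}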

The following figure shows the startup view after you create the article blueprint project. As you can see from the project structure, the AWS Toolkit for Eclipse puts all the Lambda functions defined in the template into a function package, and all the required models into a model package. You can take a closer look at this project in the serverless.template file. As mentioned earlier, this is a simplified version of a SAM file, which is derived from the AWS CloudFormation template format. See the README.html page for next steps.

Figure: Startup view for the article blueprint project

Deploy a Serverless Project

If the serverless project is created from a blueprint, you can deploy it directly to AWS. Notice that the article blueprint creates an S3 bucket and a DynamoDB table for use by the Lambda functions. You can open the serverless.template file and customize the resource names in the Parameters property section, as shown in the following snippet.

"Parameters" : {
    "ArticleBucketName" : {
        "Type" : "String",
        "Default" : "serverless-blueprint-article-bucket",
        "Description" : "Name of S3 bucket used to store the article content.",
        "MinLength" : "0"
    },
    "ArticleTableName" : {
        "Type" : "String",
        "Default" : "serverless-blueprint-article-table",
        "Description" : "Name of DynamoDB table used to store the article metadata.",
        "MinLength" : "0"
      },
      ...
}

To deploy this project to AWS, right-click the project name in the explorer view, choose Amazon Web Services, and then choose Deploy Serverless Project. Or, right-click in the workspace of any Lambda function file, choose AWS Lambda, and then choose Deploy Serverless Project. You will see the following wizard. Choose the S3 bucket, type the CloudFormation stack name, and then choose Finish. The AWS Toolkit for Eclipse generates the fat JAR file for the underlying Lambda functions and uploads it to the S3 bucket you chose. It also expands the in-memory serverless.template file into a real SAM file and uploads it to the S3 bucket; AWS CloudFormation reads this file to create the stack.

Figure: Deploy Serverless Project wizard

While the AWS CloudFormation stack is being created, a Stack Editor view opens to show the current status of the stack. This page refreshes automatically every five seconds, but you can also refresh it manually by clicking the refresh icon at the top right of the view. Upon CREATE_COMPLETE, you will see a link to the right of the Output label in the top section. This link is the Prod stage endpoint of the API Gateway API created by this serverless project.

Figure: Stack Editor view

Test a Serverless Project

After successfully deploying the article project, you can test the two APIs by hitting the Prod stage endpoint with browser tools or command-line tools.

  • Using the curl command-line tool:
    $ curl --data "This is an article!" https://s5cvlouqwe.execute-api.us-west-2.amazonaws.com/Prod?id=1
    Successfully inserted article 1
    $ curl -X GET https://s5cvlouqwe.execute-api.us-west-2.amazonaws.com/Prod?id=1
    This is an article!
  • Using the Simple REST Client plugin in Chrome. You can also use this plugin to send a POST request to the endpoint.

We’d like to know what you think of the workflow for developing serverless applications with the AWS Toolkit for Eclipse. Please let us know if there are other features you want to see in this toolkit. We appreciate your comments.

Introducing Support for Java SDK Generation in Amazon API Gateway

by Andrew Shore

We are excited to announce support for generating a Java SDK for services fronted by Amazon API Gateway. The generated Java SDKs are compatible with Java 8 and later. Generated SDKs have first-class support for API keys, custom or AWS Identity and Access Management (IAM) authentication, automatic and configurable retries, exception handling, and more. In this blog post, we’ll walk through how to create a sample API, generate a Java SDK from that API, and explore various features of the generated SDK. This post assumes you have some familiarity with API Gateway concepts.

Create an Example API

To start, let’s create a sample API by using the API Gateway console. Navigate to the API Gateway console and select your preferred region. Choose Create API, and then choose the Example API option. Choose Import to create the example API.

Figure: Create the example API

The example API is pretty simple. It consists of four operations.

  1. A GET on the API root resource that returns HTML describing the API.
  2. A GET on the /pets resource that returns a list of Pets.
  3. A POST on the /pets resource that creates a new Pet.
  4. A GET on the /pets/{petId} resource that returns a specific Pet by ID.

Deploy the API

Next, you’ll deploy the API to a stage.

Under Actions, choose Deploy API, name the stage test, and then choose Deploy.

Figure: Deploy the example API

After you deploy the API, on the SDK Generation tab, choose Java as the platform. For Service Name, type PetStore. For Java Package Name, type com.petstore.client. Leave the other fields empty. Choose Generate SDK, and then download and unzip the SDK package.

Figure: Generate the Java SDK

There are several configuration options available for the Java platform. Before proceeding, let’s go over them.

Service Name – Used to name the Java Interface you’ll use to make calls to your API.

Java Package Name – The name of the package your generated SDK code will be placed under. This is typically based on your organization’s name.

The following optional parameters are used when publishing the SDK to a remote repository, like Maven Central.

Java Build System – The build system to configure for the generated SDK, either maven or gradle. The default is maven.

Java Group ID – Typically identifies your organization. Defaults to Java Package Name if not provided.

Java Artifact ID – Identifies the library or product. Defaults to Service Name if not provided.

Java Artifact Version – Version identifier for the published SDK. Defaults to 1.0-SNAPSHOT if not provided.

Compile Client

Navigate to the location where you unzipped the SDK package. If you’ve been following the example, the package will be set up as a Maven project. Ensure Maven and a JDK are installed correctly, and run the following command to install the client package into your local Maven repository. This makes it available for other local projects to use.

mvn install

Set Up an Application

Next, you’ll set up an application that depends on the client package you previously installed. Because the client requires Java 8 or later, any application that depends on the client must also be built with Java 8. Here, you’ll use a simple Maven Archetype to generate an empty Java 8 project.

mvn archetype:generate -B -DarchetypeGroupId=pl.org.miki -DarchetypeArtifactId=java8-quickstart-archetype -DarchetypeVersion=1.0.0 \
    -DgroupId=com.petstore.app \
    -DartifactId=petstore-app \
    -Dversion=1.0 \
    -Dpackage=com.petstore.app

Navigate to the newly created project and open the pom.xml file. Add the following snippet to the <dependencies>…</dependencies> section of the XML file. If you changed any of the SDK export parameters in the console, use those values instead.

<dependency>
    <groupId>com.petstore.client</groupId>
    <artifactId>PetStore</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Create a file src/main/java/com/petstore/app/AppMain.java with the following contents.

package com.petstore.app;

import com.petstore.client.*;
import com.petstore.client.model.*;
import com.amazonaws.opensdk.*;
import com.amazonaws.opensdk.config.*;

public class AppMain {

    public static void main(String[] args) {

    }
}

Build the application to ensure everything is configured correctly.

mvn install

To run the application, you can use the following Maven command. (As you make changes, be sure to rerun mvn install before running the application.)

mvn exec:java -Dexec.mainClass="com.petstore.app.AppMain"

Exploring the SDK

Creating the Client

The first thing you need to do is construct an instance of the client. You can use the client builder obtained from a static factory method on the client interface. All configuration methods on the builder are optional (except for authorization-related configuration). In the following code, you obtain an instance of the builder, override some of the configuration, and construct a client. The following settings are for demonstration only, and are not necessarily the recommended settings for creating service clients.

PetStore client = PetStore.builder()
        .timeoutConfiguration(new TimeoutConfiguration()
                .httpRequestTimeout(20_000)
                .totalExecutionTimeout(30_000))
        .connectionConfiguration(new ConnectionConfiguration()
                .maxConnections(100)
                .connectionMaxIdleMillis(120))
        .build();

The builder exposes a ton of useful configuration methods for timeouts, connection management, proxy settings, custom endpoints, and authorization. Consult the Javadocs for full details on what is configurable.

Making API Calls

Once you’ve built a client, you’re ready to make an API call.

Call the GET /pets API to list the current pets. The following code prints each pet to STDOUT. For each API in the service, a method is generated on the client interface. That method’s name is based on a combination of the HTTP method and resource path, although this can be overridden (more on that later in this post).

client.getPets(new GetPetsRequest())
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

The GET /pets operation exposes a query parameter named type that can be used to filter the pets that are returned. You can set modeled query parameters and headers on the request object.

client.getPets(new GetPetsRequest().type("dog"))
        .getPets()
        .forEach(p -> System.out.printf("Dog: %s\n", p));

Let’s try creating a Pet and inspecting the result from the service. Here you call the POST /pets operation, supplying information about the new Pet. The CreatePetResult contains the unmarshalled service response (as modeled in the Method Response) and additional HTTP-level metadata that’s available via the sdkResponseMetadata() method.

final CreatePetResult result = client.createPet(
        new CreatePetRequest().newPet(new NewPet()
                                              .type(PetType.Bird)
                                              .price(123.45)));
System.out.printf("Response message: %s \n", result.getNewPetResponse().getMessage());
System.out.println(result.sdkResponseMetadata().header("Content-Type"));
System.out.println(result.sdkResponseMetadata().requestId());
System.out.println(result.sdkResponseMetadata().httpStatusCode());

The GET /pets/{petId} operation uses a path placeholder to get a specific Pet, identified by its ID. When making a call with the SDK, all you need to do is supply the ID. The SDK handles the rest.

GetPetResult pet = client.getPet(new GetPetRequest().petId("1"));
System.out.printf("Pet by ID: %s\n", pet);

Overriding Configuration at the Request Level

In addition to the client-level configuration you supply when creating the client (by using the client builder), you can also override certain configurations at the request level. This “request config” is scoped only to calls made with that request object, and takes precedence over any configuration in the client.

client.getPets(new GetPetsRequest()
                       .sdkRequestConfig(SdkRequestConfig.builder()
                                                 .httpRequestTimeout(1000).build()))
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

You can also set custom headers or query parameters via the request config. This is useful for adding headers or query parameters that are not modeled by your API. The parameters are scoped to calls made with that request object.

client.getPets(new GetPetsRequest()
                       .sdkRequestConfig(SdkRequestConfig.builder()
                                                 .customHeader("x-my-custom-header", "foo")
                                                 .customQueryParam("MyCustomQueryParam", "bar")
                                                 .build()))
        .getPets()
        .forEach(p -> System.out.printf("Pet: %s\n", p));

Naming Operations

It’s possible to override the default names given to operations through the API Gateway console or during an import from a Swagger file. Let’s rename the GetPet operation (GET /pets/{petId}) to GetPetById by using the console. First, navigate to the GET method on the /pets/{petId} resource.

Figure: GET method on the /pets/{petId} resource

Choose Method Request, and then expand the SDK Settings section.

Figure: SDK Settings section

Edit the Operation Name field and enter GetPetById. Save the change and deploy the API to the stage you created previously. Regenerate a Java SDK, and it should have the updated naming for that operation.

GetPetByIdResult pet = client.getPetById(new GetPetByIdRequest().petId("1"));
System.out.printf("Pet by ID: %s\n", pet);

If you are importing an API from a Swagger file, you can customize the operation name by using the operationId field. The following snippet is from the example API, and shows how the operationId field is used.

...
    "/pets/{petId}": {
      "get": {
        "tags": [
          "pets"
        ],
        "summary": "Info for a specific pet",
        "operationId": "GetPet",
        "produces": [
          "application/json"
        ],
...

Final Thoughts

This post highlights how to generate the Java SDK of an API in API Gateway, and how to call the API using the SDK in an application. For more information about how to build the SDK package, initiate a client with other configuration properties, make raw requests, configure authorization, handle exceptions, and configure retry behavior, see the README.html file in the uncompressed SDK project folder.

AWS Toolkit for Eclipse: VPC Configuration for an AWS Elastic Beanstalk Environment

by Zhaoxi Zhang

I’m glad to announce that the AWS Elastic Beanstalk plugin in the AWS Toolkit for Eclipse now supports configuring a VPC for your Elastic Beanstalk environments. If you’re new to the AWS Toolkit for Eclipse, see the User Guide for a basic introduction and setup guidance. If you’re new to the AWS Elastic Beanstalk plugin, see AWS Elastic Beanstalk and Eclipse Integration to learn how to manage your application and environment within Eclipse. If you’re not familiar with VPC configurations for Elastic Beanstalk environments, see Using Elastic Beanstalk with Amazon VPC.

The following screenshots show the plugin’s New Server wizard pages for VPC configuration. If you’ve used the Elastic Beanstalk console, these user interfaces should be familiar. On the Configure Application and Environment page, you can choose Select a VPC to use when creating your environment to open the VPC Configuration page. Otherwise, when you click Next, the Permissions page opens directly.

Figure: Configure Application and Environment page

On the VPC Configuration page, you can set up VPC options such as subnets and security groups. You must select at least one subnet for a valid configuration. ELB visibility is disabled unless you chose Load Balanced Web Server Environment for the Environment type on the previous Configure Application and Environment page. For more information about these options and how to configure them for your VPC needs, see Using Elastic Beanstalk with Amazon VPC.

Figure: VPC Configuration page

As you can see, it’s easy to configure a VPC for your Elastic Beanstalk environment using the AWS Toolkit for Eclipse plugin. Please let us know if there are other features you want to see in this toolkit. We appreciate your comments.