Category: Java


AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 3 of 3)

by Zhaoxi Zhang | in Eclipse, Java

In the first part of the blog series, we created a new application named rekognition-service from the rekognition blueprint. In the second part, we deployed this serverless application to AWS CloudFormation. In this last part of the blog series, we describe how to test and check the result of the newly deployed rekognition-service application.

Test the rekognition-service application by using the Amazon S3 bucket editor

You can drag and drop a group of files, including folders, onto the Amazon S3 bucket editor to upload them to an Amazon S3 bucket. Uploading a .jpg file triggers the underlying Lambda function, which tags the file with the label names and confidence values returned by Amazon Rekognition. You can also manually update and delete these tags by using the tag dialog box.

Check the Lambda logs by using the AWS Lambda function editor

You can also check the Lambda function logs by using the Lambda function editor. All the Amazon CloudWatch streams for the Lambda function are listed on the Logs tab in the editor. You can double-click an item to open the underlying stream in Eclipse. You can also select multiple items, right-click, and then select Show Log Events to open the underlying streams in one batch.

This concludes our three-part series. What do you think of the rekognition serverless blueprint and the working flow in the AWS Toolkit for Eclipse? If you have any requests for new blueprints and features in the AWS Toolkit for Eclipse, please let us know. We appreciate your feedback.

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 2 of 3)

by Zhaoxi Zhang | in Eclipse, Java

In the first part of this blog post, we talked about how to create a new AWS SAM application from the rekognition serverless blueprint. In this second part, we describe how to deploy the application to AWS CloudFormation.

Deploy the rekognition-service application to a new CloudFormation stack

This .gif animation shows the steps to deploy an AWS SAM application to AWS CloudFormation.

What the AWS Toolkit for Eclipse does for you during deployment

  • Creates a .zip file that contains the project and all its dependencies, and then uploads the file to the specified Amazon S3 bucket.
  • Updates the serverless.template (as shown in the following snippet) to fill in the complete properties for the AWS::Serverless::Function resource type:
    • Replaces the Handler with the fully qualified class name (FQCN) of the AWS Lambda function handler.
    • Generates the actual code URI for CodeUri so that AWS CloudFormation can reference the Lambda function artifact in the S3 bucket.
    • Adds the missing configurations (Runtime, Description, MemorySize, Timeout, Role), using default values.
  • Creates a new AWS CloudFormation stack using the updated serverless.template file.

Here is the updated snippet for TagImage in the CloudFormation template.

"TagImage" : {
      "Type" : "AWS::Serverless::Function",
      "Properties" : {
        "Handler" : "com.serverless.demo.function.TagImage",
        "Runtime" : "java8", "CodeUri" : "s3://zhaoxiz-us-west-1/rekognition-service-stack-1497642692569-1497643074359.zip", "Description" : null, "MemorySize" : 512, "Timeout" : 300, "Role" : null,
        "Policies" : [ "AmazonS3FullAccess", "AmazonRekognitionFullAccess" ],
        "Events" : {
          "ProcessNewImage" : {
            "Type" : "S3",
            "Properties" : {
              "Bucket" : {"Ref" : "ImageBucket"},
              "Events" : "s3:ObjectCreated:*",
              "Filter" : {
                "S3Key" : {
                  "Rules" : [{"Name": "suffix", "Value": ".jpg"}]
                }
              }
            }
          }
        }
      }
    }

Deploy the rekognition-service application to an existing CloudFormation stack

We want to update the recognition confidence value to 80 in the Lambda function code and redeploy it to the CloudFormation stack. The following .gif animation shows how you can achieve that. When you deploy the same project a second time, the AWS Toolkit for Eclipse remembers the parameters used in the last deployment, so if you want to keep them the same, you don’t have to retype them.

Notice that we need to change the parameter value of ImageBucketExists to true on the parameters page (Fill in stack template parameters) because the bucket was already created during the first deployment. The underlying CloudFormation stack is updated with the new version of the Lambda function whether or not you update the parameters.

Update the Lambda event source by using the parameters page

Now, we want to configure the trigger event for the Lambda function to use another, new S3 bucket. This removes the bucket we created in the first deployment and creates a new bucket for this deployment. We only need to redeploy the application, updating the ImageBucketExists parameter to false and the ImageBucketName parameter to the new bucket name. After deployment, you see that the name of the ImageStack in the stack outputs is updated to the new name.

In the third part of this blog post, we’ll talk about how to use the AWS Toolkit for Eclipse to check the result of the rekognition-service application.

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 1 of 3)

by Zhaoxi Zhang | in Eclipse, Java

I am happy to announce that the latest release of the AWS Toolkit for Eclipse includes a couple of new enhancements for developing AWS Serverless Application Model (AWS SAM) applications. In this release, we added a new blueprint: rekognition.

In part 1 of this blog post, we describe and show with an animation what this blueprint does, and how to use the AWS Toolkit for Eclipse to create an application from it. In part 2, we’ll deploy the AWS SAM application to AWS CloudFormation. In part 3, we’ll check the result of the application and test the AWS Lambda function by using the AWS Explorer in the AWS Toolkit for Eclipse.

About the rekognition blueprint

The rekognition blueprint includes a Lambda function, TagImage. This Lambda function uses the Amazon Rekognition service to automatically tag .jpg files uploaded to a specified Amazon S3 bucket. It applies the names of the top five labels recognized by the Amazon Rekognition service as tag keys on the Amazon S3 object, and the corresponding confidence values as the tag values.

Create an application named rekognition-service from the rekognition blueprint

This .gif animation shows the steps to create an application from the rekognition blueprint.

About the AWS SAM template

Here is the template snippet from the serverless.template in the project we just created that defines the Lambda function TagImage. Notice that this is a simplified configuration for the Lambda function because, during the deployment phase, the AWS Toolkit for Eclipse fills in all the other properties needed. For a complete configuration set, see Deploying Lambda-based Applications in the AWS Lambda Developer Guide.

In this snippet, we grant the Lambda function permissions to access Amazon S3 and Amazon Rekognition. We also define a triggering event for the Lambda function when uploading .jpg files to the specified Amazon S3 bucket.

"TagImage": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "TagImage",
    "Policies": [
      "AmazonS3FullAccess",
      "AmazonRekognitionFullAccess"
    ],
    "Events": {
      "ProcessNewImage": {
        "Type": "S3",
        "Properties": {
          "Bucket": {"Ref" : "ImageBucket"},
          "Events": "s3:ObjectCreated:*",
          "Filter": {
            "S3Key": {
              "Rules": [{"Name": "suffix", "Value": ".jpg"}]
            }
          }
        }
      }
    }
  }
}

How the Lambda function works

Here is a snippet from the Lambda function TagImage. The Lambda function retrieves the Amazon S3 object from the S3Event, and then calls the Amazon Rekognition service to detect labels with a confidence value of at least 77.

// objectKey and bucketName are extracted from the incoming S3Event;
// rekognitionClient is the AmazonRekognition client created by the handler.
Image imageToTag = new Image().withS3Object(new S3Object().withName(objectKey).withBucket(bucketName));
// Call Rekognition to identify image labels
DetectLabelsRequest request = new DetectLabelsRequest()
        .withImage(imageToTag)
        .withMaxLabels(5)
        .withMinConfidence(77F);
List<Label> labels = rekognitionClient.detectLabels(request).getLabels();
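The function then applies the detected labels back to the S3 object as tags (key = label name, value = confidence). The conversion step can be sketched in isolation as plain Java; the Label class below is a stand-in for the Rekognition model type, not the blueprint's actual code, and actually attaching the result to the object would go through the S3 client's tagging API.

```java
import java.util.ArrayList;
import java.util.List;

public class LabelTagging {

    // Minimal stand-in for com.amazonaws.services.rekognition.model.Label.
    public static class Label {
        private final String name;
        private final float confidence;
        public Label(String name, float confidence) {
            this.name = name;
            this.confidence = confidence;
        }
        public String getName() { return name; }
        public float getConfidence() { return confidence; }
    }

    // Convert detected labels into "key=value" tag entries
    // (label name -> confidence), as the blueprint applies them to the object.
    public static List<String> toTagEntries(List<Label> labels) {
        List<String> entries = new ArrayList<>();
        for (Label label : labels) {
            entries.add(label.getName() + "=" + label.getConfidence());
        }
        return entries;
    }
}
```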

In part 2 of this blog post, we’ll deploy the newly created AWS SAM application to AWS. Then we’ll configure the parameters in the template during the deployment phase. Stay tuned!

AWS Toolkit for Eclipse: Support for AWS CodeCommit and AWS CodeStar

by Zhaoxi Zhang | in Java

I am pleased to announce that the AWS Toolkit for Eclipse now supports AWS CodeCommit and AWS CodeStar. This means you can create, view, clone, and delete your AWS CodeCommit repositories in the AWS Toolkit for Eclipse. You can also import existing projects under your AWS CodeStar account directly into the Eclipse IDE.

Git Credentials Configuration

We recommend that you use Git credentials with HTTPS to connect to your AWS CodeCommit repositories. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit.

In the new version of the AWS Toolkit for Eclipse, you will see an entry for AWS CodeCommit on the Eclipse Preferences page, shown here. (To install the AWS Toolkit for Eclipse, follow the instructions on the AWS Toolkit for Eclipse page.) You can configure the Git credentials for your AWS accounts on this page. For more information, see Create Git Credentials for HTTPS Connections to AWS CodeCommit. You can type the newly generated user name and password into the text fields, or import the CSV file generated from the IAM console directly into Eclipse.

AWS CodeCommit Explorer

An entry for AWS CodeCommit also appears in AWS Explorer, as shown here. To open this view, click the drop-down box next to the AWS icon in the toolbar, and select Show AWS Explorer View. You can create, view, clone, and delete repositories in this view.

  • Create a Repository
    To create a repository, right-click AWS CodeCommit and then select Create Repository, as shown here. Type the repository name and an optional description in the Create Repository dialog box. The newly created repository will appear under AWS CodeCommit.

    Figure: AWS CodeCommit Explorer View

    Figure: Create Repository Dialog Box

  • View a Repository
    To view a repository, double-click the repository name in AWS Explorer. This will open the repository editor where you can see the metadata for the repository, as shown here. The repository editor also shows the latest 10 commits for the selected branch. To refresh the repository editor, click the refresh icon on the top-right corner of the page.
  • Clone a Repository
    To clone a repository, click the Check out button in the repository editor, or right-click the repository name in AWS Explorer and select Clone Repository. If you haven’t configured Git credentials for your current AWS account in Eclipse, a dialog box prompts you to configure them.


    After you have configured your Git credentials, you will see the following pages for selecting a branch and local destination. These pages have the same look and feel as EGit. For information about EGit, see the EGit Tutorial. You can use the Eclipse EGit plugin to manage your projects with Git.

    Figure: Branch Selection Page

    Figure: Destination Page

  • Delete a Repository
    To delete a repository from AWS CodeCommit, right-click the repository name and select Delete Repository. When the following dialog box is displayed, type the repository name.

AWS CodeStar Project Checkout

You can use the AWS Toolkit for Eclipse to check out AWS CodeStar projects and edit them in the Eclipse IDE. To import your AWS CodeStar projects to Eclipse, click the drop-down box next to the AWS icon in the toolbar, and select Import AWS CodeStar Project. You will see all your AWS CodeStar projects under the selected account and region.

The plugin for AWS CodeStar finds all the AWS CodeCommit repositories that are linked to the selected project. From the Select repository drop-down list, choose the repository, and then click Next. You can also configure the Git credentials on this page if they have not been configured on the selected account.

Resources

For information about AWS CodeCommit, see the AWS CodeCommit documentation. For information about AWS CodeStar, see the AWS CodeStar documentation.

Conclusion

We hope you find these new features useful. If you have questions or other feedback about using the AWS Toolkit for Eclipse, feel free to leave it in the comments.

AWS Toolkit for Eclipse: VPC Configuration Enhancement for AWS Elastic Beanstalk Environments

by Zhaoxi Zhang | in Java

From the blog post VPC Configuration for an AWS Elastic Beanstalk Environment, you learned how to deploy your web application to AWS Elastic Beanstalk by using the AWS Toolkit for Eclipse. In this post, I’m happy to announce that you can now configure Elastic Load Balancing (ELB) subnets and Amazon EC2 subnets separately. The following screenshots show that the experience in the AWS Toolkit for Eclipse is consistent with that in the Elastic Beanstalk console.

 

VPC Configuration in AWS Elastic Beanstalk Console

VPC Configuration in AWS Toolkit for Eclipse

Notice that the ELB subnet configuration is enabled only when the environment type is Load Balanced Web Server Environment (see the following screenshot for the type selection). Please read through Using Elastic Beanstalk with Amazon VPC to be sure you understand all the VPC parameters. Inappropriate parameter combinations can cause deployment failures. Follow the rules below when you create an AWS Elastic Beanstalk environment:

  • You must select at least one subnet for EC2 and for ELB.
  • You must select at least one ELB subnet in each Availability Zone where there is an EC2 subnet, and vice versa.
  • You may only select one EC2 subnet per Availability Zone.
  • When one subnet is used for both EC2 and ELB, select the Associate Public IP Address check box unless you have set up a NAT instance to route traffic from the Internet to your ELB subnet.

Application and Environment Configuration

Client Constructors Now Deprecated in the AWS SDK for Java

by Kyle Thomson | in Java

A couple of weeks ago you might have noticed that the 1.11.84 version of the AWS SDK for Java included several deprecations – the most notable being the deprecation of the client constructors.

Historically, you’ve been able to create a service client as shown here.

AmazonSNS sns = new AmazonSNSClient();

This mechanism is now deprecated in favor of using one of the builders to create the client as shown here.

AmazonSNS sns = AmazonSNSClient.builder().build();

The client builders (described in detail in this post) are superior to the basic constructors in the following ways.

Immutable

Clients created via the builder are immutable. The region/endpoint (and other data) can’t be changed. Therefore, clients are safe to reuse across multiple threads.

Explicit Region

At build time, the AWS SDK for Java can validate that a client has all the required information to function correctly – namely, a region. A client created via the builders must have a region that is defined either explicitly (i.e. by calling withRegion) or as part of the DefaultAwsRegionProviderChain. If the builder can’t determine the region for a client, an SdkClientException is thrown. Region is an important concept when communicating with services in AWS. It not only determines where your request will go, but also how it is signed. Requiring a region means the SDK can behave predictably without depending on hidden defaults.
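As an illustration, the lookup order behaves roughly like the following sketch. This is not the SDK's actual implementation; the real DefaultAwsRegionProviderChain also consults system properties, the shared config file, and EC2 instance metadata before giving up.

```java
import java.util.Map;

public class RegionResolution {

    // Simplified stand-in for the builder's region lookup: an explicit
    // withRegion value wins; otherwise fall back to the environment;
    // if neither is present, fail the way the builders do.
    public static String resolveRegion(String explicitRegion, Map<String, String> env) {
        if (explicitRegion != null) {
            return explicitRegion;
        }
        String fromEnv = env.get("AWS_REGION");
        if (fromEnv != null) {
            return fromEnv;
        }
        // The real builders throw SdkClientException at this point.
        throw new IllegalStateException("Unable to find a region via the region provider chain");
    }
}
```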

Cleaner

Using the builder allows a client to be constructed in a single statement using method chaining.

AmazonSNS sns = AmazonSNSClient.builder()
						.withRegion("us-west-1")
						.withClientConfiguration(cfg)
						.withCredentials(creds)
						.build();

The deprecated constructors are no longer created for new service clients. They will be removed from existing clients in a future major version bump (although they’ll remain in all future releases of the 1.x family of the AWS SDK for Java).

AWS Toolkit for Eclipse: Support for Creating Maven Projects for AWS, Lambda, and Serverless Applications

by Zhaoxi Zhang | in Java

I’m glad to announce that you can now use the AWS Toolkit for Eclipse to create Maven projects for AWS, Lambda, and serverless applications. If you’re new to using the AWS Toolkit for Eclipse to create a Lambda application, see the Lambda plugin for more information. If you’re not familiar with serverless applications, see the Serverless Application Model for more information. If you have been using the AWS Toolkit for Eclipse, you’ll notice the extra Maven configuration panel in the user interface where you create a new AWS, Lambda, or serverless application (see the following screenshots).

The AWS Toolkit for Eclipse no longer automatically downloads the archived AWS SDK for Java ZIP file and puts it on the class path of your AWS application. Instead, it manages the dependencies by using Maven: it checks the remote Maven repository for the latest AWS SDK for Java version and downloads it automatically if it isn’t already installed in your local Maven repository. This means that when a new version of the AWS SDK for Java is released, it can take a while to download it before you can create the new application.
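For example, a generated pom.xml might declare the SDK dependency along these lines (the artifact and version shown here are illustrative; the toolkit resolves the latest released version for you):

```xml
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <version>1.11.150</version>
</dependency>
```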

Create a Maven Project for an AWS application

In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Project. You’ll see the following page, where you can configure the AWS SDK for Java samples you want to include in your application.

Figure: New AWS Project wizard

Here is the structure of the newly created AWS application Java project. You can edit the pom.xml file later to meet your needs to build, test, and deploy your application with Maven.

Figure: Structure of the new AWS application project

Create a Maven Project for a Lambda Application

Similar to how you create a new AWS application project, you can create a new AWS Lambda project. In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Lambda Java Project.

Figure: New AWS Lambda Java Project wizard

Here is the structure of the newly created AWS Lambda Java project.

Figure: Structure of the new Lambda Java project

Create a Maven Project for a Serverless Application

To create a new serverless application, choose the AWS icon drop-down button, and then choose New AWS Serverless Project. The following screenshot shows the status of the project creation while Maven downloads the application dependencies.

Figure: Creating the serverless project

Here is the structure of the newly created serverless application Java project.

Figure: Structure of the new serverless application project

Build a Serverless Application Locally with Maven

You can also use the Maven command-line in the terminal to build and test the project you just created, as shown in the following screenshot.

Figure: Building the project with the Maven command line

Please let us know what you think of the new Maven support in the AWS Toolkit for Eclipse. We appreciate your comments.

Java SDK Bundled Dependency

by Kyle Thomson | in Java

The AWS SDK for Java depends on a handful of third-party libraries, most notably Jackson for JSON processing and the Apache HttpClient for communication over the wire. For most customers, resolving these as part of their standard Maven dependency resolution is perfectly fine; Maven automatically pulls in the required versions, or uses existing versions if they’re already specified in the project.

However, the AWS SDK for Java requires certain minimum versions to function properly, and some customers are unable to change the versions of the third-party libraries they use. Maybe another dependency requires a specific version, or there are breaking changes between third-party versions that large portions of the code base rely on. Whatever the case may be, these version conflicts can create problems when you try to use the AWS SDK for Java.

We’re pleased to introduce the AWS SDK for Java bundle dependency. This new module, which you can include in your Maven project, contains all the SDK clients for all services, plus all of the third-party libraries, in a single JAR. The third-party libraries are "relocated" to new package names to avoid class conflicts with a different version of the same library on a project’s classpath. To use this version of the SDK, simply include the following Maven dependency in your project.

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>${aws.sdk.version}</version>
</dependency>

Of course, because we relocated the third-party libraries, they’re no longer available to use under their original import names – unless the project explicitly adds those libraries as dependencies. For example, if a project relied on the AWS SDK for Java bringing in the Joda Time library, when the project switches to use the bundle dependency it also needs to add a specific dependency for Joda Time.
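As a rough illustration, the relocation rewrites package names the way this sketch does for Joda Time (a hypothetical helper for demonstration, not the SDK's actual build tooling, which performs the renaming at the bytecode level):

```java
public class Relocation {

    // Sketch of the package relocation applied when the bundle is built:
    // third-party packages are moved under com.amazonaws.thirdparty.
    public static String relocate(String className) {
        if (className.startsWith("org.joda.")) {
            return "com.amazonaws.thirdparty.joda." + className.substring("org.joda.".length());
        }
        return className;
    }
}
```

So code inside the bundle refers to the relocated class name, while a `org.joda.time.DateTime` import in your own project resolves only if you add Joda Time yourself.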

The relocated classes are intended for internal use only by the AWS SDK. It is strongly recommended that you do not refer to classes under com.amazonaws.thirdparty.* in your own code. The following third-party libraries are included in the bundled dependency and moved to the com.amazonaws.thirdparty.* package:

Because the bundle dependency includes all of the dependent libraries, it’s a larger binary to pull down when dependencies get resolved (about 50 MB at the time of this writing, and this will grow with each new service and third-party library). In addition, if a project explicitly imports one of the third-party libraries that the SDK includes, the classes are duplicated (albeit in different packages), which increases the memory requirements of the application. For these reasons, we recommend that you use the bundle dependency only if you need to.

If a project has the combination of a version clash and a limited total project size (for example, AWS Lambda limits package size to 50 MB), the bundle dependency might not be the right solution. Instead, you can build your own version of the AWS SDK for Java from the open source code on GitHub. For example, if you need to resolve a conflict only for the Joda Time library, you can include a build configuration like the following in your Maven project:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <configuration>
    <artifactSet>
      <includes>
        <include>joda-time:joda-time</include>
        <include>com.amazonaws:*</include>
      </includes>
    </artifactSet>
    <relocations>
      <relocation>
        <pattern>org.joda</pattern>
        <shadedPattern>com.amazonaws.thirdparty.joda</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>

Although this means you need to build your own version of the SDK and install it into your own repository, it gives you great flexibility for the third-party libraries and/or services you want to include. Check out the Maven Shade Plugin for more details about how it works.

We hope this new module is useful for projects where there’s a dependency clash. As always, please leave your comments or feedback below!

CHANGELOG for the AWS SDK for Java

by Dongie Agnir | in Java

We are happy to announce that beginning with version 1.11.82, the source and ZIP distributions of the AWS SDK for Java now include a CHANGELOG.md file that lists the most notable changes for each release.

In the past, changes for each release of the AWS SDK for Java were published to the AWS Release Notes website, but this approach had some drawbacks. Customers wishing to view the set of changes for multiple versions on the website needed to run a search for each version they were interested in. Many customers acquire the source code through our GitHub repository, so viewing the release notes meant potentially opening a browser and navigating away from the code itself. Finally, although rare, sometimes there’s a delay between the release of a new version of the SDK and the availability of the release notes.

By implementing a changelog file, we hope to address these problems in a way that is simple and consistent with many other open source software projects, including other AWS SDKs like JavaScript and .NET. New changes are always prepended to the changelog file in a consistent format, so viewing the changes for multiple versions is now a breeze. The changelog is made available with the source and ZIP distributions, enabling customers to quickly access changes without opening a browser. As an added bonus, because it’s a simple text file, the changes up to the current version can easily be made available for viewing offline. Finally, the file is always updated along with the SDK source, so the list of changes is available as soon as the source code is available.

We hope that with this change, customers will find it easier than ever to keep up to date with the exciting changes being introduced in the AWS SDK for Java. As always, please let us know what you think in the comments below.

AWS Step Functions Fluent Java API

by Andrew Shore | in Java

AWS Step Functions, a new service that launched at re:Invent 2016, makes it easier to build complex, distributed applications in the cloud. Using this service, you can create state machines that can connect microservices and activities into a visual workflow. State machines support branching, parallel execution, retry/error handling, synchronization (via Wait states), and task execution (via AWS Lambda or an AWS Step Functions Activity).

The Step Functions console provides excellent support for visualizing and debugging a workflow and for creating state machine descriptions. State machines are described in a JSON document, as described in detail here. Although the console has a great editor for building these documents visually, you might want to write state machines in your IDE via a native Java API. Today, we’re launching a fluent builder API to create state machines in a readable, compact way. This new API is included in the AWS SDK for Java.

 

To get started, create a new Maven project and declare a dependency on the aws-java-sdk-stepfunctions client.

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-stepfunctions</artifactId>
    <version>1.11.86</version>
</dependency>

Let’s take a look at some examples. We’ll go through each blueprint available in the console and translate that to the Java API.

Hello World

The following is a JSON representation of a simple state machine that consists of a single task state. The task calls out to a Lambda function (identified by ARN), passing the input of the state machine to the function. When the function completes successfully, the state machine terminates with the same output as the function.
JSON

{
  "Comment" : "A Hello World example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Let’s rewrite this simple state machine using the new Java API and transform it to JSON. Be sure you include the static import for the fluent API methods.


package com.example;

import static com.amazonaws.services.stepfunctions.builder.StepFunctionBuilder.*;
import com.amazonaws.services.stepfunctions.builder.StateMachine;

public class StepFunctionsDemo {

    public static void main(String[] args) {
        final StateMachine stateMachine = stateMachine()
                .comment("A Hello World example of the Amazon States Language using an AWS Lambda Function")
                .startAt("Hello World")
                .state("Hello World", taskState()
                        .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                        .transition(end()))
                .build();
        System.out.println(stateMachine.toPrettyJson());
    }
}

Let’s take a closer look at the previous example. The very first method you always call when constructing a state machine is stateMachine(). This returns a mutable StateMachine.Builder that you can use to configure all properties of a state machine. Here, we add a comment describing the purpose of the state machine, indicate the initial state via the startAt() method, and define that state via the state() method. Each state machine must have at least one state and must have a valid path to a terminal state (that is, a state that causes the state machine to end). In this example, we have a single Task state (configured via the taskState() method) that also serves as the terminal state via the End transition (configured by transition(end())).

Once you configure a state machine to your liking, you can call the build() method on the StateMachine.Builder to produce an immutable StateMachine object. This object can then be transformed into JSON (see toJson() and toPrettyJson()), or it can be passed directly to the CreateStateMachine API in the Java SDK, as shown next.

The following creates the state machine (created previously) via the service client. The definition() method can take either the raw JSON or a StateMachine object. For more information about getting started with the Java SDK, see our AWS Java Developer Guide.

final AWSStepFunctions client = AWSStepFunctionsClientBuilder.defaultClient();
client.createStateMachine(new CreateStateMachineRequest()
                                          .withName("Hello World State Machine")
                                          .withRoleArn("arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME")
                                          .withDefinition(stateMachine));

 

Wait State

The following state machine demonstrates various uses of the Wait state type, which can be used to wait for a given amount of time or until a specific time. Wait states can dynamically wait based on input using the TimestampPath and SecondsPath properties, which are JSON reference paths to a timestamp or an integer, respectively. The Next property identifies the state to transition to after the wait is complete.
JSON

{
  "Comment" : "An example of the Amazon States Language using wait states",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Wait Using Seconds",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Wait Using Seconds" : {
      "Seconds" : 10,
      "Next" : "Wait Using Timestamp",
      "Type" : "Wait"
    },
    "Wait Using Timestamp" : {
      "Timestamp" : "2017-01-16T19:18:55.103Z",
      "Next" : "Wait Using Timestamp Path",
      "Type" : "Wait"
    },
    "Wait Using Timestamp Path" : {
      "TimestampPath" : "$.expirydate",
      "Next" : "Wait Using Seconds Path",
      "Type" : "Wait"
    },
    "Wait Using Seconds Path" : {
      "SecondsPath" : "$.expiryseconds",
      "Next" : "Final State",
      "Type" : "Wait"
    },
    "Final State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API
Again, we call the stateMachine() method to begin constructing the state machine. Our starting state is a Task state that transitions to the Wait Using Seconds state. The Wait Using Seconds state is configured to wait for 10 seconds before proceeding to the Wait Using Timestamp state. Notice that we use the waitState() method to obtain an instance of WaitState.Builder, which we then use to configure the state. The waitFor() method accepts different types of wait strategies (Seconds, SecondsPath, Timestamp, TimestampPath). Each strategy has a corresponding method in the fluent API (seconds, secondsPath, timestamp, and timestampPath, respectively). Both the SecondsPath and TimestampPath strategies require a valid JsonPath that references data in the input to the state; this input is then used to determine how long to wait.

final Date waitUsingTimestamp =
        Date.from(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(15).toInstant(ZoneOffset.UTC));
final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using wait states")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Wait Using Seconds")))
        .state("Wait Using Seconds", waitState()
                .waitFor(seconds(10))
                .transition(next("Wait Using Timestamp")))
        .state("Wait Using Timestamp", waitState()
                .waitFor(timestamp(waitUsingTimestamp))
                .transition(next("Wait Using Timestamp Path")))
        .state("Wait Using Timestamp Path", waitState()
                .waitFor(timestampPath("$.expirydate"))
                .transition(next("Wait Using Seconds Path")))
        .state("Wait Using Seconds Path", waitState()
                .waitFor(secondsPath("$.expiryseconds"))
                .transition(next("Final State")))
        .state("Final State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();
System.out.println(stateMachine.toPrettyJson());
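To make the SecondsPath and TimestampPath semantics concrete, here is a small stdlib-only Java sketch (not part of the Step Functions SDK) of how a wait duration could be derived from state input: an integer for SecondsPath, or an ISO-8601 timestamp for TimestampPath. Clamping past timestamps to a zero wait is an assumption of this toy model.

```java
import java.time.Duration;
import java.time.Instant;

public class WaitDemo {
    // Models how a Wait state derives its duration from state input:
    // SecondsPath references an integer, TimestampPath an ISO-8601 timestamp.
    static Duration fromSeconds(int seconds) {
        return Duration.ofSeconds(seconds);
    }

    static Duration fromTimestamp(String iso8601, Instant now) {
        Instant target = Instant.parse(iso8601);
        Duration d = Duration.between(now, target);
        // Assumption of this toy model: a timestamp in the past means no wait.
        return d.isNegative() ? Duration.ZERO : d;
    }

    public static void main(String[] args) {
        System.out.println(fromSeconds(10)); // PT10S
        Instant now = Instant.parse("2017-01-16T19:18:45.103Z");
        System.out.println(fromTimestamp("2017-01-16T19:18:55.103Z", now)); // PT10S
    }
}
```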

Retry Failure

Retriers are a mechanism for retrying certain types of states on a given set of error codes. They define both the condition on which to retry (via ErrorEquals) and the backoff behavior and maximum number of retry attempts. At the time of this post, they may be used only with Task states and Parallel states. In the following state machine, the Task state has three retriers. The first retrier retries on a custom error code named HandledError that might be thrown from the Lambda function. The initial delay before the first retry attempt is one second (as defined by IntervalSeconds), and the maximum number of retry attempts is five. The BackoffRate determines the delay for each subsequent retry; for example, the delays for the first retrier would be 1, 2, 4, 8, and so on. The second retrier uses a predefined error code, States.TaskFailed, which matches whenever the task fails for any reason. A full list of predefined error codes is available in the Amazon States Language specification. Finally, the last retrier uses the special error code States.ALL to retry on everything else. If you use the States.ALL error code, it must appear in the last retrier and must be the only code present in ErrorEquals.
JSON

{
  "Comment" : "A Retry example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Retry" : [ {
        "ErrorEquals" : [ "HandledError" ],
        "IntervalSeconds" : 1,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.TaskFailed" ],
        "IntervalSeconds" : 30,
        "MaxAttempts" : 2,
        "BackoffRate" : 2.0
      }, {
        "ErrorEquals" : [ "States.ALL" ],
        "IntervalSeconds" : 5,
        "MaxAttempts" : 5,
        "BackoffRate" : 2.0
      } ],
      "Type" : "Task"
    }
  }
}

Java API

Let’s see what the previous example looks like in the Java API. Here we use the retrier() method to configure a Retrier.Builder. The errorEquals() method can take one or more error codes that indicate what this retrier handles. The second retrier uses a constant defined in the ErrorCodes class, which contains all predefined error codes supported by the States language. The last retrier uses a special method, retryOnAllErrors(), to indicate that it handles any other errors. This is equivalent to errorEquals("States.ALL"), but is easier to read and remember. Again, the “retry all” retrier must be last, or a validation exception is thrown.

final StateMachine stateMachine = stateMachine()
        .comment("A Retry example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .retrier(retrier()
                                 .errorEquals("HandledError")
                                 .intervalSeconds(1)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .intervalSeconds(30)
                                 .maxAttempts(2)
                                 .backoffRate(2.0))
                .retrier(retrier()
                                 .retryOnAllErrors()
                                 .intervalSeconds(5)
                                 .maxAttempts(5)
                                 .backoffRate(2.0))
        )
        .build();

System.out.println(stateMachine.toPrettyJson());
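To see how IntervalSeconds, BackoffRate, and MaxAttempts interact, here's a stdlib-only sketch (independent of the SDK) that computes the delay before each retry attempt; for the first retrier above, it yields the 1, 2, 4, 8, ... sequence described earlier.

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffDemo {
    // Computes the delay (in seconds) before each retry attempt:
    // the first delay is IntervalSeconds, and each subsequent delay
    // is the previous delay multiplied by BackoffRate.
    static List<Double> retryDelays(int intervalSeconds, double backoffRate, int maxAttempts) {
        List<Double> delays = new ArrayList<>();
        double delay = intervalSeconds;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            delays.add(delay);
            delay *= backoffRate;
        }
        return delays;
    }

    public static void main(String[] args) {
        // First retrier above: IntervalSeconds=1, BackoffRate=2.0, MaxAttempts=5
        System.out.println(retryDelays(1, 2.0, 5)); // [1.0, 2.0, 4.0, 8.0, 16.0]
    }
}
```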

Catch Failure

Catchers are a similar error-handling mechanism. Like retriers, they can be defined to handle certain error codes that can be thrown from a state. Catchers define a state transition that occurs when a thrown error code matches the ErrorEquals list; the transition state can then handle the recovery steps needed for that particular failure scenario. Much like retriers, ErrorEquals can contain one or more error codes (either custom or predefined). States.ALL is a special catch-all code that, if present, must appear in the last Catcher.
JSON

{
  "Comment" : "A Catch example of the Amazon States Language using an AWS Lambda Function",
  "StartAt" : "Hello World",
  "States" : {
    "Hello World" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Catch" : [ {
        "Next" : "Custom Error Fallback",
        "ErrorEquals" : [ "HandledError" ]
      }, {
        "Next" : "Reserved Type Fallback",
        "ErrorEquals" : [ "States.TaskFailed" ]
      }, {
        "Next" : "Catch All Fallback",
        "ErrorEquals" : [ "States.ALL" ]
      } ],
      "Type" : "Task"
    },
    "Custom Error Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a custom lambda function exception",
      "Type" : "Pass"
    },
    "Reserved Type Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from a reserved error code",
      "Type" : "Pass"
    },
    "Catch All Fallback" : {
      "End" : true,
      "Result" : "This is a fallback from any error code",
      "Type" : "Pass"
    }
  }
}

Java API

To configure a catcher, first call the catcher() method to obtain a Catcher.Builder. The first catcher handles the custom error code HandledError and transitions to the Custom Error Fallback state. The second handles the predefined States.TaskFailed error code and transitions to the Reserved Type Fallback state. Finally, the last catcher handles all remaining errors and transitions to the Catch All Fallback state. As with retriers, there is a special method, catchAll(), that configures the catcher to handle all error codes; use of catchAll() is preferred over errorEquals("States.ALL").

final StateMachine stateMachine = stateMachine()
        .comment("A Catch example of the Amazon States Language using an AWS Lambda Function")
        .startAt("Hello World")
        .state("Hello World", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end())
                .catcher(catcher()
                                 .errorEquals("HandledError")
                                 .transition(next("Custom Error Fallback")))
                .catcher(catcher()
                                 .errorEquals(ErrorCodes.TASK_FAILED)
                                 .transition(next("Reserved Type Fallback")))
                .catcher(catcher()
                                 .catchAll()
                                 .transition(next("Catch All Fallback"))))
        .state("Custom Error Fallback", passState()
                .result("\"This is a fallback from a custom lambda function exception\"")
                .transition(end()))
        .state("Reserved Type Fallback", passState()
                .result("\"This is a fallback from a reserved error code\"")
                .transition(end()))
        .state("Catch All Fallback", passState()
                .result("\"This is a fallback from any error code\"")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());
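Catcher ordering maps naturally onto Java's own try/catch ordering. The following toy model (not SDK code, and simplified to two catchers) shows the first-match-wins behavior: a custom error is handled by the first matching catcher, while anything else falls through to the catch-all.

```java
public class CatchDemo {
    // Stands in for the custom HandledError thrown by the Lambda function.
    static class HandledError extends RuntimeException {}

    // Toy model of Catch semantics: catchers are checked in order, and the
    // first match determines the fallback, mirroring Java's catch ordering.
    static String runWithCatchers(Runnable task) {
        try {
            task.run();
            return "Hello World succeeded";
        } catch (HandledError e) {
            return "Custom Error Fallback";
        } catch (RuntimeException e) { // stands in for States.ALL
            return "Catch All Fallback";
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithCatchers(() -> { throw new HandledError(); }));
        System.out.println(runWithCatchers(() -> { throw new IllegalStateException(); }));
    }
}
```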

Parallel State

You can use a Parallel state to concurrently execute multiple branches. Branches are themselves pseudo state machines and can contain multiple states (and even nested Parallel states). The Parallel state waits until all branches have terminated successfully before transitioning to the next state. Parallel states support retriers and catchers in the event that execution of a branch fails.
JSON

{
  "Comment": "An example of the Amazon States Language using a parallel state to execute two branches at the same time.",
  "StartAt": "Parallel",
  "States": {
    "Parallel": {
      "Type": "Parallel",
      "Next": "Final State",
      "Branches": [
        {
          "StartAt": "Wait 20s",
          "States": {
            "Wait 20s": {
              "Type": "Wait",
              "Seconds": 20,
              "End": true
            }
          }
        },
        {
          "StartAt": "Pass",
          "States": {
            "Pass": {
              "Type": "Pass",
              "Next": "Wait 10s"
            },
            "Wait 10s": {
              "Type": "Wait",
              "Seconds": 10,
              "End": true
            }
          }
        }
      ]
    },
    "Final State": {
      "Type": "Pass",
      "End": true
    }
  }
}

Java API

To create a Parallel state in the Java API, call the parallelState() method to obtain an instance of ParallelState.Builder. You can then add branches of execution via the branch() method. Each branch must specify StartAt (the name of the branch's initial state) and contain at least one state.

final StateMachine stateMachine = stateMachine()
        .comment(
                "An example of the Amazon States Language using a parallel state to execute two branches at the same time.")
        .startAt("Parallel")
        .state("Parallel", parallelState()
                .transition(next("Final State"))
                .branch(branch()
                                .startAt("Wait 20s")
                                .state("Wait 20s", waitState()
                                        .waitFor(seconds(20))
                                        .transition(end())))
                .branch(branch()
                                .startAt("Pass")
                                .state("Pass", passState()
                                        .transition(next("Wait 10s")))
                                .state("Wait 10s", waitState()
                                        .waitFor(seconds(10))
                                        .transition(end()))))
        .state("Final State", passState()
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());
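The "waits until all branches have terminated successfully" behavior can be sketched with CompletableFuture. This toy model (independent of the SDK, with millisecond sleeps standing in for the Wait states) runs two branches concurrently and transitions only after both complete.

```java
import java.util.concurrent.CompletableFuture;

public class ParallelDemo {
    // Toy model of a Parallel state: branches execute concurrently and the
    // state transitions only after every branch has terminated successfully.
    static String runParallel() {
        CompletableFuture<String> branch1 = CompletableFuture.supplyAsync(() -> {
            sleepMillis(200); // stands in for the "Wait 20s" branch
            return "branch 1 done";
        });
        CompletableFuture<String> branch2 = CompletableFuture.supplyAsync(() -> {
            sleepMillis(100); // stands in for the "Pass" -> "Wait 10s" branch
            return "branch 2 done";
        });
        CompletableFuture.allOf(branch1, branch2).join(); // wait for all branches
        return "Final State"; // the next state after the Parallel state
    }

    static void sleepMillis(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        System.out.println(runParallel());
    }
}
```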

Choice State

A Choice state adds branching logic to a state machine. It consists of one or more choice rules and, optionally, a default state transition if no choice matches. Each choice rule represents a condition and a transition to take if that condition evaluates to true. Choice conditions can be simple (StringEquals, NumericLessThan, and so on) or composite, combining conditions with And, Or, and Not.

In the following example, we have a Choice state with two choice rules, both using the NumericEquals condition, and a default transition if neither rule matches.
JSON

{
  "Comment" : "An example of the Amazon States Language using a choice state.",
  "StartAt" : "First State",
  "States" : {
    "First State" : {
      "Next" : "Choice State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    },
    "Choice State" : {
      "Default" : "Default State",
      "Choices" : [ {
        "Variable" : "$.foo",
        "NumericEquals" : 1,
        "Next" : "First Match State"
      }, {
        "Variable" : "$.foo",
        "NumericEquals" : 2,
        "Next" : "Second Match State"
      } ],
      "Type" : "Choice"
    },
    "First Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch",
      "Type" : "Task"
    },
    "Second Match State" : {
      "Next" : "Next State",
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch",
      "Type" : "Task"
    },
    "Default State" : {
      "Cause" : "No Matches!",
      "Type" : "Fail"
    },
    "Next State" : {
      "End" : true,
      "Resource" : "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME",
      "Type" : "Task"
    }
  }
}

Java API

To add a Choice state to your state machine, use the choiceState() method to obtain an instance of ChoiceState.Builder. You can add choice rules via the choice() method on the builder. For simple conditions, there are several overloads for each comparison operator (LTE, LT, EQ, GT, GTE) and data type (String, Numeric, Timestamp, Boolean). In this example, we're using the eq() method, whose first argument is a JsonPath expression referencing the input data that the condition applies to; the type of the second argument depends on the data you are comparing against (here, an integer for numeric comparison). Each choice rule must have a transition to take if its condition evaluates to true.

final StateMachine stateMachine = stateMachine()
        .comment("An example of the Amazon States Language using a choice state.")
        .startAt("First State")
        .state("First State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(next("Choice State")))
        .state("Choice State", choiceState()
                .choice(choice()
                                .transition(next("First Match State"))
                                .condition(eq("$.foo", 1)))
                .choice(choice()
                                .transition(next("Second Match State"))
                                .condition(eq("$.foo", 2)))
                .defaultStateName("Default State"))
        .state("First Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnFirstMatch")
                .transition(next("Next State")))
        .state("Second Match State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:OnSecondMatch")
                .transition(next("Next State")))
        .state("Default State", failState()
                .cause("No Matches!"))
        .state("Next State", taskState()
                .resource("arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME")
                .transition(end()))
        .build();

System.out.println(stateMachine.toPrettyJson());
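The routing behavior of the Choice state above can be modeled in a few lines of plain Java: rules are evaluated in order, the first matching condition determines the next state, and the Default transition is taken when nothing matches. The chooseNext() helper below is purely illustrative and not part of the SDK.

```java
public class ChoiceDemo {
    // Toy model of Choice-state semantics: choice rules are evaluated in
    // order, the first matching condition wins, and the Default transition
    // is taken when no rule matches.
    static String chooseNext(int foo) {
        if (foo == 1) {            // "NumericEquals" : 1
            return "First Match State";
        }
        if (foo == 2) {            // "NumericEquals" : 2
            return "Second Match State";
        }
        return "Default State";    // "Default"
    }

    public static void main(String[] args) {
        System.out.println(chooseNext(1)); // First Match State
        System.out.println(chooseNext(3)); // Default State
    }
}
```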

You can find more references and tools for building state machines in the Step Functions documentation, and post your questions and feedback to the Step Functions Developers Forum.