

AWS SDK for Java 2.0 – Developer Preview

by Andrew Shore

We’re pleased to announce the Developer Preview of the AWS SDK for Java 2.0. The 2.0 version of the SDK is a major rewrite of the 1.11.x code base. It’s built on top of Java 8 and adds several frequently requested features, such as support for non-blocking I/O and the ability to use a different HTTP implementation at runtime. In addition to these new features, many aspects of the SDK have been refactored and cleaned up with a strong focus on consistency, immutability, and ease of use. The Developer Preview is your chance to influence the direction of the AWS SDK for Java 2.0. Tell us what you like, and tell us what you don’t like. Your feedback matters to us. Find details on the various ways to give feedback at the bottom of this post.

Although we’re excited about the AWS SDK for Java 2.0 Developer Preview, we also want to reassure customers that we’re not dropping support for the 1.x line of the SDK any time soon. We know there are lots of customers who depend on 1.x versions of the SDK, and we will continue to support them. Version 2.0 is also able to run alongside version 1.x in the same JVM to allow partial migration to the new product. As we get closer to general availability for version 2.0, we’ll share a more detailed plan on how we’ll support the 1.x line.

Getting started

Let’s walk through setting up a project that depends on the SDK and makes a simple service call. The following steps use Maven as an example, but you can use any build system that supports Maven Central as an artifact source (Gradle, sbt, etc.). These steps assume you have Maven and a Java 8 JDK already installed. See the developer guide for a more detailed tutorial on getting started.

    1. Create a new Java 8 Maven project.
    2. Open the pom.xml file, and add a dependency on the Amazon DynamoDB module (see services/pom.xml for a full list of supported services).
      <dependency>
          <groupId>software.amazon.awssdk</groupId>
          <artifactId>dynamodb</artifactId>
          <version>2.0.0-preview-1</version>
      </dependency>
    3. Create a new class with a main method, and create a DynamoDB service client using the client builder.
      package com.example;
      
      import software.amazon.awssdk.auth.ProfileCredentialsProvider;
      import software.amazon.awssdk.regions.Region;
      import software.amazon.awssdk.services.dynamodb.DynamoDBClient;
      import software.amazon.awssdk.services.dynamodb.model.ListTablesRequest;
      
      public class Main {
      
          public static void main(String[] args) {
              // The region and credentials provider are for demonstration purposes. Feel free to use whatever region and credentials
              // are appropriate for you, or load them from the environment (See http://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/setup-credentials.html)
              DynamoDBClient client = DynamoDBClient.builder()
                  .region(Region.US_EAST_1)
                  .credentialsProvider(ProfileCredentialsProvider.builder()
                                                                 .profileName("my-profile")
                                                                 .build())
                  .build();
          }
      }
    4. Make a service request and do something with the response. (A complete example combining steps 3 and 4 appears after these steps.)
      ListTablesResponse response = client.listTables(ListTablesRequest.builder()
                                                                       .limit(5)
                                                                       .build());
      response.tableNames().forEach(System.out::println);
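
Putting steps 3 and 4 together, a complete class might look like the following. The region and profile name are placeholders; use whatever is appropriate for your environment.

package com.example;

import software.amazon.awssdk.auth.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDBClient;
import software.amazon.awssdk.services.dynamodb.model.ListTablesRequest;
import software.amazon.awssdk.services.dynamodb.model.ListTablesResponse;

public class Main {

    public static void main(String[] args) {
        // Build the client with an explicit region and a named credentials profile
        DynamoDBClient client = DynamoDBClient.builder()
            .region(Region.US_EAST_1)
            .credentialsProvider(ProfileCredentialsProvider.builder()
                                                           .profileName("my-profile")
                                                           .build())
            .build();

        // List up to five tables and print their names
        ListTablesResponse response = client.listTables(ListTablesRequest.builder()
                                                                         .limit(5)
                                                                         .build());
        response.tableNames().forEach(System.out::println);
    }
}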
      

New features

Non-blocking I/O

The SDK now supports truly non-blocking I/O. The 1.11.x version of the SDK already has async variants of service clients. However, they are just a wrapper around a thread pool and the blocking sync client, so they don’t provide the benefits of non-blocking I/O (high concurrency with very few threads). Due to the limitations and poor resource use of the thread-per-connection model, many customers requested support for non-blocking I/O, so we are pleased to announce first class support for non-blocking I/O in our async clients. Under the hood, we use an HTTP client built on top of Netty to make the non-blocking HTTP call.

For non-streaming operations, the interfaces are nearly identical to the sync client. The only difference is that a CompletableFuture containing the response is returned immediately instead of blocking the thread until the response is available. Exceptions are delivered by completing the future exceptionally and can be accessed using the appropriate callbacks on the future (see https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html). Here’s an example of a simple service call using the async/non-blocking client.

// Creates a default async client with credentials and regions loaded from the environment
DynamoDBAsyncClient client = DynamoDBAsyncClient.create();
CompletableFuture<ListTablesResponse> response = client.listTables(ListTablesRequest.builder()
                                                                                    .limit(5)
                                                                                    .build());
// Map the response to another CompletableFuture containing just the table names
CompletableFuture<List<String>> tableNames = response.thenApply(ListTablesResponse::tableNames);
// When future is complete (either successfully or in error) handle the response
tableNames.whenComplete((tables, err) -> {
    if (tables != null) {
        tables.forEach(System.out::println);
    } else {
        // Handle error
        err.printStackTrace();
    }
});

Streaming operations are a bit different to allow for fully non-blocking I/O. For streaming inputs (like the Amazon S3 PutObject operation), you must supply an AsyncRequestProvider that can produce content incrementally. To support asynchronous backpressure (preventing out-of-memory errors if the SDK can’t send data as fast as it’s being produced), the SDK uses a reactive pull model based on the well-known Reactive Streams interfaces. In fact, the request provider is simply a Publisher of ByteBuffer chunks. The SDK calls subscribe on that Publisher and requests chunks of data as its buffer allows.

Here we upload a file asynchronously using the PutObject operation in Amazon S3. We’re using an implementation of AsyncRequestProvider that produces data from a file. It handles backpressure and retries automatically, reducing the burden on the developer. We want to support common implementations and sources of data out of the box, so if you have any suggestions or requests, please let us know.

public static void main(String[] args) {
    S3AsyncClient client = S3AsyncClient.create();
    CompletableFuture<PutObjectResponse> future = client.putObject(
            PutObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(KEY)
                            .build(),
            AsyncRequestProvider.fromFile(Paths.get("myfile.in"))
    );
    future.whenComplete((resp, err) -> {
        try {
            if (resp != null) {
                System.out.println(resp);
            } else {
                // Handle error
                err.printStackTrace();
            }
        } finally {
            // Lets the application shut down. Only close the client when you are completely done with it.
            FunctionalUtils.invokeSafely(client::close);
        }
    });
}

For operations that have a streaming response (such as Amazon S3 GetObject), you must provide an AsyncResponseHandler that processes and transforms the response. This response handler has callback methods for various events in the response lifecycle. It follows the same reactive streams model for handling the data. (In this case, however, it’s the reverse. The SDK is the data publisher and the response handler implementation must subscribe to the publisher and request data from it.) Consult the Java documentation for a more detailed explanation of how to implement AsyncResponseHandler. In the following example we will use a pre-canned implementation that just emits the data to a file.

public static void main(String[] args) {
    S3AsyncClient client = S3AsyncClient.create();
    final CompletableFuture<Void> future = client.getObject(
            GetObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(KEY)
                            .build(),
            AsyncResponseHandler.toFile(Paths.get("myfile.out")));
    future.whenComplete((resp, err) -> {
        try {
            if (resp != null) {
                System.out.println(resp);
            } else {
                // Handle error
                err.printStackTrace();
            }
        } finally {
            // Lets the application shut down. Only close the client when you are completely done with it
            FunctionalUtils.invokeSafely(client::close);
        }
    });
}

Pluggable HTTP layer

All earlier 1.x.x versions of the SDK have had a hard dependency on the Apache HTTP client to make HTTP calls. Although this is fine for most customers, some advanced customers wanted to swap out the default HTTP implementation to be able to use a more optimized one that’s better suited for their runtime environment. The AWS SDK for Java 2.0 now fully supports a pluggable HTTP layer. The SDK continues to ship Apache as the default, but you can remove it and replace it with another implementation that conforms to the appropriate SPI.

The SDK attempts to load an HTTP implementation from the classpath using the ServiceLoader utility. This enables end users to create their own distributions of the SDK with a different default HTTP implementation (by removing the dependency on Apache’s implementation and replacing it with their own). Customers who want to avoid potentially expensive classpath scanning can set the system property software.amazon.awssdk.http.service.impl to explicitly identify the implementation to use. Finally, for customers wanting precise control over how the HTTP client is created and configured, the SDK accepts either an SdkHttpClient instance or SdkHttpClientFactory instance in each service client builder. Passing in an SdkHttpClient enables customers to share a connection pool across multiple service clients for better resource utilization.
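
If you want to skip classpath scanning, you can set the system property before any clients are created. A minimal sketch follows; the class name used as the property value is a hypothetical placeholder for whichever HTTP implementation you want the SDK to load, not a real class from the preview.

// Point the SDK at a specific HTTP implementation instead of scanning the classpath.
// "com.example.http.MyHttpClientFactory" is a hypothetical placeholder class name.
System.setProperty("software.amazon.awssdk.http.service.impl",
                   "com.example.http.MyHttpClientFactory");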

Configuring HTTP settings

Due to the pluggable nature of the HTTP layer, customers who want to configure HTTP-specific settings, such as socket timeouts or proxy settings, must declare a dependency on the underlying implementation and configure the client through the interfaces that implementation provides. The following examples show how to configure the default Apache implementation.

  1. Declare a dependency on the Apache implementation in your project.
    <dependency>
        <artifactId>aws-http-client-apache</artifactId>
        <groupId>software.amazon.awssdk</groupId>
        <version>2.0.0-preview-1</version>
    </dependency>
  2. Create and configure the Apache client factory.
    ApacheSdkHttpClientFactory apacheClientFactory = 
        ApacheSdkHttpClientFactory.builder()
                                  .socketTimeout(Duration.ofSeconds(10))
                                  .connectionTimeout(Duration.ofMillis(750))
                                  .build();
  3. Use the Apache client factory to create an SDK service client.
    DynamoDBClient client =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClientFactory(apacheClientFactory)
                                                                    .build())
                          .build();

Sharing HTTP clients

The SDK now supports sharing HTTP client instances across multiple service clients. This allows you to reuse the same connection pool for better resource utilization. To share a client across multiple SDK service clients, you must depend on a specific implementation and create an HTTP client factory for that implementation, as shown above.

  1. Create an SdkHttpClient instance using the HTTP client factory we created earlier (following only steps 1 and 2 from above).
    SdkHttpClient sharedClient = apacheClientFactory.createHttpClient();
  2. Register that HTTP client instance with multiple SDK service clients. (You can even share clients across multiple services.)
    DynamoDBClient clientOne =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClient(sharedClient)
                                                                    .build())
                          .build();
    DynamoDBClient clientTwo =
            DynamoDBClient.builder()
                          .httpConfiguration(ClientHttpConfiguration.builder()
                                                                    .httpClient(sharedClient)
                                                                    .build())
                          .build();
  3. Because the client is shared, the SDK will not close it when the service client is closed. Be sure to explicitly close it when it’s no longer needed.
    sharedClient.close();

Pluggable Async HTTP

The non-blocking async HTTP client is also pluggable, and you can configure or share it in exactly the same way as the sync client. The interfaces for the client and factory are SdkAsyncHttpClient and SdkAsyncHttpClientFactory, respectively. An implementation built on top of Netty is the default. Add the following to your pom.xml file to depend on and configure the default Netty implementation.

<dependency>
    <artifactId>aws-http-nio-client-netty</artifactId>
    <groupId>software.amazon.awssdk</groupId>
    <version>2.0.0-preview-1</version>
</dependency>

API changes

We’ve made several public API changes to improve consistency, make the SDK easier to use, strongly enforce immutability for safer concurrent programming, and remove deprecated or confusing APIs. The following are some of the bigger changes included in the AWS SDK for Java 2.0 Developer Preview.

Client Builders

In 1.11.x versions, we recently deprecated all client constructors and all mutable methods on the client in favor of the client builders. In version 2.0, the client builders are now the only way to create a service client. In addition, clients are 100 percent immutable after creation. For a cleaner programming experience, all interaction with the clients is done through interfaces.

To obtain an instance of the builder, you can use a static factory method on the client interface like this.

DynamoDBClient client = DynamoDBClient.builder().build();

If you want just a quick default client that loads the region and credentials from the environment, you can use the following. This will fail if the region or credentials are not properly set up.

DynamoDBClient client = DynamoDBClient.create();

All builders and POJOs in version 2.0 now follow a new naming convention for setter methods. There is no set/with prefix. The setter method is simply the property name. The setter methods return the builder for method chaining.

DynamoDBClient client = DynamoDBClient.builder()
                                      .region(Region.US_EAST_1)
                                      .build();

Most advanced configuration in 1.11.x versions was HTTP related. Due to the pluggable nature of the HTTP layer, you must now configure this via the HTTP implementation directly (see “New features”, earlier in this post). You can change the non-HTTP related advanced configuration via the overrideConfiguration method.

DynamoDBClient client =
        DynamoDBClient.builder()
                      .overrideConfiguration(
                              ClientOverrideConfiguration.builder()
                                                         .retryPolicy(PredefinedRetryPolicies.NO_RETRY_POLICY)
                                                         .build())
                      .build();

Immutable POJOs

Previously, all request/response POJOs were mutable, which violated the thread safety guarantees of the client. In version 2.0, all POJOs are immutable and must be created through a builder.

ListTablesRequest request = ListTablesRequest.builder()
                                             .limit(5)
                                             .build();

You can modify POJOs only by converting the object into a builder, making the modifications, and rebuilding the object. In the example below, originalRequest is unchanged and a new instance of ListTablesRequest is created and returned.

public static ListTablesRequest updatePaginationToken(ListTablesRequest originalRequest, ListTablesResponse response) {
    return originalRequest.toBuilder()
                          .exclusiveStartTableName(response.lastEvaluatedTableName())
                          .build();
}

Due to the immutability of POJOs and the fluent setters, serialization requires some special care. Here’s an example of serializing a request object to JSON using the Jackson library, and deserializing it back into a request object.

ObjectMapper mapper = new ObjectMapper();
ListTablesRequest request = ListTablesRequest.builder()
                                             .limit(5)
                                             .build();
String serialized = mapper.writeValueAsString(request.toBuilder());

ListTablesRequest deserialized = mapper.readValue(serialized, ListTablesRequest.serializableBuilderClass())
                                       .build();

Regions

In 1.11.x versions of the SDK, there were many different classes used for configuring regions or accessing region metadata (Region, Regions, s3.Region, RegionUtils, etc). In version 2.0, these are all collapsed into a single Region class for simplicity and ease of use.

The new Region class looks similar to an enum and has constants for each region.

DynamoDBClient client = DynamoDBClient.builder()
                                      .region(Region.US_EAST_1)
                                      .build();

You can safely create a new region by using the static factory method of. This is useful when the region comes from an external source, such as a configuration file, or when you need to use a region that the SDK doesn’t know about yet.

Region newRegion = Region.of("us-east-42");

You can access metadata about the region (name, partition, or domain) via the RegionMetadata interface.

String domain = RegionMetadata.of(Region.US_EAST_1).getDomain();

You can access region metadata for a service (such as which regions that service is supported in) via the ServiceMetadata interface.

DynamoDBClient.serviceMetadata().regions().forEach(System.out::println);

Streaming

There are substantial changes in the APIs for streaming operations (such as the Amazon S3 GetObject and PutObject) due to the newly added support for non-blocking I/O. Because the programming models for blocking I/O and non-blocking I/O are so radically different, we’ve removed the InputStream from the request/response POJO. Now, the sync and async clients have additional parameters for streaming operations to accept streamed content (PutObject) and to process a streamed response (GetObject). We explained the async streaming APIs earlier in this post, so let’s take a look at the sync versions.

In the following example, we’re uploading a file to S3 via the PutObject operation. Notice that we don’t set the content in the PutObjectRequest, but instead provide it as a second parameter to the putObject method. This content is provided using a new class, RequestBody, which has overloads for many common sources of data (File, String, byte array, ByteBuffer, InputStream).

S3Client client = S3Client.create();
client.putObject(PutObjectRequest.builder()
                                 .bucket(BUCKET)
                                 .key(KEY)
                                 .build(),
                 RequestBody.of(Paths.get("myfile.in")));

Next, we download the same object to a file using the GetObject operation. Again, instead of accessing the InputStream from the GetObjectResponse object, you can now provide a StreamingResponseHandler implementation to process the response contents. This is a functional interface that provides the unmarshalled GetObjectResponse and the input stream as parameters and returns some transformed value (or Void). This transformed value becomes the return value of the getObject method. There are a couple of convenience static factory methods on the interface to create handlers for common situations like dumping the data into a file or writing it to an OutputStream. We use the file one below.

S3Client client = S3Client.create();
client.getObject(GetObjectRequest.builder()
                                 .bucket(BUCKET)
                                 .key(KEY)
                                 .build(),
                 StreamingResponseHandler.toFile(Paths.get("myfile.out")));

S3 client changes

In 1.11.x, the S3 service client is not generated like the rest of the SDK. Because of this, it’s somewhat inconsistent with the other service clients in the AWS SDK for Java. It also doesn’t exactly match the service’s API, so it can be confusing to use another SDK’s S3 client after getting used to the Java client. In version 2.0, we now generate the S3 client like every other service. Play around with it and let us know what you think.
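
As a quick sketch of what that looks like (assuming the preview exposes the CreateBucket and ListObjectsV2 operations the same way the S3 API defines them, and using a placeholder bucket name):

S3Client s3 = S3Client.create();

// Operation, request, and response names now mirror the S3 API directly
s3.createBucket(CreateBucketRequest.builder()
                                   .bucket("my-example-bucket")
                                   .build());

ListObjectsV2Response listing = s3.listObjectsV2(ListObjectsV2Request.builder()
                                                                     .bucket("my-example-bucket")
                                                                     .build());
listing.contents().forEach(obj -> System.out.println(obj.key()));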

Giving feedback and contributing

You can provide feedback to us in several ways. Both positive and negative feedback is appreciated.

Public feedback

GitHub issues. Customers who are comfortable giving public feedback can open a GitHub issue in the V2 repo. This is the preferred mechanism to give feedback so that other customers can engage in the conversation, +1 issues, and so on. Issues you open will be evaluated and included in our roadmap for the GA launch.

Gitter Channel. For informal discussion or general feedback, you may join the Gitter chat for the V2 repo. The Gitter channel is also a great place to get help with the Developer Preview, although feel free to open an issue as well.

Private feedback

Those who prefer not to give public feedback can instead email the aws-java-sdk-v2-feedback@amazon.com mailing list. This list is monitored by the AWS SDK for Java team and will not be shared with anyone outside of AWS. An SDK team member may respond back to ask for clarification or acknowledge that the feedback was received and is being evaluated.

Contributing

You can open pull requests for fixes or additions to the AWS SDK for Java 2.0 Developer Preview. All pull requests must be submitted under the Apache 2.0 license and will be reviewed by an SDK team member prior to merging. Accompanying unit tests are appreciated.

Writing Custom Metrics to Amazon CloudWatch Using the AWS SDK for Java

by Sascha Moellering

Metrics measure the performance of your system. Several AWS services provide free metrics, such as the CPU usage of an Amazon EC2 instance. You can create Amazon CloudWatch alarms based on metrics and send Amazon SNS messages when the alarm state changes. You can use this mechanism to implement elastic scaling if the message is sent to an Auto Scaling group to change the desired capacity of the group. For many workloads, metrics like CPU usage are sufficient. However, from time to time, workloads have specific requirements and need a more complex metric to scale efficiently. It’s possible to publish your own metrics to CloudWatch, known as custom metrics, by using the AWS CLI, an API, or the CloudWatch collectd plugin. In this blog post, we’ll show you a more complex example of using the capabilities of the AWS SDK for Java to implement a framework integration to publish framework-related custom metrics to CloudWatch.

Integrating Vert.x and Amazon CloudWatch

Vert.x is an event-driven, reactive, nonblocking, and polyglot framework to implement microservices. It runs on the Java virtual machine (JVM) by using the low-level IO library Netty. You can write applications in Java, JavaScript, Groovy, Ruby, and Ceylon. The framework offers a simple and scalable actor-like concurrency model: Vert.x calls handlers by using a thread known as an event loop. To use this model, you have to write code known as verticles. Those verticles share certain similarities with actors in the Actor Model, and to use them, you have to implement the `Verticle` interface.
The following example shows a basic verticle implementation.

public class SimpleVerticle extends AbstractVerticle {
      // Method is called when the verticle is deployed
      public void start() {
      }

      // Optional method, called when verticle is undeployed
      public void stop() {
      }
}

Verticles communicate with each other over a single event bus. Messages are sent on the event bus to a specific address, and verticles register handlers at that address to receive them. In our example, we use the default event bus address cloudwatch.metrics: we register a consumer at this address to receive all messages and push the data to CloudWatch.
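
A minimal sketch of such a consumer is shown below; the pushToCloudWatch helper is a hypothetical stand-in for handing the data to the Sender described later in this post.

// Register a consumer on the default metrics address; every JSON message published
// here is forwarded to CloudWatch.
vertx.eventBus().<JsonObject>consumer("cloudwatch.metrics", message -> {
    JsonObject metric = message.body();
    // e.g. {"metricName":"JVMMemory","unit":"Megabytes","value":123}
    pushToCloudWatch(metric); // hypothetical helper that hands the data point to the Sender
});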

With only a few exceptions, none of the APIs in Vert.x block the calling thread. Similar to Node.js, Vert.x uses the reactor pattern. However, in contrast to Node.js, Vert.x uses several event loops. Unfortunately, not all APIs in the Java ecosystem are written asynchronously (the JDBC API, for example). Vert.x offers a way to run these blocking APIs without blocking the event loop, using special verticles called worker verticles. Worker verticles aren’t executed on the standard Vert.x event loops, but on a dedicated thread from a worker pool. This means that worker verticles don’t block the event loop.
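
For example, a verticle that wraps blocking JDBC calls could be deployed on the worker pool like this (the verticle class name is a placeholder):

// Deploy a verticle on the worker pool so its blocking calls never run on an event loop thread.
DeploymentOptions options = new DeploymentOptions().setWorker(true);
vertx.deployVerticle(new MyBlockingJdbcVerticle(), options); // MyBlockingJdbcVerticle is hypothetical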

If you start writing low-latency applications, you can reach a certain point where internal metrics of frameworks are required for further optimization. By default, Vert.x doesn’t record any metrics, but offers a Service Provider Interface (SPI) that you can implement to get more information about the behavior of Vert.x internals. The interface that you have to implement is described in the API documentation.
Vert.x provides an in-depth look into the framework by offering metrics for the following:

  • Datagram/UDP
  • Vert.x event bus
  • HTTP client
  • HTTP server
  • TCP client
  • TCP server
  • Pools used by Vert.x, such as execute blocking or worker verticle

To receive metrics from Vert.x (for example, HTTP server metrics), you have to implement the `HttpServerMetrics` interface and the following method from the `VertxMetrics` interface:

HttpServerMetrics<?, ?, ?> createMetrics(HttpServer httpServer, SocketAddress address, HttpServerOptions serverOptions);

The following code snippet shows a typical implementation of `HttpServerMetrics`.

private final LongAdder processingTime = new LongAdder();
private final LongAdder requestCount = new LongAdder();
private final LongAdder requests = new LongAdder();
private final SocketAddress localAddress;
private final HttpServerMetricsSupplier httpServerMetricsSupplier;

public HttpServerMetricsImpl(SocketAddress localAddress, HttpServerMetricsSupplier httpServerMetricsSupplier) {
    this.localAddress = localAddress;
    this.httpServerMetricsSupplier = httpServerMetricsSupplier;
    httpServerMetricsSupplier.register(this);
}

@Override
public void responseEnd(Long nanoStart, HttpServerResponse response) {
    long requestProcessingTime = System.nanoTime() - nanoStart;
    processingTime.add(requestProcessingTime);
    requestCount.increment();
    requests.decrement();
}

In this example, the `responseEnd` method is called when an HTTP server response has ended. The processing time of the request is calculated, the total request count is incremented, and the number of in-flight requests is decremented. Now we have to send the collected data to CloudWatch.

To collect metrics data and send it to CloudWatch, we implement the `MetricSupplier` interface and override its `collect()` method. Each metric value is represented by an object of type `CloudWatchDataPoint`. This data point class is a simple POJO containing the name of the metric, the value, the timestamp of collection, and a CloudWatch `StandardUnit`. The `StandardUnit` enumeration represents the unit of the data point in CloudWatch (for example, Bytes). After collecting a list of data points, the `Sender` class pushes the data to CloudWatch. To connect to CloudWatch, the `Sender` class uses the AWS SDK for Java and the `DefaultAWSCredentialsProviderChain`. This lets you use the Vert.x CloudWatch SPI on an EC2 instance as well as on your local development workstation.
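
The data point class might look roughly like the following sketch; the field and accessor names are assumptions based on the description above and on how the Sender code below uses the class, not the exact source.

// Sketch of a simple immutable holder for one metric sample.
public class CloudWatchDataPoint {
    private final String name;               // metric name, for example "JVMMemory"
    private final long value;                // measured value
    private final long timestamp;            // collection time in epoch milliseconds
    private final StandardUnit standardUnit; // CloudWatch unit, for example StandardUnit.Bytes

    public CloudWatchDataPoint(String name, long value, long timestamp, StandardUnit standardUnit) {
        this.name = name;
        this.value = value;
        this.timestamp = timestamp;
        this.standardUnit = standardUnit;
    }

    public String getName() { return name; }
    public long getValue() { return value; }
    public long getTimestamp() { return timestamp; }
    public StandardUnit getStandardUnit() { return standardUnit; }
}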

    public Sender(Vertx vertx, VertxCloudwatchOptions options, Context context) {
        this.vertx = vertx;

        // Configuring the CloudWatch client
        // AWS credentials provider chain that looks for credentials in this order:
        //      - Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (RECOMMENDED since they are recognized by all the AWS SDKs and CLI except for .NET), or AWS_ACCESS_KEY and AWS_SECRET_KEY (only recognized by the SDK for Java)
        //      - Java System Properties - aws.accessKeyId and aws.secretKey
        //      - Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
        //      - Instance profile credentials delivered through the Amazon EC2 metadata service
        this.cloudWatchClient = initCloudWatchClient(options.getCloudwatchRegion());
        this.namespace = options.getNamespace();
        this.instanceId = options.getInstanceId();

        batchSize = options.getBatchSize();
        batchDelay = NANOSECONDS.convert(options.getBatchDelay(), SECONDS);
        queue = new ArrayList<>(batchSize);
        sendTime = System.nanoTime();

        context.runOnContext(aVoid -> timerId = vertx.setPeriodic(MILLISECONDS.convert(batchDelay, NANOSECONDS), this::flushIfIdle));
    }

    ...

    private void send(List<CloudWatchDataPoint> dataPoints) {
        List<MetricDatum> cwData = toCloudwatchData(dataPoints);
        PutMetricDataRequest metricDataRequest = new PutMetricDataRequest();
        metricDataRequest.setMetricData(cwData);
        metricDataRequest.setNamespace(this.namespace);
        Future<PutMetricDataResult> future = cloudWatchClient.putMetricDataAsync(metricDataRequest);
        sendTime = System.nanoTime();

        try {
            future.get();
        } catch (Exception exc) {
            LOG.error(exc);
        }
    }

    private List<MetricDatum> toCloudwatchData(List<CloudWatchDataPoint> dataPoints) {
        List<MetricDatum> metrics = new ArrayList<>();

        dataPoints.forEach(metric -> {

            MetricDatum point = new MetricDatum();

            point.setTimestamp(new Date(metric.getTimestamp()));
            point.setValue((double) metric.getValue());
            point.setMetricName(metric.getName());
            point.setUnit(metric.getStandardUnit());
            List<Dimension> dimensionList = new ArrayList<>();
            dimensionList.add(new Dimension().withName("InstanceId").withValue(this.instanceId));

            point.setDimensions(dimensionList);
            metrics.add(point);
        });

        return metrics;
    }

To use the CloudWatch Vert.x SPI implementation, we have to set the necessary metrics options. In our case, we want to use the CloudWatch namespace `Vertx/Cloudwatch`. Let’s assume that the application runs on an EC2 instance. In this case, the CloudWatch SPI automatically detects the region that the EC2 instance is running in, as well as the instance ID. This information is determined by using the EC2MetadataUtils class.

After setting the metrics options, we initiate a Vert.x instance and create a simple HTTP server on port 8080 that returns “Hello Vert.x!” in plain text. The SPI automatically detects that an HTTP server has been created and collects HTTP server-related metrics such as the number of HTTP connections, the number of bytes sent, and a set of other metrics.

In addition, we want to send the consumed memory of the JVM to CloudWatch. This custom metric isn’t collected by the SPI, so we calculate the consumed memory by using the Runtime class (a sketch of such a helper follows the listing below). A timer sends this data as a JSON message every five seconds over the event bus to the CloudWatch SPI. The SPI collects the data and sends it to CloudWatch.

    VertxOptions options = new VertxOptions().setMetricsOptions(
                new VertxCloudwatchOptions()
                        .setEnabled(true)
                        .setMetricsBridgeEnabled(true)
                        .setBatchSize(10)
                        .setBatchDelay(30)
                        .setNamespace("Vertx/Cloudwatch"));
    vertx = Vertx.vertx(options);

    // Creating HTTP server for metrics
    HttpServer server = vertx.createHttpServer();

    server.requestHandler(request -> {

        // This handler is called for each request that arrives on the server
        HttpServerResponse response = request.response();
        response.putHeader("content-type", "text/plain");

        // Write to the response and end it
        response.end("Hello Vert.x!");
    });

    vertx.setPeriodic(5000, id -> {
        long usedMem = this.getUsedMemory();
        JsonObject message = new JsonObject()
                .put("metricName", "JVMMemory")
                .put("unit", StandardUnit.Megabytes.toString())
                .put("value", usedMem);

        vertx.eventBus().publish("cloudwatch.metrics", message);
    });

    server.listen(8080);
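
The getUsedMemory() helper referenced in the timer above isn’t shown in the snippet; a straightforward version based on the Runtime class could look like this:

// Approximate used JVM heap in megabytes, matching the Megabytes unit published above.
private long getUsedMemory() {
    Runtime runtime = Runtime.getRuntime();
    return (runtime.totalMemory() - runtime.freeMemory()) / (1024 * 1024);
}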

The following figure shows metrics such as the number of HTTP connections, the number of requests, the amount of bytes sent, and the consumed memory displayed as a graph in CloudWatch.

Vert.x metrics

Note that a custom metric is defined as the unique combination of metric name and dimensions associated with the metric. Custom metrics are priced based on monthly usage per metric. See CloudWatch pricing for details.

Summary

In this blog post we created a Vert.x SPI implementation to write framework metrics to CloudWatch. We used the capabilities of the AWS SDK for Java not only for the communication with CloudWatch, but also to get insights about the instance and the region using EC2 metadata. We hope we’ve given you ideas for creating your own applications and framework integrations by using the AWS SDK for Java. Feel free to share your ideas and thoughts in the comments below!

 

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 3 of 3)

In the first part of the blog series, we created a new application named rekognition-service from the rekognition blueprint. In the second part, we deployed this serverless application to AWS CloudFormation. In this last part of the blog series, we describe how to test and check the result of the newly deployed rekognition-service application.

Test the rekognition-service application by using the Amazon S3 bucket editor

You can drag and drop a group of files, including folders, onto the Amazon S3 bucket editor to upload them to an Amazon S3 bucket. Uploading .jpg files triggers the underlying Lambda function, which tags them with the label names and confidence values returned by Amazon Rekognition. You can also manually update and delete these tags by using the tag dialog box.

Check the Lambda logs by using the AWS Lambda function editor

You can also check the Lambda function logs by using the Lambda function editor. All the Amazon CloudWatch Logs streams for the Lambda function are listed on the Logs tab in the editor. You can double-click an item to open the underlying stream in Eclipse. You can also select multiple items, right-click, and then select Show Log Events to open the underlying streams in one batch.

This concludes our three-part series. What do you think of the rekognition serverless blueprint and the working flow in the AWS Toolkit for Eclipse? If you have any requests for new blueprints and features in the AWS Toolkit for Eclipse, please let us know. We appreciate your feedback.

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 2 of 3)

In the first part of this blog post, we talked about how to create a new AWS SAM application from the rekognition serverless blueprint. In this second part, we describe how to deploy the application to AWS CloudFormation.

Deploy the rekognition-service application to a new CloudFormation stack

This .gif animation shows the steps to deploy an AWS SAM application to AWS CloudFormation.

What the AWS Toolkit for Eclipse does for you during deployment

  • Creates a .zip file that contains the project and all its dependencies, and then uploads the file to the specified Amazon S3 bucket.
  • Updates the serverless.template (as shown in the following snippet) to fill in the complete properties for the AWS::Serverless::Function resource type:
    • Replaces the Handler with the FQCN of the AWS Lambda function handler.
    • Generates the actual code URI for CodeUri so that AWS CloudFormation can reference the Lambda function artifact in the S3 bucket.
    • Adds the missing configurations (Runtime, Description, MemorySize, Timeout, Role) and uses default values for them.
  • Creates a new AWS CloudFormation stack using the updated serverless.template file.

Here is the updated snippet for TagImage in the CloudFormation template.

"TagImage" : {
      "Type" : "AWS::Serverless::Function",
      "Properties" : {
        "Handler" : "com.serverless.demo.function.TagImage",
        "Runtime" : "java8", "CodeUri" : "s3://zhaoxiz-us-west-1/rekognition-service-stack-1497642692569-1497643074359.zip", "Description" : null, "MemorySize" : 512, "Timeout" : 300, "Role" : null,
        "Policies" : [ "AmazonS3FullAccess", "AmazonRekognitionFullAccess" ],
        "Events" : {
          "ProcessNewImage" : {
            "Type" : "S3",
            "Properties" : {
              "Bucket" : {"Ref" : "ImageBucket"},
              "Events" : "s3:ObjectCreated:*",
              "Filter" : {
                "S3Key" : {
                  "Rules" : [{"Name": "suffix", "Value": ".jpg"}]
                }
              }
            }
          }
        }
      }
    }

Deploy the rekognition-service application to an existing CloudFormation stack

We want to update the recognition confidence value to 80 in the Lambda function code and redeploy it to the CloudFormation stack. The following .gif animation shows how you can achieve that. When you do a second deployment for the same project, the AWS Toolkit for Eclipse remembers the parameters used in the last deployment, so if you want to keep them the same, you don’t have to retype them.

Notice that we need to change the parameter value of ImageBucketExists to true in the parameter page (Fill in stack template parameters) because the bucket was already created during the first deployment. The underlying CloudFormation stack is updated with the new version of the Lambda function whether or not you update the parameters.

Update the Lambda event source by using the parameters page

Now, we want to configure the trigger event for the Lambda function to use a new S3 bucket. This removes the bucket we created in the first deployment and creates a new bucket for this deployment. We only need to redeploy the application, update the ImageBucketExists parameter to false, and set the ImageBucketName parameter to the new bucket name. After deployment, you’ll see that the name of the ImageStack in the stack outputs is updated to the new name.

In the third part of this blog post, we’ll talk about how to use the AWS Toolkit for Eclipse to check the result of the rekognition-service application.

AWS Toolkit for Eclipse: Improved Support for Serverless Applications (Part 1 of 3)

I am happy to announce that the latest release of the AWS Toolkit for Eclipse includes a couple of new enhancements for developing AWS Serverless Application Model (AWS SAM) applications. In this release, we added a new blueprint: rekognition.

In part 1 of this blog post, we describe and show with an animation what this blueprint does, and how to use the AWS Toolkit for Eclipse to create an application from it. In part 2, we’ll deploy the AWS SAM application to AWS CloudFormation. In part 3, we’ll check the result of the application and test the AWS Lambda function by using the AWS Explorer in the AWS Toolkit for Eclipse.

About the rekognition blueprint

The rekognition blueprint includes a Lambda function TagImage. This Lambda function automatically tags .jpg files uploaded to a specified Amazon S3 bucket by using the Amazon Rekognition service. It applies the five most confident labels recognized by Amazon Rekognition as tag keys on the Amazon S3 object, and the corresponding confidence values as tag values.

Create an application named rekognition-service from the rekognition blueprint

This .gif animation shows the steps to create an application from the rekognition blueprint.

About the AWS SAM template

Here is the template snippet from the serverless.template in the project we just created that defines the Lambda function TagImage. Notice that this is a simplified configuration for the Lambda function, because during the deployment phase the AWS Toolkit for Eclipse fills in the other required properties. For a complete configuration set, see Deploying Lambda-based Applications in the AWS Lambda Developer Guide.

In this snippet, we grant the Lambda function permissions to access Amazon S3 and Amazon Rekognition. We also define a triggering event for the Lambda function when uploading .jpg files to the specified Amazon S3 bucket.

"TagImage": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "TagImage",
    "Policies": [
      "AmazonS3FullAccess",
      "AmazonRekognitionFullAccess"
    ],
    "Events": {
      "ProcessNewImage": {
        "Type": "S3",
        "Properties": {
          "Bucket": {"Ref" : "ImageBucket"},
          "Events": "s3:ObjectCreated:*",
          "Filter": {
            "S3Key": {
              "Rules": [{"Name": "suffix", "Value": ".jpg"}]
            }
          }
        }
      }
    }
  }
}

How the Lambda function works

Here is a snippet from the Lambda function TagImage. This Lambda function retrieves the Amazon S3 object from the S3Event, and then calls the Amazon Rekognition service to detect labels with a confidence value of at least 77.

Image imageToTag = new Image().withS3Object(new S3Object().withName(objectKey).withBucket(bucketName));
// Call Rekognition to identify image labels
DetectLabelsRequest request = new DetectLabelsRequest()
                    .withImage(imageToTag)
                    .withMaxLabels(5)
                    .withMinConfidence(77F);
List<Label> labels = rekognitionClient.detectLabels(request).getLabels();
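
The bucketName and objectKey values used above come from the incoming S3Event. Assuming the standard aws-lambda-java-events S3Event type, a typical way to extract them is:

// Pull the bucket name and object key out of the first record of the S3 event notification.
// (Object keys with special characters may need URL decoding before use.)
S3EventNotification.S3EventNotificationRecord record = s3Event.getRecords().get(0);
String bucketName = record.getS3().getBucket().getName();
String objectKey = record.getS3().getObject().getKey();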

In part 2 of this blog post, we’ll deploy the newly created AWS SAM application to AWS. Then we’ll configure the parameters in the template during the deployment phase. Stay tuned!

AWS Toolkit for Eclipse: Support for AWS CodeCommit and AWS CodeStar

by Zhaoxi Zhang

I am pleased to announce that the AWS Toolkit for Eclipse now supports AWS CodeCommit and AWS CodeStar. This means you can create, view, clone, and delete your AWS CodeCommit repositories in the AWS Toolkit for Eclipse. You can also import existing projects under your AWS CodeStar account directly into the Eclipse IDE.

Git Credentials Configuration

We recommend that you use Git credentials with HTTPS to connect to your AWS CodeCommit repositories. For more information, see Use Git Credentials and HTTPS with AWS CodeCommit.

In the new version of the AWS Toolkit for Eclipse, you will see an entry for AWS CodeCommit on the Eclipse Preferences page, shown here. To install the AWS Toolkit for Eclipse, follow the instructions on the AWS Toolkit for Eclipse page. You can configure the Git credentials for your AWS accounts on this page. For information, see Create Git Credentials for HTTPS Connections to AWS CodeCommit. You can type the newly generated user name and password into the text fields, or import the CSV file generated from the IAM console directly into Eclipse.

AWS CodeCommit Explorer

An entry for AWS CodeCommit also appears in AWS Explorer, as shown here. To open this view, click the drop-down box next to the AWS icon in the toolbar, and select Show AWS Explorer View. You can create, view, clone, and delete repositories in this view.

  • Create a Repository
    To create a repository, right-click AWS CodeCommit and then select Create Repository, as shown here. Type the repository name and an optional description in the Create Repository dialog box. The newly created repository will appear under AWS CodeCommit.

    Figure: AWS CodeCommit Explorer View

    Figure: Create Repository Dialog Box

  • View a Repository
    To view a repository, double-click the repository name in AWS Explorer. This will open the repository editor where you can see the metadata for the repository, as shown here. The repository editor also shows the latest 10 commits for the selected branch. To refresh the repository editor, click the refresh icon on the top-right corner of the page.
  • Clone a Repository
    To clone a repository, click the Check out button in the repository editor, or right-click the repository name in AWS Explorer and select Clone Repository. If you haven’t configured Git credentials for your current AWS account in your Eclipse, a dialog box will prompt you to configure them.


    After you have configured your Git credentials, you will see the following pages for selecting a branch and a local destination. These pages have the same look and feel as EGit. For information about EGit, see the EGit Tutorial. You can use the Eclipse EGit plugin to manage your projects with Git.

    Figure: Branch Selection Page

    Figure: Destination Page

  • Delete a Repository
    To delete a repository from AWS CodeCommit, right-click the repository name and select Delete Repository. When the following dialog box is displayed, type the repository name.

AWS CodeStar Project Checkout

You can use the AWS Toolkit for Eclipse to check out AWS CodeStar projects and edit them in the Eclipse IDE. To import your AWS CodeStar projects to Eclipse, click the drop-down box next to the AWS icon in the toolbar, and select Import AWS CodeStar Project. You will see all your AWS CodeStar projects under the selected account and region.

The plugin for AWS CodeStar finds all the AWS CodeCommit repositories that are linked to the selected project. From the Select repository drop-down list, choose the repository, and then click Next. You can also configure the Git credentials on this page if they have not been configured on the selected account.

Resources

For information about AWS CodeCommit, see the AWS CodeCommit documentation. For information about AWS CodeStar, see the AWS CodeStar documentation.

Conclusion

We hope you find these new features useful. If you have questions or other feedback about using the AWS Toolkit for Eclipse, feel free to leave it in the comments.

AWS Toolkit for Eclipse: VPC Configuration Enhancement for AWS Elastic Beanstalk Environments

by Zhaoxi Zhang

From the blog post VPC Configuration for an AWS Elastic Beanstalk Environment, you learned how to deploy your web application to AWS Elastic Beanstalk by using the AWS Toolkit for Eclipse. In this blog post, I’m happy to announce that you can now configure Elastic Load Balancing (ELB) subnets and Amazon EC2 subnets separately. The following screenshots show that the experience in the AWS Toolkit for Eclipse is consistent with that in the Elastic Beanstalk console.

 

VPC Configuration in AWS Elastic Beanstalk Console

VPC Configuration in AWS Toolkit for Eclipse

Notice that the ELB subnet configuration is enabled only when the environment type is Load Balanced Web Server Environment (see the following screenshot for the type selection). Please read through Using Elastic Beanstalk with Amazon VPC to be sure you understand all the VPC parameters. Inappropriate parameter combinations can cause deployment failures. Follow the rules below when you create an AWS Elastic Beanstalk environment:

  • You must select at least one subnet for EC2 and for ELB.
  • You must select at least one ELB subnet in each Availability Zone where there is an EC2 subnet, and vice versa.
  • You may only select one EC2 subnet per Availability Zone.
  • When one subnet is used for both EC2 and ELB, select the Associate Public IP Address check box unless you have set up a NAT instance to route traffic from the Internet to your ELB subnet.

Application and Environment Configuration

Client Constructors Now Deprecated in the AWS SDK for Java

by Kyle Thomson

A couple of weeks ago you might have noticed that the 1.11.84 version of the AWS SDK for Java included several deprecations – the most notable being the deprecation of the client constructors.

Historically, you’ve been able to create a service client as shown here.

AmazonSNS sns = new AmazonSNSClient();

This mechanism is now deprecated in favor of using one of the builders to create the client as shown here.

AmazonSNS sns = AmazonSNSClient.builder().build();

The client builders (described in detail in this post) are superior to the basic constructors in the following ways.

Immutable

Clients created via the builder are immutable. The region/endpoint (and other data) can’t be changed. Therefore, clients are safe to reuse across multiple threads.

Explicit Region

At build time, the AWS SDK for Java can validate that a client has all the required information to function correctly – namely, a region. A client created via the builders must have a region that is defined either explicitly (i.e. by calling withRegion) or as part of the DefaultAwsRegionProviderChain. If the builder can’t determine the region for a client, an SdkClientException is thrown. Region is an important concept when communicating with services in AWS. It not only determines where your request will go, but also how it is signed. Requiring a region means the SDK can behave predictably without depending on hidden defaults.
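
For illustration, here are both ways of satisfying the region requirement; the region chosen is arbitrary.

// Explicit region: the client is permanently bound to us-west-1.
AmazonSNS snsExplicit = AmazonSNSClientBuilder.standard()
                                              .withRegion(Regions.US_WEST_1)
                                              .build();

// Region resolved from the DefaultAwsRegionProviderChain (environment variables, system
// properties, the shared config file, or EC2 instance metadata). If no region can be
// found, the builder throws an SdkClientException.
AmazonSNS snsFromChain = AmazonSNSClientBuilder.defaultClient();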

Cleaner

Using the builder allows a client to be constructed in a single statement using method chaining.

AmazonSNS sns = AmazonSNSClient.builder()
                               .withRegion("us-west-1")
                               .withClientConfiguration(cfg)
                               .withCredentials(creds)
                               .build();

The deprecated constructors are no longer created for new service clients. They will be removed from existing clients in a future major version bump (although they’ll remain in all future releases of the 1.x family of the AWS SDK for Java).

AWS Toolkit for Eclipse: Support for Creating Maven Projects for AWS, Lambda, and Serverless Applications

by Zhaoxi Zhang

I’m glad to announce that you can now use the AWS Toolkit for Eclipse to create Maven projects for AWS, Lambda, and serverless applications. If you’re new to using the AWS Toolkit for Eclipse to create a Lambda application, see the Lambda plugin for more information. If you’re not familiar with serverless applications, see the Serverless Application Model for more information. If you have been using the AWS Toolkit for Eclipse, you’ll notice the extra Maven configuration panel in the user interface where you can create a new AWS, Lambda, or serverless application (see the following screenshots).

The AWS Toolkit for Eclipse no longer automatically downloads the archived AWS SDK for Java ZIP file and puts it on the class path for your AWS application. Instead, it manages the dependencies with Maven: it checks for the latest AWS SDK for Java version in the remote Maven repository and downloads it automatically if you don’t already have it installed in your local Maven repository. This means that when a new version of the AWS SDK for Java is released, it can take a while to download it before you can create the new application.

Create a Maven Project for an AWS application

In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Project. You’ll see the following page, where you can configure the AWS SDK for Java samples you want to include in your application.

Sample

Here is the structure of the newly created AWS application Java project. You can edit the pom.xml file later to meet your needs to build, test, and deploy your application with Maven.

SampleStructure

Create a Maven Project for a Lambda Application

Similarly to how you create a new AWS application project, you can create a new AWS Lambda project. In the Eclipse toolbar, choose the AWS icon drop-down button, and then choose New AWS Lambda Java Project.

Lambda

Here is the structure of the newly created AWS Lambda Java project.

LambdaStructure

Create a Maven Project for a Serverless Application

To create a new serverless application, choose the AWS icon drop-down button, and then choose New AWS Serverless Project. The following screenshot shows the status of the project creation while Maven downloads the application dependencies.

CreatingServerless

Here is the structure of the newly created serverless application Java project.

ArticleStructure

Build a Serverless Application Locally with Maven

You can also use the Maven command-line in the terminal to build and test the project you just created, as shown in the following screenshot.

MavenCommandLine

Please let us know what you think of the new Maven support in the AWS Toolkit for Eclipse. We appreciate your comments.

Java SDK Bundled Dependency

by Kyle Thomson

The AWS SDK for Java depends on a handful of third-party libraries, most notably Jackson for JSON processing and the Apache HTTP client for communication over the wire. For most customers, resolving these as part of their standard Maven dependency resolution is perfectly fine; Maven automatically pulls in the required versions or uses existing versions if they’re already specified in the project.

However, the AWS SDK for Java requires certain minimum versions to function properly and some customers are unable to change the version of the third-party libraries they use. Maybe it’s because another dependency requires a specific version, or there are breaking changes between third-party versions that large portions of the code base relies on. Whatever the case may be, these version conflicts can create problems when you try to use the AWS SDK for Java.

We’re pleased to introduce the AWS SDK for Java bundle dependency. This new module, which you can include in your Maven project, contains all the SDK clients for all services and all of the SDK’s third-party libraries in a single JAR. The third-party libraries are “relocated” to new package names to avoid class conflicts with a different version of the same library on a project’s classpath. To use this version of the SDK, simply include the following Maven dependency in your project.

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>${aws.sdk.version}</version>
</dependency>

Of course, because we relocated the third-party libraries, they’re no longer available to use under their original import names – unless the project explicitly adds those libraries as dependencies. For example, if a project relied on the AWS SDK for Java bringing in the Joda Time library, when the project switches to use the bundle dependency it also needs to add a specific dependency for Joda Time.

The relocated classes are intended for internal use only by the AWS SDK. We strongly recommend that you do not refer to classes under com.amazonaws.thirdparty.* in your own code. The third-party libraries included in the bundled dependency (such as Jackson, the Apache HTTP client, and Joda Time) are moved to the com.amazonaws.thirdparty.* package.

Because the bundle dependency includes all of the dependent libraries, it’s going to be a larger binary to pull down when dependencies get resolved (about 50 MB at the time of this writing, but this will increase with the introduction of each new service and each new third-party library). In addition, if a project explicitly imports one of the third-party libraries that the SDK includes then classes will be duplicated (albeit in different packages). This increases the memory requirement of an application. For these reasons, we recommend that you only use the bundled dependency if you have a need to.

If a project has the combination of a version clash and a limited total project size (for example, AWS Lambda limits the package size to 50 MB), the bundled dependency might not be the right solution. Instead, you can build your own version of the AWS SDK for Java from the open source code on GitHub. For example, if you need to resolve a conflict only for the Joda Time library, you can include a build configuration like the following in your Maven project:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <configuration>
    <artifactSet>
      <includes>
        <include>joda-time:joda-time</include>
        <include>com.amazonaws:*</include>
      </includes>
    </artifactSet>
    <relocations>
      <relocation>
        <pattern>org.joda</pattern>
        <shadedPattern>com.amazonaws.thirdparty.joda</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>

Although this means you need to build your own version of the SDK and install it into your own repository, it gives you great flexibility for the third-party libraries and/or services you want to include. Check out the Maven Shade Plugin for more details about how it works.

We hope this new module is useful for projects where there’s a dependency clash. As always, please leave your comments or feedback below!