Generating Amazon S3 Pre-signed URLs with SSE-C (Part 5 Finale)

by Hanson Char

In the previous blog (Part 4), we demonstrated how you can generate and consume pre-signed URLs using SSE-C. In this final post of the series, I will provide code examples that show how to generate and consume pre-signed URLs using SSE-C, while restricting the URLs to be used only with specific customer-provided encryption keys.

As indicated in Part 1 of this blog, a prerequisite to this option is that you must use Signature Version 4 (SigV4). You can enable SigV4 in the AWS SDK for Java in various ways, including using S3-specific system properties, or programmatically as demonstrated previously. Here, the code examples will assume you have enabled SigV4.
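
For example, here is the programmatic way, repeated from Part 2:

// Configure the Amazon S3 client to use SigV4 by overriding the signer
AmazonS3Client s3 = new AmazonS3Client(
    new ClientConfiguration().withSignerOverride("AWSS3V4SignerType"));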

SSE-C with specific Customer-Provided Encryption Keys

Here’s how to generate a pre-signed PUT URL using SSE-C (with specific customer-provided encryption keys):

String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
SecretKey customerKey = ...;
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT);
// Restrict the pre-signed PUT URL to be used only against
// a specific customer-provided encryption key
genreq.setSSECustomerKey(new SSECustomerKey(customerKey));
// Note s3 must have been configured to use SigV4
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL with SSE-C: " + puturl);

Here’s how to make use of the generated pre-signed PUT URL via the Apache HttpClient (4.3):

File fileToUpload = ...;
SecretKey customerKey = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
// Specify the customer-provided encryption key 
// when consuming the pre-signed URL
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, 
    Base64.encodeAsString(customerKey.getEncoded())));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, 
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-C (with specific customer-provided encryption keys):

GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
// Restrict the pre-signed GET URL to be used only against
// a specific customer-provided encryption key
genreq.setSSECustomerKey(new SSECustomerKey(customerKey));
// Note s3 must have been configured to use SigV4
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned GET URL for SSE-C: " + geturl);

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):

HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
// Specify the customer-provided encryption key 
// when consuming the pre-signed URL
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY,
    Base64.encodeAsString(customerKey.getEncoded())));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5,
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In summary, we have shown how you can generate and consume pre-signed URLs using SSE-C with specific customer-provided encryption keys.

We hope you find this blog series on generating pre-signed URLs with SSE useful. We would be very interested to hear about how you make use of this feature in your applications. Please feel free to drop us a comment.

Ciao for now!

Generating Amazon S3 Pre-signed URLs with SSE-C (Part 4)

by Hanson Char

In Part 3 of this blog, we demonstrated how you can generate and consume pre-signed URLs using SSE-S3. In this blog, I will provide code examples that show how you can generate and consume pre-signed URLs using one of the more advanced options, namely SSE-C (server-side encryption with customer-provided encryption keys). The code samples assume version 1.9.31 or later of the AWS SDK for Java.

Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C)

Here’s how to generate a pre-signed PUT URL using SSE-C:

AmazonS3Client s3 = ...;
String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
// Generate a pre-signed PUT URL for use with SSE-C
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT);
genreq.setSSECustomerKeyAlgorithm(SSEAlgorithm.getDefault());
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL with SSE-C: " + puturl);

Here’s how to make use of the generated pre-signed PUT URL via the Apache HttpClient (4.3):

File fileToUpload = ...;
SecretKey customerKey = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
// Note it's necessary to specify the customer-provided encryption key 
// when consuming the pre-signed URL
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, 
    Base64.encodeAsString(customerKey.getEncoded())));
putreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, 
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-C:

GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
genreq.setSSECustomerKeyAlgorithm(SSEAlgorithm.getDefault());
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned GET URL for SSE-C: " + geturl);

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):


HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
// Note it's necessary to specify the customer-provided encryption key
// when consuming the pre-signed URL
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM,
    SSEAlgorithm.AES256.getAlgorithm()));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY,
    Base64.encodeAsString(customerKey.getEncoded())));
getreq.addHeader(new BasicHeader(
    Headers.SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5,
    Md5Utils.md5AsBase64(customerKey.getEncoded())));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In Part 5, the last blog of this series, I will provide code examples that show how to generate and consume pre-signed URLs using SSE-C, while restricting the URLs to be used only with specific customer-provided encryption keys.

Stay tuned!

Generating Amazon S3 Pre-signed URLs with SSE-S3 (Part 3)

by Hanson Char

As mentioned in Part 1 and Part 2 of this blog, there are fundamentally four ways you can generate Amazon S3 pre-signed URLs using server-side encryption (SSE). In Part 2, we demonstrated how you can do so with SSE-KMS (server-side encryption with AWS Key Management Service).

In this blog, I will provide further sample code that shows how you can generate and consume pre-signed URLs for SSE-S3 (server-side encryption with Amazon S3-managed keys). The code samples assume version 1.9.31 or later of the AWS SDK for Java.

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

Here’s how to generate a pre-signed PUT URL using SSE-S3:


AmazonS3Client s3 = ...;
String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
// Generate a pre-signed PUT URL for use with SSE-S3
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT);
genreq.setSSEAlgorithm(SSEAlgorithm.getDefault());
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Pre-signed PUT URL with SSE-S3: " + puturl);

Here’s how to make use of the generated pre-signed PUT URL via the Apache HttpClient (4.3):


File fileToUpload = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
// AES256 is currently the only supported algorithm for SSE-S3
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
    SSEAlgorithm.AES256.getAlgorithm()));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-S3:


GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Pre-signed GET URL for SSE-S3: " + geturl);

(Note in particular that generating a pre-signed GET URL for an S3 object encrypted using SSE-S3 is as simple as generating a regular pre-signed URL!)

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):


HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In Parts 4 and 5, I will provide code examples to show how you can generate and consume pre-signed URLs using server-side encryption with customer-provided encryption keys (SSE-C).

Enjoy!

DynamoDB XSpec API

by Hanson Char

One of the most powerful tools for accessing Amazon DynamoDB is the use of a DynamoDB domain-specific language (DSL) called expressions. If you look closely, you will find support for DynamoDB expressions everywhere. For instance, you can access the attributes of an item using projection expressions. You can query or scan items using filter expressions and key condition expressions. Likewise, you can specify the details of updating an item using update expressions and condition expressions.
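
For instance, here is a quick sketch of a projection expression using the Document API; the table, key, and attribute names are illustrative (note that attribute names with special characters such as Player1-Position, or reserved words such as Status, must go through a name map):

// Retrieve only two attributes of an item via a projection expression;
// #p1 and #s are placeholders defined in the name map
Item item = table.getItem(new GetItemSpec()
    .withPrimaryKey("GameId", "abc")
    .withProjectionExpression("#p1, #s")
    .withNameMap(new NameMap()
        .with("#p1", "Player1-Position")
        .with("#s", "Status")));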

Why the need for expressions? Not only can you use DynamoDB expressions to perform typical operations such as PutItem, GetItem, and Query, but you can also use expressions to specify arbitrarily complex operations and conditions that are otherwise not possible with the regular APIs. This can best be illustrated with examples.

But first, with the latest release (1.9.34) of the AWS SDK for Java, we are excited to announce the beta release of the Expression Specification (XSpec) API, which makes it easy to build and make use of expressions.

Let’s take a look at the code snippet copied from the earlier blog, Introducing DynamoDB Document API (Part 1). This is what the code looks like to perform a conditional update without the use of expressions:

UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
    .withReturnValues(ReturnValue.ALL_NEW)
    .withPrimaryKey("GameId", "abc")
    .withAttributeUpdate(
        new AttributeUpdate("Player1-Position").addNumeric(1))
    .withExpected(
        new Expected("Player1-Position").lt(20),
        new Expected("Player2-Position").lt(20),
        new Expected("Status").eq("IN_PROGRESS"))); 

This is all well and good. But what if you need to specify a more complex condition, such as the use of disjunction or nested conditions? This is where you will find the DynamoDB XSpec API handy. For example, suppose you want to specify a nested or-condition, together with a function that checks if a specific attribute exists. Here is how you can do that using the DynamoDB XSpec API:

import static com.amazonaws.services.dynamodbv2.xspec.ExpressionSpecBuilder.*;
...
UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
    .withReturnValues(ReturnValue.ALL_NEW)
    .withPrimaryKey("GameId", "abc")
    .withExpressionSpec(new ExpressionSpecBuilder()
        .addUpdate(N("Player1-Position").add(1))
        .withCondition(
                  N("Player1-Position").lt(20)
            .and( N("Player2-Position").lt(20) )
            .and( S("Status").eq("IN_PROGRESS")
                .or( attribute_not_exists("Status") )))
        .buildForUpdate()));

Or perhaps you want to specify an arbitrarily complex condition in a Scan operation. Here is an example:

import static com.amazonaws.services.dynamodbv2.xspec.ExpressionSpecBuilder.*;
...
ScanExpressionSpec xspec = new ExpressionSpecBuilder()
    .withCondition(N("Player1-Position").between(10, 20)
        .and( S("Status").in("IN_PROGRESS", "IDLE")
              .or( attribute_not_exists("Status") )))
    .buildForScan();

for (Item item: table.scan(xspec))
    System.out.println(item.toJSONPretty());

It’s worth pointing out that the only entry point to the DynamoDB XSpec API is the ExpressionSpecBuilder. We also recommend always using static imports of its methods, as demonstrated above.

In summary, the DynamoDB expression language allows arbitrarily complex conditions and operations to be specified, whereas the DynamoDB XSpec API makes it easy to harness the full power of this language.

Hope you find this useful. Don’t forget to download the latest AWS SDK for Java and give it a spin. Let us know what you think!

Generating Amazon S3 Pre-signed URLs with SSE-KMS (Part 2)

by Hanson Char

To continue from the previous blog, I will provide specific code examples that show how you can generate and consume pre-signed URLs using server-side encryption with AWS Key Management Service (SSE-KMS). A prerequisite to this option is that you must be using Signature Version 4 (SigV4). You can enable SigV4 in the AWS SDK for Java in various ways, including using S3-specific system properties. Here, I will provide a lesser-known, programmatic way to achieve that by explicitly configuring the signer. The code samples assume version 1.9.31 or later of the AWS SDK for Java.

Configure AmazonS3Client to use SigV4

AmazonS3Client s3 = new AmazonS3Client(
    new ClientConfiguration().withSignerOverride("AWSS3V4SignerType"));

Once this is in place, you are good to go.
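
Alternatively, here is a sketch of the system-property route mentioned above; it assumes the property is set before the client is instantiated:

// Enable SigV4 for Amazon S3 via the S3-specific system property
System.setProperty(
    SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY, "true");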

Server-Side Encryption with AWS Key Management Service (SSE-KMS)

Example A. Here’s how to generate a pre-signed PUT URL using SSE-KMS:

String myExistingBucket = ...; // an existing bucket
String myKey = ...;            // target S3 key
// Generate a pre-signed PUT URL for use with SSE-KMS
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT)
    .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm())
    ;
// s3 is assumed to have been configured to use SigV4
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL with SSE-KMS: " + puturl);

In the above example, Amazon S3 will make use of the default KMS master key for S3 that is automatically created for you. (See Creating Keys in AWS Key Management Service for more information on how you can set up your AWS KMS customer master keys.)

However, you can also choose to explicitly specify your KMS customer master key id as part of the pre-signed URLs.

Example B. Here’s how to generate a pre-signed PUT URL using SSE-KMS with an explicit KMS customer master key id:


// Generate a pre-signed PUT URL for use with SSE-KMS with an
// explicit KMS Customer Master Key ID
String myKmsCmkId = ...;
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    myExistingBucket, myKey, HttpMethod.PUT)
    .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm())
    // Explicitly specifying your KMS customer master key id
    .withKmsCmkId(myKmsCmkId)
    ;
URL puturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned PUT URL using SSE-KMS with explicit CMK ID: "
    + puturl);

Here’s how to make use of the generated pre-signed PUT URL (from Example A) via the Apache HttpClient (4.3):


File fileToUpload = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
    SSEAlgorithm.KMS.getAlgorithm()));
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to make use of the generated pre-signed PUT URL (from Example B) via the Apache HttpClient (4.3):


File fileToUpload = ...;
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
    SSEAlgorithm.KMS.getAlgorithm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION_AWS_KMS_KEYID,
    myKmsCmkId)); // Explicitly specifying your KMS customer master key id
putreq.setEntity(new FileEntity(fileToUpload));
CloseableHttpClient httpclient = HttpClients.createDefault();
httpclient.execute(putreq);

Here’s how to generate a pre-signed GET URL for use with SSE-KMS:


GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
    BUCKET, KEY, HttpMethod.GET);
// s3 configured to use SigV4
URL geturl = s3.generatePresignedUrl(genreq);
System.out.println("Presigned GET URL for SSE-KMS: " + geturl);

(Note in particular that generating a pre-signed GET URL for an S3 object encrypted using SSE-KMS is as simple as generating a regular pre-signed URL!)

Here’s how to make use of the generated pre-signed GET URL via the Apache HttpClient (4.3):


HttpGet getreq = new HttpGet(URI.create(geturl.toExternalForm()));
CloseableHttpClient httpclient = HttpClients.createDefault();
CloseableHttpResponse res = httpclient.execute(getreq);
InputStream is = res.getEntity().getContent();
String actual = IOUtils.toString(is);

In the next blog (Part 3), I will provide specific code examples that show how you can generate and consume pre-signed URLs using server-side encryption with Amazon S3-managed keys (SSE-S3).

Stay tuned!

Generating Amazon S3 Pre-signed URLs with SSE (Part 1)

by Hanson Char

By default, all objects and buckets are private in Amazon S3. Pre-signed URLs are a popular way to let your users or customers upload or download specific objects to/from your bucket, without requiring them to have AWS security credentials or permissions.

In Part 1 of this blog, we will take a look at all the different types of pre-signed URLs that can be generated to work with Amazon S3 server-side encryption (SSE). In Part 2 of this blog, I will provide concrete sample code that shows how you can generate and consume pre-signed URLs for one of AWS’s most recommended security best practices – server-side encryption with AWS Key Management Service (SSE-KMS). To find out more about the considerable benefits that AWS Key Management Service provides, see the official blog post New AWS Key Management Service (KMS).

To begin with, the generation and use of pre-signed URLs requires a request to be signed for authentication purposes. Amazon S3 supports the latest Signature Version 4 (SigV4), which requires the request body to be signed for added security, and the previous Signature Version 2 (SigV2). However, while pre-signed URLs using the different SSE options are fully supported under SigV4, the same is not true of SigV2.

Here is a summary of all the valid combinations for generating pre-signed URLs using server-side encryption.

Pre-signed URL generation                                           SigV2   SigV4
Using SSE with AWS KMS-managed keys (SSE-KMS)                       No      Yes
Using SSE with Amazon S3-managed keys (SSE-S3)                      Yes     Yes
Using SSE with customer-provided encryption keys (SSE-C)            Yes     Yes
Using SSE with specific customer-provided encryption keys (SSE-C)   No      Yes

What is the difference between generating a pre-signed URL using SSE-C versus SSE-C with specific customer-provided encryption keys? In the first case, when you generate the pre-signed URL, the customer-provided encryption key does not need to be specified. Instead, the key only needs to be specified in the request later when the generated pre-signed URL is used (to actually upload or download objects to/from Amazon S3).

On the other hand, you may want to impose further restrictions on a pre-signed URL by requiring that it be used only with a specific customer-provided encryption key. In such cases, you can do so by specifying the customer-provided encryption key during the generation of the pre-signed URL, and enabling the use of SigV4. I will provide specific examples to cover these two cases in Parts 4 and 5 of this series.
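
In a nutshell, as the examples in those parts show, the difference at generation time boils down to a single call on the GeneratePresignedUrlRequest:

// SSE-C without binding the URL to a specific key:
// only the algorithm is specified at generation time
genreq.setSSECustomerKeyAlgorithm(SSEAlgorithm.getDefault());

// SSE-C bound to a specific customer-provided encryption key (requires SigV4)
genreq.setSSECustomerKey(new SSECustomerKey(customerKey));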

In the next blog (Part 2), I will provide specific code examples that show how you can generate and consume pre-signed URLs using server-side encryption with AWS KMS-managed keys (SSE-KMS).

Stay tuned!

AWS Toolkit for Eclipse Integration with AWS OpsWorks

Today, we are introducing a new addition to the AWS Toolkit for Eclipse — the AWS OpsWorks plugin. This new plugin allows you to easily deploy your Java web applications from your development environment directly to AWS infrastructure.

So you might remember the AWS CodeDeploy plugin that we introduced recently, and some of you have probably used the AWS Elastic Beanstalk plugin before — they both seem to provide the same functionality of deploying a Java web app. Then why do we need yet another option for accomplishing the very same thing?

AWS Elastic Beanstalk, AWS CodeDeploy and AWS OpsWorks indeed share a lot in common as they are all considered part of the AWS deployment services family. However, they differ from each other in aspects like deployment execution and infrastructure resource management, and these differences make each of them suitable for a specific category of use cases.

  • AWS Elastic Beanstalk is a fully-managed application container service. It’s based on a PaaS (Platform as a Service) model where your application runs on infrastructure that is automatically provisioned and managed by AWS — there is no need for you to manually build and maintain it. As a container service, it also provides built-in deployment features for a variety of web app frameworks. All of this allows you to focus on your application development, while the deployments and provisioning of the application are handled by the service powered by cloud ninjas. The downside of this whole black box-ish model is of course its limited flexibility and extensibility. For example, you might not have fine-grained control over the underlying infrastructure resources, and it sometimes can be difficult to extend the built-in deployment commands to execute your custom tasks.
  • In contrast, AWS CodeDeploy focuses on only one thing — managing deployments to your existing instances in EC2 or elsewhere. It is not an application container service, so there are no built-in deployment features. You need to write your own deployment logic, which gives you the freedom to perform any kind of custom tasks for your deployments. Another difference from Elastic Beanstalk is that the service does not create or maintain the EC2 instances for you; you manage them yourself, which also means you have full control over your infrastructure. At a higher level, you can think of AWS CodeDeploy as a fully programmable robot that delivers your application artifacts to your fleet of EC2 instances and then runs your custom commands on each of the instances for you.
  • Within the range between fully-managed (Elastic Beanstalk) and fully-customizable (CodeDeploy), AWS OpsWorks sits somewhere in the middle. It is an application management service that provides built-in deployment features for instances running a specific web framework (a.k.a. an application server layer). What really makes it stand out compared to Elastic Beanstalk is that it uses Chef to perform the deployment actions. The deployment logic for built-in layers is essentially a default Chef cookbook that is open to all levels of customization. Using Chef allows you to achieve the necessary customization for your specific task while still enjoying all the built-in features that are useful for the majority of use cases.

So generally speaking, AWS Elastic Beanstalk is the easiest option if you need to quickly deploy your application and don’t want to be concerned about infrastructure maintenance. AWS CodeDeploy gives you maximum flexibility but lacks built-in deployment features. AWS OpsWorks has a good tradeoff between flexibility and ease of use, but you need to learn Chef in order to fully utilize it.

OK, now I hope I have answered any doubts about why you should care about this plugin if you are already familiar with the other deployment services. Let’s get back to Eclipse and see how the plugin works.

After you install the new AWS OpsWorks Plugin component, you should see the “AWS OpsWorks” node under the AWS Explorer View. (Make sure you select “US East (Virginia)” as the current region since OpsWorks is available only in this region.)

The top-level elements under the service node are your stacks, each of which includes all the resources serving the same high-level purpose (e.g., hosting a Tomcat application). Each stack consists of one or more layers, where each layer represents a system component made up of a set of instances that are functionally the same. Each layer that acts as an application server is associated with one app, which specifies the revision of the application code that should be deployed to that layer.

For this demo, I have created a sample stack that has only one Java App Server layer in it and I have started two EC2 instances for this layer. Creating all of these can be done in a couple of minutes using the AWS OpsWorks console. We will create the Java app for this layer inside Eclipse.

To start with, let’s switch to the “Java” or “Java EE” perspective and create a sample web project via File -> New -> AWS Java Web Project. Then in the Project Explorer, right-click the sample project that we just created, and select Amazon Web Services -> Deploy to AWS OpsWorks.

In the first page of the deployment wizard, choose the target region (US East) and target stack (MyStack), and then create a new Java app called “My App”.

In the App Configuration page, you are required to specify an S3 location where the application artifact will be uploaded to. You can optionally pass in additional environment variables, specify a custom domain, and enable SSL for your application.

Click Next, and you will see the Deployment Action Configuration page. Here you can optionally add a comment for your deployment and provide custom Chef JSON input.

Now click Finish, and the deployment will be initiated immediately. Wait for a couple of minutes until all the instances in the layer are successfully deployed.

After it finishes, you will see a confirmation message that shows the expected endpoint where your application will be hosted on the instances. You can access the endpoint via a web browser to make sure the deployment succeeded (make sure you include the trailing slash character in the URL).

As you can see, because of the built-in support for Tomcat applications, it’s really easy to deploy and host your Java web app using AWS OpsWorks. We want to focus on the deployment experience of this plugin, but we are also interested in what other features you are specifically looking for. More support for service resource creation and configuration? Or integration with Chef cookbooks and recipes? Let us know in the comments!

Storing JSON documents in Amazon DynamoDB tables

by Manikandan Subramanian

DynamoDBMapper is a high-level abstraction layer in the AWS SDK for Java that allows you to transform Java objects into items in Amazon DynamoDB tables and vice versa. All you need to do is annotate your Java class in a few places, and the mapper takes care of getting the objects in and out of the database.

DynamoDBMapper has a new feature that allows you to save an object as a JSON document in a DynamoDB attribute. To do this, simply annotate the class with @DynamoDBDocument, and the mapper does the heavy lifting of converting the object into a JSON document and storing it in DynamoDB. DynamoDBMapper also takes care of loading the Java object from the JSON document when requested by the user.

Let’s say your application maintains the inventory of a car dealership in Amazon DynamoDB and uses DynamoDBMapper to save and retrieve data. One of the tables is Car, which holds information about a car and has name as its primary key. Here is what the Java class for the table looks like:

@DynamoDBTable(tableName = "Car")
public class Car {

    private String name;
    private int year;
    private String make;
    private List<String> colors;
    private Spec spec;

    @DynamoDBHashKey
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getYear() { return year; }
    public void setYear(int year) { this.year = year; }

    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }

    public List<String> getColors() { return colors; }
    public void setColors(List<String> colors) { this.colors = colors; }

    public Spec getSpec() { return spec; }
    public void setSpec(Spec spec) { this.spec = spec; }
}

@DynamoDBDocument
public class Spec {

    private String engine;
    private String wheelbase;
    private String length;
    private String width;
    private String height;

    public String getEngine() { return engine; }
    public void setEngine(String engine) { this.engine = engine; }

    public String getWheelbase() { return wheelbase; }
    public void setWheelbase(String wheelbase) { this.wheelbase = wheelbase; }

    public String getLength() { return length; }
    public void setLength(String length) { this.length = length; }

    public String getWidth() { return width; }
    public void setWidth(String width) { this.width = width; }

    public String getHeight() { return height; }
    public void setHeight(String height) { this.height = height; }

}

As you can see, the class Spec is modeled with a @DynamoDBDocument annotation. DynamoDBMapper converts an instance of Spec into a JSON document before storing it in DynamoDB. When stored in DynamoDB, an instance of the class Car will look like this:

{
   "name" : "IS 350",
   "year" : "2015",
   "make" : "Lexus",
   "colors" : ["black","white","grey"],
   "spec" : {
      "engine" : "V6",
      "wheelbase" : "110.2 in",
      "length" : "183.7 in",
      "width" : "71.3 in",
      "height" : "56.3 in"
   }
}
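
For completeness, here is a minimal sketch of saving and loading such an item with DynamoDBMapper; the client setup and attribute values below are illustrative assumptions, not part of the original example:

// Create a mapper backed by a DynamoDB client; the table name ("Car")
// is picked up from the @DynamoDBTable annotation.
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
DynamoDBMapper mapper = new DynamoDBMapper(client);

Car car = new Car();
car.setName("IS 350");
car.setYear(2015);
car.setMake("Lexus");
car.setColors(Arrays.asList("black", "white", "grey"));
Spec spec = new Spec();
spec.setEngine("V6");
car.setSpec(spec);

mapper.save(car);   // Spec is stored as a JSON document in the "spec" attribute
Car loaded = mapper.load(Car.class, "IS 350");   // Spec is rebuilt on load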

You can also apply other DynamoDBMapper annotations like @DynamoDBIgnore and @DynamoDBAttribute to the JSON document. For instance, you can annotate the height attribute of the Spec class with @DynamoDBIgnore:

@DynamoDBIgnore
public String getHeight() { return height; }
public void setHeight(String height) { this.height = height; }

The updated item in DynamoDB will look like this:

{
   "name" : "IS 350",
   "year" : "2015",
   "make" : "Lexus",
   "colors" : ["black","white","grey"],
   "spec" : {
      "engine" : "V6",
      "wheelbase" : "110.2 in",
      "length" : "183.7 in",
      "width" : "71.3 in"
   }
}

To learn more, check out our other blog posts and the developer guide.

Do you want to see new features in DynamoDBMapper? Let us know what you think!

Amazon S3 Client-side Crypto Meta Information

by Hanson Char

Are you curious about how the Amazon S3 Encryption Java client makes use of meta information to support client-side encryption? Have you ever wondered how you can write code in other languages that can encrypt/decrypt S3 objects in a format that is compatible with the AWS SDK for Java, or an AWS SDK for another language?

If so, look no further. We have just published an Appendix to provide a summary of the S3 client-side crypto meta information. Enjoy!

Create, Update, and Delete Global Secondary Indexes Using the Amazon DynamoDB Document API

by Manikandan Subramanian

Amazon DynamoDB recently announced a new feature, online indexing, which lets you create and modify global secondary indexes (GSIs) after table creation. You can also delete a global secondary index associated with a table at any time. This blog post shows how easy it is to use the Amazon DynamoDB Document API of the AWS SDK for Java to perform these operations.

Let’s say your application has a Customer table, with CustomerId as the primary key, that holds the personal details of a customer.

{
   "CustomerId" : 1000,
   "FirstName" : "John",
   "LastName" : "Myers",
   "Gender" : "M",
   "AddressLine1" : "156th Avenue",
   "City" : "Redmond",
   "State" : "WA",
   "Zip" : "98052"
}

You want to create a new global secondary index on the State attribute that helps you in search operations. You can do this with the following code:

// Initialize the DynamoDB object.
DynamoDB dynamo = new DynamoDB(Regions.US_EAST_1);

// Retrieve the reference to an existing Amazon DynamoDB table.
Table table = dynamo.getTable("Customer");

// Create a new Global Secondary Index.
Index index = table.createGSI(
                    new CreateGlobalSecondaryIndexAction()
                        .withIndexName("state-index")
                        .withKeySchema(
                          new KeySchemaElement("State", KeyType.HASH))
                        .withProvisionedThroughput(
                          new ProvisionedThroughput(25L, 25L))
                        .withProjection(
                          new Projection()
                             .withProjectionType(ProjectionType.ALL)),
                    new AttributeDefinition("State", 
                             ScalarAttributeType.S));

// Wait until the index is active.
index.waitForActive();

Amazon DynamoDB allows you to modify the provisioned throughput of a global secondary index at any time after index creation. You can do this with the following code:

// Update the provisioned throughput of the Global Secondary Index.
index.updateGSI(new ProvisionedThroughput(5L, 5L));

// Wait until the index is active.
index.waitForActive();

You can also delete a global secondary index using the following code:

// Delete the Global Secondary Index.
index.deleteGSI();

// Wait until the index is deleted.
index.waitForDelete();

Do you use the Amazon DynamoDB Document API to access Amazon DynamoDB? Let us know what you think!