

Amazon S3 Client-side Key Migration to AWS Key Management Service

by Hanson Char

In an earlier blog, Taming client-side key rotation with the Amazon S3 encryption client, we introduced the putInstructionFile API that makes Amazon S3 client-side key rotation easy. In the long run, however, wouldn’t it be nice if you could eliminate the administrative overhead of managing your client-side master keys, and instead have them fully managed and protected by a trusted, secure, and highly available key management service?

This is exactly where the recently launched AWS Key Management Service (KMS) can help. In this blog, we will provide an example of how you can leverage the putInstructionFile API to migrate from the use of an S3 client-side master key to the use of a KMS-managed customer master key (CMK).  In particular, this means you can re-encrypt your existing S3 data keys (aka envelope keys) with a different master key without touching the encrypted data, and ultimately retire and remove the need to manage your own client-side master keys.

Let’s look at some specific code.

Pre-Key Migration to AWS KMS

Suppose you have a pre-existing Amazon S3 client-side master key used for Amazon S3 client-side encryption. The code to encrypt and decrypt would typically look like this:


        // Encryption with a client-side master key
        SecretKey clientSideMasterKey = ...;
        SimpleMaterialProvider clientSideMaterialProvider = 
            new SimpleMaterialProvider().withLatest(
                    new EncryptionMaterials(clientSideMasterKey));
        AmazonS3EncryptionClient s3Old = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                clientSideMaterialProvider)
            .withRegion(Region.getRegion(Regions.US_EAST_1));
        
        // Encrypts and saves the data under the name "sensitive_data.txt" to
        // S3. Under the hood, the one-time randomly generated data key is 
        // encrypted by the client-side master key.
        byte[] plaintext = "Demo S3 Client-side Key Migration to AWS KMS!"
                .getBytes(Charset.forName("UTF-8"));
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(plaintext.length);
        String bucket = ...;
        PutObjectResult putResult = s3Old.putObject(bucket, "sensitive_data.txt",
                new ByteArrayInputStream(plaintext), metadata);
        System.out.println(putResult);

        // Retrieves and decrypts the S3 object
        S3Object s3object = s3Old.getObject(bucket, "sensitive_data.txt");
        System.out.println(IOUtils.toString(s3object.getObjectContent()));

In this example, the encrypted one-time data key is stored in the metadata of the S3 object, and the metadata of an S3 object is immutable.
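
If you are curious what that metadata looks like, you can list it with a plain (non-encrypting) S3 client. The following is a minimal sketch, not part of the original example; the specific entry names mentioned in the comment are assumptions that vary with the metadata format version your client writes.

        // A sketch: list the user metadata that the encryption client stored
        // alongside the object. The exact entry names (for example "x-amz-key"
        // and "x-amz-matdesc") depend on the metadata format version, so this
        // simply prints whatever is present.
        AmazonS3Client plainS3 = new AmazonS3Client(new ProfileCredentialsProvider())
                .withRegion(Region.getRegion(Regions.US_EAST_1));
        ObjectMetadata storedMetadata =
                plainS3.getObjectMetadata(bucket, "sensitive_data.txt");
        System.out.println(storedMetadata.getUserMetadata());
        plainS3.shutdown();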

Migrating to AWS KMS

You can re-encrypt such a data key under a KMS-managed CMK via the putInstructionFile API, like so:


        // Configure to use a migrating material provider that uses your
        // KMS-managed CMK for encrypting all new S3 objects, and provide
        // access to your old client-side master key
        String customerMasterKeyId = ...;
        SimpleMaterialProvider migratingMaterialProvider = 
            new SimpleMaterialProvider().withLatest(
                new KMSEncryptionMaterials(customerMasterKeyId))
                .addMaterial(new EncryptionMaterials(clientSideMasterKey));

        // The CryptoConfiguration for the encryption client (for example,
        // the same configuration used when the objects were first encrypted)
        CryptoConfiguration config = ...;

        AmazonS3EncryptionClient s3Migrate = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                migratingMaterialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

        // Re-encrypt the existing data-key from your client-side master key
        // to your KMS-managed CMK
        PutObjectResult result = s3Migrate.putInstructionFile(
            new PutInstructionFileRequest(
                new S3ObjectId(bucket, "sensitive_data.txt"),
                new KMSEncryptionMaterials(customerMasterKeyId), 
                InstructionFileId.DEFAULT_INSTRUCTION_FILE_SUFFIX));
        System.out.println(result);
        // Data key re-encrypted with your KMS-managed CMK!

Post-Key Migration to AWS KMS

Once the data-key re-encryption is complete for all existing S3 objects (created with the client-side master key), you can then begin to exclusively use the KMS-managed CMK without the client-side master key:


        // Data-key re-encryption is complete. No more client-side master key.
        SimpleMaterialProvider kmsMaterialProvider = 
                new SimpleMaterialProvider().withLatest(
                    new KMSEncryptionMaterials(customerMasterKeyId));
        AmazonS3EncryptionClient s3KmsOnly = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                kmsMaterialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

        // Retrieves and decrypts the S3 object
        EncryptedGetObjectRequest getReq = 
            new EncryptedGetObjectRequest(bucket, "sensitive_data.txt")
                .withInstructionFileSuffix(
                    InstructionFileId.DEFAULT_INSTRUCTION_FILE_SUFFIX);
        s3object = s3KmsOnly.getObject(getReq);
        System.out.println(IOUtils.toString(s3object.getObjectContent()));

Why is an EncryptedGetObjectRequest needed in the example above? It ensures that retrieval uses the newly encrypted data key (protected by KMS) in the instruction file, and not the one in the object metadata (encrypted under your old client-side master key). Of course, had you configured CryptoStorageMode.InstructionFile in the first place, this explicit override during retrieval would not be necessary. You can find an example of using the instruction file storage mode in the earlier blog, Taming client-side key rotation with the Amazon S3 encryption client.

That’s all for now. We hope you find this useful. For more S3 encryption options using AWS KMS, see Amazon S3 Encryption with AWS Key Management Service.

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 3)

In this part of the series, we will show you how easy it is to run deployment commands on your EC2 fleet with the help of the AWS CodeDeploy plugin for Eclipse.

Create AppSpec Template

  • First, let’s create a shell script that executes the command we need to run on our instances:

/home/hanshuo/stop-httpd-server-appspec/template/stop-httpd.sh

#!/bin/bash

service ##HTTPD_SERVICE_NAME## stop

To make it a little fancier, instead of hardcoding httpd as the service name, we use a placeholder, ##HTTPD_SERVICE_NAME##. Later, you will learn how this helps you create a configurable deployment task in Eclipse.

  • Next, inside the same directory, let’s create a simple AppSpec file that specifies our shell script as the command for the ApplicationStart lifecycle event.

/home/hanshuo/stop-httpd-server-appspec/template/appspec.yml

version: 0.0
os: linux
hooks:
  ApplicationStart:
    - location: stop-httpd.sh
      timeout: 300
      runas: root

This AppSpec file asks CodeDeploy to run stop-httpd.sh as the root user during the ApplicationStart phase of the deployment. Since this is the only phase mentioned, it basically tells the service to run this single script as the whole deployment process – that’s all we need! You can find more information about the AppSpec file in the AWS CodeDeploy Developer Guide.

  • Now we have created our template, which consists of all the necessary AppSpec and command script files. The final step is to create a metadata file for it, in a specific JSON format understood by the Eclipse plugin.

/home/hanshuo/stop-httpd-server-appspec/template.md

{
  "metadataVersion" : "1.0",
  "templateName" : "Stop Apache HTTP Server",
  "templateDescription" : "Stop Apache HTTP Server",
  "templateBasedir" : "/home/hanshuo/stop-httpd-server-appspec/template",
  "isCustomTemplate" : true,
  "warFileExportLocationWithinDeploymentArchive" : "/application.war",
  "parameters" : [
    {
      "name" : "Apache HTTP service name",
      "type" : "STRING",
      "defaultValueAsString" : "httpd",
      "substitutionAnchorText" : "##HTTPD_SERVICE_NAME##",
      "constraints" : {
        "validationRegex" : "[\\S]+"
      }
    }
  ]
}
  • templateName, templateDescription – Specifies the name and description for this template
  • templateBasedir –  Specifies the base directory where your AppSpec file and the command scripts are located
  • isCustomTemplate – True if it is a custom template created by the user; this tells the plugin to treat templateBasedir as an absolute path.
  • warFileExportLocationWithinDeploymentArchive – Since this deployment task doesn’t actually consume any WAR file, we can specify any value for this attribute.
  • parameters – Specifies a list of all the configurable parameters in our template. In this case, we have only one parameter ##HTTPD_SERVICE_NAME##
    • name – The user-friendly name for the parameter
    • type – Either STRING or INTEGER
    • defaultValueAsString – The default value for this parameter
    • substitutionAnchorText – The place-holder text that represents this parameter in the template files; when copying the template files, the plugin will replace these place holders with the user’s actual input value
    • constraints – The constraints that will be used to validate user input; supported constraints are validationRegex (for STRING), minValue and maxValue (for INTEGER).

Ok, now that we have everything ready, let’s go back to Eclipse and import the AppSpec template we just created.

In the last page of the deployment wizard, click the Import Template button at the top-right corner of the page:

Then find the location of our template metadata file, and click Import.

The plugin will parse the metadata and create a simple UI view for the user input of the template parameter ##HTTPD_SERVICE_NAME##. Let’s just use the default value httpd, and click Finish.

After the deployment completes, all the httpd services running on your EC2 instances will be stopped. If you are interested in how your commands were executed on your hosts, or if you need to debug your deployment, the log output of your scripts can be found at /opt/codedeploy-agent/deployment-root/{deployment-group-ID}/{deployment-ID}/logs/scripts.log

In this example, you are expected to get the following log output. You can see that the httpd service was successfully stopped during the ApplicationStart event:

	2015-01-07 00:28:04 LifecycleEvent - ApplicationStart
	2015-01-07 00:28:04 Script - stop-httpd.sh
	2015-01-07 00:28:04 [stdout]Stopping httpd: [  OK  ]

In the future, if you ever want to repeat this operation on your EC2 instances, just kick off another deployment in Eclipse using the same Stop Apache HTTP Server template, and you are done!

You can use the AppSpec template system to create more complicated deployment tasks. For example, you can define your own template that deploys your Java web app to other servlet containers such as Jetty and JBoss. If you are interested in the Tomcat 7 running on Linux template we used in the walkthrough, you can find the source code in our GitHub repo.

Feel free to customize the source for your specific needs, and keep in mind that we are always open to pull requests if you want to contribute your own templates that might be useful for other Java developers.

Conclusion

The AWS CodeDeploy plugin for Eclipse allows you to easily initiate a deployment directly from your source development environment. It eliminates the need to repeat the manual operations of building, packaging, and preparing the revision. It also allows you to quickly set up an AppSpec template that represents a repeatable and configurable deployment task.

Give it a try and see whether it can improve how you deploy your Java web project to your EC2 instances. If you have any feedback or feature requests, tell us about them in the comments. We’d love to hear them!

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 2)

In this part of the blog series, we will show you how to deploy a Java web project to your EC2 instances using the AWS CodeDeploy plugin.

Prerequisites

If you want to follow the walkthrough, you will need to create a CodeDeploy deployment group to begin with. The easiest way to do so is to follow the first-run walkthrough in the AWS CodeDeploy Console.

In this example, we have created a CodeDeploy application called DemoApplication and a deployment group called DemoFleet, which includes three EC2 instances running the Amazon Linux AMI.

Deploy an AWS Java Web Project to CodeDeploy

First, let’s open Eclipse and create a new AWS Java Web project in the workspace (File -> New -> Project -> AWS Java Web Project). Select the Basic Java Web Application option to start with. Note that this step is the same as how you would start a project for AWS Elastic Beanstalk.

In Project Explorer, right-click on the new project, and select Amazon Web Services -> Deploy to AWS CodeDeploy….

In the first page of the deployment wizard, you will be asked to select the target CodeDeploy application and deployment group. In this example, we select “DemoApplication” and “DemoFleet” which we just created in the console.

In the next page, you can specify the following options for this deployment.

  • CodeDeploy deployment config – Specifies how many instances the deployment runs on in parallel. In this example, we select “CodeDeployDefault.OneAtATime”, which is the safest approach for reducing application downtime.
  • Ignore ApplicationStop step failures – Indicates whether or not the deployment should stop if it encounters an error when executing the ApplicationStop lifecycle event command.
  • S3 bucket name – Specifies the S3 bucket where your revision will be uploaded.

Click Next, and you will be asked to select the AppSpec template and parameter values for this deployment. At this moment, you should see only one predefined template, Tomcat 7 running on Linux. This AppSpec template includes the lifecycle event commands that spin up a Tomcat 7 server on your EC2 instances and deploy your application to it. The template accepts parameters including the context path of the application and the port number that the Tomcat server will listen on.

We will explain later how the AppSpec template is defined and how you can add your custom templates. Here we select deploying to server root and using the default HTTP port 80. Then just click Finish to initiate the deployment.

After the deployment starts, you will be prompted by a dialog that tracks the progress of all the deployments on individual EC2 instances.

You can double-click on any of the instances to open the detailed view of each lifecycle event. If the deployment fails during any of the events, you can click View Diagnostics to see the error code and the log output from your command script.

After the deployment completes, your application will be available at http://{ec2-public-endpoint}

To view the full deployment history of a deployment group, we can visit the deployment group detail page via AWS Explorer View -> AWS CodeDeploy -> DemoApplication -> Double-click DemoFleet.

For some of you who have been following the walkthrough, it’s possible that you might not see the sample JSP page when accessing the EC2 endpoint; instead it shows the “Amazon Linux AMI Test Page”. This happens because the Amazon Linux AMI pre-bundles a running Apache HTTP server that already occupies port 80, which our Tomcat server also attempts to bind to.

To solve this problem, you will need to run `sudo service httpd stop` on every EC2 instance before the Java web app is deployed. Without the help of CodeDeploy, you would need to ssh into each of the instances and manually run the command, which is a tedious and time-consuming process. So how can we leverage the CodeDeploy service to ease this process? What would be even better is to have the ability to save this specific deployment task into some configurable format, and make it easily repeatable in the future.

In the next part of our blog series, we will take a look at how we can accomplish this by using the AWS CodeDeploy plugin for Eclipse.

AWS Toolkit for Eclipse Integration with AWS CodeDeploy (Part 1)

We are excited to announce that the AWS Toolkit for Eclipse now includes integration with AWS CodeDeploy and AWS OpsWorks. In addition to the support of AWS Elastic Beanstalk deployment, these two new plugins provide more options for Java developers to deploy their web application to AWS directly from their Eclipse development environment.

In this blog post series, we will take a look at the CodeDeploy plugin and walk you through its features to show you how it can improve your deployment automation.

How to Install?

The AWS CodeDeploy and OpsWorks plugins are available at the official AWS Eclipse Toolkit update site (http://aws.amazon.com/eclipse). Just follow the same steps you took when you installed and updated the AWS plugins, and you will see the two new additions in the plugin list of our update site.

For more information about the installation and basic usage of the AWS Toolkit for Eclipse, go to our official documentation site.

AWS CodeDeploy

If you haven’t heard of it yet, AWS CodeDeploy is a new AWS service that was just launched last year during re:Invent 2014. The service allows you to fully automate the process of deploying your code to a fleet of EC2 instances. It eliminates the need for manual operations by providing a centralized solution that allows you to initiate, control and monitor your deployments.

If you want to learn more about CodeDeploy, here are some useful links:

One of the major design goals of AWS CodeDeploy is to be platform and language agnostic. With a command-based install model, CodeDeploy allows you to specify the commands you want to run during each deployment phase (a.k.a. lifecycle event), and these commands can be written in any language you choose.

The language-agnostic characteristic of CodeDeploy brings maximum flexibility and makes it usable for all kinds of deployment purposes. But meanwhile, because of its generality, the service may not natively support some common use cases that are specific to a particular development language. For example, when working with Java web applications, we would likely not want to deploy the Java source code directly to our hosts – the deployment process always involves some necessary building and packaging phases before publishing the content to the hosts. This is in contrast to many scripting languages where the source code itself can be used directly as the deployment artifact. In its deployment workflow model, CodeDeploy also requires the developer to prepare a revision every time they want to initiate a deployment. This could be either in the form of a snapshot of the GitHub repo or an archive bundle uploaded to Amazon S3. This revision should include an Application Specification (AppSpec) file, where the developer’s custom deployment commands are specified.

To summarize, deploying a Java web application via CodeDeploy would require non-trivial manual operations in the development environment before the deployment actually happens.

Ideally, we would want a tool that is able to:

  • automate the building, packaging, and revision preparation phases for a Java web app
  • support creating configurable and repeatable deployment tasks

In the next part of our blog series, we will walk through a simple use case to demonstrate how the AWS CodeDeploy Eclipse plugin solves these problems and makes the deployment as easy as a few mouse clicks. Stay tuned!

AWS Resource APIs for SNS and SQS

by David Murray

Last week we released version 0.0.3 of the AWS Resource APIs for Java, adding support for the Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS). SNS and SQS are similar services that together provide a fully-managed cloud messaging platform. These services expose two powerful primitives — Topics and Queues — which let you decouple message producers from message consumers. The Resource APIs for SNS and SQS make it easier than ever to use these two services. Enough chit-chat, let’s see some code!

Amazon SNS — Topics

SNS is used for multicast messaging. Consumers subscribe to a "Topic," and messages published to the Topic are pushed to all current subscribers. The resource API for SNS exposes a resource object representing a Topic, giving you convenient methods for managing subscriptions to the topic and publishing messages to the topic. It also exposes resource objects for PlatformApplications and PlatformEndpoints, which are used to integrate with various mobile push services. This example demonstrates creating a new Topic, adding a couple of subscribers, and publishing a message to the topic.

SNS sns = ServiceBuilder.forService(SNS.class).build();

// Create a new topic.
Topic topic = sns.createTopic("MyTestTopic");
try {

    // Subscribe an email address.
    topic.subscribe("david@example.com", "email");

    // Subscribe an HTTPS endpoint.
    topic.subscribe("https://api.example.com/notify?user=david", "https");

    // Subscribe all of the endpoints from a previously-created
    // mobile platform application.
    PlatformApplication myMobileApp =
            sns.getPlatformApplication("arn:aws:...");
    for (PlatformEndpoint endpoint : myMobileApp.getEndpoints()) {
        topic.subscribe(endpoint.getArn(), "application");
    }

    // Publish a message to all of the subscribers.
    topic.publish("Hello from Amazon SNS!");

} finally {
    // Clean up after ourselves.
    topic.delete();
}

Amazon SQS — Queues

SQS is used for reliable anycast messaging. Producers write messages to a "Queue," and consumers pull messages from the queue; each message is delivered to a single consumer[1]. The resource API for SQS exposes a Queue resource object, giving you convenient methods for sending messages to a queue and receiving messages from the queue. This example demonstrates creating a queue, sending a couple of messages to it, and then reading those messages back out.

SQS sqs = ServiceBuilder.forService(SQS.class).build();

// Create a new queue.
Queue queue = sqs.createQueue("MyTestQueue");
try {

    // Configure the queue for more efficient long-polling.
    queue.setAttributes(Collections.singletonMap(
            "ReceiveMessageWaitTimeSeconds",
            "20"));

    // Send it ten messages.
    for (int i = 0; i < 10; ++i) {
        queue.sendMessage("Hello from Amazon SQS: " + i);
    }

    while (true) {
        // Pull a batch of messages from the queue for processing.
        List<Message> messages = queue.receiveMessages();
        if (messages.isEmpty()) {
            // Long polling returned nothing more to process, so stop.
            break;
        }
        for (Message message : messages) {
            System.out.println(message.getBody());

            // Delete the message from the queue to acknowledge that
            // we've successfully processed it.
            message.delete();
        }
    }

} finally {
    // Clean up after ourselves.
    queue.delete();
}

Conclusion

Using SNS or SQS, or interested in getting started with them? Give these new resource APIs a try and let us know what you think, either here or via GitHub issues!

[1] To be precise, it’s delivered to at least one consumer; if the first consumer who reads it does not delete the message from the queue in time (whether due to failure or just being slow), it’ll eventually be delivered to another consumer.
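
How long "in time" is is governed by the queue's VisibilityTimeout attribute. Here is a minimal sketch of raising it to two minutes, using the same resource-API Queue object as in the example above (the attribute name is standard SQS; the 120-second value is just an illustration):

// Give a consumer up to two minutes to process and delete a message
// before SQS makes it visible to other consumers again.
queue.setAttributes(Collections.singletonMap(
        "VisibilityTimeout",
        "120"));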

Taming client-side key rotation with the Amazon S3 encryption client

by Hanson Char

As mentioned in an earlier blog, encrypting data using the Amazon S3 encryption client is one way you can provide an additional layer of protection for sensitive information you store in Amazon S3. Under the hood, the Amazon S3 encryption client randomly generates a one-time data encryption key per S3 object, encrypts the key using your client-side master key, and stores the encrypted data key as metadata in S3 alongside the encrypted data. In particular, one interesting property of such client-side encryption is that the client-side master key is always present only locally on the client side, is never sent to AWS, and therefore enables a high level of security control by our customers.

Every now and then, however, an interesting question arises: How can a user of the Amazon S3 encryption client perform key rotation on the client-side master key? Indeed, for security-conscious customers, rotating a client-side master key from one version to the next can sometimes be a desirable feature, if not a strict security requirement. On the other hand, due to the immutability of the S3 metadata, which is where the encrypted data key is stored by default, it may seem necessary to copy the entire S3 object just to allow re-encryption of the data key. And for large S3 objects, that seems rather inefficient and expensive!

In this blog, we will introduce an existing feature of the Amazon S3 encryption client that makes client-side master key rotation feasible in practice. The feature is related to the use of CryptoStorageMode.InstructionFile. In a nutshell, if you explicitly select InstructionFile as the mode of storage for the meta information of an encrypted S3 object, you would then be able to perform key rotation via the instruction file efficiently without ever touching the encrypted S3 object.

The key idea is that, by using an instruction file, you can perform efficient key rotation from one client-side master key to a different client-side master key. The only requirement is that each of the client-side master keys must have a 1:1 mapping with a unique set of identifying information. In the Amazon S3 encryption client, this unique set of identifying information for the client-side master key is called the "material description."

A code sample could be worth a thousand words. :)  To begin with, let’s construct an instance of the Amazon S3 encryption client with the following configuration:

  • An encryption material provider with a v1.0 client-side master key for S3 encryption
  • CryptoStorageMode.InstructionFile
  • A setting not to ignore a missing instruction file for an encrypted S3 object. (More on this below.)

// Configures a material provider for a v1.0 client-side master key
SecretKey v1ClientSideMasterKey = ...;
SimpleMaterialProvider origMaterialProvider = new SimpleMaterialProvider().withLatest(
    new EncryptionMaterials(v1ClientSideMasterKey).addDescription("version", "v1.0"));

// Configures to use InstructionFile storage mode
CryptoConfiguration config = new CryptoConfiguration()
            .withStorageMode(CryptoStorageMode.InstructionFile)
            .withIgnoreMissingInstructionFile(false);

final AmazonS3EncryptionClient s3v1 = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                origMaterialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

Now, we are ready to use this encryption client to encrypt and persist objects to Amazon S3. With the above configuration, instead of persisting the encrypted data key in the metadata of an encrypted S3 object, the encryption client persists the encrypted data key into a separate S3 object called an "instruction file." Under the hood, the instruction file defaults to use the same name as that of the original S3 object, but with an additional suffix of ".instruction".

So why do we need to explicitly set IgnoreMissingInstructionFile to false? This has to do with the eventual consistency model of Amazon S3. In this model, there is a small probability of a momentary delay in the instruction file being made available for reading after it has been persisted to S3. For such edge cases, we’d rather fail fast than return the raw ciphertext without decryption (which is the default behavior for legacy and backward compatibility reasons). The eventual consistency model of Amazon S3 also means there are some edge cases that you’ll want to watch out for when updating the S3 data object, but we’ll cover that in one of our upcoming posts.

To continue,

// Encrypts and saves the data under the name "sensitive_data.txt"
// to S3. Under the hood, the v1.0 client-side master key is used
// to encrypt the randomly generated data key which gets automatically
// saved in a separate "instruction file".
byte[] plaintext = "Hello S3 Client-side Master Key Rotation!".getBytes(Charset.forName("UTF-8"));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);
String bucket = ...;
PutObjectResult putResult = s3v1.putObject(bucket, "sensitive_data.txt", new ByteArrayInputStream(plaintext), metadata);
System.out.println(putResult);

// Retrieves and decrypts the data.
S3Object s3object = s3v1.getObject(bucket, "sensitive_data.txt");
System.out.println("Encrypt/Decrypt using the v1.0 client-side master key: "
                + IOUtils.toString(s3object.getObjectContent()));
s3v1.shutdown();
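
As an aside, the instruction file written by the putObject call above is itself just a small S3 object stored next to your data. Here is a minimal sketch, not part of the original walkthrough, that fetches it with a plain (non-encrypting) S3 client so you can see its contents; it assumes the default ".instruction" suffix mentioned earlier.

// A sketch: fetch the instruction file directly. It holds the encrypted
// data key and related meta information (as a small JSON document).
AmazonS3Client plainClient = new AmazonS3Client(new ProfileCredentialsProvider())
        .withRegion(Region.getRegion(Regions.US_EAST_1));
S3Object instructionFile =
        plainClient.getObject(bucket, "sensitive_data.txt.instruction");
System.out.println(IOUtils.toString(instructionFile.getObjectContent()));
plainClient.shutdown();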

Now, to update the client-side master key from v1.0 to v2.0, we simply specify a different EncryptionMaterials in a PutInstructionFileRequest. In this example, the encryption client would proceed to do the following:

  1. Decrypt the encrypted data-key using the original v1.0 client-side master key.
  2. Re-encrypt the data-key using a v2.0 client-side master key specified by the EncryptionMaterials in the PutInstructionFileRequest.
  3. Re-persist the instruction file that contains the newly re-encrypted data key along with other meta information to S3.

// Time to rotate to v2.0 client-side master key, but we still need access
// to the v1.0 client-side master key until the key rotation is complete.
SecretKey v2ClientSideMasterKey = ...;
SimpleMaterialProvider materialProvider = 
            new SimpleMaterialProvider()
                .withLatest(new EncryptionMaterials(v2ClientSideMasterKey)
                                .addDescription("version", "v2.0"))
                .addMaterial(new EncryptionMaterials(v1ClientSideMasterKey)
                                .addDescription("version", "v1.0"));

final AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                materialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

// Decrypts the data-key using v1.0 client-side master key
// and re-encrypts the data-key using v2.0 client-side master key,
// overwriting the "instruction file"
PutObjectResult result = s3.putInstructionFile(new PutInstructionFileRequest(
            new S3ObjectId(bucket, "sensitive_data.txt"),
            materialProvider.getEncryptionMaterials(), 
            InstructionFileId.DEFAULT_INSTRUCTION_FILE_SUFFIX));
System.out.println(result);

// Retrieves and decrypts the S3 object using v2.0 client-side master key
s3object = s3.getObject(bucket, "sensitive_data.txt");
System.out.println("Client-side master key rotated from v1.0 to v2.0: "
                + IOUtils.toString(s3object.getObjectContent()));
s3.shutdown();
// Key rotation success!

Once the key rotation is finished, you can use the v2.0 client-side master key exclusively, without the v1.0 client-side master key. For example:

// Once the key rotation is complete, you need only the v2.0 client-side
// master key. Note the absence of the v1.0 client-side master key.
SimpleMaterialProvider v2materialProvider =
            new SimpleMaterialProvider()
                .withLatest(new EncryptionMaterials(v2ClientSideMasterKey)
                                .addDescription("version", "v2.0"));
final AmazonS3EncryptionClient s3v2 = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                v2materialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

// Retrieves and decrypts the S3 object using v2.0 client-side master key
s3object = s3v2.getObject(bucket, "sensitive_data.txt");
System.out.println("Decrypt using v2.0 client-side master key: "
                + IOUtils.toString(s3object.getObjectContent()));
s3v2.shutdown();

In conclusion, we have demonstrated how you can efficiently rotate your client-side master key for Amazon S3 client-side encryption without the need to modify the existing data keys, or mutate the ciphertext of your existing S3 data objects.

We hope you find this useful.  For more information about S3 encryption, see Amazon S3 client-side encryption and Amazon S3 Encryption with AWS Key Management Service.

AWS re:Invent 2014 Recap

by David Murray

I’m almost done getting readjusted to regular life after AWS re:Invent! It was really awesome to meet so many developers building cool stuff on AWS, and to talk about how we can make your lives easier with great SDKs and tools.

If you didn’t make it to re:Invent this year, or if you opted for a session about one of the exciting new services we announced instead of my session about the AWS SDK for Java, the video and slides of my talk are available online. If you’re interested in going deeper, here are some links to more info on all of the topics I covered:

As always, check out the AWS SDK for Java and all the AWS Labs projects on GitHub, and help us make it even easier for you to build really cool applications on AWS. And don’t forget to follow along on Twitter for all the latest news and information!

Amazon S3 Encryption with AWS Key Management Service

by Hanson Char

With version 1.9.5 of the AWS SDK for Java, we are excited to announce full support for S3 object encryption using AWS Key Management Service (KMS). Why KMS, you may ask? In a nutshell, AWS Key Management Service provides many security and administrative benefits, including centralized key management, better protection for your master keys, and simpler code!

In this blog, we will provide two quick examples of how you can use AWS KMS for client-side encryption via the Amazon S3 Encryption Client, and compare it with using AWS KMS for server-side encryption via the Amazon S3 Client.

The first example demonstrates how you can make use of KMS for client-side encryption in the Amazon S3 Encryption Client. As you can see, it can be as simple as configuring a KMSEncryptionMaterialsProvider with a KMS Customer Master Key ID (generated beforehand, for example, via the AWS Management Console). Every object put to Amazon S3 would then result in a data key generated by AWS KMS for use in client-side encryption before sending the data (along with other metadata, such as the KMS "wrapped" data key) to S3 for storage. During retrieval, KMS would automatically "unwrap" the encrypted data key, and the Amazon S3 Encryption Client would then use it to decrypt the ciphertext locally on the client side.

S3 client-side encryption using AWS KMS

String customerMasterKeyId = ...;
AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
            new ProfileCredentialsProvider(),
            new KMSEncryptionMaterialsProvider(customerMasterKeyId))
        .withRegion(Region.getRegion(Regions.US_EAST_1));

String bucket = ...;
byte[] plaintext = "Hello S3/KMS Client-side Encryption!"
            .getBytes(Charset.forName("UTF-8"));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);

PutObjectResult putResult = s3.putObject(bucket, "hello_s3_kms.txt",
        new ByteArrayInputStream(plaintext), metadata);
System.out.println(putResult);

S3Object s3object = s3.getObject(bucket, "hello_s3_kms.txt");
System.out.println(IOUtils.toString(s3object.getObjectContent()));
s3.shutdown();

The second example demonstrates how you can delegate the crypto operations entirely to the Amazon S3 server side, while still using fully managed data keys generated by AWS KMS (instead of having the data key generated locally on the client side). This has the obvious benefit of offloading the computationally expensive operations to the server side, potentially improving client-side performance. Similar to the first example, all you need to do is specify your KMS Customer Master Key ID (generated beforehand, for example, via the AWS Management Console) in the S3 put request.

S3 server-side encryption using AWS KMS

String customerMasterKeyId = ...;
AmazonS3Client s3 = new AmazonS3Client(new ProfileCredentialsProvider())
        .withRegion(Region.getRegion(Regions.US_EAST_1));

String bucket = ...;
byte[] plaintext = "Hello S3/KMS SSE Encryption!"
            .getBytes(Charset.forName("UTF-8"));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(plaintext.length);

PutObjectRequest req = new PutObjectRequest(bucket, "hello_s3_sse_kms.txt",
        new ByteArrayInputStream(plaintext), metadata)
        .withSSEAwsKeyManagementParams(
            new SSEAwsKeyManagementParams(customerMasterKeyId));
PutObjectResult putResult = s3.putObject(req);
System.out.println(putResult);

S3Object s3object = s3.getObject(bucket, "hello_s3_sse_kms.txt");
System.out.println(IOUtils.toString(s3object.getObjectContent()));
s3.shutdown();
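
If you want to confirm that S3 really did apply KMS-managed server-side encryption to the object, you can inspect its metadata. Here is a minimal sketch using a fresh client; the "aws:kms" value mentioned in the comment is the algorithm identifier we would expect to see, but treat it as an assumption and verify against your SDK version.

// Check the server-side encryption algorithm recorded on the object;
// for KMS-managed SSE we'd expect getSSEAlgorithm() to report "aws:kms".
AmazonS3Client s3Check = new AmazonS3Client(new ProfileCredentialsProvider())
        .withRegion(Region.getRegion(Regions.US_EAST_1));
ObjectMetadata storedMetadata =
        s3Check.getObjectMetadata(bucket, "hello_s3_sse_kms.txt");
System.out.println("SSE algorithm: " + storedMetadata.getSSEAlgorithm());
s3Check.shutdown();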

For more information about AWS KMS, check out the AWS Key Management Service whitepaper, or the blog New AWS Key Management Service (KMS). Don’t forget to download the latest AWS SDK for Java and give it a spin!

Announcing the AWS CloudTrail Processing Library

by Jason Fulghum

We’re excited to announce a new extension to the AWS SDK for Java: The AWS CloudTrail Processing Library.

AWS CloudTrail delivers log files containing AWS API activity to a customer’s Amazon S3 bucket. The AWS CloudTrail Processing Library makes it easy to build applications that read and process those CloudTrail logs and incorporate their own business logic. For example, developers can filter events by event source or event type, or persist events into a database such as Amazon RDS or Amazon Redshift or any third-party data store.

The AWS CloudTrail Processing Library, or CPL, eliminates the need to write code that polls Amazon SQS queues, reads and parses queue messages, downloads CloudTrail log files, and parses and serializes events in the log file. Using CPL, developers can read and process CloudTrail log files in as few as 10 lines of code. CPL handles transient and enduring failures related to network timeouts and inaccessible resources in a resilient and fault tolerant manner. CPL is built to scale easily and can process an unlimited number of log files in parallel. If needed, any number of hosts can each run CPL, processing the same S3 bucket and same SQS queue in parallel.

Getting started with CPL is easy. After configuring your AWS credentials and SQS queue, you simply implement a callback method to be called for every event, and start the AWSCloudTrailProcessingExecutor.

// This file contains your AWS security credentials and the name
// of an Amazon SQS queue to poll for updates
String myPropertiesFileName = "myCPL.properties";

// The ARN of an Amazon SNS topic to notify when an interesting event is seen
String myTopicArn = ...;
final AmazonSNSClient sns = new AmazonSNSClient();

// An EventsProcessor is what processes each event from AWS CloudTrail
EventsProcessor eventsProcessor = new EventsProcessor() {
    public void process(List<CloudTrailEvent> events) {
        for (CloudTrailEvent event : events) {
            CloudTrailEventData data = event.getEventData();
            if (data.getEventSource().equals("ec2.amazonaws.com") &&
                data.getEventName().equals("ModifyVpcAttribute")) {
                System.out.println("Processing event: " + data.getRequestId());
                sns.publish(myTopicArn, "{ " + 
                    "'requestId'= '" + data.getRequestId() + "'," + 
                    "'request'  = '" + data.getRequestParameters() + "'," + 
                    "'response' = '" + data.getResponseElements() + "'," +
                    "'source'   = '" + data.getEventSource() + "'," +
                    "'eventName'= '" + data.getEventName() + "'" +
                    "}");
            }
        }
    }
};

// Create AWSCloudTrailProcessingExecutor and start it
final AWSCloudTrailProcessingExecutor executor = 
            new AWSCloudTrailProcessingExecutor
                .Builder(eventsProcessor, myPropertiesFileName)
                .build();
executor.start();

The preceding example creates an implementation of EventsProcessor that processes each of our events. If the event was from a user modifying an Amazon EC2 VPC through the ModifyVpcAttribute operation, then this code publishes a message to an Amazon SNS topic, so that an operator can review this potentially large change to the account’s VPC configuration.

This example shows how easy it is to use the CPL to process your AWS CloudTrail events. You’ve seen how to create your own implementation of EventsProcessor to specify your own custom logic for acting on CloudTrail events. In addition to EventsProcessor, you can also control the behavior of AWSCloudTrailProcessingExecutor with these interfaces:

  • EventFilter allows you to easily filter specific events that you want to process. For example, if you only want to process CloudTrail events in a specific region, or from a specific service, you can use an EventFilter to easily select those events (a simple filtering sketch follows this list).
  • SourceFilters allow you to perform filtering using data specific to the source of the events. In this case, the SQSBasedSource contains additional information you can use for filtering, such as how many times a message has been delivered.
  • ProgressReporters allow you to report back progress through your application so you can tell your users how far along in the processing your application is.
  • ExceptionHandlers allow you to add custom error handling for any errors encountered during event processing.
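
If you just want a feel for what event filtering looks like before wiring up a dedicated EventFilter, you can approximate it inside the EventsProcessor callback itself. Here is a minimal sketch, using only the accessor methods already shown in the example above, that ignores everything except Amazon EC2 events:

// A sketch of coarse filtering inside the processor callback: skip any
// event whose source is not Amazon EC2, and just log the rest.
EventsProcessor ec2OnlyProcessor = new EventsProcessor() {
    public void process(List<CloudTrailEvent> events) {
        for (CloudTrailEvent event : events) {
            CloudTrailEventData data = event.getEventData();
            if (!"ec2.amazonaws.com".equals(data.getEventSource())) {
                continue; // not an EC2 event; ignore it
            }
            System.out.println("EC2 event: " + data.getEventName()
                    + " (request " + data.getRequestId() + ")");
        }
    }
};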

You can find the full source for the AWS CloudTrail Processing Library in the aws-cloudtrail-processing-library project on GitHub, and you can easily pick up the CPL as a dependency in your Maven-based projects:

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-cloudtrail-processing-library</artifactId>
	<version>1.0.0</version>
</dependency>

For more information, go to the CloudTrail FAQ and documentation.

How are you using AWS CloudTrail to track your AWS usage?

AWS SDK for Java Maven Modules

by Jason Fulghum

We’re excited to announce that with version 1.9.0 of the AWS SDK for Java, we’ve switched to a modular Maven project structure. You can now selectively pick up which components of the SDK you want. For example, if your project uses only Amazon S3 and Amazon DynamoDB, you can configure the dependencies in your project’s pom.xml to pick up only those components, like this:

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
        <version>1.9.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-dynamodb</artifactId>
        <version>1.9.0</version>
    </dependency>
</dependencies>

Notice that we’re picking up the same version for each of the two components we declare dependencies on. The individual components are versioned together, so you can easily pick compatible versions of different components that you know work together and have been tested together.

Just like before, if you want to pick up the entire SDK, with support for all AWS infrastructure services, you can configure your project’s pom.xml like this:

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk</artifactId>
        <version>1.9.0</version>
    </dependency>
</dependencies>

You can also still download the complete AWS SDK for Java as a .zip file.

This was the most requested issue on GitHub, and we were very excited to deliver it to customers!

Do you have other feature requests for the AWS SDK for Java?