A Fast and Correct Base 64 Codec

by Hanson Char

At AWS, we always strive to make our tools and services better for our customers. One example is the recent improvement we made to the AWS SDK for Java’s Base 64 encoding and decoding. In essence, we’ve replaced the use of Jakarta Commons Codec 1.x with a different implementation throughout the entire SDK. Why, you may wonder? There are two reasons.

Performance

The first is about performance. Here is a graph that summarizes the situation:

Base64 Performance Comparison

This graph is the frequency distribution of a thousand data points captured for each of the two codecs, Jakarta Commons 1.x vs. AWS SDK for Java. Each data point represents the total number of milliseconds it takes in each iteration to Base 64 encode and decode 2 MB of random binary data. The test was conducted in a Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode). On average, the Java SDK’s Base 64 codec is about 2.47x faster, with a reduction in time variance of about 42.81%. (For readers who are statistically inclined, details are provided in the Appendix below.)
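
To give a sense of how such a data point can be measured, here is a minimal, self-contained sketch. This is not the exact benchmark harness used for the graph; the class name, iteration count, and output format are illustrative.

import java.security.SecureRandom;
import com.amazonaws.util.Base64;

public class Base64Benchmark {
    public static void main(String[] args) {
        byte[] data = new byte[2 * 1024 * 1024];   // 2 MB of random binary data
        new SecureRandom().nextBytes(data);

        for (int i = 0; i < 1000; i++) {           // one data point per iteration
            long start = System.currentTimeMillis();
            byte[] decoded = Base64.decode(Base64.encodeAsString(data));
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(elapsed + " ms (" + decoded.length + " bytes round-tripped)");
        }
    }
}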

Correctness

The second reason is correctness. Here is a quick quiz:

What is the correct result of Base 64 decoding the string "ZE==" ?

(Stop reading here if you don’t want the answer spoiled.)

The answer is: the decoding should fail. Why? Even though "ZE==" may look like a valid Base 64 encoded string, it is technically impossible to construct such a string via Base 64 encoding from any binary data in the first place! (Don’t take my word for it.  Try it yourself!)

If such an invalid string is passed to the latest Java SDK’s Base 64 codec, the Base 64 decoding routine correctly fails fast with an IllegalArgumentException. As far as I know, there seems to be no other existing Base 64 codec that handles such "illegal" input correctly. Most Base 64 decoders (including the latest in Java 8) simply and silently return some implementation-specific, arbitrary value that could never be Base 64 re-encoded back to the original input string. You can probably imagine how such "random" behavior could make security engineers quite uncomfortable. :)
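
If you want to try the quiz yourself, a small sketch like the following contrasts the two behaviors. The JDK behavior shown is that of the Java 8 decoder available at the time of this post, and the printed values are illustrative.

import java.util.Arrays;

// Java 8's decoder silently accepts "ZE==" ...
byte[] jdkDecoded = java.util.Base64.getDecoder().decode("ZE==");
System.out.println(Arrays.toString(jdkDecoded));  // [100]
// ... but re-encoding that result never reproduces the input:
System.out.println(java.util.Base64.getEncoder().encodeToString(jdkDecoded));  // "ZA==", not "ZE=="

// The SDK's decoder fails fast instead
try {
    com.amazonaws.util.Base64.decode("ZE==");
} catch (IllegalArgumentException expected) {
    System.out.println("Invalid Base 64 input rejected");
}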

Implementation

Under the hood, the latest Base 64 codec in the AWS SDK for Java is a hybrid implementation. For encoding from bytes to string, we directly use javax.xml.bind.DatatypeConverter available from the JDK (1.6+). For decoding, we use our own implementation for reasons of both speed and correctness as discussed above.
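
For reference, the JDK utility used for the encoding half can be called like this (a minimal illustration, not the SDK internals verbatim):

import javax.xml.bind.DatatypeConverter;

// Encoding bytes to a Base 64 string via the JDK (1.6+) utility class
String encoded = DatatypeConverter.printBase64Binary(new byte[] { 1, 2, 3 });
System.out.println(encoded);  // "AQID"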

Usage

This fast and correct Base 64 codec is available in the AWS SDK for Java 1.8.3 and later. You can, of course, use it directly and independently. For example:

import com.amazonaws.util.Base64;
...
byte[] bytes = ...
// Base 64 encode
String encoded = Base64.encodeAsString(bytes);
// Base 64 decode
byte[] decoded = Base64.decode(encoded);

For more details, check out Base64.java. Enjoy!

Appendix

(Performance statistics for the graph above, generated with R.)

            vars    n  mean   sd median trimmed  mad min max range skew kurtosis   se
Commons        1 1000 47.69 6.75     46    46.9 5.93  38  84    46 1.43     3.62 0.21
SDK            2 1000 19.51 2.89     19    19.1 2.97  16  46    30 1.84     7.90 0.09

    Commons           SDK
 Min.   :38.00    Min.   :16.00
 1st Qu.:42.00    1st Qu.:17.00
 Median :46.00    Median :19.00
 Mean   :47.69    Mean   :19.51
 3rd Qu.:51.00    3rd Qu.:21.00
 Max.   :84.00    Max.   :46.00

 

Amazon S3 Server-Side Encryption with Customer-Provided Keys

by Jason Fulghum

Amazon S3 recently launched a new feature that lets developers take advantage of server-side encryption, but still control their encryption keys. This new server-side encryption mode for Amazon S3 is called Server-Side Encryption with Customer-Provided Keys (SSE-C).

Using server-side encryption in Amazon S3 with your own encryption keys is easy using the AWS SDK for Java. Just pass along an instance of SSECustomerKey with your requests to Amazon S3.

The SSECustomerKey class holds your encryption key material for AES-256 encryption and an optional MD5 for checking the data integrity of the encryption key when it gets passed to Amazon S3. You can specify your AES-256 encryption key as a Java SecretKey object, a byte[] of the raw key material, or as a base64-encoded string. The MD5 is optional since the SDK will automatically generate it for you to ensure your encryption key is transmitted to Amazon S3 without any corruption.
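
If you don’t already have key material on hand, a small sketch like the following generates a fresh 256-bit AES key and wraps it in an SSECustomerKey. The helper method name is hypothetical; the commented-out alternatives correspond to the other constructor forms mentioned above.

import java.security.NoSuchAlgorithmException;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import com.amazonaws.services.s3.model.SSECustomerKey;

// Hypothetical helper: generate a brand-new 256-bit AES key for use with SSE-C
private static SSECustomerKey generateSseCustomerKey() throws NoSuchAlgorithmException {
    KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
    keyGenerator.init(256);
    SecretKey secretKey = keyGenerator.generateKey();

    // Equivalent alternatives: new SSECustomerKey(secretKey.getEncoded()) for raw bytes,
    // or pass a base64-encoded string of the key material
    return new SSECustomerKey(secretKey);
}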

Here’s an example of using server-side encryption with a customer-provided encryption key using the AWS SDK for Java:

AmazonS3 s3 = new AmazonS3Client();
SecretKey secretKey = loadMyEncryptionKey();
SSECustomerKey sseCustomerKey = new SSECustomerKey(secretKey);

// Upload a file that will be encrypted with our key once it gets to S3
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, key, file)
        .withSSECustomerKey(sseCustomerKey);
s3.putObject(putObjectRequest);

// To download data encrypted with SSE-C, you must provide the 
// correct SSECustomerKey, otherwise the request will fail
GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, key)
        .withSSECustomerKey(sseCustomerKey);
S3Object s3Object = s3.getObject(getObjectRequest);

You can use server-side encryption with customer-provided keys with several Amazon S3 operations in the AWS SDK for Java, including object uploads, copies, and downloads.

You can also take advantage of server-side encryption with customer-provided keys using the Amazon S3 TransferManager API. Just specify your SSECustomerKey in the same way as you do when using AmazonS3Client:

TransferManager tm = new TransferManager();

PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, key, file)
        .withSSECustomerKey(sseCustomerKey);
Upload upload = tm.upload(putObjectRequest);

// TransferManager processes transfers asynchronously
// waitForCompletion will block the current thread until the transfer finishes
upload.waitForCompletion();

GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, key)
        .withSSECustomerKey(sseCustomerKey);
Download download = tm.download(getObjectRequest, myFile);

Do you have data that requires being encrypted at rest? How are you planning on using server-side encryption with customer-provided keys?

Amazon S3 Requester Pays

by Manikandan Subramanian

You may have heard about the Requester Pays feature in Amazon S3 that allows bucket owners to pass the data transfer costs to users who download the data. Users can now use the AWS SDK for Java to enable/disable Requester Pays on their buckets.

To enable Requester Pays on an Amazon S3 bucket

// create a new AmazonS3 client.
AmazonS3 s3 = new AmazonS3Client();

// call enableRequesterPays method with the bucket name.
s3.enableRequesterPays(bucketName);

To disable Requester Pays on an Amazon S3 bucket

// create a new AmazonS3 client
AmazonS3 s3 = new AmazonS3Client();

// call disableRequesterPays method with the bucket name
s3.disableRequesterPays(bucketName);

In addition, the AWS SDK for Java also allows users to download data from a Requester Pays bucket. The following example shows how easy it is:

// create a new AmazonS3 client
AmazonS3 s3 = new AmazonS3Client();

// The requester pays flag must be set to true when accessing an Amazon S3
// bucket that has Requester Pays enabled; otherwise, Amazon S3 responds
// with an Access Denied error. Setting the flag explicitly acknowledges
// that the requester will be charged for the download.
boolean isRequesterPays = true;
S3Object object = s3.getObject(new GetObjectRequest(bucketName, key, isRequesterPays));

Are you using the AWS SDK for Java to access Amazon S3? Let us know your experience.

Secure Local Development with the ProfileCredentialsProvider

We’ve talked in the past about the importance of secure credentials management. When your application is running in production, IAM roles for Amazon EC2 are a great way to securely deliver AWS credentials to your application. However, they’re by definition available only when your application is running on EC2 instances.

If you’re a developer making changes to an application, it’s often convenient to be able to fire up a local instance of the application to see your changes in action without having to spin up a full test environment in the cloud. If your application uses IAM roles for EC2 to pick up credentials when running in the cloud, this means you’ll need an additional way of injecting credentials when running locally on a developer’s box. It’s tempting to simply hardcode a set of credentials into the application for testing purposes, but this makes it distressingly easy to accidentally check those credentials in to source control.

The AWS SDK for Java includes a number of different credential providers that you can use as alternatives to hardcoded credentials. You can easily inject credentials into your application from system properties, environment variables, properties files, and more. All of these choices allow you to keep your credentials separate from your source code and reduce the risk of accidentally checking them in.

We’ve recently added a new credentials provider that loads credentials from a credentials profile file stored in your home directory. This option is particularly exciting because other tools like the AWS CLI and the AWS Toolkit for Eclipse also support reading credentials from and writing credentials to this file. You can configure your credentials in one place, and reuse them whether you’re running one-off CLI commands to check on the state of your resources, browsing around using the Toolkit, or running a local instance of one of your applications.

The default credentials profile file is located at System.getProperty("user.home") + "/.aws/credentials". The format allows you to define multiple “profiles,” which makes it easy to maintain different sets of credentials for different projects with appropriately scoped permissions; this way you don’t have to worry about a bug in the local version of your application accidentally wiping out your production system. Here’s a simple example:

  # Credentials for App-1's production stack (allowing only read-only
  # access for debugging production issues).
  [app-1-production]
  aws_access_key_id={access key id}
  aws_secret_access_key={secret access key}
  aws_session_token={optional session token}

  # Credentials for App-1's development stack, allowing full read-write
  # access.
  [app-1-development]
  aws_access_key_id={another access key id}
  aws_secret_access_key={another secret access key}

  # Default credentials to be used if no profile is specified.
  [default]
  aws_access_key_id=...
  aws_secret_access_key=...

If you’re running a recent version of the AWS CLI, you can set up a file in the correct format by running the aws configure command; you’ll be prompted to enter a set of credentials, which will be stored in the file. Similarly, if you’re running a recent version of the AWS Toolkit for Eclipse, any credentials you configure through its Preferences page will be written into the credentials profile file.

The AWS Toolkit for Eclipse Preferences Page

To use the ProfileCredentialsProvider when running local integration tests, simply add it to your credentials provider chain:

AmazonDynamoDBClient client = new AmazonDynamoDBClient(
      new AWSCredentialsProviderChain(

          // First we'll check for EC2 instance profile credentials.
          new InstanceProfileCredentialsProvider(),

          // If we're not on an EC2 instance, fall back to checking for
          // credentials in the local credentials profile file.
          new ProfileCredentialsProvider("app-1-development")));

The constructor parameter is the name of the profile to use; if you call the parameterless constructor, it will load the “default” profile. Another constructor overload allows you to override the location of the profiles file to load credentials from (or you can change this by setting the AWS_CREDENTIALS_PROFILES_FILE environment variable).
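
For example, a hypothetical sketch of loading a named profile from a custom location (the file path shown here is made up):

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;

// Load the "app-1-development" profile from a non-default credentials file
AWSCredentialsProvider provider =
        new ProfileCredentialsProvider("/opt/my-app/aws-credentials", "app-1-development");
AWSCredentials credentials = provider.getCredentials();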

Have you already started using the new ProfileCredentialsProvider? Let us know what you think in the comments below!

Using Improved Conditional Writes in DynamoDB

by David Yanacek

Last month the Amazon DynamoDB team announced a new pair of features: Improved Query Filtering and Conditional Updates.  In this post, we’ll show how to use the new and improved conditional writes feature of DynamoDB to speed up your app.

Let’s say you’re building a racing game, where two players advance in position until they reach the finish line.  To manage the state in DynamoDB, each game could be stored in its own Item in DynamoDB, in a Game table with GameId as the primary key, and each player position stored in a different attribute.  Here’s an example of what a Game item could look like:

    {
        "GameId": "abc",
        "Status": "IN_PROGRESS",
        "Player1-Position": 0,
        "Player2-Position": 0
    }

To make players move, you can use the atomic counters feature of DynamoDB in the UpdateItem API to send requests like, “increase the player position by 1, regardless of its current value”.  To prevent players from advancing before the game starts, you can use conditional writes to make the same request as before, but only “as long as the game status is IN_PROGRESS.”  Conditional writes are a way of instructing DynamoDB to perform a given write request only if certain attribute values in the item match what you expect them to be at the time of the request.

But this isn’t the whole story.  How do you determine the winner of the game, and prevent players from moving once the game is over?  In other words, we need a way to atomically make it so that all players stop once one reaches the end of the race (no ties allowed!).

This is where the new improved conditional writes come in handy.  Before, the conditional writes feature supported tests for equality (attribute “x” equals “20”).  With improved conditions, DynamoDB supports tests for inequality (attribute “x” is less than “20”).  This is useful for the game application, because now the request can be, “increase the player position by 1 as long as the status of the game equals IN_PROGRESS, and the positions of player 1 and player 2 are less than 20.”  During player movement, one player will eventually reach the finish line first, and any future moves after that will be blocked by the conditional writes.  Here’s the code:


    public static void main(String[] args) {

        // To run this example, first initialize the client, and create a table
        // named 'Game' with a primary key of type hash / string called 'GameId'.
        
        AmazonDynamoDB dynamodb = new AmazonDynamoDBClient(); // client using the default credentials provider chain
        
        try {
            // First set up the example by inserting a new item
            
            // To see different results, change either player's
            // starting positions to 20, or set player 1's location to 19.
            Integer player1Position = 15;
            Integer player2Position = 12;
            dynamodb.putItem(new PutItemRequest()
                    .withTableName("Game")
                    .addItemEntry("GameId", new AttributeValue("abc"))
                    .addItemEntry("Player1-Position",
                        new AttributeValue().withN(player1Position.toString()))
                    .addItemEntry("Player2-Position",
                        new AttributeValue().withN(player2Position.toString()))
                    .addItemEntry("Status", new AttributeValue("IN_PROGRESS")));
            
            // Now move Player1 for game "abc" by 1,
            // as long as neither player has reached "20".
            UpdateItemResult result = dynamodb.updateItem(new UpdateItemRequest()
                .withTableName("Game")
                .withReturnValues(ReturnValue.ALL_NEW)
                .addKeyEntry("GameId", new AttributeValue("abc"))
                .addAttributeUpdatesEntry(
                     "Player1-Position", new AttributeValueUpdate()
                         .withValue(new AttributeValue().withN("1"))
                         .withAction(AttributeAction.ADD))
                .addExpectedEntry(
                     "Player1-Position", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withN("20"))
                         .withComparisonOperator(ComparisonOperator.LT))
                .addExpectedEntry(
                     "Player2-Position", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withN("20"))
                         .withComparisonOperator(ComparisonOperator.LT))
                .addExpectedEntry(
                     "Status", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withS("IN_PROGRESS"))
                         .withComparisonOperator(ComparisonOperator.EQ))
     
            );
            if ("20".equals(result.getAttributes().get("Player1-Position").getN())) {
                System.out.println("Player 1 wins!");
            } else {
                System.out.println("The game is still in progress: "
                    + result.getAttributes());
            }
        } catch (ConditionalCheckFailedException e) {
            System.out.println("Failed to move player 1 because the game is over");
        }
    }

With this algorithm, player movement now takes only one write operation to DynamoDB.  What would it have taken without improved conditions?  Using only equality conditions, the app would have needed to follow the read-modify-write pattern:

  1. Read each item, making note of each player’s position, and verify that neither player already reached the end of the race.
  2. Advance the player’s position by 1, with a condition that both players were still in the positions we read in step 1.

Notice that this algorithm requires two round-trips to DynamoDB, whereas with improved conditions, it can be done in only one round-trip.  This reduces both latency and cost.
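
For comparison, here is a rough sketch of that older two-round-trip approach (imports omitted; it reuses the table, attribute names, and client from the example above):

// Round trip 1: read the current state of the game
Map<String, AttributeValue> item = dynamodb.getItem(new GetItemRequest()
        .withTableName("Game")
        .addKeyEntry("GameId", new AttributeValue("abc"))).getItem();
String p1 = item.get("Player1-Position").getN();
String p2 = item.get("Player2-Position").getN();
// ... verify here that neither position has already reached 20 ...

// Round trip 2: move player 1, but only if neither position changed since the read
dynamodb.updateItem(new UpdateItemRequest()
        .withTableName("Game")
        .addKeyEntry("GameId", new AttributeValue("abc"))
        .addAttributeUpdatesEntry("Player1-Position", new AttributeValueUpdate()
                .withValue(new AttributeValue().withN("1"))
                .withAction(AttributeAction.ADD))
        .addExpectedEntry("Player1-Position",
                new ExpectedAttributeValue(new AttributeValue().withN(p1)))
        .addExpectedEntry("Player2-Position",
                new ExpectedAttributeValue(new AttributeValue().withN(p2))));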

You can find more information about conditional writes in Amazon DynamoDB in the Developer Guide.

Amazon S3 Client-Side Authenticated Encryption

by Hanson Char

Encrypting data using the Amazon S3 encryption client is one way you can provide an additional layer of protection for sensitive information you store in Amazon S3. Now the Amazon S3 encryption client provides you with the ability to use authenticated encryption for your stored data via the new CryptoMode.AuthenticatedEncryption option. The Developer Preview of this client-side encryption option utilizes AES-GCM – a standard authenticated encryption algorithm recommended by NIST.

When CryptoMode.AuthenticatedEncryption is in use, an improved key wrapping algorithm is applied to the envelope key, which is a one-time key randomly generated per S3 object. One of two key wrapping algorithms is used, depending on the encryption material you supply: "AESWrap" is applied if the client-supplied encryption material contains a symmetric key, and "RSA/ECB/OAEPWithSHA-256AndMGF1Padding" is used if the encryption material contains a key pair. Both key wrapping algorithms improve the protection of the envelope key by adding an integrity check on top of encryption alone.

Enabling Authenticated Encryption

This new mode of authenticated encryption is disabled by default. This means the Amazon S3 encryption client will continue to function as before unless explicitly configured otherwise.

To enable the use of client-side authenticated encryption, two steps are required:

  1. Include the latest Bouncy Castle jar in the classpath; and
  2. Explicitly specify the cryptographic mode of authenticated encryption when instantiating an S3 encryption client:
new AmazonS3EncryptionClient(...,
  new CryptoConfiguration(CryptoMode.AuthenticatedEncryption));

Once enabled, all new S3 objects will be encrypted using AES-GCM before being stored in S3. Otherwise, everything remains the same as described in the Getting Started guide at Client-Side Data Encryption with the AWS SDK for Java and Amazon S3. In other words, all APIs of the S3 encryption client including Range-Get and Multipart Upload will work the same way regardless of the selected cryptographic mode.
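
Putting the two steps together, a minimal sketch might look like the following. The key material and the file variable are placeholders you would supply yourself.

import javax.crypto.spec.SecretKeySpec;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.CryptoConfiguration;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.EncryptionMaterials;

byte[] myKeyMaterial = ...  // your 256-bit AES key-encrypting key
EncryptionMaterials materials =
        new EncryptionMaterials(new SecretKeySpec(myKeyMaterial, "AES"));

AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
        materials,
        new CryptoConfiguration(CryptoMode.AuthenticatedEncryption));

// New objects written through this client are encrypted with AES-GCM
s3.putObject("my-bucket", "my-key", myFileToUpload);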

How CryptoMode.AuthenticatedEncryption Works

Storage

If CryptoMode.AuthenticatedEncryption is not enabled, the default behavior of the S3 encryption client will persist S3 objects using the same cryptographic algorithm as before, which is encryption-only.

However, if CryptoMode.AuthenticatedEncryption has been enabled, new S3 objects will be encrypted using the standard authenticated encryption algorithm, AES-GCM. Furthermore, the generated one-time envelope key will be protected using a new key-wrapping algorithm.

Retrieval

Existing S3 objects that have been encrypted using the default encryption-only scheme, CryptoMode.EncryptionOnly, will continue to work as before with no behavior changes regardless of whether CryptoMode.AuthenticatedEncryption is enabled or not.

However, if an S3 object that has been encrypted under CryptoMode.AuthenticatedEncryption is retrieved in its entirety, not only is the object automatically decrypted when retrieved, but its integrity is also verified (via AES-GCM). If for any reason the object fails the integrity check, a SecurityException is thrown. A sample exception message:

java.lang.SecurityException: javax.crypto.BadPaddingException: mac check in GCM failed

Note, however, if only part of an object is retrieved from S3 via the Range-Get operation, then only decryption will apply and not authentication since the entire object is required for authentication.
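
As a hypothetical illustration (reusing an encryption client like the one configured above; the bucket and key names are placeholders), a partial retrieval is just a normal ranged GET:

// Fetch only the first kilobyte of the object. The bytes are decrypted,
// but no AES-GCM integrity check is possible without the entire object.
S3Object firstKilobyte = s3.getObject(
        new GetObjectRequest("my-bucket", "my-key").withRange(0, 1023));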

Two Modes of Authenticated Encryption Available

There are actually two authenticated encryption modes available: CryptoMode.AuthenticatedEncryption and CryptoMode.StrictAuthenticatedEncryption.

CryptoMode.StrictAuthenticatedEncryption is a variant of CryptoMode.AuthenticatedEncryption, but it enforces a strict use of authenticated encryption. Specifically, the S3 encryption client running in CryptoMode.StrictAuthenticatedEncryption will only accept retrieval of S3 objects protected via authenticated encryption. Retrieving S3 objects stored in plaintext or encrypted using encryption-only mode will cause a SecurityException to be thrown under the strict mode. A sample exception message:

java.lang.SecurityException: S3 object [bucket: mybucket, key: mykey] not encrypted using authenticated encryption

Furthermore, attempts to perform a Range-get operation in strict authenticated encryption mode will also cause SecurityException to be thrown, since Range-get has no authentication on the data retrieved. A sample exception message:

java.lang.SecurityException: Range get is not allowed in strict crypto mode

The purpose of CryptoMode.StrictAuthenticatedEncryption is to eliminate the possibility of an attacker hypothetically forcing a downgrade to bypass authentication. In other words, running in CryptoMode.StrictAuthenticatedEncryption would provide the highest level of security but potentially at the cost of restricted operations. This strict use of authenticated encryption is meant only for highly security-sensitive applications where there is no need to retrieve S3 objects that have not been previously encrypted using authenticated encryption.

Migrating to Authenticated Encryption

It’s worth pointing out that older versions of the AWS SDK for Java are not equipped with authenticated encryption and therefore will not be able to decrypt objects encrypted with authenticated encryption. Therefore, before enabling CryptoMode.AuthenticatedEncryption, you should upgrade all instances of the AWS SDK for Java in your application to the latest version. With no configuration necessary, the latest version of Java SDK is able to retrieve and decrypt S3 objects that are originally encrypted either in encryption-only mode (AES-CBC) or authenticated encryption mode (AES-GCM). Once all instances of the SDK are upgraded, you can then safely enable CryptoMode.AuthenticatedEncryption to start writing new S3 objects using authenticated encryption. Here is a summary table.

Java SDK    CryptoMode                      Encrypt   Decrypt            Range Get   Multipart Upload   Max Size
1.7.8.1+    AuthenticatedEncryption         AES-GCM   AES-GCM, AES-CBC   Yes         Yes                ~64 GB
1.7.8.1+    StrictAuthenticatedEncryption   AES-GCM   AES-GCM            No          Yes                ~64 GB
1.7.8.1+    EncryptionOnly                  AES-CBC   AES-GCM, AES-CBC   Yes         Yes                5 TB
pre-1.7.8   (Not Applicable)                AES-CBC   AES-CBC            Yes         Yes                5 TB

New Runtime Dependency on Bouncy Castle Library

You may wonder why we do not statically include the Bouncy Castle crypto library jar as a direct dependency. First, by not having a static dependency on the Bouncy Castle Crypto APIs, we believe users can take advantage of the latest releases from Bouncy Castle in a more timely and flexible manner. This is especially relevant should there be security fixes to the library. The other reason is that only users who decide to make use of authenticated encryption would need to depend on the Bouncy Castle library. We therefore do not want to force everyone else to pull in a copy of Bouncy Castle unless they need to.

Authenticated Encryption or Not?

If the protection of S3 objects in your application requires not only confidentiality but also integrity and authenticity, and the size of each object is less than 64 GB, then CryptoMode.AuthenticatedEncryption may be just the option you have been looking for. Why 64GB? It is a limiting factor of the standard AES-GCM. More details can be found in the NIST GCM spec.

Does your application require storing S3 objects with authenticated encryption? Let us know what you think!

Develop, Deploy, and Manage for Scale with AWS Elastic Beanstalk and AWS CloudFormation

by Jason Fulghum

Evan Brown is doing a great five-part series on the AWS Application Management Blog on developing, deploying, and managing for scale with Elastic Beanstalk and CloudFormation. In each of his five blog posts, Evan breaks down a different topic and explains best practices as well as practical tips and tricks for working with applications deployed using CloudFormation and Elastic Beanstalk.

Plus, each Thursday at 9 a.m. PDT during the five-part series, Evan and the CloudFormation team host a Google Hangout to discuss the topics in the blog.

This is week three of the five-part series, so head over and check out the latest blog post.

Then, this Thursday at 9 a.m. PDT, and the two following Thursdays, head over to the AWS CloudFormation Google Hangout to discuss the post and ask questions of the engineers from the AWS CloudFormation team.

Don’t miss this great opportunity to discuss developing, deploying, and managing applications on AWS with CloudFormation engineers!

Using AmazonS3EncryptionClient to Send Secure Data Between Two Parties

by Hanson Char

Suppose you have a partner who would like to encrypt and upload some confidential data to you via Amazon S3, but doesn’t want anyone other than you to be able to decrypt the data. Is this possible?

Yes! That’s a classic use case for public-key cryptography, and AmazonS3EncryptionClient makes it easy to do.

First of all, since you are the only party that can decrypt the data, your partner will need to have a copy of your public key. Your private key is, of course, kept secret, and therefore is not shared with your partner. Armed with your public key, your partner can then construct an AmazonS3EncryptionClient to encrypt and upload data to you via S3. Notice, however, that the relevant API of AmazonS3EncryptionClient requires the use of a KeyPair. How can one construct a KeyPair with only the public but not the private key? Can the private key be null? The short answer is yes. This may not seem obvious, so here is a sample code snippet showing how this can be done:

// Create an S3 encryption client using only a public key
AmazonS3 s3 = new AmazonS3EncryptionClient(new EncryptionMaterials(getPublicKeyPair()));
File fileToUpload = ...  // the data to encrypt and upload
PutObjectResult result = s3.putObject("your_bucket", "location", fileToUpload);
// ...

public static KeyPair getPublicKeyPair() throws NoSuchAlgorithmException, InvalidKeySpecException {
    byte[] public_key = … // public key in binary
    KeyFactory kf = KeyFactory.getInstance("RSA");
    PublicKey publicKey = kf.generatePublic(new X509EncodedKeySpec(public_key));
    return new KeyPair(publicKey, null);
}

For obvious reasons, any such key pair with a null private key can only, by definition, be used to encrypt and upload data to S3, but cannot decrypt any data retrieved from S3 using the Amazon S3 encryption client. (Indeed, any such attempt would lead to an exception like "AmazonClientException: Unable to decrypt symmetric key".) On the receiving side, to retrieve and decrypt the Amazon S3 object, you can simply make use of AmazonS3EncryptionClient, but this time instantiated with a KeyPair in the usual way (i.e. by specifying both the public and private keys).
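
On the receiving side, a corresponding sketch might look like this (loading of the key pair is elided; bucket and key names match the upload example above):

// Receiving side: the key pair contains both the public and the private key
KeyPair myKeyPair = ...  // e.g., loaded from your key store
AmazonS3 s3 = new AmazonS3EncryptionClient(new EncryptionMaterials(myKeyPair));
S3Object object = s3.getObject("your_bucket", "location");
// object.getObjectContent() is the decrypted stream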

Note that, for performance and security reasons, the encryption material provided to the S3 encryption client is used only as a key-encrypting-key material, and not for content encryption. AmazonS3EncryptionClient always encrypts the content of every S3 object with a randomly generated one-time symmetric key, also known as the "envelope key". The envelope key is therefore globally unique per S3 object. As with most block cipher modes of operation, the security assurance degrades as more data is processed with a single key. The unique envelope key per S3 object therefore enables a maximum level of "key freshness" in terms of security.

For more background information, see Client-Side Data Encryption with the AWS SDK for Java and Amazon S3, and Specifying Client-Side Encryption Using the AWS SDK for Java. Let us know what you think!

Using Transfer Manager to Copy Amazon S3 Objects

by Manikandan Subramanian

The latest addition to the list of Transfer Manager features is the ability to easily make copies of your data in Amazon S3.

The new TransferManager.copy method allows you to easily copy an existing Amazon S3 object from one location to another.

Under the hood, TransferManager selects which copy algorithm is best for your data, either single-part copy or multipart copy. When possible, TransferManager initiates multipart copy requests in parallel, each copying a small part of the Amazon S3 object, resulting in better performance, throughput, and resilience to errors. You don’t have to worry about the details of copying your data – just rely on TransferManager's easy to use, asynchronous API for working with Amazon S3.

The following example shows how easy it is to copy data using TransferManager.

// Create a new transfer manager object with your credentials.

TransferManager tm = new TransferManager(new DefaultAWSCredentialsProviderChain());

// The copy method returns immediately as your data copies in the background.
// Use the returned transfer object to track the progress of the copy operation.

Copy copy = tm.copy(sourceBucket, sourceKey,
	              destinationBucket, destinationKey);

// Perform any work while the copy processes

if (copy.isDone()) {
   System.out.println("Copy operation completed.");
}
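
If you would rather block until the copy finishes, the returned transfer also supports waitForCompletion, just like uploads and downloads:

// Block the current thread until the copy operation completes
copy.waitForCompletion();
System.out.println("Copy operation completed.");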

There’s lots more great functionality in TransferManager. Check out some of our other blog posts on TransferManager.

Any new functionality that you’d like us to add to TransferManager? Let us know your ideas.

AWS SDK for Java Maven Archetype

by Jason Fulghum

If you’re a Maven user, there’s a brand new way to get started building Java applications that use the AWS SDK for Java.

With the new Maven archetype, you can easily create a new Java project configured with the AWS SDK for Java and some sample code to help you find your way around the SDK.

Starting a project from the new archetype is easy:

mvn archetype:generate \
     -DarchetypeGroupId=com.amazonaws \
     -DarchetypeArtifactId=aws-java-sdk-archetype

When you run the Maven archetype:generate goal, you’ll be prompted for some basic Maven values for your new project (groupId, artifactId, version).

[INFO] Generating project in Interactive mode
[INFO] Archetype [com.amazonaws:aws-java-sdk-archetype:1.0.0] 
Define value for property 'groupId': : com.foo   
Define value for property 'artifactId': : my-aws-java-project
Define value for property 'version':  1.0-SNAPSHOT: : 
Define value for property 'package':  com.foo: : 

When the archetype:generate goal completes, you’ll have a new Maven Java project, already configured with a dependency on the AWS SDK for Java and some sample code in the project to help you get started with the SDK.

The POM file in your new project will be configured with the values you just gave Maven:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.foo</groupId>
  <artifactId>my-aws-java-project</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>AWS SDK for Java Sample</name>
  <url>http://aws.amazon.com/sdkforjava</url>

  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk</artifactId>
      <version>[1.7.2,2.0.0)</version>
    </dependency>
  </dependencies>
  ...

Before you can run the sample code, you’ll need to fill in your AWS security credentials. The README.html file details where to put your credentials for this sample. Once your credentials are configured, you’re ready to compile and run your new project. The sample project’s POM file is configured so that you can easily compile, jar, and run the project by executing mvn package exec:java. The package goal compiles the code and creates a jar for it, and the exec:java goal runs the main method in the sample class.

Depending on what’s in your AWS account, you’ll see something like this:

...

[INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) @ my-aws-java-project >>>
[INFO] 
[INFO] <<< exec-maven-plugin:1.2.1:java (default-cli) @ my-aws-java-project <<<
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ my-aws-java-project ---
===========================================
Welcome to the AWS Java SDK!
===========================================
You have access to 3 availability zones:
 - us-east-1a (us-east-1)
 - us-east-1b (us-east-1)
 - us-east-1c (us-east-1)
You have 1 Amazon EC2 instance(s) running.
You have 3 Amazon S3 bucket(s).
The bucket 'aws-demos-265490781088' contains 48 objects with a total size of 376257032 bytes.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.928s
[INFO] Finished at: Wed Feb 19 15:40:24 PST 2014
[INFO] Final Memory: 24M/222M
[INFO] ------------------------------------------------------------------------

Are you already using Maven for your AWS Java projects? What are your favorite features of Maven? Let us know in the comments below.