AWS re:Invent 2013 Wrap-up

We’re back in Seattle after spending last week in Las Vegas at AWS re:Invent 2013! It was great to meet so many Java developers building applications on AWS. We heard lots of excellent feature requests for all the different tools and projects our team works on, and we’re excited to get started building them!

The slides from my session on the SDK and Eclipse Toolkit are online, and we’ll let you know as soon as the session videos start appearing online, too.

I’ve also uploaded the latest code for the AWS Meme Generator to GitHub. I used this simple web application in my session to demonstrate a few features in the AWS SDK for Java and the AWS Toolkit for Eclipse. Check out the project on GitHub and try it out yourself!

If you didn’t make it to AWS re:Invent 2013, or if you were there but didn’t get a chance to stop by the AWS SDKs and Tools booth, let us know in the comments below what kinds of features you’d like to see in tools like the AWS SDK for Java and the AWS Toolkit for Eclipse.

High-Level APIs in the AWS SDK for Java

Today, at AWS re:Invent 2013, I’m talking about some of the high-level APIs for Amazon S3 and Amazon DynamoDB, but there are a whole lot more high-level APIs in the SDK that I won’t have time to demo. These high-level APIs are all aimed at specific common tasks that developers face, and each one can save you development time. To help you find all these high-level APIs, we’ve put together the list below. As an added bonus, I’ve thrown in some extra links to some of the more powerful features in the AWS Toolkit for Eclipse.

Take a few minutes to explore the SDK and Eclipse Toolkit features below. Are you already using any of these high-level APIs? What’s your favorite? Let us know in the comments below!

Amazon S3 TransferManager

TransferManager is an easy and efficient way to manage data transfers in and out of Amazon S3. It offers a simple API, asynchronous management of your transfers, and several throughput optimizations.
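
For example, here’s a minimal sketch of an asynchronous upload with TransferManager (the myCredentials object, bucket name, and file path are placeholders):

// Start an asynchronous upload; the call returns immediately
TransferManager tm = new TransferManager(myCredentials);
Upload upload = tm.upload("my-bucket", "backups/data.zip", new File("/tmp/data.zip"));

// Block only if you need to wait for the transfer to finish
upload.waitForCompletion();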

Amazon S3 Encryption Client

This drop-in replacement for the standard Amazon S3 client gives you control over client-side encryption of your data. The encryption client is easy to use, but also has advanced features like hooks for integrating with existing key management systems.
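
As a quick sketch, you can construct the encryption client with a symmetric key you manage yourself (mySecretKey is assumed to be a javax.crypto.SecretKey, and the credentials, bucket, and key names are placeholders):

// Data is encrypted on the client before it is sent to Amazon S3,
// and decrypted automatically when you download it again
EncryptionMaterials materials = new EncryptionMaterials(mySecretKey);
AmazonS3 s3 = new AmazonS3EncryptionClient(myCredentials, materials);
s3.putObject("my-bucket", "secrets/report.txt", new File("/tmp/report.txt"));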

Amazon DynamoDB Mapper

The DynamoDB Mapper handles marshaling your POJOs into and out of Amazon DynamoDB tables. Just apply a few annotations to your POJOs, and they’re ready to use with the mapper. The mapper also has support for running scans and queries on your data and for batching requests.
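
Here’s a small sketch of what that looks like (the Book class, the "Books" table, and the dynamoDBClient variable are made up for illustration):

@DynamoDBTable(tableName = "Books")
public class Book {
    private String isbn;
    private String title;

    @DynamoDBHashKey(attributeName = "isbn")
    public String getIsbn() { return isbn; }
    public void setIsbn(String isbn) { this.isbn = isbn; }

    @DynamoDBAttribute(attributeName = "title")
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

// Saving and loading are then one-liners
DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);

Book book = new Book();
book.setIsbn("111-1111111111");
book.setTitle("My Favorite Book");
mapper.save(book);

Book loaded = mapper.load(Book.class, "111-1111111111");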

S3Link

This new type in the SDK allows you to easily store pointers to data in Amazon S3 inside your POJOs that you’re using with the DynamoDB Mapper. It also makes it easy to perform common operations on the referenced data in Amazon S3, such as replacing the contents, downloading them, or changing access permissions.

Amazon DynamoDB Tables Utility

This class provides common utilities for working with Amazon DynamoDB tables, such as checking if a table exists, and waiting for a new table to transition into an available state.
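
For example, here’s a sketch of the typical "create the table if it isn’t there yet" pattern (the dynamoDBClient variable and "MyTable" are placeholders):

if (!Tables.doesTableExist(dynamoDBClient, "MyTable")) {
    // ... create the table here, then block until it becomes ACTIVE
    Tables.waitForTableToBecomeActive(dynamoDBClient, "MyTable");
}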

AWS Flow Framework

AWS Flow is an open-source framework that makes it faster and easier to build applications with Amazon Simple Workflow Service (Amazon SWF). The framework handles the interaction with Amazon SWF and keeps your application code simple.

Amazon SES JavaMail Provider

The SDK provides an easy-to-use JavaMail transport implementation that sends email through the Amazon Simple Email Service (Amazon SES).
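
The rough shape of the code looks like the sketch below; the mail.aws.* property names, credentials variables, and addresses are assumptions here, so check the Amazon SES documentation for the exact configuration your SDK version expects.

// Configure a JavaMail session that routes mail through Amazon SES
Properties props = new Properties();
props.setProperty("mail.transport.protocol", "aws");
props.setProperty("mail.aws.user", myAccessKey);      // assumption: credentials passed via properties
props.setProperty("mail.aws.password", mySecretKey);
Session session = Session.getInstance(props);

// Build an ordinary MIME message
MimeMessage message = new MimeMessage(session);
message.setFrom(new InternetAddress("sender@example.com"));
message.addRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
message.setSubject("Hello from Amazon SES");
message.setText("Sent through the AWS SDK for Java's JavaMail transport.");

// Send it with the SES transport instead of SMTP
Transport transport = new AWSJavaMailTransport(session, null);
transport.connect();
transport.sendMessage(message, message.getAllRecipients());
transport.close();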

Amazon SQS Batched Client

This extension of the basic Amazon SQS client provides client-side batching when sending and deleting messages with your Amazon SQS queues. Batching can help reduce the number of round-trip queue requests your application makes and can therefore save you money.
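
Using it is a one-line change from the regular async client; here’s a minimal sketch (the credentials object and queue URL are placeholders):

// Wrap the standard async client; individual sends are transparently
// grouped into SendMessageBatch requests behind the scenes
AmazonSQSAsync sqsAsync = new AmazonSQSAsyncClient(myCredentials);
AmazonSQSAsync bufferedSqs = new AmazonSQSBufferedAsyncClient(sqsAsync);

bufferedSqs.sendMessage(new SendMessageRequest(myQueueUrl, "hello from the batched client"));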

Amazon SNS Topics Utility

This class provides common utilities for working with Amazon SNS topics, such as subscribing an Amazon SQS queue to an SNS topic to receive published messages.
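
A single static call takes care of the subscription, including the queue policy that allows the topic to deliver messages (the client objects, topic ARN, and queue URL are placeholders):

// Subscribe the queue to the topic and set the necessary queue policy
Topics.subscribeQueue(snsClient, sqsClient, myTopicArn, myQueueUrl);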

AWS Policy API

JSON policies written by hand can be difficult to maintain, but the Policy API in the AWS SDK for Java gives you an easy way to create JSON policies for AWS services programmatically.
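
For example, here’s a sketch of building a policy in code and serializing it to JSON (the queue ARN is a placeholder):

// Allow anyone to send messages to a specific queue
Policy policy = new Policy().withStatements(
    new Statement(Statement.Effect.Allow)
        .withPrincipals(Principal.AllUsers)
        .withActions(SQSActions.SendMessage)
        .withResources(new Resource("arn:aws:sqs:us-east-1:123456789012:my-queue")));

String policyJson = policy.toJson();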

Amazon Glacier ArchiveTransferManager

Glacier’s ArchiveTransferManager makes it easy to get data into and out of Amazon Glacier.

AWS Toolkit for Eclipse

Android Application Development Support

Developing Android applications that use AWS has never been easier. With the AWS Toolkit for Eclipse, you can create new AWS Android projects that have your security credentials configured, Android libraries present, AWS SDK for Android on your build path, and some sample source code to start from.

CloudFormation Support

Several new features in the Eclipse Toolkit make working with AWS CloudFormation easier. You can update your CloudFormation stacks directly from Eclipse, and a custom editor simplifies working with CloudFormation templates.

AWS Elastic Beanstalk Deployment

One of the most powerful features of the Eclipse Toolkit is being able to quickly deploy your Java web applications to AWS Elastic Beanstalk directly from within Eclipse. This three-part blog series demonstrates how to get started with AWS Java web projects in Eclipse, how to deploy them to AWS Elastic Beanstalk, and how to manage your applications running in AWS Elastic Beanstalk.

AWS OpsWorks for Java

by Andrew Fitz Gibbon

Today, we have a guest post by Chris Barclay from the AWS OpsWorks team.


We are pleased to announce that AWS OpsWorks now supports Java applications. AWS OpsWorks is an application management service that makes it easy to model and manage your entire application. You can start from templates for common technologies, or build your own using Chef recipes with full control of deployments, scaling, monitoring, and automation of each component.

The new OpsWorks Java layer automatically configures Amazon EC2 instances with Apache Tomcat using sensible defaults in order to run your Java application. You can deploy one or more Java apps, such as a front-end web server and back-end business logic, on the same server. You can also customize or extend the Java layer. For example, you can choose a different Tomcat version, change the heap size, or use a different JDK.

To get started, go to the OpsWorks console and create a stack. Next, add a Java layer. In the navigation column, click Instances, add an instance, and start it.

Add Layer

Tomcat supports HTML, JavaServer Pages (JSP), and Java class files. In this example, we’ll deploy a simple JSP that prints the date and your Amazon EC2 instance’s IP address, scale the environment using a load balancer, and discuss how OpsWorks can automate other tasks.

<%@ page import="java.net.InetAddress" %>
<html>
<body>
<%
    java.util.Date date = new java.util.Date();
    InetAddress inetAddress = InetAddress.getLocalHost();
%>
The time is 
<%
    out.println( date );
    out.println("<br>Your server's hostname is "+inetAddress.getHostName());
%>
<br>
</body>
</html>

A typical Java development process includes developing and testing your application locally, checking the source code into a repository, and deploying the built assets to your servers. The example has only one JSP file, but your application might have many files. You can handle that case by creating an archive of those files and directing OpsWorks to deploy the contents of the archive.

Let’s create an archive of the JSP and upload that archive to a location that OpsWorks can access. This archive will have only one file, but you can use the same procedure for any number of files.

To create an archive and upload it to Amazon S3

  1. Copy the example code to a file named simplejsp.jsp, and put the file in a directory named simplejsp.
  2. Create a .zip archive of the simplejsp directory.
  3. Create a public Amazon S3 bucket, upload simplejsp.zip to the bucket, and make the file public. For a description of how to perform this task, see Get Started With Amazon Simple Storage Service.

To add and deploy the app

  1. In the navigation column, click Apps, and then click Add an app.
  2. In Settings, specify a name and select the Java App Type.
  3. In Application Source, specify the http archive repository type, and enter the URL for the archive that you just uploaded in S3. It should look something like http://s3.amazonaws.com/your-bucket/simplejsp.zip.
  4. Then click Add App.

Add App

  1. Next, click Deploy to deploy the app to your instance. The deployment causes OpsWorks to download the file from S3 to the appropriate location on the Java app server.
  2. Once the deployment is complete, go to the Instances page and copy the public IP address to construct a URL as follows: http://publicIP/appShortName/appname.jsp.

Instances

For this example, the URL will look something like http://54.205.11.166/myjavaapp/simplejsp.jsp, and when you navigate to it you should see something like:

Wed Oct 30 21:06:07 UTC 2013
Your server’s hostname is java-app1

Now that you have one instance running, you can scale the app to handle load spikes using time and load-based instance scaling.

To scale the app to handle load spikes

  1. Under Instances in the left menu, click Time-based and add an instance. You can then select the times that the instance will start and stop. Once you have multiple instances, you will probably want to load balance the traffic among them.
  2. In the Amazon EC2 console, create an Elastic Load Balancer and add it to your OpsWorks layer. OpsWorks automatically updates the load balancer’s configuration when instances are started and stopped.

It’s easy to customize OpsWorks to change the configuration of your EC2 instances. Most settings can be changed directly through the layer settings, such as adding software packages or Amazon EBS volumes. You can change how software is installed using Bash scripts and Chef recipes. You can also change existing recipes by modifying attributes. For example, you can use a different JDK by modifying the stack’s custom JSON:

{
  "opsworks_java" : {
    "jvm_pkg" : {
      "use_custom_pkg_location" : "true",
      "custom_pkg_location_url_rhel" :
          "http://s3.amazonaws.com/your-bucket/jre-7u45-linux-x64.gz"
    }
  }
}

A few clicks in the AWS Management Console are all it takes to get started with OpsWorks. For more information on using the Java layer or customizing OpsWorks, see the documentation.

Specifying Conditional Constraints with Amazon DynamoDB Mapper

by Jason Fulghum

Conditional constraints are a powerful feature in the Amazon DynamoDB API. Until recently, there was little support for them in the Amazon DynamoDB Mapper. You could specify a version attribute for your mapped objects, and the mapper would automatically apply conditional constraints to give you optimistic locking, but you couldn’t explicitly specify your own custom conditional constraints with the mapper.

We’re excited to show you a new feature in the DynamoDB Mapper that lets you specify your own conditional constraints when saving and deleting data using the mapper. Specifying conditional constraints with the mapper is easy. Just pass in a DynamoDBSaveExpression object that describes your conditional constraints when you call DynamoDBMapper.save. If the conditions are all met when they’re evaluated on the server side, then your data will be saved, but if any of the conditions are not met, you’ll receive an exception in your application, letting you know about the conditional check failure.

Consider an application with a fleet of backend workers. When a worker starts processing a task, it marks the task’s status as IN_PROGRESS in a DynamoDB table. We want to prevent the case where multiple workers start working on the same task. We can do that easily with the new support for conditional constraints in the DynamoDB Mapper. When a worker attempts to set a task to IN_PROGRESS, the mapper simply adds a constraint that the previous state for the task should be READY. That way, if two workers try to start working on the task at the same time, the first one will be able to set its status to IN_PROGRESS and start processing the task, but the second worker will receive a ConditionalCheckFailedException since the status field wasn’t what it expected when it saved its data.

Here’s what the code looks like:

try {
   DynamoDBSaveExpression saveExpression = new DynamoDBSaveExpression();
   Map<String, ExpectedAttributeValue> expected = new HashMap<String, ExpectedAttributeValue>();
   expected.put("status",
      new ExpectedAttributeValue(new AttributeValue("READY")).withExists(true));

   saveExpression.setExpected(expected);

   mapper.save(obj, saveExpression);
} catch (ConditionalCheckFailedException e) {
   // This means our save wasn't recorded, since our constraint wasn't met
   // If this happens, the worker can simply look for a new task to work on
}

Archiving and Backing Up Data with the AWS SDK for Java

by Jason Fulghum

Do you or your company have important data that you need to archive? Have you explored Amazon Glacier yet? Amazon Glacier is an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. Just like other AWS offerings, you pay only for what you use. You don’t have to pay large upfront infrastructure costs or predict capacity requirements the way you do with on-premises solutions. Simply use what you need, when you need it.

There are two easy ways to leverage Amazon Glacier for data archives and backups using the AWS SDK for Java. The first option is to interact with the Amazon Glacier service directly. The AWS SDK for Java includes a high-level API called ArchiveTransferManager for easily working with transfers into and out of Amazon Glacier.

ArchiveTransferManager atm = new ArchiveTransferManager(myCredentials);
UploadResult uploadResult = atm.upload("myVaultName", "old logs",
                                       new File("/logs/oldLogs.zip"));

// later, when you need to retrieve your data
atm.download("myVaultName", uploadResult.getArchiveId(), 
             new File("/download/logs.zip"));

The second easy way of getting your data into Amazon Glacier using the SDK is to get your data into Amazon S3 first, and then use a bucket lifecycle configuration to automatically archive your objects to Amazon Glacier after a certain period.

It’s easy to configure an Amazon S3 bucket’s lifecycle using the AWS Management Console or the AWS SDKs. Here’s how to create a lifecycle configuration that transitions objects under the "logs/" key prefix to Amazon Glacier after 365 days and removes them completely from your Amazon S3 bucket a few days later, at the 370-day mark.

AmazonS3 s3 = new AmazonS3Client(myCredentials);
Transition transition = new Transition()
    .withDays(365).withStorageClass(StorageClass.Glacier);
BucketLifecycleConfiguration config = new BucketLifecycleConfiguration()
    .withRules(new Rule()
        .withId("log-archival-rule")
        .withKeyPrefix("logs/")
        .withExpirationInDays(370)
        .withStatus(BucketLifecycleConfiguration.ENABLED)
        .withTransition(transition));

s3.setBucketLifecycleConfiguration(myBucketName, config);

Are you using Amazon Glacier yet? Let us know how you’re using it and how it’s working for you!

Using the SaveBehavior Configuration for the DynamoDBMapper

by Wade Matveyenko

The high-level save API of DynamoDBMapper provides a convenient way of persisting items in an Amazon DynamoDB table. The underlying implementation uses either a PutItem request to create a new item or an UpdateItem request to edit the existing item. In order to exercise finer control over the low-level service requests, you can use a SaveBehavior configuration to specify the expected behavior when saving an item. First, let’s look at how to set a SaveBehavior configuration. There are two ways of doing it:

  • You can specify the default SaveBehavior when constructing a new mapper instance, which will affect all save operations from this mapper:

    // All save operations will use the UPDATE behavior by default
    DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient, 
                            new DynamoDBMapperConfig(SaveBehavior.UPDATE));
  • You can also force a SaveBehavior for a particular save operation:

    // Force this save operation to use CLOBBER, instead of the default behavior of this mapper
    mapper.save(obj, new DynamoDBMapperConfig(SaveBehavior.CLOBBER));

The next step is to understand the different SaveBehavior configurations. There are four to choose from: UPDATE (the default), UPDATE_SKIP_NULL_ATTRIBUTES, CLOBBER, and APPEND_SET. When you add a new item to the table, all four configurations have the same effect: the item is put as specified in the POJO (though this might be achieved by different service request calls). However, when it comes to updating an existing item, these SaveBehavior configurations have different results, and you need to choose the appropriate one according to how you want to control your data. To explain this, let’s walk through an example of using each SaveBehavior configuration to update an item specified by the same POJO:

  • Table schema:

    Attribute name    Key type    Attribute type
    key               Hash        Number
    modeled_scalar    Non-key     String
    modeled_set       Non-key     String set
    unmodeled         Non-key     String
  • POJO class definition:

    @DynamoDBTable(tableName="TestTable")
    public class TestTableItem {
    
       private int key;
       private String modeledScalar;
       private Set<String> modeledSet;
    
       @DynamoDBHashKey(attributeName="key")
       public int getKey() { return key; }
       public void setKey(int key) { this.key = key; }
    
       @DynamoDBAttribute(attributeName="modeled_scalar")
       public String getModeledScalar() { return modeledScalar; }
       public void setModeledScalar(String modeledScalar) { this.modeledScalar = modeledScalar; }
    	
       @DynamoDBAttribute(attributeName="modeled_set")
       public Set<String> getModeledSet() { return modeledSet; }
       public void setModeledSet(Set<String> modeledSet) { this.modeledSet = modeledSet; }
    
    }
      
  • Existing item:

    {
         "key" : "99",
         "modeled_scalar" : "foo", 
         "modeled_set" : [
              "foo0", 
              "foo1"
         ], 
         "unmodeled" : "bar" 
    }
  • POJO object:

    TestTableItem obj = new TestTableItem();
    obj.setKey(99);
    obj.setModeledScalar(null);
    obj.setModeledSet(Collections.singleton("foo2"));

Then let’s look at the effect of using each SaveBehavior configuration:

  • UPDATE (default)

    UPDATE does not affect unmodeled attributes on a save operation, and a null value for a modeled attribute removes that attribute from the item in DynamoDB.

    Updated item:

    {
         "key" : "99",
         "modeled_set" : [
              "foo2"
         ],
         "unmodeled" : "bar" 
    }
  • UPDATE_SKIP_NULL_ATTRIBUTES

    UPDATE_SKIP_NULL_ATTRIBUTES is similar to UPDATE, except that it ignores any null-value attributes and does NOT remove them from the item in DynamoDB.

    Updated item:

    {
         "key" : "99",
         "modeled_scalar" : "foo",
         "modeled_set" : [
              "foo2"
         ], 
         "unmodeled" : "bar" 
    }
  • CLOBBER

    CLOBBER clears and replaces all attributes on save, including unmodeled ones, by deleting and recreating the item.

    Updated item:

    {
         "key" : "99", 
         "modeled_set" : [
              "foo2"
         ]
    }
  • APPEND_SET

    APPEND_SET treats scalar attributes (String, Number, Binary) the same as UPDATE_SKIP_NULL_ATTRIBUTES does. However, for set attributes, it will append to the existing attribute value, instead of overriding it.

    Updated item:

    {
         "key" : "99",
         "modeled_scalar" : "foo",
         "modeled_set" : [
              "foo0", 
              "foo1", 
              "foo2"
         ], 
         "unmodeled" : "bar" 
    }

Here is a summary of the differences between these SaveBehavior configurations:

SaveBehavior                   On unmodeled attribute   On null-value attribute   On set attribute
UPDATE                         keep                     remove                    override
UPDATE_SKIP_NULL_ATTRIBUTES    keep                     keep                      override
CLOBBER                        remove                   remove                    override
APPEND_SET                     keep                     keep                      append

As you can see, SaveBehavior gives you great flexibility in how you update your data in Amazon DynamoDB. Do you find these SaveBehavior configurations easy to use? Are there any other save behaviors that you need? Leave a comment below and help us improve our SDK!

AWS re:Invent 2013

We’re all getting very excited about AWS re:Invent 2013. In just over a month, we’ll be down in Las Vegas talking to developers and customers from all over the world.

There’s a huge amount of great technical content this year, and attendees will be taking home lots of knowledge on the latest and greatest features of the AWS platform, and learning best practices for building bigger, more robust applications faster. Our team will be giving a few presentations, including TLS301 – Accelerate Your Java Development on AWS.

I hope we’ll get to meet you at the conference this year. If you weren’t able to make it last year, you can find lots of great videos of the sessions online. One of my favorites is Andy Jassy’s re:Invent Day 1 Keynote. Some of you might remember Zach Musgrave’s session last year on Developing, Deploying, and Debugging AWS Applications with Eclipse, and a few of you might have been there for my session on Being Productive with the AWS SDK for Java.

See you in Las Vegas!

Using S3Link with Amazon DynamoDB

by Jason Fulghum

Today we’re excited to talk about the new S3Link class. S3Link allows you to easily link to an Amazon S3 resource in your Amazon DynamoDB data. You can use S3Link when storing Java objects in Amazon DynamoDB tables with the DynamoDBMapper class.

To use the new S3Link class, just add a member of type S3Link to your annotated class. The following User class has an S3Link member named avatar:

@DynamoDBTable(tableName = "user-table")
public class User {
    private String username;
    private S3Link avatar;

    @DynamoDBHashKey
    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public S3Link getAvatar() {
        return avatar;
    }

    public void setAvatar(S3Link avatar) {
        this.avatar = avatar;
    }
}

Now that we have our POJO annotated, we’re ready to use DynamoDBMapper to work with our data. The following example shows three ways to use S3Link:

  • Upload a file to Amazon S3
  • Download a file from Amazon S3
  • Get an Amazon S3 client to perform more advanced operations

// Construct a mapper and pass in credentials to use when sending requests to Amazon S3
DynamoDBMapper mapper = new DynamoDBMapper(myDynamoClient, myCredentialsProvider);

// Create your objects
User user = new User();
user.setUsername("jamestkirk");

// Create a link to your data in Amazon S3
user.setAvatar(mapper.createS3Link(myBucketName, "avatars/jamestkirk.jpg"));
  
// Save the Amazon DynamoDB data for your object (does not write to Amazon S3)
mapper.save(user);
  
// Use S3Link to easily upload to Amazon S3
user.getAvatar().uploadFrom(new File("/path/to/all/those/user/avatars/jamestkirk.jpg"));

// Or use S3Link to easily download from Amazon S3
user = mapper.load("spock");
user.getAvatar().downloadTo(new File("/path/to/downloads/spock.jpg"));

// Or grab a full Amazon S3 client to perform more advanced operations
user.getAvatar().getAmazonS3Client();

That’s all there is to using the new S3Link class. Just point it at your data in Amazon S3, and then use the link to upload and download your data.

For more information about using DynamoDBMapper, see the Using the Object Persistence Model with Amazon DynamoDB section in the Amazon DynamoDB Developer Guide.

Release: AWS SDK for Java 1.6.0

by Jason Fulghum

We released version 1.6.0 of the AWS SDK for Java last Friday. This version has some exciting features!

  • A new type of POJO attribute named S3Link for the DynamoDBMapper class. This new attribute allows you to easily work with binary data in Amazon S3 and store links to that data in Amazon DynamoDB.
  • The Amazon CloudSearch client now supports setting the text processor for TextOptions.
  • The Amazon CloudFront client now allows you to display custom error pages for origin errors and control how long error responses are cached.
  • Many new enums for common string values in the Amazon EC2 client. You now have these string values at your fingertips in the SDK and in the SDK documentation, instead of having to go to the Amazon EC2 API Reference to search for them (see the small sketch after this list).
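
For instance, here’s a small sketch that uses one of the new enums instead of a hand-typed string (the AMI ID is a placeholder):

// InstanceType.T1Micro.toString() yields the "t1.micro" API string
RunInstancesRequest request = new RunInstancesRequest()
    .withImageId("ami-12345678")
    .withMinCount(1)
    .withMaxCount(1)
    .withInstanceType(InstanceType.T1Micro.toString());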

Download the latest release of the AWS SDK for Java now!

Amazon S3 TransferManager – Batched File Uploads

by Jason Fulghum

In addition to all the cool features in TransferManager for asynchronous upload and download management, there are some other great features for batched uploads and downloads of multiple files.

The uploadDirectory and uploadFileList methods in TransferManager make it easy to upload a complete directory or a list of specific files to Amazon S3 as one background, asynchronous task.

In some cases, though, you might want more control over how that data is uploaded, particularly around additional metadata you want to provide for the data you’re uploading. A second form of uploadFileList allows you to pass in an implementation of the ObjectMetadataProvider interface that lets you do just that. For each of the files being uploaded, this ObjectMetadataProvider receives a callback via its provideObjectMetadata method, allowing it to fill in any additional metadata you’d like to store alongside your object data in Amazon S3.

The following code demonstrates how easy it is to use the ObjectMetadataProvider interface to pass along additional metadata to your uploaded files.

TransferManager tm = new TransferManager(myCredentials);

ObjectMetadataProvider metadataProvider = new ObjectMetadataProvider() {
    public void provideObjectMetadata(File file, ObjectMetadata metadata) {
        // If this file is a JPEG, then parse some additional info
        // from the EXIF metadata to store in the object metadata
        if (isJPEG(file)) {
            metadata.addUserMetadata("original-image-date", 
                                     parseExifImageDate(file));
        }
    }
};

MultipleFileUpload upload = tm.uploadFileList(
        myBucket, myKeyPrefix, rootDirectory, fileList, metadataProvider);