AWS Developer Blog

Release: AWS Toolkit for Eclipse 2.3

We’ve just released a new version of the AWS Toolkit for Eclipse that adds support for managing your AWS Identity and Access Management (IAM) resources directly from within Eclipse, and updates the Amazon DynamoDB Create Table Wizard in the toolkit to support creating tables with Local Secondary Indexes.

Check out the new functionality and let us know what you think in the comments below!

Eclipse Deployment: Part 1 – AWS Java Web Applications

In this three-part series, we’ll show how easy it is to deploy a Java web application to AWS Elastic Beanstalk using the AWS Toolkit for Eclipse.

The first post in this series demonstrates how to create an AWS Java Web Project, and explains how that project interacts with the existing web development tools in Eclipse.

The AWS Toolkit for Eclipse builds on top of the standard Eclipse tooling for developing and deploying web applications, the Eclipse Web Tools Platform (WTP). This means you’ll be able to leverage all of the tools provided by WTP with your new AWS Java Web Project, as we’ll see later in this post.

After you’ve installed the AWS Toolkit for Eclipse, open the New AWS Java Web Project wizard.

The wizard lets you enter your project name, select an AWS account, and choose whether you want to start with a bare-bones project or a more advanced reference application. We recommend starting with the basic Java web application for your first time through. If you haven’t configured an AWS account yet, follow the link in the wizard to add one. Your account information will be used to configure your project so that your application code can make requests to AWS. Once you’ve selected an AWS account, fill out a project name and keep the default option to start with a basic Java web application.

After you’ve finished the wizard, you’ll have an AWS Java Web Project, ready for you to start building your application or to deploy right away.

One of the great things about building on top of the Eclipse Web Tools Platform is that your project can use all the great tools provided by WTP for developing and deploying Java web applications. For example, try out the Create Servlet wizard provided by WTP:

The Create Servlet wizard makes it very easy to create new servlets, and in addition to creating the class template for you, it will also update your project’s web.xml with a mapping for the new servlet.

You’ll be able to use many other tools from WTP like custom editors for JSP and XML files, and tools for building and exporting WAR files.

The coolest benefit of building on top of WTP, however, is that you can use its deployment support to deploy your AWS Java Web Projects in exactly the same way whether you’re uploading to a local Tomcat server for quick testing or to a production Elastic Beanstalk environment, as we’ll see in the next part of this series.

Let’s get our new project deployed to a local Tomcat server so we can see it running. Right-click on your project and select Run As -> Run On Server. You’ll need to configure a new Tomcat server using this wizard, then Eclipse will start the server and deploy your project. When you’re done, you should see something like this:

Stay tuned for the next part of this series, where we’ll show how to use the same tools to deploy our new application to AWS Elastic Beanstalk.

AWS at RailsConf 2013

by Trevor Rowe | in Ruby

Loren and I will be at RailsConf next week. AWS will have a booth on the exhibitor floor. If you have any questions about the AWS SDK for Ruby (or anything really), we’d love to chat. We will have swag and credits to hand out, so come stop by and say hi.

I will also be giving a talk Wednesday morning about using a model to describe your web service. The technical parts of the talk are extracted from some of the cool work we are doing at AWS. If you don’t have a chance to come by the booth, you can also catch us after the talk.

See you in Portland!

Locking Dependency Versions with Bundler

by Loren Segal | in Ruby

When writing a Ruby application or library that makes use of third-party dependencies, it is always best to keep track of those dependency versions. This is an easy thing to do with Bundler, but not everybody does it. We think you should, so here are some quick tips on how to get started.

Locking to Major Versions

Bundler works by providing a Gemfile that contains the list of third-party dependencies for your application or library. This list can specify library dependencies by a specific (or fuzzy) version, but it does not require the version field to be set.

Even though the version field is optional, we recommend always setting it to at least the major version of your dependencies. If those libraries use Semantic Versioning (SemVer), this means locking to the "1.x", "2.x", "3.x", or other major release series. You can do this in Bundler by using the fuzzy version check syntax:

gem 'some_dependency', '~> 1.0'

This locks the some_dependency gem to any version of 1.x, from 1.0 all the way up to (but not including) 2.0. Without this check, you risk breaking your application build (or downstream consumers of your library) when the dependency releases a new major version. With this simple constraint, your application or library will not automatically pick up a new major version, and the risk of your code randomly breaking due to third-party changes decreases greatly.

If you are a library developer and are not already following Semantic Versioning, we recommend that you read up on these versioning conventions and consider following them, as it makes your library much more reliable for downstream consumers. Your users will thank you.

Getting Specific

If you want to provide a more fine-grained constraint than a major version, you can do so with the same fuzzy version check syntax as above. This might be necessary when using libraries that do not follow Semantic Versioning conventions. The only syntactic difference is that you must also specify the minor version that you want to lock to.

For example, to lock to any patchlevel release in a 1.5 minor version of a library, you can provide the following constraint:

gem 'some_dependency', '~> 1.5.0'

Note the extra ".0" suffix, which tracks the dependency through all 1.5.x releases. Without the ".0" suffix, the constraint would refer to any 1.x release that is greater than or equal to 1.5.
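To make the difference concrete, here is a minimal Gemfile sketch that uses both styles of constraint (the gem names are placeholders):

# Gemfile
source 'https://rubygems.org'

# follows SemVer: accept any 1.x release
gem 'some_dependency', '~> 1.0'

# does not follow SemVer: stay within the 1.5.x patch releases
gem 'another_dependency', '~> 1.5.0'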

Notes for the AWS SDK for Ruby

The good news is that the AWS SDK for Ruby follows Semantic Versioning, which means we do not make backward-incompatible changes within a major release. To ensure that such changes will not make their way downstream to your application or library, it is best to always lock your version of the Ruby SDK to a major release. Since we are currently in our 1.x major version, you can do this with Bundler by specifying the aws-sdk dependency as follows:

gem 'aws-sdk', '~> 1.0'

This way you will not accidentally receive any backward-incompatible changes should we ever release a new major version of the Ruby SDK.

In Your Gem Specification

If you release your library and maintain a separate .gemspec file, you can (and should) use the same constraint syntax there too. You can see the RubyGems Specification Reference for more details, but in short, you simply need to list the dependency as:

spec.add_runtime_dependency 'some_dependency', '~> 1.0'

This will provide the same major version constraint that Bundler does for anybody who runs gem install yourgem.

Finishing Up

Specifying major versions for third-party dependencies in your Gemfile or .gemspec file is easy, and we should all be doing it. At the very least, providing a major version helps ensure that your library is more resistant to backward-incompatible changes coming from third-party code, and saves your downstream users from ending up with those breaking changes. As the community starts to make use of more third-party libraries in a single application or gem, it’s much more important to stay on top of dependency management and avoid these kinds of failures.

Using Custom Marshallers to Store Complex Objects in Amazon DynamoDB

by zachmu | in Java

Over the past few months, we’ve talked about using the AWS SDK for Java to store and retrieve Java objects in Amazon DynamoDB. Our first post was about the basic features of the DynamoDBMapper framework, and then we zeroed in on the behavior of auto-paginated scan. Today we’re going to spend some time talking about how to store complex types in DynamoDB. We’ll be working with the User class again, reproduced here:

@DynamoDBTable(tableName = "users")
public class User {
  
    private Integer id;
    private Set<String> friends;
    private String status;
  
    @DynamoDBHashKey
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
  
    @DynamoDBAttribute
    public Set<String> getFriends() { return friends; }
    public void setFriends(Set<String> friends) { this.friends = friends; }
  
    @DynamoDBAttribute
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

Out of the box, DynamoDBMapper works with String, Date, and any numeric type such as int, Integer, byte, Long, etc. But what do you do when your domain object contains a reference to a complex type that you want persisted into DynamoDB?

Let’s imagine that we want to store the phone number for each User in the system, and that we’re working with a PhoneNumber class to represent it. For the sake of brevity, we are assuming it’s an American phone number. Our simple PhoneNumber POJO looks like this:

public class PhoneNumber {
    private String areaCode;
    private String exchange;
    private String subscriberLineIdentifier;
    
    public String getAreaCode() { return areaCode; }    
    public void setAreaCode(String areaCode) { this.areaCode = areaCode; }
    
    public String getExchange() { return exchange; }   
    public void setExchange(String exchange) { this.exchange = exchange; }
    
    public String getSubscriberLineIdentifier() { return subscriberLineIdentifier; }    
    public void setSubscriberLineIdentifier(String subscriberLineIdentifier) { this.subscriberLineIdentifier = subscriberLineIdentifier; }      
}

If we try to store a reference to this class in our User class, DynamoDBMapper will complain because it doesn’t know how to represent the PhoneNumber class as one of DynamoDB’s basic data types.

Introducing the @DynamoDBMarshalling annotation

The DynamoDBMapper framework supports this use case by allowing you to specify how to convert your class into a String and vice versa. All you have to do is implement the DynamoDBMarshaller interface for your domain object. For a phone number, we can represent it using the standard (xxx) xxx-xxxx pattern with the following class:

public class PhoneNumberMarshaller implements DynamoDBMarshaller<PhoneNumber> {

    @Override
    public String marshall(PhoneNumber number) {
        return "(" + number.getAreaCode() + ") " + number.getExchange() + "-" + number.getSubscriberLineIdentifier();
    }

    @Override
    public PhoneNumber unmarshall(Class<PhoneNumber> clazz, String s) {
        String[] areaCodeAndNumber = s.split(" ");
        String areaCode = areaCodeAndNumber[0].substring(1,4);
        String[] exchangeAndSlid = areaCodeAndNumber[1].split("-");
        PhoneNumber number = new PhoneNumber();
        number.setAreaCode(areaCode);
        number.setExchange(exchangeAndSlid[0]);
        number.setSubscriberLineIdentifier(exchangeAndSlid[1]);
        return number;
    }    
}

Note that the DynamoDBMarshaller interface is parameterized on the domain object you’re working with, so the marshaller is strongly typed.

Now that we have a class that knows how to convert our PhoneNumber class into a String and back, we just need to tell the DynamoDBMapper framework about it. We do so with the @DynamoDBMarshalling annotation.

@DynamoDBTable(tableName = "users")
public class User {
    
    ...
    
    @DynamoDBMarshalling (marshallerClass = PhoneNumberMarshaller.class)
    public PhoneNumber getPhoneNumber() { return phoneNumber; }    
    public void setPhoneNumber(PhoneNumber phoneNumber) { this.phoneNumber = phoneNumber; }             
}
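With the annotation in place, saving and loading the object works just like it does for simple types. Here’s a quick sketch (it assumes an already-configured credentials object, as in the client examples elsewhere on this blog, and uses a made-up ID value):

// a quick sketch; "credentials" is assumed to be an already-configured AWSCredentials object
DynamoDBMapper mapper = new DynamoDBMapper(new AmazonDynamoDBClient(credentials));

PhoneNumber phoneNumber = new PhoneNumber();
phoneNumber.setAreaCode("206");
phoneNumber.setExchange("555");
phoneNumber.setSubscriberLineIdentifier("0100");

User user = new User();
user.setId(1234);                  // illustrative hash key value
user.setPhoneNumber(phoneNumber);

mapper.save(user);                 // stored in DynamoDB as the string "(206) 555-0100"

User loaded = mapper.load(User.class, 1234);
loaded.getPhoneNumber().getAreaCode();   // "206"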

Built-in support for JSON representation

The above example uses a very compact String representation of a phone number to use as little space in your DynamoDB table as possible. But if you’re not overly concerned about storage costs or space usage, you can just use the built-in JSON marshaling capability to marshal your domain object. Defining a JSON marshaller class takes just a single line of code:

class PhoneNumberJSONMarshaller extends JsonMarshaller<PhoneNumber> { }

However, the trade-off of using this built-in marshaller is that it produces a String representation that’s more verbose than you could write yourself. A phone number marshaled with this class would end up looking like this (with spaces added for clarity):

{
  "areaCode" : "xxx",
  "exchange" : "xxx",
  "subscriberLineIdentifier" : "xxxx"
}

When writing a custom marshaller, you’ll also want to consider how easy it will be to write a scan filter that can find a particular value. Our compact phone number representation will be much easier to scan for than the JSON representation.
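For example, with the compact format, finding every user in a given area code is a straightforward BEGINS_WITH scan filter on the stored string. Here’s a sketch that reuses the mapper from above (the attribute name "phoneNumber" comes from the getter name, and the area code is just an example):

// a sketch: scan for users whose stored phone number starts with "(206)"
DynamoDBScanExpression scanExpression = new DynamoDBScanExpression();
scanExpression.addFilterCondition("phoneNumber",
        new Condition()
            .withComparisonOperator(ComparisonOperator.BEGINS_WITH)
            .withAttributeValueList(new AttributeValue().withS("(206)")));

List<User> matches = mapper.scan(User.class, scanExpression);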

We’re always looking for ways to make our customers’ lives easier, so please let us know how you’re using DynamoDBMapper to store complex objects, and what marshaling patterns have worked well for you. Share your success stories or complaints in the comments!

Stubbing AWS Responses

by Trevor Rowe | in Ruby

I frequently come across questions about how to test an application that uses the AWS SDK for Ruby (aws-sdk gem). Testing an application that makes use of an external service is always tricky. One technique is to stub the client-level responses returned by the SDK.

AWS.stub!

Calling the AWS.stub! method in your Ruby process configures the client classes (e.g., AWS::EC2::Client) to stub their responses: they stop short of making a live HTTP request and instead return an empty response.

AWS.stub!

instance_ids = AWS::EC2.new.instances.map(&:id)
instance_ids #=> always empty in this example; no HTTP request is made

Under the covers, this example constructs an AWS::EC2::Client object and calls the #describe_instances method. AWS.stub! causes the client to return an empty response that looks like a normal response, with a few differences:

  • Lists are returned as empty arrays
  • Maps are returned as empty hashes
  • Numeric values are always zero
  • Dates are returned as the current time

Localized Stubbing

Calling AWS.stub! is the same as calling AWS.config(:stub_requests => true). You can use this configuration option with any constructor that accepts configuration.

stub_ec2 = AWS::EC2.new(:stub_requests => true)
real_ec2 = AWS::EC2.new

Customizing the Responses

In addition to getting empty responses, you can access the stubbed responses and populate them with fake data.

AWS.stub!

ec2 = AWS::EC2::Client.new
resp = ec2.stub_for(:describe_instances)
resp.data[:reservation_set] = [...]

# now calling ec2.describe_instances will return my fake data
ec2.describe_instances
#=> { :reservation_set => [...] } 

There are two methods you can use here:

  • #stub_for(operation_name)
  • #new_stub_for(operation_name)

The first method, #stub_for, returns the same stubbed response every time; it is the default response for that operation for that client object (not shared between instances). The second method, #new_stub_for, generates a new response each time it is called. This is useful if you need the client to return different data across multiple calls, which is common for paged responses.
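For instance, you can build two distinct stubbed responses for a paged operation and hand them to whatever test double library you use. Here’s a sketch (the fake data and the rspec-mocks line are illustrative, not part of the SDK):

AWS.stub!

ec2 = AWS::EC2::Client.new

page1 = ec2.new_stub_for(:describe_instances)
page1.data[:reservation_set] = [...]  # first page of fake data

page2 = ec2.new_stub_for(:describe_instances)
page2.data[:reservation_set] = [...]  # second page of fake data

# one way to return them in order, using rspec-mocks:
# ec2.stub(:describe_instances).and_return(page1, page2)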

Not a Mock

Unfortunately, this approach stubs responses, but it does not mock AWS services. If I use a stubbed AWS::DynamoDB::Client and call #put_item, I will not be able to get that data back later. There are a number of third-party libraries that attempt to provide local service mocks. These can be helpful when you are trying to run local tests without hitting the network.

Working with Different AWS Regions

by zachmu | in Java

Wherever you or your customers are in the world, there are AWS data centers nearby.

Each AWS region is a completely independent stack of services, isolated from the other regions. You should always host your AWS application in the region nearest your customers. For example, if your customers are in Japan, running your website from Amazon EC2 instances in the Asia Pacific (Tokyo) region ensures that they get the lowest possible latency when they connect to your site.

New in the 1.4 release of the AWS SDK for Java, the SDK knows how to look up the endpoint for a given service in a particular region. Previously, developers needed to look up these endpoints themselves and then hard-code them into their applications when creating a client, like so:

AmazonDynamoDB dynamo = new AmazonDynamoDBClient(credentials);
dynamo.setEndpoint("https://dynamodb.us-west-2.amazonaws.com");

With the 1.4 release, the SDK will look up a service’s regional endpoint automatically, so all you have to know is which region you want to use. This newer method looks like this:

AmazonDynamoDB dynamo = new AmazonDynamoDBClient(credentials);
dynamo.setRegion(Region.getRegion(Regions.US_WEST_2));

Region objects can also create and configure clients for you, acting as a simple factory. This is especially helpful when you’re working with multiple regions in your application and need to keep them straight. Just use Region objects to create every client, and it will be obvious which client points to which region.

AmazonDynamoDB dynamo = Region.getRegion(Regions.US_WEST_2)
                        .createClient(AmazonDynamoDBClient.class, credentials, clientConfig);

It’s important to note that the setRegion() method isn’t thread-safe. We recommend setting the region once, when a client object is first created, and then leaving it alone for the duration of the client’s life cycle; otherwise, the SDK’s automatic retry logic could yield unexpected behavior if setRegion() is called at the wrong time. Using Region objects as client factories encourages this pattern. If you need to talk to more than one region for a particular service, we recommend creating one service client object per region, rather than trying to share a single client.
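One simple way to follow that advice is to keep a separate client per region, for example in a map keyed by region. Here’s a sketch (the regions chosen, along with the credentials and clientConfig objects, are just placeholders):

// a sketch: one DynamoDB client per region, created via the Region factory
Map<Regions, AmazonDynamoDB> clientsByRegion = new HashMap<Regions, AmazonDynamoDB>();
for (Regions r : Arrays.asList(Regions.US_WEST_2, Regions.EU_WEST_1)) {
    clientsByRegion.put(r,
        Region.getRegion(r).createClient(AmazonDynamoDBClient.class, credentials, clientConfig));
}

// later, always fetch the client for the region you need
AmazonDynamoDB oregonDynamo = clientsByRegion.get(Regions.US_WEST_2);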

Finally, at times it may be useful to programmatically determine which regions a given service is available in. It’s possible to ask a Region object if a given service is supported there:

Region.getRegion(Regions.US_WEST_2).isServiceSupported(ServiceAbbreviations.Dynamodb);

For more information about which services are available in each region, see http://aws.amazon.com/about-aws/globalinfrastructure/regional-product-services/.

For more information about the available regions and edge locations, see http://aws.amazon.com/about-aws/globalinfrastructure/.

Logging HTTP Wire Traces

by Trevor Rowe | in Ruby

In a previous blog post, I wrote about how to log requests generated by the AWS SDK for Ruby (aws-sdk gem). While this can be a valuable tool for seeing how your code translates into requests to AWS, it doesn’t do everything. What if you think the SDK is serializing your request incorrectly? Sometimes anything short of an HTTP wire trace just isn’t enough.

That problem is easily solved:

ddb = AWS::DynamoDB.new(:http_wire_trace => true)
ddb.tables.first.name
#=> 'aws-sdk-test'

This will send the following to your configured logger:

opening connection to dynamodb.us-east-1.amazonaws.com...
opened
<- "POST / HTTP/1.1\r\nContent-Type: application/x-amz-json-1.0\r\nX-Amz-Target: DynamoDB_20111205.ListTables\r\nContent-Length: 11\r\nUser-Agent: aws-sdk-ruby/1.8.5 ruby/1.9.3 x86_64-darwin11.4.2\r\nHost: dynamodb.us-east-1.amazonaws.com\r\nX-Amz-Date: 20130315T163624Z\r\nX-Amz-Content-Sha256: 55522f708dcfebccb7bd3e8d0001a53ecaf2beca9ca801f1e9161e24215faa99\r\nAuthorization: AWS4-HMAC-SHA256 Credential=AKIAJUNH63P3WCTAYHFA/20130315/us-east-1/dynamodb/aws4_request, SignedHeaders=content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-target, Signature=8a5a5082afb33542eacd3c9429a5efaa80194b6e8390d4592da53f117d1d6d0d\r\nAccept: */*\r\n\r\n"
<- "{\"Limit\":1}"
-> "HTTP/1.1 200 OK\r\n"
-> "x-amzn-RequestId: HBEC8CIQ525JV2HE430GUFPKONVV4KQNSO5AEMVJF66Q9ASUAAJG\r\n"
-> "x-amz-crc32: 349808839\r\n"
-> "Content-Type: application/x-amz-json-1.0\r\n"
-> "Content-Length: 71\r\n"
-> "Date: Fri, 15 Mar 2013 16:36:25 GMT\r\n"
-> "\r\n"
reading 71 bytes...
-> "{\"LastEvaluatedTableName\":\"aws-sdk-test\",\"TableNames\":[\"aws-sdk-test\"]}"
read 71 bytes
Conn keep-alive

If you have not configured a logger, this output will be sent to $stdout (very helpful when you are using IRB). You can also enable HTTP wire logging globally:

AWS.config(:logger => Logger.new($stderr), :http_wire_trace => true)

Enjoy!

The AWS Toolkit for Eclipse at EclipseCon 2013

Jason and I are at EclipseCon in Boston this week to discuss what we’ve learned developing the AWS Toolkit for Eclipse over the last three years. Our session is chock full of advice for how to develop great Eclipse plug-ins, and offers a behind-the-scenes look at how we build the Toolkit. Here’s what we plan to cover:

Learn best practices for Eclipse plug-in development that took us years to figure out!

The AWS Toolkit for Eclipse brings the AWS cloud to the Eclipse workbench, allowing developers to develop, debug, and deploy Java applications on the AWS platform. For three years, we’ve worked to integrate AWS services into your Eclipse development workflow. We started with a small seed of functionality for managing EC2 instances, and today support nine services and counting. We learned a lot on the way, and we’d like to share!

The Toolkit touches a wide array of Eclipse technologies and frameworks, from the Web Tools Platform to the Common Navigator Framework. By now we’ve explored so much of the Eclipse platform that we’ve started to become embarrassed by the parts of the Toolkit that we wrote first. If only someone had told us the right way to do things in the first place! Instead, we had to learn the hard way how to make our code robust, our user interfaces reliable and operating-system independent (not to mention pretty).

We’re here to teach from our experience, to share all the things we wish someone had told us before we learned them the hard way. These are the pointers that will save you hours of frustration and help you deliver a better product to your customers. They’re the tips we would send back in time to our younger selves. We’ll show you how we used them to make the Toolkit better and how to incorporate them into your own product.

Topics include getting the most out of SWT layouts, using data binding to give great visual feedback in wizards, managing releases and updates, design patterns for resource sharing, and much more.

If you are attending the conference, come by to say hello and get all your questions about the Toolkit answered! We are also handing out $100 AWS credits to help you get started using AWS services without a financial commitment, so come talk to us and we’ll hook you up.

Eclipse: New AWS Java Project Wizard

If you’re just getting started with the AWS SDK for Java, a great way to learn the SDK is through the AWS Toolkit for Eclipse. In addition to all the tools in the AWS Toolkit for Eclipse for managing your AWS resources, deploying your applications, etc., there are also wizards for creating new AWS projects, including sample code to help get you started.

With the New AWS Java Project wizard, you can create a new Eclipse Java project, already configured with:

  • the AWS SDK for Java – including dependencies, full documentation, and source attachment
  • your AWS security credentials – managed through Eclipse’s preferences
  • optional sample code demonstrating how to work with a variety of different AWS services

First, make sure that you have the latest plug-ins for the AWS Toolkit for Eclipse installed, available through the Eclipse Marketplace or directly from our Eclipse update site at http://aws.amazon.com/eclipse.

Once you have the Eclipse tools installed, open the New AWS Java Project wizard, either through the context menu in Package Explorer, or through the File -> New menu.

The New AWS Java Project wizard lets you pick the name for your project, your AWS security credentials, and any sample code that you want to start from. If you don’t have your AWS security credentials configured in Eclipse yet, the link in the wizard takes you directly to the Eclipse preferences where you can manage your AWS accounts.

Once you’ve completed the wizard, your project is all set up with the AWS SDK for Java, and you’re ready to begin coding against the AWS APIs. If you’ve configured your AWS security credentials, and selected any AWS samples to add to your application, you can immediately run the samples and begin experimenting with the APIs.

The Toolkit includes other new project wizards, too. A few months ago, we showed how to use the New AWS Android Project wizard. We plan on demonstrating the New AWS Java Web Project wizard soon.

What functionality in the AWS Toolkit for Eclipse do you find to be the most useful? Let us know in the comments below.

Are you passionate about open source, Java, and cloud computing? Want to build tools that AWS customers use on a daily basis? Come join the AWS Java SDK and Tools team! We’re hiring!