Tag: Eclipse


Two New Amazon RDS Database Engines in Eclipse

We’re excited to announce support for two more Amazon RDS database engines in the AWS Toolkit for Eclipse. You can now configure connections to PostgreSQL and Microsoft SQL Server RDS database instances directly from within Eclipse by opening the AWS Explorer view and double-clicking on your RDS database instance.

The first time you select your RDS database instance, you’ll be asked for some basic information about connecting to it, such as your database password, the JDBC driver to use, and whether you want Eclipse to automatically open permissions in your security group to allow database connections.

Once you’ve configured a connection to your database, you can use all the tools from the Eclipse Data Tools Platform. You can browse your schemas, export data, run queries in SQL Scrapbook, and more.
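Under the covers, these connections use plain JDBC, so you can also connect from your own code using the same driver and connection information. Here’s a minimal sketch using the PostgreSQL JDBC driver; the endpoint, database name, user, and password are placeholders for your own instance’s values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RdsJdbcExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and credentials; copy the real values from
        // your instance's detail page in the AWS Explorer.
        String url = "jdbc:postgresql://mydb.abc123xyz.us-east-1.rds.amazonaws.com:5432/mydatabase";
        Connection connection = DriverManager.getConnection(url, "masterUser", "myPassword");

        Statement statement = connection.createStatement();
        ResultSet results = statement.executeQuery("SELECT version()");
        while (results.next()) {
            System.out.println(results.getString(1));
        }
        connection.close();
    }
}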

If you don’t have any Amazon RDS database instances yet, you can go to the Amazon RDS console and launch a new database instance. With just a few clicks, you can launch a fully managed MySQL, Oracle, PostgreSQL, or Microsoft SQL Server database.

Are you using any of the database tools in Eclipse to work with your RDS databases?

DynamoDB Local Test Tool Integration for Eclipse

We’re excited to announce that the AWS Toolkit for Eclipse now includes integration with the Amazon DynamoDB Local Test Tool. The DynamoDB Local Test Tool allows you to develop and test your application against a DynamoDB-compatible database running locally — no Internet connectivity or credit card required. When your application is ready for prime time, all you need to do is update the endpoint given to your AmazonDynamoDBClient. Neato!

With the DynamoDB Local Test Tool integrated into the AWS Toolkit for Eclipse, using it is easier than ever. Make sure you have a recent version of the Amazon DynamoDB Management plugin (v201311261154 or later) installed and follow along below!

Installing DynamoDB Local

First, head to the Eclipse preferences and make sure you have a JavaSE-1.7 compatible JRE installed. If not, you’ll need to install one and configure Eclipse to know where it is.

Eclipse Execution Environments Preference Page

Then, head to the new DynamoDB Local Test Tool preference page, where you can specify a directory to install the DynamoDB Local Test Tool and a default TCP port for it to bind to.

DynamoDB Local Test Tool Preference Page

The page also lists versions of the DynamoDB Local Test Tool available for installation. There are currently two: the original version (2013-09-12) and a newer version (2013-12-12) that includes support for Global Secondary Indexes. When the DynamoDB team releases future versions of the test tool, they will also show up in this list. Select the latest version and hit the Install button above the list of versions; the DynamoDB Local Test Tool will be downloaded and installed in the directory you specified.

Starting DynamoDB Local

Once the test tool is installed, pop open the AWS Explorer view and switch it to the Local (localhost) region. This pseudo-region represents test tool services running locally on your machine.

Selecting the Local (localhost) Region

For now, you’ll see a single Amazon DynamoDB node representing the DynamoDB Local Test Tool. Right-click this node and select Start DynamoDB Local.

Starting DynamoDB Local

This will bring up a wizard that lets you pick which version of the DynamoDB Local Test Tool to launch and the port it should bind to. Pick the version you just installed, give it a port (if you didn’t specify a default earlier), and hit Finish. A console window will open and print a few lines like the following once the DynamoDB Local Test Tool finishes initializing:

DynamoDB Local Console Output

Using DynamoDB Local

You can now use the DynamoDB Management features of the toolkit to create tables in your local DynamoDB instance, load some data into them, and perform test queries against your tables.

The DynamoDB Table Editor

To write code against DynamoDB Local, simply set your client’s endpoint and region appropriately:

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;

// The secret key doesn't need to be valid; DynamoDB Local doesn't care.
AWSCredentials credentials = new BasicAWSCredentials(yourAccessKeyId, "bogus");
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);

// Make sure you use the same port as you configured DynamoDB Local to bind to.
client.setEndpoint("http://localhost:8000");

// Sign requests for the "local" region to read data written by the toolkit.
client.setSignerRegionOverride("local");

And away you go! As mentioned above, DynamoDB Local doesn’t care if your credentials are valid, but it DOES create separate local databases for each unique access key ID sent to it, and for each region you say you’re authenticating to. If you have configured the Toolkit with a real set of AWS credentials, you’ll want to use the same access key ID when programmatically interacting with DynamoDB Local so that you read from and write to the same local database. Since the Toolkit uses the "local" region to authenticate to DynamoDB Local, you’ll also want to override your client to authenticate to the "local" region, as shown above.
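To see it in action end to end, here’s a minimal sketch, continuing from the client configured above, that creates a table in your local database and writes an item to it. The table and attribute names are just examples:

import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.model.*;

// Create a table in the local database. Table and attribute names here
// are illustrative.
client.createTable(new CreateTableRequest()
    .withTableName("TestTable")
    .withKeySchema(new KeySchemaElement("Id", KeyType.HASH))
    .withAttributeDefinitions(new AttributeDefinition("Id", ScalarAttributeType.S))
    .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L)));

// Write an item; it will show up in the toolkit's table editor, too.
Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
item.put("Id", new AttributeValue("item-1"));
item.put("Message", new AttributeValue("Hello, DynamoDB Local!"));
client.putItem(new PutItemRequest("TestTable", item));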

Conclusion

DynamoDB Local is a great way to play around with the DynamoDB API locally while you’re first learning how to use it, and it’s also a great way to integration-test your code even if you’re working without a reliable Internet connection. Now that the AWS Toolkit for Eclipse makes it easy to install and use, you should definitely check it out!

Already using DynamoDB Local or the new Eclipse integration? Let us know how you like it and how we can make it even better in the comments!

Eclipse Support for AWS Elastic Beanstalk Worker Environment Tiers

A web application is typically concerned with quickly sending responses to incoming queries from its users. This model works really well for things like rendering a web page based on a couple of database queries, or validating some user input and storing it to a database. The user makes a request to the application to perform some work, the user’s browser waits while the application does the work, and then the application returns a response, which the browser renders to show the result of the action.

In some cases, however, servicing a user request requires some serious computation, such as compiling a month-end report based on multiple large scans over the database. The typical web application model gives a poor user experience in this case—the user’s browser simply “spins” with no feedback until the response is completely ready. In the meantime, the job is taking up resources on your front-end web servers, potentially leading to degraded experience for other users if their requests happen to land on the host running the complicated request.

The Solution: Worker Environment Tiers for Elastic Beanstalk

The solution to these problems is to offload the expensive computations to a back-end application tier asynchronously. The user receives a response from the front-end tier as soon as the work item is queued, indicating that the work is in progress, and can check back later to see the progress of the request and view the final result once it becomes available. And since the work is being done by a separate set of back-end hosts that are not serving normal user requests, there’s no chance that running this complicated job will negatively impact other customers.

Recently, the fine folks who make AWS Elastic Beanstalk introduced an exciting new feature called worker environment tiers that makes it super easy to implement and deploy these kinds of asynchronous background-processing workers within your application. A worker environment tier accepts work requests from your front-end web environment tier via an Amazon SQS queue. All you have to do is write the business logic to be run when a work request is received, deploy it to an Elastic Beanstalk worker environment tier, then start writing asynchronous work requests to the environment’s queue. You can read some more in-depth information about worker environment tiers here.

The AWS Toolkit for Eclipse now includes support for Elastic Beanstalk worker environment tiers, so you can stand up and start deploying code to a worker environment tier with just a few quick clicks. This post will walk you through the process of setting up a new worker environment tier and deploying code to it, all without leaving the comfort of your favorite IDE!

Create a new worker tier project

The easiest way to get started with worker tiers is to create a new AWS Java Web Project. You can find this new project type under the AWS folder in the New Project dialog; it’s the same application type you may have used in the past to create a web application for Elastic Beanstalk. On the New AWS Java Web Project dialog, select the option to start from a Basic Amazon Elastic Beanstalk Worker Tier Application. This will create a basic Java EE Web Application that accepts work requests via an HTTP POST request to a servlet.

  • From the main menu bar, select File -> New -> Project …. Choose the AWS Java Web Project wizard.

The Eclipse New Project Wizard


  • Give your project a name and select the Basic Amazon Elastic Beanstalk Worker Tier Application option.

The New AWS Java Web Project Wizard


As with the Basic Java Web Application template, the build path for the newly-created project is preconfigured with the latest version of the AWS SDK for Java, so you’re already set up to interact with AWS services like Amazon S3 and Amazon DynamoDB as part of handling a work request. For example, you may want to have your worker write the result of its processing to Amazon S3 for later retrieval by the front-end tier, or use Amazon DynamoDB to track the progress of any work items that are currently being processed.

The example application parses a simple work request from the POST body, simulates doing some complicated work by sleeping for 10 seconds, and then writes the “result” of its work to Amazon S3. Take a quick glance at the code to see how it works.
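If you’re curious what the shape of that code looks like, here’s a simplified sketch along the same lines; the bucket name and result key are placeholders, and the JSON parsing is omitted:

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class WorkRequestServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Read the work request from the POST body. A real handler would
        // parse structured JSON; we just read the raw text here.
        BufferedReader reader = request.getReader();
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            body.append(line);
        }

        // Simulate an expensive computation.
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        // Write the "result" to Amazon S3; bucket and key are placeholders.
        AmazonS3 s3 = new AmazonS3Client();
        s3.putObject("my-results-bucket", "results/" + System.currentTimeMillis(),
                new ByteArrayInputStream(body.toString().getBytes("UTF-8")),
                new ObjectMetadata());

        // A 200 response tells the worker daemon the message was processed
        // successfully and can be deleted from the queue.
        response.setStatus(HttpServletResponse.SC_OK);
    }
}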

Create a new Elastic Beanstalk worker tier server

Next, we’ll create a new environment through the Eclipse Web Tools Platform’s Servers view. On the first page of the New Server wizard, select the AWS Elastic Beanstalk for Tomcat 7 server type from within the Amazon Web Services folder. After clicking Next, select the AWS region where your application will be hosted, choose Worker Environment from the Environment Type drop-down, and give your new application and environment names.

  • Right-click on the Servers view and select New -> Server.


  • Choose the AWS Elastic Beanstalk for Tomcat 7 option.


  • Choose a region, application name, environment name, and environment type.


On the next page of the wizard, you can modify other optional settings for the environment. If you have an existing SQS queue you would like your worker environment to read from, you can configure that here; otherwise, a new SQS queue will be automatically created for you.

You can also choose to associate an IAM role with your environment on this page, which is an easy way to give your application permission to access AWS resources such as S3. By default, a new role will be created for you that grants your environment permission to access your SQS queues and publish metrics into Amazon CloudWatch. Your environment won’t be able to function correctly if it doesn’t have these permissions, so if you pick a custom role here, make sure it includes those permissions as well.

  • Configure advanced settings for the environment.


On the final page, select your worker project to be deployed to the environment.


Now that everything is configured, right-click your new environment in the Servers view and start it. This process may take several minutes as Elastic Beanstalk provisions EC2 instances and deploys your application to them. Once the process has completed and your application is displayed as “Started”, your workers are ready to go!

  • Start your environment.


By default, your environment starts out with a single worker process running on a t1.micro EC2 instance, and will auto-scale up to as many as four instances if CPU usage on the workers is high. Double-clicking the environment in the Servers view will open up a configuration page where you can tweak these settings, along with many more.

Testing out your new worker

Unlike a traditional web application, your worker environment cannot be invoked directly from within your web browser to debug it. Instead, you need to send work requests to your environment via the SQS queue it is subscribed to. You can do this programmatically (as you will ultimately do from your front-end application):


AmazonSQS sqs = ...;

String workRequest =
    "{" +
    "  \"bucket\": \"my-results-bucket\"," +
    "  \"key\": \"my-work-item-key\"," +
    "  \"message\": \"Hello, World\"" +
    "}";

sqs.sendMessage(new SendMessageRequest()
    .withQueueUrl(MY_QUEUE_URL)
    .withMessageBody(workRequest));


For testing things out, you can also easily send messages to your application by clicking on the Queue URL in the Environment Resources tab of the server editor (available by double-clicking on the newly-created server in the Servers view). This will bring up a dialog allowing you to quickly send work requests to your environment via the SQS queue in order to test out how it handles them.


Conclusion

Now you’re all set up to use an Elastic Beanstalk worker tier in your application. Just fill in the appropriate code to handle the different kinds of asynchronous work requests your application requires, and with a couple of mouse clicks your updated code can be deployed out to a fleet of back-end workers running in the cloud. Nifty! Are you deploying code to Elastic Beanstalk (either worker tiers or traditional web server tiers) from within Eclipse? Let us know how it’s working in the comments below!

AWS re:Invent 2013 Wrap-up

We’re back in Seattle after spending last week in Las Vegas at AWS re:Invent 2013! It was great to meet so many Java developers building applications on AWS. We heard lots of excellent feature requests for all the different tools and projects our team works on, and we’re excited to get started building them!

The slides from my session on the SDK and Eclipse Toolkit are online, and we’ll let you know as soon as the videos from the sessions start appearing online, too.

I’ve also uploaded the latest code for the AWS Meme Generator to GitHub. I used this simple web application in my session to demonstrate a few features in the AWS SDK for Java and the AWS Toolkit for Eclipse. Check out the project on GitHub and try it out yourself!

If you didn’t make it to AWS re:Invent 2013, or if you were there, but just didn’t get a chance to stop by the AWS SDKs and Tools booth, let us know in the comments below what kinds of features you’d like to see in tools like the AWS SDK for Java and the AWS Toolkit for Eclipse.

High-Level APIs in the AWS SDK for Java

Today, at AWS re:Invent 2013, I’m talking about some of the high-level APIs for Amazon S3 and Amazon DynamoDB, but there are a whole lot more high-level APIs in the SDK that I won’t have time to demo. These high-level APIs are all aimed at specific common tasks that developers face, and each one can save you development time. To help you find all these high-level APIs, we’ve put together the list below. As an added bonus, I’ve thrown in some extra links to some of the more powerful features in the AWS Toolkit for Eclipse.

Take a few minutes to explore the SDK and Eclipse Toolkit features below. Are you already using any of these high-level APIs? What’s your favorite? Let us know in the comments below!

Amazon S3 TransferManager

TransferManager is an easy and efficient way to manage data transfers in and out of Amazon S3. The API is easy to use, provides asynchronous management of your transfers, and has several throughput optimizations.
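As a quick taste, here’s a minimal sketch of an upload; the bucket, key, and file names are placeholders, and the credentials come from the default provider chain:

import java.io.File;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class TransferManagerExample {
    public static void main(String[] args) throws Exception {
        TransferManager tx = new TransferManager(new DefaultAWSCredentialsProviderChain());

        // The upload starts immediately and runs asynchronously; large
        // files are automatically sent as parallel multipart uploads.
        Upload upload = tx.upload("my-bucket", "my-key", new File("data.bin"));

        upload.waitForCompletion();
        tx.shutdownNow();
    }
}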

Amazon S3 Encryption Client

This drop-in replacement for the standard Amazon S3 client gives you control over client-side encryption of your data. The encryption client is easy to use, but also has advanced features like hooks for integrating with existing key management systems.
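To give you a feel for the API, here’s a minimal sketch using a symmetric master key; in a real application you’d load the key from your key management system rather than generating a throwaway one, and the bucket, key, and file names are placeholders:

import java.io.File;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.EncryptionMaterials;

public class EncryptionClientExample {
    public static void main(String[] args) throws Exception {
        // Throwaway AES master key for illustration only; persist and manage
        // your real key, or you won't be able to decrypt your data later.
        SecretKey masterKey = KeyGenerator.getInstance("AES").generateKey();

        AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
                new DefaultAWSCredentialsProviderChain().getCredentials(),
                new EncryptionMaterials(masterKey));

        // The object is encrypted client-side before it leaves your machine.
        s3.putObject("my-bucket", "my-key", new File("secret-data.bin"));
    }
}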

Amazon DynamoDB Mapper

The DynamoDB Mapper handles marshaling your POJOs into and out of Amazon DynamoDB tables. Just apply a few annotations to your POJOs, and they’re ready to use with the mapper. The mapper also has support for running scans and queries on your data and for batching requests.

S3Link

This new type in the SDK allows you to easily store pointers to data in Amazon S3 inside your POJOs that you’re using with the DynamoDB Mapper. It also makes it easy to perform common operations on the referenced data in Amazon S3, such as replacing the contents, downloading them, or changing access permissions.
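Here’s a rough sketch of how that looks; the table, bucket, and key names are made up for illustration, and the mapper is assumed to be constructed with a credentials provider so it can talk to Amazon S3 on your behalf:

import java.io.File;
import com.amazonaws.services.dynamodbv2.datamodeling.*;

@DynamoDBTable(tableName = "Documents")
public class Document {
    private String id;
    private S3Link attachment;

    @DynamoDBHashKey(attributeName = "Id")
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    // Stored in DynamoDB as a pointer to the S3 object.
    public S3Link getAttachment() { return attachment; }
    public void setAttachment(S3Link attachment) { this.attachment = attachment; }
}

// Elsewhere: create the link, upload the data, and save the pointer.
DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient, credentialsProvider);
Document doc = new Document();
doc.setId("doc-1");
doc.setAttachment(mapper.createS3Link("my-attachments-bucket", "doc-1.pdf"));
doc.getAttachment().uploadFrom(new File("doc-1.pdf"));
mapper.save(doc);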

Amazon DynamoDB Tables Utility

This class provides common utilities for working with Amazon DynamoDB tables, such as checking if a table exists, and waiting for a new table to transition into an available state.
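For example, assuming dynamoDBClient is an AmazonDynamoDB client you’ve already constructed, and using an illustrative table name:

import com.amazonaws.services.dynamodbv2.util.Tables;

if (!Tables.doesTableExist(dynamoDBClient, "MusicCollection")) {
    // ... create the table here ...

    // Blocks until the new table transitions into the ACTIVE state.
    Tables.waitForTableToBecomeActive(dynamoDBClient, "MusicCollection");
}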

AWS Flow Framework

AWS Flow is an open-source framework that makes it faster and easier to build apps with Amazon Simple Workflow. The framework handles the interaction with Amazon SWF and keeps your application code simple.

Amazon SES JavaMail Provider

The SDK provides an easy to use JavaMail transport implementation that sends email through the Amazon Simple Email Service.
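A minimal sketch of sending a message through that transport follows; the addresses are placeholders, and the access key and secret key properties are assumed to hold your own credentials:

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import com.amazonaws.services.simpleemail.AWSJavaMailTransport;

public class SesJavaMailExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("mail.transport.protocol", "aws");
        props.setProperty("mail.aws.user", "your-access-key-id");
        props.setProperty("mail.aws.password", "your-secret-key");

        Session session = Session.getInstance(props);
        Message message = new MimeMessage(session);
        message.setFrom(new InternetAddress("sender@example.com"));
        message.addRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
        message.setSubject("Hello from Amazon SES");
        message.setText("This message was sent through the Amazon SES JavaMail provider.");
        message.saveChanges();

        // The AWS transport sends the message via Amazon SES instead of SMTP.
        Transport transport = new AWSJavaMailTransport(session, null);
        transport.connect();
        transport.sendMessage(message, null);
        transport.close();
    }
}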

Amazon SQS Batched Client

This extension of the basic Amazon SQS client provides client-side batching when sending and deleting messages with your Amazon SQS queues. Batching can help reduce the number of round-trip queue requests your application makes and can therefore save you money.
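For instance, a minimal sketch (the queue URL is a placeholder):

import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClient;
import com.amazonaws.services.sqs.buffered.AmazonSQSBufferedAsyncClient;
import com.amazonaws.services.sqs.model.SendMessageRequest;

// Wraps the regular async client; it exposes the same interface, but
// outgoing sends and deletes are transparently collected into batches.
AmazonSQSAsync sqs = new AmazonSQSBufferedAsyncClient(new AmazonSQSAsyncClient());

String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";
sqs.sendMessage(new SendMessageRequest(queueUrl, "work item 1"));
sqs.sendMessage(new SendMessageRequest(queueUrl, "work item 2"));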

Amazon SNS Topics Utility

This class provides common utilities for working with Amazon SNS topics, such as subscribing an Amazon SQS queue to an SNS topic to receive published messages.
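That particular task takes one call; in this sketch, the topic ARN and queue URL are placeholders:

import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.util.Topics;
import com.amazonaws.services.sqs.AmazonSQSClient;

AmazonSNSClient sns = new AmazonSNSClient();
AmazonSQSClient sqs = new AmazonSQSClient();

// Creates the subscription and applies the queue policy that lets the
// topic deliver messages to the queue.
Topics.subscribeQueue(sns, sqs,
        "arn:aws:sns:us-east-1:123456789012:my-topic",
        "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue");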

AWS Policy API

Writing JSON policies by hand can be difficult to maintain, but the Policy API in the AWS SDK for Java gives you an easy way to programmatically create JSON policies for AWS services.
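For example, here’s a short sketch that builds a policy allowing anyone to send messages to a queue; the queue ARN is a placeholder:

import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Principal;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.SQSActions;

Policy policy = new Policy().withStatements(
        new Statement(Effect.Allow)
                .withPrincipals(Principal.AllUsers)
                .withActions(SQSActions.SendMessage)
                .withResources(new Resource("arn:aws:sqs:us-east-1:123456789012:my-queue")));

// Render the policy as JSON to pass to whichever service API needs it.
String json = policy.toJson();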

Amazon Glacier ArchiveTransferManager

Glacier’s ArchiveTransferManager makes it easy to get data into and out of Amazon Glacier.
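A minimal upload sketch, assuming credentials holds your AWSCredentials and using placeholder vault and file names:

import java.io.File;
import com.amazonaws.services.glacier.AmazonGlacierClient;
import com.amazonaws.services.glacier.transfer.ArchiveTransferManager;
import com.amazonaws.services.glacier.transfer.UploadResult;

AmazonGlacierClient glacier = new AmazonGlacierClient(credentials);
ArchiveTransferManager atm = new ArchiveTransferManager(glacier, credentials);

// Large archives are automatically split into multipart uploads.
UploadResult result = atm.upload("my-vault", "nightly backup", new File("backup.zip"));
System.out.println("Archive ID: " + result.getArchiveId());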

AWS Toolkit for Eclipse

Android Application Development Support

Developing Android applications that use AWS has never been easier. With the AWS Toolkit for Eclipse, you can create new AWS Android projects that have your security credentials configured, Android libraries present, AWS SDK for Android on your build path, and some sample source code to start from.

CloudFormation Support

Lots of new features in the Eclipse Toolkit make working with AWS CloudFormation easy. You can update your CloudFormation stacks directly from Eclipse and use a custom editor to make working with CloudFormation templates easy.

AWS Elastic Beanstalk Deployment

One of the most powerful features of the Eclipse Toolkit is being able to quickly deploy your Java web applications to AWS Elastic Beanstalk directly from within Eclipse. This three-part blog series demonstrates how to get started with AWS Java web projects in Eclipse, how to deploy them to AWS Elastic Beanstalk, and how to manage your applications running in AWS Elastic Beanstalk.

AWS re:Invent 2013

We’re all getting very excited about AWS re:Invent 2013. In just over a month, we’ll be down in Las Vegas talking to developers and customers from all over the world.

There’s a huge amount of great technical content this year, and attendees will be taking home lots of knowledge on the latest and greatest features of the AWS platform, and learning best practices for building bigger, more robust applications faster. Our team will be giving a few presentations, including TLS301 – Accelerate Your Java Development on AWS.

I hope we’ll get to meet you at the conference this year. If you weren’t able to make it last year, you can find lots of great videos of the sessions online. One of my favorites is Andy Jassy’s re:Invent Day 1 Keynote. Some of you might remember Zach Musgrave’s session last year on Developing, Deploying, and Debugging AWS Applications with Eclipse, and a few of you might have been there for my session on Being Productive with the AWS SDK for Java.

See you in Las Vegas!

Amazon DynamoDB Session Manager for Apache Tomcat

Today we’re excited to talk about a brand new open source project on our GitHub page for managing Apache Tomcat sessions in Amazon DynamoDB!

DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.

Using the DynamoDB Session Manager for Tomcat is easy. Just drop the library in the lib directory of your Tomcat installation and tell Tomcat you’re using a custom session manager in your context.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <Manager className="com.amazonaws.services.dynamodb.sessionmanager.DynamoDBSessionManager"
             awsAccessKey="myAccessKey"
             awsSecretKey="mySecretKey"
             createIfNotExist="true" />
</Context>

The context.xml file above configures the session manager to store your sessions in DynamoDB, and uses the provided AWS security credentials to access DynamoDB. There are several other configuration options available, including many ways to provide your security credentials:

  • you can explicitly specify them (as shown above)
  • you can specify a properties file to load them from
  • you can rely on the DefaultAWSCredentialsProviderChain to load your credentials from environment variables, Java system properties, or IAM roles for Amazon EC2 instances

If you’re using the AWS Toolkit for Eclipse and deploying your application through AWS Elastic Beanstalk, then all you have to do is opt-in to using the DynamoDB Session Manager for Tomcat in the New AWS Java Web Project Wizard. Then when you deploy to AWS Elastic Beanstalk, all your sessions will be managed in DynamoDB.

For more details on using the session manager, check out the Session Manager section in the AWS SDK for Java Developer Guide. Or, if you really want to get into the details, check out the project on GitHub.

We’re excited to have the first version of the Amazon DynamoDB Session Manager for Apache Tomcat out there for customers to play with. What features do you want to see next? Let us know in the comments below!

Quick Tips: Managing Amazon S3 Data in Eclipse

No matter what type of application you’re developing, it’s a safe bet that it probably needs to save or load data from a central data store, such as Amazon S3. During development, you can take advantage of the Amazon S3 management tools provided by the AWS Toolkit for Eclipse, all without ever leaving your IDE.

To start, find your Amazon S3 buckets in the AWS Explorer view.

From the AWS Explorer view, you can create and delete buckets, or double-click on one of your buckets to open it in the Bucket Editor.

Once you’re in the Bucket Editor, you can delete objects in your bucket, edit the permissions for objects or the bucket itself, and generate pre-signed URLs that you can safely pass around to give other people access to the data stored in your account without ever having to give away your AWS security credentials.
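If you need to generate the same kind of pre-signed URL from code, the SDK’s AmazonS3Client can do it in a couple of lines; the bucket and key names here are placeholders:

import java.net.URL;
import java.util.Date;
import com.amazonaws.services.s3.AmazonS3Client;

AmazonS3Client s3 = new AmazonS3Client();

// The URL is valid for one hour; anyone holding it can fetch the object
// without needing your AWS credentials.
Date expiration = new Date(System.currentTimeMillis() + 60 * 60 * 1000);
URL url = s3.generatePresignedUrl("my-bucket", "my-key", expiration);
System.out.println(url);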


One of the most useful features is the ability to drag and drop files into your Amazon S3 buckets directly from your OS. In the following screenshot, I’ve selected a file in the Mac Finder and dragged it into a virtual folder in the object listing in the Bucket Editor. To download one of your objects from Amazon S3, just drag it to a directory in a view such as Eclipse’s Package Explorer.

The AWS Toolkit for Eclipse has many features that facilitate development and deployment of AWS applications. For more information, check out some of our other Eclipse blog posts.

The DynamoDBMapper, Local Secondary Indexes, and You!

Earlier this year, Amazon DynamoDB released support for local secondary indexes. At that time, the AWS SDK for Java added support for LSIs, for both the low-level (AmazonDynamoDBClient) and high-level (DynamoDBMapper) APIs in the com.amazonaws.services.dynamodbv2 package. Since then, I have seen a few questions on how to use the DynamoDBMapper with local secondary indexes. In this post, I will build on the Music Collection sample that is included in the Amazon DynamoDB documentation.

The example table uses a String hash key (Artist), a String range key (SongTitle), and a local secondary index on the AlbumTitle attribute (also a String). I created the table used in this example with the DynamoDB support that is part of the AWS Toolkit for Eclipse, but you could use the code included in the documentation or the AWS Management Console. I also used the Eclipse Toolkit to populate the table with some sample data. Next, I created a POJO to represent an item in the MusicCollection table. The code for MusicCollectionItem is shown below.

@DynamoDBTable(tableName="MusicCollection")
public class MusicCollectionItem {

    private String artist;
    private String songTitle;
    private String albumTitle;
    private String genre;
    private String year;

    @DynamoDBHashKey(attributeName="Artist")
    public String getArtist() { return artist; }
    public void setArtist(String artist) { this.artist = artist; }

    @DynamoDBRangeKey(attributeName = "SongTitle")
    public String getSongTitle() { return songTitle; }
    public void setSongTitle(String songTitle) { this.songTitle = songTitle; }

    @DynamoDBIndexRangeKey(attributeName="AlbumTitle", 
                           localSecondaryIndexName="AlbumTitleIndex")
    public String getAlbumTitle() { return albumTitle; }
    public void setAlbumTitle(String albumTitle) { this.albumTitle = albumTitle; }

    @DynamoDBAttribute(attributeName="Genre")
    public String getGenre() { return genre; }
    public void setGenre(String genre) { this.genre = genre; }

    @DynamoDBAttribute(attributeName="Year")
    public String getYear() { return year;}
    public void setYear(String year) { this.year = year; }
}

As you can see, MusicCollectionItem has the hash key and range key annotations, but also a new annotation, DynamoDBIndexRangeKey. You can find the documentation for that annotation here. The DynamoDBIndexRangeKey marks the property as an alternate range key to be used in a local secondary index. Since Amazon DynamoDB can support up to five local secondary indexes, I can also have up to five attributes annotated with DynamoDBIndexRangeKey. Also note in the code above that, since the documentation sample uses PascalCase, I needed to include attributeName="X" in each of the annotations. If you were starting from scratch, you could make this code simpler by using attribute names that match your instance variable names.

So now that you have both a table and a corresponding POJO using a local secondary index, how do you use it with the DynamoDBMapper? Using a local secondary index with the mapper is pretty straightforward. You create the mapper the same way as before:

dynamoDB = Region.getRegion(Regions.US_WEST_2)
           .createClient(AmazonDynamoDBClient.class, new ClasspathPropertiesFileCredentialsProvider(), null);
mapper = new DynamoDBMapper(dynamoDB);

Next, you can query the range key in the same manner as you would a table without a local secondary index:

String artist = "The Okee Dokee Brothers";
MusicCollectionItem musicKey = new MusicCollectionItem();
musicKey.setArtist(artist);
DynamoDBQueryExpression<MusicCollectionItem> queryExpression = new DynamoDBQueryExpression<MusicCollectionItem>()
      .withHashKeyValues(musicKey);
List<MusicCollectionItem> myCollection = mapper.query(MusicCollectionItem.class, queryExpression);

This code looks up my kids’ new favorite artist and returns all the song titles that are in my Amazon DynamoDB table. I could add a Condition to limit the song titles returned, but I wanted a list of all of them.

But what if I want to know which songs are on The Okee Dokee Brothers’ latest album, Can You Canoe? Luckily, I have a local secondary index on the AlbumTitle attribute. Before local secondary indexes, I could only do a Scan operation, which would have scanned the entire table, but with local secondary indexes I can easily do a Query operation instead. The code for using the index is:

Condition rangeKeyCondition = new Condition();
rangeKeyCondition.withComparisonOperator(ComparisonOperator.EQ)
     .withAttributeValueList(new AttributeValue().withS("Can You Canoe?"));
queryExpression = new DynamoDBQueryExpression<MusicCollectionItem>()
     .withHashKeyValues(musicKey)
     .withRangeKeyCondition("AlbumTitle", rangeKeyCondition);
myCollection = mapper.query(MusicCollectionItem.class, queryExpression);

As you can see, doing a query on a local secondary index with the DynamoDBMapper is exactly the same as doing a range key query.

Now that I have shown how easy it is to use a local secondary index with the DynamoDBMapper, how will you use them? Let us know in the comments!

Eclipse Deployment: Part 3 – Configuring AWS Elastic Beanstalk

Now that you know the basics about creating AWS Java web applications and deploying them using the AWS Toolkit for Eclipse, let’s talk about some of the ways you can control how your environment runs.

AWS Elastic Beanstalk provides several easy ways to configure different features of your environment. The first mechanism we’ll look at for controlling how your environment runs is your environment’s configuration. These are properties set through the Elastic Beanstalk API that let you control different operational parameters of your environment, such as load balancer behavior and auto scaling strategies. The second mechanism we’ll look at is Elastic Beanstalk extension config files that are included as files in your deployed application. These configuration files allow you to customize additional software installed on your EC2 instances, as well as create and configure AWS resources that your application requires.

We’ll start off by covering some of the most common options, which are presented in the second page of the wizard when you create a new Elastic Beanstalk environment through Eclipse.

Shell Access

If you want to be able to remotely log into a shell on the EC2 instances running your application, then you’ll need to make sure you launch your environment with an Amazon EC2 key pair. The EC2 key pair can be created and managed through Eclipse or any of the other AWS tools, and allows you to securely log into any EC2 instances launched with that key pair. To connect to an instance from Eclipse, find your instance in the EC2 Instances view, right-click to bring up the context menu and select Open Shell. If Eclipse knows the private key for that instance’s key pair, then you’ll see a command prompt open up.

CNAMEs

The default URL for your application running on AWS Elastic Beanstalk probably isn’t something that your customers will be able to easily remember. You can add another abstraction layer by creating a CNAME record that points to your application’s URL. You can set up that CNAME record with Amazon Route 53 (Amazon’s DNS web service), or with any other DNS provider. This allows you to host your application under any domain you own. You can find more details on CNAMEs in the Elastic Beanstalk Developer Guide. This CNAME not only gives your application a more friendly URL, but it also provides an important abstraction that allows you to deploy new versions of your application with zero downtime by launching a new environment with your new application version and flipping the CNAME record over to the new environment’s URL after you’ve confirmed it’s ready for production traffic. You can read more about this technique in the Elastic Beanstalk Developer’s Guide.

Notifications

AWS Elastic Beanstalk uses the Amazon Simple Notification Service (Amazon SNS) to notify you of important events affecting your application, such as environment status changes. To enable Amazon SNS notifications, simply enter your email address in the Email Address text box under Notifications on the Configuration tab inside the Toolkit for Eclipse.

SSL Certificate

If your application deals with sensitive customer information, then you’ll probably want to configure an SSL certificate for your load balancer so that all data between your customers and your environment’s load balancer is encrypted. To do this, you’ll need a certificate from an external certificate authority such as VeriSign or Entrust. Once you register the certificate with the AWS Identity and Access Management service, you can enter the certificate’s ID here to tell Elastic Beanstalk to configure your load balancer for SSL with your certificate.

Health Check URL

Your Elastic Beanstalk environment attempts to monitor the health of your application through the configured health check URL. By default, Elastic Beanstalk checks the health of your application by testing a TCP connection on port 80. This is a very basic health check, and you can easily override it with your own custom health check. For example, you might create a custom health check page that does some very basic tests of your application’s health. Be careful to keep this health check page very simple, though, since the check runs often (the interval is configurable). If you want to do more in-depth health checking, you might have a separate thread in your application that checks things like database connection health and records a status, and then simply have your health check page report that status. If one of the hosts in your environment starts failing health checks, it is automatically removed from your environment so that it doesn’t serve bad results to customers. The exact parameters for how these checks are run are configurable through the environment configuration editor that we’ll see shortly.
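To make this concrete, here’s a minimal sketch of such a health check page written as a servlet; the background checking that updates the status flag is only stubbed out:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HealthCheckServlet extends HttpServlet {
    // Updated periodically by a background thread that performs the
    // expensive checks (e.g., database connectivity), so each health
    // check request stays cheap.
    private static volatile boolean healthy = true;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        if (healthy) {
            response.setStatus(HttpServletResponse.SC_OK);
        } else {
            response.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        }
    }
}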

Incremental Deployment

The Incremental Deployment option (enabled by default) only affects how Eclipse uploads new application versions to Elastic Beanstalk, but it’s a neat option worth pointing out here. When you use incremental deployment, Eclipse pushes only the delta of your most recent changes to AWS Elastic Beanstalk, instead of every file in your whole application. Under the covers, Eclipse and Elastic Beanstalk are actually using the Git protocol to upload file deltas, and the end result is very fast application deployments for small changes after you’ve gone through a full push initially.

After you’ve started your environment, you can modify any of these configuration options, and many more, by double-clicking on your Elastic Beanstalk environment in Eclipse’s Servers view to open the Environment Configuration Editor. From here you can access dozens of settings to fine tune how your environment runs. Note that some of these options will require stopping and restarting your environment (such as changing the Amazon EC2 instance type your environment uses).

From the environment configuration editor you have access to dozens of additional options for controlling how your environment runs. The Configuration tab in the editor shows you the most common options, such as EC2 key pairs, auto scaling and load balancing parameters, and specific Java container options such as JVM settings and Java system properties.

The Advanced tab in the environment configuration editor has a complete list of every possible option for your environment, but for the vast majority of use cases, you shouldn’t need more than the Configuration tab.

Elastic Beanstalk Extension Config Files

We’ve seen how to manipulate operational settings that control how your environment runs by updating an environment’s configuration. These settings are all updated by tools working directly with the Elastic Beanstalk API to change these settings. The second way to customize your environment is through Elastic Beanstalk extension config files. These files live inside your project and get deployed with your application. They customize your environment in larger ways than the very specific settings we saw earlier.

These extension config files allow you to customize the additional software available on the EC2 instances running your application. For example, your application might want to use the Amazon CloudWatch monitoring scripts to upload custom CloudWatch metrics. You can use these extension config files to specify that the Amazon CloudWatch monitoring scripts be installed on any EC2 instance that comes up as part of your environment, then your application code will be able to access them.

You can also use these Elastic Beanstalk extension config files to create and configure AWS resources that your application will need. For example, if your application requires an Amazon SQS queue, you could declare it in your extension config file and even create an alarm on queue depth to notify you if your application gets behind on processing messages in the queue. The AWS Elastic Beanstalk Developer Guide goes into a lot more detail, and examples, demonstrating how to configure AWS resources with extension config files.

That completes our tour of the different ways you can customize your Elastic Beanstalk environments. One of the great strengths of Elastic Beanstalk is that you can simply drop in your application and not worry about customization, but if you do want to customize, you have a wealth of different ways to configure your environment to run the way you need it to for your application. What kinds of customization settings have you tried for your Elastic Beanstalk environments? Let us know in the comments below!