AWS Developer Blog

Working with Multiple Regions

by Trevor Rowe | in Ruby

In a previous blog post, I introduced the new :region configuration option for the AWS SDK for Ruby (aws-sdk gem). Beyond simplified configuration, the Ruby SDK provides additional helpers for working with multiple regions.

There are two new helper classes for working with regions: AWS::Core::Region and AWS::Core::RegionCollection. The AWS module provides helper methods, so you should not need to instantiate these classes directly.

Region Objects

If you know the name of a region you need to work with, you can create it like so:

# no HTTP request required, simply returns a new AWS::Core::Region object
region = AWS.regions['us-west-2']

A region object provides access to service interface objects. Every service can be accessed using its short name.

region = AWS.regions['us-west-2']

# collect the ids of instances running in this region
region.ec2.instances.map(&:id)

# collect the name of tables created in this region
region.dynamo_db.tables.map(&:name)

See the Region class API documentation for a complete list of service interface helper methods.

RegionCollection

Besides returning a single region object, the region collection can also enumerate all public (non-GovCloud) regions.

AWS.regions.each do |region|
  puts region.name
end

Please note that when you enumerate regions, an HTTP request is made to get a current list of regions and services. The response is cached for the life of the process.

Enumerating Regions from a Service

Not all services are available in every region. You can safely enumerate only the regions a service operates in by using a region collection from a service interface. In the following example, we use the regions helper method to list the regions in which Amazon DynamoDB and Amazon Redshift operate.

AWS::DynamoDB.regions.map(&:name)
#=> ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-northeast-1", "ap-southeast-1", "ap-southeast-2", "sa-east-1"] 

AWS::Redshift.regions.map(&:name)
#=> ["us-east-1", "us-west-2", "eu-west-1"] 

You can use the region object to operate on a service resource in each region where the service exists. As a service expands to additional regions, your code will automatically include those regions when enumerating. In the following example, we list all of the Amazon DynamoDB tables, grouped by region.

# generate a list of DynamoDB tables for every region
AWS::DynamoDB.regions.each do |region|
  table_names = region.dynamo_db.tables.map(&:name)
  unless table_names.empty?
    puts "Region: " + region.name
    puts "Tables:"
    puts table_names.join("\n")
    puts ""
  end
end

Take the new regions interfaces for a spin and let us know what you think!

Working with Regions

by Trevor Rowe | in Ruby

The AWS SDK for Ruby (aws-sdk gem) has some cool new features that simplify working with regions.

The Ruby SDK defaults to the us-east-1 region for all services. Until recently, you had to specify the full regional endpoint for each service you connect to outside the default region. If you use multiple services outside us-east-1, this can be a pain. Your code might end up looking like this:

AWS.config(
  ec2_endpoint: 'ec2.us-west-2.amazonaws.com',
  s3_endpoint: 's3-us-west-2.amazonaws.com',
  # and so on ...
)

Region to the Rescue

You can now set the default region for all services with a single configuration option. Services will construct their own regional endpoint from the default region. If you want to do all of your work in us-west-2, the example above would now look like this:

AWS.config(region: 'us-west-2')

You can also pass the :region option directly to a service interface. This is helpful if you need to connect to multiple regional endpoints for a single service.

s3_east = AWS::S3.new(region: 'us-east-1')
s3_west = AWS::S3.new(region: 'us-west-2')

Deprecations

The service-specific endpoint options are all now deprecated. They will continue to be supported until they are removed in the next major revision of the Ruby SDK. The deprecated options are (replace svc with a service prefix like ec2, s3, etc.):

  • :svc_endpoint
  • :svc_port
  • :svc_region

Here are a few examples of how to upgrade from the deprecated configuration options to the new options:

# service prefixed connection options are deprecated with AWS.config
AWS.config(s3_endpoint: 'localhost', s3_port: 8000)
s3 = AWS::S3::Client.new

# service prefixed connection options are deprecated with clients
s3 = AWS::S3::Client.new(s3_endpoint: 'localhost', s3_port: 8000)

# this is the preferred method for setting endpoint and port
s3 = AWS::S3::Client.new(endpoint: 'localhost', port: 8000)

Writing less code when using the AWS SDK for Java

by Jason Fulghum | in Java

Today we have a guest post by David Yanacek from the Amazon DynamoDB team.


The AWS SDK for Java provides a convenient set of methods for building request objects. This set of methods, known as a fluent interface, can save you from repeatedly retyping the request variable name, and can even make your code more readable. But what about maps? Services like Amazon DynamoDB use java.util.Map objects throughout their API, which do not lend themselves naturally to this builder pattern. Fortunately, the Google Guava open source library offers some classes that make it possible to build maps in a way that is compatible with the SDK’s fluent interface. In this post, we show how using Google Guava’s collection classes can make it easier to use services like Amazon DynamoDB with the low-level Java SDK. 

First, let’s look at some code that uses the bean interface—not the fluent interface—for making a PutItem call to DynamoDB. This example puts an item into the “ProductCatalog” table described in the Amazon DynamoDB Developer Guide, using a conditional write so that DynamoDB makes the change only if the item already exists and has a price of “26”. The example also asks DynamoDB to return the previous copy of the item.

// Construct the new item to put
Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();

AttributeValue id = new AttributeValue();
id.setN("104");
item.put("Id", id);

AttributeValue title = new AttributeValue("Book 104 Title");
item.put("Title", title);

AttributeValue isbn = new AttributeValue("111-1111111111");
item.put("ISBN", isbn);

AttributeValue price = new AttributeValue();
price.setN("25");
item.put("Price", price);

List<String> authorList = new ArrayList<String>();
authorList.add("Bob");
authorList.add("Alice");
AttributeValue authors = new AttributeValue();
authors.setSS(authorList);
item.put("Authors", authors);

// Construct a map of expected current values for the conditional write
Map<String, ExpectedAttributeValue> expected = new HashMap<String, ExpectedAttributeValue>();

ExpectedAttributeValue expectedPrice = new ExpectedAttributeValue();
AttributeValue currentPrice = new AttributeValue();
currentPrice.setN("26");
expectedPrice.setValue(currentPrice);
expected.put("Price", expectedPrice);

// Construct the request
PutItemRequest putItemRequest = new PutItemRequest();
putItemRequest.setTableName("ProductCatalog");
putItemRequest.setItem(item);
putItemRequest.setExpected(expected);
putItemRequest.setReturnValues(ReturnValue.ALL_OLD);

// Make the request
PutItemResult result = dynamodb.putItem(putItemRequest);

That’s a lot of code for doing something as simple as putting an item into a DynamoDB table. Let’s take that same example and switch it over to the built-in fluent-style interface:

// Construct the new item to put
Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
item.put("Id", new AttributeValue().withN("104"));
item.put("Title", new AttributeValue("Book 104 Title"));
item.put("ISBN", new AttributeValue("111-1111111111"));
item.put("Price", new AttributeValue().withN("25"));
item.put("Authors", new AttributeValue()
    .withSS(Arrays.asList("Author1", "Author2")));

// Construct a map of expected current values for the conditional write
Map<String, ExpectedAttributeValue> expected = new HashMap<String, ExpectedAttributeValue>();
expected.put("Price", new ExpectedAttributeValue()
    .withValue(new AttributeValue().withN("26")));

// Make the request 
PutItemResult result = dynamodb.putItem(new PutItemRequest()
    .withTableName("ProductCatalog")
    .withItem(item)
    .withExpected(expected)
    .withReturnValues(ReturnValue.ALL_OLD));

That’s a lot shorter. You may have noticed that this code also uses Arrays.asList(), which ships with the JDK, to construct the authors list. Wouldn’t it be nice if the JDK came with something like that for building maps? Fortunately, Google Guava exposes several Map subclasses and provides simple Builder utilities for each. Let’s use ImmutableMap.Builder to make the code even more compact:

// Make the request
PutItemResult result = dynamodb.putItem(new PutItemRequest()
    .withTableName("ProductCatalog")
    .withItem(new ImmutableMap.Builder<String, AttributeValue>()
        .put("Id", new AttributeValue().withN("104"))
        .put("Title", new AttributeValue("Book 104 Title"))
        .put("ISBN", new AttributeValue("111-1111111111"))
        .put("Price", new AttributeValue().withN("25"))
        .put("Authors", new AttributeValue()
            .withSS(Arrays.asList("Author1", "Author2")))
        .build())
    .withExpected(new ImmutableMap.Builder<String, ExpectedAttributeValue>()
        .put("Price", new ExpectedAttributeValue()
            .withValue(new AttributeValue().withN("26")))
        .build())
    .withReturnValues(ReturnValue.ALL_OLD));

And that’s it! We hope this approach saves you some typing and makes your code more readable. And if you want even less code, take a look at the DynamoDBMapper class, which allows you to interact with DynamoDB with your own objects directly. For more details, see the earlier blog posts Storing Java objects in Amazon DynamoDB tables and Using Custom Marshallers to Store Complex Objects in Amazon DynamoDB, or the topic Using the Object Persistence Model with Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
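
To give a flavor of that approach, here is a minimal, hypothetical sketch of the same put using DynamoDBMapper. The Book class, its getters and setters, and the attribute mappings below are illustrative, not taken from the examples above; DynamoDBMapper and the annotations live in the SDK’s datamodeling package.

// A hypothetical domain class mapped to the ProductCatalog table
@DynamoDBTable(tableName = "ProductCatalog")
public class Book {
    private Integer id;
    private String title;

    @DynamoDBHashKey(attributeName = "Id")
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }

    @DynamoDBAttribute(attributeName = "Title")
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

// Saving an instance replaces all of the manual map-building above
DynamoDBMapper mapper = new DynamoDBMapper(dynamodb);
Book book = new Book();
book.setId(104);
book.setTitle("Book 104 Title");
mapper.save(book);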

Logging with the AWS SDK for .NET

by Steve Roberts | in .NET

As we announced previously, the AWS SDK for .NET now supports easily configurable logging to both log4net and .NET’s built-in System.Diagnostics logging. In this post, we cover how to enable and configure this new functionality, and how you can easily collect performance metrics for the SDK.

Both log4net and System.Diagnostics logging approaches have their own advantages and disadvantages, but we’re not covering those here. Which approach you take depends on your specific needs.

Turning on log4net or System.Diagnostics logging, or even both, is now very simple: just add the appropriate XML sections to your application’s config file (app.config or web.config, for example), and you’re done! As long as you are using version 1.5.x or higher of the AWS SDK for .NET, there is no need to change any code or recompile your application. (To use log4net, remember that you must download the necessary binaries from the download site and place them alongside the AWSSDK.dll assembly.)

Configuring Logging

To make logging as easy as possible, we have introduced an application setting key, AWSLogging, that configures logging for the entire SDK. The key can be set to one of the following values:

  • "log4net" – only use log4net
  • "SystemDiagnostics" – log using System.Diagnostics
  • "SystemDiagnostics, log4net" – log to both system
  • "None" – completely disable logging

In this post, we demonstrate configuring this setting and the associated loggers.

Configuring log4net

The simplest approach to logging is to direct the logs to a file on your local system. The following configuration shows how to use log4net and direct all logs to the file located at C:\Logs\sdk-log.txt.

<configSections>
  <section name="log4net" 
           type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
</configSections>

<log4net>
  <appender name="FileAppender" type="log4net.Appender.FileAppender,log4net">
    <file value="C:Logssdk-log.txt"/>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern 
             value="%date [%thread] %level %logger - %message%newline"/>
    </layout>
  </appender>

  <logger name="Amazon">
    <level value="INFO"/>
    <appender-ref ref="FileAppender"/>
  </logger>
</log4net>

<appSettings>
  <!-- Configure the SDK to use log4net -->
  <add key="AWSLogging" value="log4net"/>
</appSettings>

If you are considering using log4net logging full-time, use RollingFileAppender instead of FileAppender, as in the sketch below. Otherwise, your log files may grow too large.
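
For reference, a rolling configuration might look like the following; the size limit and backup count here are illustrative values, not recommendations.

<!-- Rolls the log file at 10 MB, keeping up to 5 backups -->
<appender name="RollingFileAppender"
          type="log4net.Appender.RollingFileAppender,log4net">
  <file value="C:\Logs\sdk-log.txt"/>
  <appendToFile value="true"/>
  <rollingStyle value="Size"/>
  <maximumFileSize value="10MB"/>
  <maxSizeRollBackups value="5"/>
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern
           value="%date [%thread] %level %logger - %message%newline"/>
  </layout>
</appender>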

System.Diagnostics

You can implement the same kind of local-file logging with System.Diagnostics as well. Here’s a comparable configuration:

<configuration>
  <appSettings>
    <!-- Configure the SDK to use System.Diagnostics -->
    <add key="AWSLogging" value="SystemDiagnostics"/>
  </appSettings>

  <system.diagnostics>
    <trace autoflush="true"/>

    <sources>
      <source name="Amazon">
        <listeners>
          <add name="text" 
               type="System.Diagnostics.TextWriterTraceListener" 
               initializeData="c:Logssdk-log.txt"/>
        </listeners>
      </source>
    </sources>

  </system.diagnostics>
</configuration>

Metrics

The AWS SDK for .NET has recently started logging performance metrics for most service calls. These metrics can also be enabled through a configuration update, using a switch similar to AWSLogging called AWSLogMetrics. This is a simple boolean setting that accepts "true" or "false". Here is an example:

<appSettings>
  <!-- Enable SDK metrics logging -->
  <add key="AWSLogMetrics" value="true"/>
  <!-- Logging must be enabled to collect metrics -->
  <add key="AWSLogging" value="log4net,SystemDiagnostics"/>
</appSettings>

When you include this setting in your configuration, the SDK will collect and log performance metrics for every service call. Below is a sample metric log from log4net:

2013-02-24 00:11:59,467 [1] INFO Amazon.DynamoDB.AmazonDynamoDBClient - Request metrics: ServiceName = AmazonDynamoDB; ServiceEndpoint = https://dynamodb.us-east-1.amazonaws.com/; MethodName = ListTablesRequest; AsyncCall = False; StatusCode = OK; AWSRequestID = RRAMEONOVS6EEMP2GVMVDF59DJVV4KQNSO5AEMVJF66Q9ASUAAJG; BytesProcessed = 132; CredentialsRequestTime = 00:00:00.0015676; RequestSigningTime = 00:00:00.0134851; HttpRequestTime = 00:00:00.7279260; ResponseUnmarshallTime = 00:00:00.0057781; ResponseProcessingTime = 00:00:00.1004697; ClientExecuteTime = 00:00:00.8906012; 

As you can see, we identify the .NET client issuing the request, the service, the endpoint, the method, whether this is an asynchronous call, and the response. We also identify timings, including how long the HTTP request took to complete and the total SDK call time. Note that logging must be enabled in order for any metrics to be collected.

Logger Hierarchy

You can use logger hierarchies for both log4net and System.Diagnostics to choose which SDK services will log events. You do this by configuring the loggers based on the namespace of the source. The earlier examples capture logs for all service calls, as they reference "Amazon", the hierarchical parent of all loggers in the SDK. If you were only interested in DynamoDB logs, for instance, you would reference "Amazon.DynamoDB". And since both log4net and System.Diagnostics support multiple listeners/loggers, it’s even possible to log different services to different destinations.

Below, you can see how easy it is to configure log4net to capture Amazon Route 53 logs and System.Diagnostics to handle only DynamoDB messages.

log4net:

<logger name="Amazon.Route53">
  <level value="INFO"/>
  <appender-ref ref="FileAppender"/>
</logger>

System.Diagnostics:

<source name="Amazon.DynamoDB">
  <listeners>
    <add name="text" 
         type="System.Diagnostics.TextWriterTraceListener" 
         initializeData="c:Logssdk-log.txt"/>
  </listeners>
</source>

Summary

In this post, we covered the available logging approaches present in the AWS SDK for .NET. You learned how to configure logging with both log4net and System.Diagnostics, and how to configure both sets of tools to collect only the data you need. You’ve also seen how simple it is to gather metrics data for the SDK.

As a final note, both log4net and System.Diagnostics provide extensive approaches to logging, though we covered only a small subset of that in this blog post. We recommend checking out the log4net and TraceSource documentation to acquaint yourself with these technologies.

Welcome to the AWS SDKs and Tools .NET blog

by Steve Roberts | in .NET

Welcome to the AWS SDKs and Tools .NET blog. We’re glad to see you here!

This blog features information for AWS .NET developers, including:
  • Demonstrations of new features in the AWS SDK for .NET
  • Announcements and demonstrations of new features in the AWS Toolkit for Microsoft Visual Studio and AWS Tools for Windows PowerShell
  • Best practices for using the SDK
  • Important AWS product announcements for .NET developers
  • Lots of code samples

You’ll see content from many of our team members, including developers from the SDKs and Tools team.

We hope you come back often to visit or subscribe to our blog using the RSS feed button at the top of the page. If you’d like us to cover any specific topics, please let us know and we’ll do our best.

Transferring Files To and From Amazon S3

by Jeremy Lindblom | in PHP

A common question that I’ve seen on our PHP forums is whether there is an easy way to directly upload from or download to a local file using the Amazon S3 client in the AWS SDK for PHP.

The typical usage of the PutObject operation in the PHP SDK looks like the following:

use Aws\Common\Aws;

$aws = Aws::factory('/path/to/your/config.php');
$s3 = $aws->get('S3');

$s3->putObject(array(
    'Bucket' => 'your-bucket-name',
    'Key'    => 'your-object-key',
    'Body'   => 'your-data'
));

The Body parameter can be a string of data, a file resource, or a Guzzle EntityBody object. To use a file resource, you could make a simple change to the previous code sample.

$s3->putObject(array(
    'Bucket' => 'your-bucket-name',
    'Key'    => 'your-object-key',
    'Body'   => fopen('/path/to/your/file.ext', 'r')
));
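
If you already have a stream you want to wrap explicitly, something like the following should also work; this sketch assumes the Guzzle\Http\EntityBody class from Guzzle 3, which the SDK is built on.

use Guzzle\Http\EntityBody;

$s3->putObject(array(
    'Bucket' => 'your-bucket-name',
    'Key'    => 'your-object-key',
    // Wrap an open stream in a Guzzle EntityBody object
    'Body'   => EntityBody::factory(fopen('/path/to/your/file.ext', 'r'))
));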

The SDK also provides a shortcut for uploading directly from a file using the SourceFile parameter, instead of the Body parameter.

$s3->putObject(array(
    'Bucket'     => 'your-bucket-name',
    'Key'        => 'your-object-key',
    'SourceFile' => '/path/to/your/file.ext'
));

When downloading an object via the GetObject operation, you can use the SaveAs parameter as a shortcut to save the object directly to a file.

$s3->getObject(array(
    'Bucket' => 'your-bucket-name',
    'Key'    => 'your-object-key',
    'SaveAs' => '/path/to/store/your/downloaded/file.ext'
));

The SourceFile and SaveAs parameters allow you to easily upload files to and download files from Amazon S3 using the SDK.

You can see more examples of how to use these parameters and perform other S3 operations in our user guide page for Amazon S3. Be sure to check out some of our other helpful S3 features, like our MultipartUpload helper and our S3 Stream Wrapper, which allows you to work with objects in S3 using PHP’s native file functions.
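
As a quick taste of the stream wrapper, a sketch like the following should work with the S3 client from the examples above; the bucket and key names are placeholders.

// Register the 's3://' protocol with PHP's stream functions
$s3->registerStreamWrapper();

// Read and write objects using PHP's native file functions
$data = file_get_contents('s3://your-bucket-name/your-object-key');
file_put_contents('s3://your-bucket-name/another-object-key', 'your-data');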

New Blog for the AWS SDK for PHP

by Jeremy Lindblom | in PHP

Hello fellow PHP developers!

Welcome to our new blog for the AWS SDK for PHP. I’m Jeremy Lindblom, and I work with Michael Dowling on the AWS SDK for PHP here at Amazon Web Services. We will be working together on this blog to bring you articles and tips about the PHP SDK and post related announcements. As passionate PHP developers, Michael and I hope you enjoy using the SDK and will find the posts on this blog helpful for your PHP projects using AWS.

Let me also quickly point you to some links for our SDK and documentation:

We hope you subscribe to our blog using the RSS feed button at the top of the page and come back often. If you would like us to blog about any specific topics, please let us know and we’ll see what we can do.

Threading with the AWS SDK for Ruby

by Loren Segal | in Ruby

When using threads in an application, it’s important to keep thread-safety in mind. This statement is not specific to the Ruby world; it’s a reality in any language that supports threading. What is specific to Ruby is the fact that many libraries in our language are loaded at run-time, and often, loading code at run-time is not a thread-safe operation.

Autoload and Thread-Safety

Many libraries and frameworks (including Ruby on Rails) use a feature of Ruby known as autoload, which allows components of a library to be lazily loaded only when the constant is resolved in the code path of an executing program. The problem with this feature is that, historically, the implementation has not been thread-safe. In other words, if two threads tried to resolve an autoloaded constant at the same time, weird things would happen. This problem was finally tackled in Ruby 1.9.1 but then regressed in 1.9.2 and re-resolved in 1.9.3 (but only in a later patchlevel), causing a bit of confusion around whether autoload is actually safe to use in a threaded Ruby program.
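
To make the mechanics concrete, here is a minimal illustration of how autoload defers a require until a constant is first resolved; the library and file names are made up.

module MyLibrary
  # Nothing is required yet; Ruby only records the constant-to-file mapping
  autoload :Client, 'my_library/client'
end

# The require happens here, at constant resolution time. This is the
# step that historically was not thread-safe.
MyLibrary::Client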

Thread-Safe in 2.0

For all intents and purposes, autoloading of modules should be considered thread-safe in Ruby 2.0.0p0, as the patch was officially merged into the 2.0 branch prior to release. Any thread-safety issues in Ruby 2.0 should be considered regressions, according to that ticket.

Enter Eager Loading

Of course, guaranteeing support for Ruby 2.0 is not entirely sufficient for most programs still running on 1.9.x, and in some cases, 1.8.x, so you may need to use a more backward-compatible strategy. In Ruby on Rails, this was solved with an eager_autoload method that forcibly loads all modules marked to be lazily loaded. If you are running threaded code, it is recommended that you call this prior to launching threads. Note that in Rails 4.0, the framework will eager load all modules by default, which should help you avoid having to think about these threading issues.

Eager Autoloading in AWS SDK for Ruby

So is this an issue for the AWS SDK for Ruby? In short, if you are using a version prior to Ruby 2.0, the answer is "most likely". The SDK is large enough that lazily loading extra modules is important to keep library load time as fast as possible. The downside of this approach is that it can cause issues in multi-threaded programs.

To solve the problem in the SDK, we use a similar mechanism to Ruby on Rails and created an AWS.eager_autoload! method that requires all modules in the library up front. To use this method, simply call it before you launch any threads:

require 'aws-sdk'

AWS.eager_autoload!

# Now you can start threading
Thread.new do ... end

Focused Eager Loading

Sometimes, loading all of the SDK is unnecessary and slow. Fortunately, as of version 1.9.0 of the Ruby SDK, the AWS.eager_autoload! method now optionally accepts the name of a module to load instead of requiring you to eager load the entire SDK. This means that if you are only using a specific service, or a set of services, like Amazon S3 and Amazon DynamoDB, you can choose to eager load only these modules. This can help to improve load time of your application, especially if you do not need many of the other modules packaged in the SDK. To load a focused set of modules, simply call the eager autoload method with the names of the modules you want to load along with AWS::Core:

AWS.eager_autoload! AWS::Core     # Make sure to load Core first.
AWS.eager_autoload! AWS::S3       # Load the S3 class
AWS.eager_autoload! AWS::DynamoDB # Load the DynamoDB class

# Now you can start threading
Thread.new do ... end

Wrapping Up This Thread

The AWS SDK for Ruby has an AWS.eager_autoload! method that allows you to forcibly load all components in the library up front. If you are writing multi-threaded code in Ruby, you will most likely want to call this method before launching any threads that make use of the SDK in order to avoid any thread-safety issues with autoload in older versions of Ruby. Fortunately, it is very easy to use by adding a single method call to the top of your application. It is also easy to target specific modules to eager load by passing the module name to the method, if load-time performance is important to your library or application.

Eclipse Deployment: Part 3 – Configuring AWS Elastic Beanstalk

Now that you know the basics about creating AWS Java web applications and deploying them using the AWS Toolkit for Eclipse, let’s talk about some of the ways you can control how your environment runs.

AWS Elastic Beanstalk provides several easy ways to configure different features of your environment. The first mechanism we’ll look at for controlling how your environment runs is your environment’s configuration. These are properties set through the Elastic Beanstalk API that let you control different operational parameters of your environment, such as load balancer behavior and auto scaling strategies. The second mechanism we’ll look at is Elastic Beanstalk extension config files that are included as files in your deployed application. These configuration files allow you to customize additional software installed on your EC2 instances, as well as create and configure AWS resources that your application requires.

We’ll start off by covering some of the most common options, which are presented in the second page of the wizard when you create a new Elastic Beanstalk environment through Eclipse.

Shell Access

If you want to be able to remotely log into a shell on the EC2 instances running your application, then you’ll need to make sure you launch your environment with an Amazon EC2 key pair. The EC2 key pair can be created and managed through Eclipse or any of the other AWS tools, and allows you to securely log into any EC2 instances launched with that key pair. To connect to an instance from Eclipse, find your instance in the EC2 Instances view, right-click to bring up the context menu and select Open Shell. If Eclipse knows the private key for that instance’s key pair, then you’ll see a command prompt open up.

CNAMEs

The default URL for your application running on AWS Elastic Beanstalk probably isn’t something that your customers will be able to easily remember. You can add another abstraction layer by creating a CNAME record that points to your application’s URL. You can set up that CNAME record with Amazon Route 53 (Amazon’s DNS web service), or with any other DNS provider. This allows you to host your application under any domain you own. You can find more details on CNAMEs in the Elastic Beanstalk Developer Guide. This CNAME not only gives your application a more friendly URL, but it also provides an important abstraction that allows you to deploy new versions of your application with zero downtime by launching a new environment with your new application version and flipping the CNAME record over to the new environment’s URL after you’ve confirmed it’s ready for production traffic. You can read more about this technique in the Elastic Beanstalk Developer’s Guide.
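
In zone-file terms, such a record might look like the following sketch; the domain and environment URL are placeholders.

; Point a friendly domain at the Elastic Beanstalk environment URL
www.example.com.    CNAME    myapp-env.elasticbeanstalk.com.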

Notifications

AWS Elastic Beanstalk uses the Amazon Simple Notification Service (Amazon SNS) to notify you of important events affecting your application, such as environment status changes. To enable Amazon SNS notifications, simply enter your email address in the Email Address text box under Notifications on the Configuration tab inside the Toolkit for Eclipse.

SSL Certificate

If your application deals with sensitive customer information, then you’ll probably want to configure an SSL certificate for your load balancer so that all data between your customers and your environment’s load balancer is encrypted. To do this, you’ll need a certificate from an external certificate authority such as VeriSign or Entrust. Once you register the certificate with the AWS Identity and Access Management service, you can enter the certificate’s ID here to tell Elastic Beanstalk to configure your load balancer for SSL with your certificate.

Health Check URL

Your Elastic Beanstalk environment attempts to monitor the health of your application through the configured health check URL. By default, Elastic Beanstalk checks the health of your application by testing a TCP connection on port 80. This is a very basic health check, and you can easily override it with your own custom health check. For example, you might create a custom health check page that does some very basic tests of your application’s health. Be careful to keep this health check page very simple, though, since the check runs often (the interval is configurable). If you want more in-depth health checking, you might have a separate thread in your application that monitors health status, such as database connection health, and then simply have your health check page report that status, as in the sketch below. If one of the hosts in your environment starts failing health checks, it is automatically removed from your environment so that it doesn’t serve bad results to customers. The exact parameters for how these checks are run are configurable through the environment configuration editor that we’ll see shortly.
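
Here is a minimal sketch of such a page as a servlet, assuming the Servlet 3.0 API available in Tomcat 7; the /health path and the ApplicationHealth helper are hypothetical.

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A deliberately cheap health check endpoint; heavier checks run elsewhere.
// The ApplicationHealth helper is hypothetical.
@WebServlet("/health")
public class HealthCheckServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        if (ApplicationHealth.isHealthy()) {
            response.setStatus(HttpServletResponse.SC_OK);
            response.getWriter().write("OK");
        } else {
            // A non-200 response tells Elastic Beanstalk this host is unhealthy
            response.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        }
    }
}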

Incremental Deployment

The Incremental Deployment option (enabled by default) affects only how Eclipse uploads new application versions to Elastic Beanstalk, but it’s a neat option worth pointing out here. When you use incremental deployment, Eclipse pushes only the delta of your most recent changes to AWS Elastic Beanstalk, instead of pushing every file in your whole application. Under the covers, Eclipse and Elastic Beanstalk are actually using the Git protocol to upload file deltas, and the end result is very fast application deployments for small changes after you’ve gone through a full push initially.

After you’ve started your environment, you can modify any of these configuration options, and many more, by double-clicking on your Elastic Beanstalk environment in Eclipse’s Servers view to open the Environment Configuration Editor. From here you can access dozens of settings to fine tune how your environment runs. Note that some of these options will require stopping and restarting your environment (such as changing the Amazon EC2 instance type your environment uses).

From the environment configuration editor you have access to dozens of additional options for controlling how your environment runs. The Configuration tab in the editor shows you the most common options, such as EC2 key pairs, auto scaling and load balancing parameters, and specific Java container options such as JVM settings and Java system properties.

The Advanced tab in the environment configuration editor has a complete list of every possible option for your environment, but for the vast majority of use cases, you shouldn’t need more than the Configuration tab.

Elastic Beanstalk Extension Config Files

We’ve seen how to manipulate operational settings that control how your environment runs by updating an environment’s configuration. These settings are all updated by tools working directly with the Elastic Beanstalk API to change these settings. The second way to customize your environment is through Elastic Beanstalk extension config files. These files live inside your project and get deployed with your application. They customize your environment in larger ways than the very specific settings we saw earlier.

These extension config files allow you to customize the additional software available on the EC2 instances running your application. For example, your application might want to use the Amazon CloudWatch monitoring scripts to upload custom CloudWatch metrics. You can use an extension config file to specify that the Amazon CloudWatch monitoring scripts be installed on any EC2 instance that comes up as part of your environment, so that your application code can use them.

You can also use these Elastic Beanstalk extension config files to create and configure AWS resources that your application will need. For example, if your application requires an Amazon SQS queue, you could declare it in your extension config file and even create an alarm on queue depth to notify you if your application gets behind on processing messages in the queue; a sketch of such a file follows. The AWS Elastic Beanstalk Developer Guide goes into much more detail, with examples demonstrating how to configure AWS resources with extension config files.
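
The file below is illustrative rather than lifted from the guide: a hypothetical .ebextensions/queue.config that declares a queue and a depth alarm using the CloudFormation resource syntax that extension config files support. The resource names and thresholds are placeholders.

# Hypothetical .ebextensions/queue.config; names and thresholds are illustrative
Resources:
  WorkQueue:
    Type: AWS::SQS::Queue
  QueueDepthAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: "Alarm if the queue backs up beyond 10 messages"
      Namespace: "AWS/SQS"
      MetricName: ApproximateNumberOfMessagesVisible
      Dimensions:
        - Name: QueueName
          Value: { "Fn::GetAtt": ["WorkQueue", "QueueName"] }
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 10
      ComparisonOperator: GreaterThanThreshold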

That completes our tour of the different ways you can customize your Elastic Beanstalk environments. One of the great strengths of Elastic Beanstalk is that you can simply drop in your application and not worry about customization, but if you do want to customize, you have a wealth of different ways to configure your environment to run the way you need it to for your application. What kinds of customization settings have you tried for your Elastic Beanstalk environments? Let us know in the comments below!

Eclipse Deployment: Part 2 – Deploying to AWS Elastic Beanstalk

In this three-part series, we’ll show how easy it is to deploy a Java web application to AWS Elastic Beanstalk using the AWS Toolkit for Eclipse.

In part one of this series, we showed how to create an AWS Java Web Project and deploy it to a local Tomcat server. This is a great workflow for developing your project, but when you’re ready for production, you’ll want to get it running on AWS. In this second post of the series, we’ll show how we can use the same tools in Eclipse to deploy our project using AWS Elastic Beanstalk.

AWS Elastic Beanstalk provides a managed application container environment for your application to run in. That means all you have to worry about is your application code. Elastic Beanstalk handles the provisioning, load balancing, auto-scaling, and application health monitoring for you. Even though Elastic Beanstalk handles all these aspects for you, you still have control over all the settings, as we’ll see in the next part of this series, if you do want to customize how your environment runs.

The AWS Toolkit for Eclipse supports deploying Java web apps to Elastic Beanstalk Tomcat containers, but Elastic Beanstalk supports many other types of applications, including:

  • .NET
  • Ruby
  • Python
  • PHP
  • Node.js

Let’s go ahead and see how easy it is to deploy our application to AWS Elastic Beanstalk. We’ll use the same workflow as before, when we deployed our application to a local Tomcat server for development and testing, but this time, we’ll choose to create a new AWS Elastic Beanstalk Tomcat 7 server.

Right-click on your project and select Run As -> Run on Server, then make sure the Manually define a new server option is selected; otherwise, this wizard will only show you any existing servers you’ve configured. Select Elastic Beanstalk for Tomcat 7 from the Amazon Web Services category and move on to the next page in the wizard.

This page asks for some very basic information about the Elastic Beanstalk environment that we’re creating. Every Elastic Beanstalk environment is tied to a specific application, and of course has a name. You can choose to create a new application, or reuse an existing one. Whenever you deploy your project to this environment, you’ll be creating a new version of that application, and then deploying that new version to run in your environment.

On the next page of the wizard are some more options for configuring your new environment. We’ll go over these options and more in the next post in this series.

Go ahead and click the Finish button and Eclipse will start creating your new environment. The very first time you start your environment you’ll need to wait a few minutes while Elastic Beanstalk provisions servers for you, configures them behind a load balancer and auto-scaling group, and deploys your application. Future deployments should go much faster, but Elastic Beanstalk needs to set up several pieces of infrastructure for you the first time a new environment starts up. To see more details about what Elastic Beanstalk is doing to set up your environment, double-click on the server you just created in Eclipse’s Servers view, and open the Events tab in the server editor that opens. The event log shows you all the major events that Elastic Beanstalk is logging for your environment. If you ever have problems starting up your environment, the event log is the place to start looking for clues.

After a few minutes, you should see your application start up in Eclipse’s internal web browser, this time running from AWS instead of a local Tomcat server.

And that’s all it takes to get a Java web application deployed to AWS using AWS Elastic Beanstalk and the AWS Toolkit for Eclipse.

Now that you’ve got your environment running, try making a few small changes to your application and redeploying them, using the same tools as before. Once you get your application code set up, you’ll switch over to incremental deployments and should get very fast redeploys.

Stay tuned for the next post in this series, where we’ll explain how you can customize your environment’s configuration to control different aspects of how it runs.