AWS Developer Blog

Ruby 2.1 on AWS OpsWorks

by Trevor Rowe | in Ruby

We are pleased to announce that AWS OpsWorks now supports Ruby 2.1. Simply select the Ruby version you want, your Rails stack (Passenger or Unicorn), the RubyGems version, and whether you want to use Bundler. Then deploy your app from your chosen repository: Git, Subversion, or bundles on S3. You can get started with a few clicks in the AWS Management Console.

Release: AWS SDK for PHP – Version 2.5.3

by Michael Dowling | in PHP

We would like to announce the release of version 2.5.3 of the AWS SDK for PHP. This release provides several client updates, Amazon S3 client issue fixes, and additional iterators. Please refer to the CHANGELOG for a complete list of changes.

Install the SDK

Using Amazon SQS Dead Letter Queues

by Norm Johanson | in .NET

After Jason Fulghum recently posted a blog entry about using Amazon SQS dead letter queues with the AWS SDK for Java, I thought his post would be interesting for .NET developers as well. Here is Jason’s post with the code replaced with the C# equivalent.

Amazon SQS recently introduced support for dead letter queues. This feature is an important tool to help your applications consume messages from SQS queues in a more resilient way.

Dead letter queues allow you to set a limit on the number of times a message in a queue is processed. Consider an application that consumes messages from a queue and does some sort of processing based on the message. A bug in your application may only be triggered by certain types of messages or when working with certain data in your application. If your application receives one of these messages, it won’t be able to successfully process it and remove it from the queue. Instead, your application will continue to try to process the message again and again. While this message is being continually retried, your queue is likely filling up with other messages, which your application is unable to process because it’s stuck repeatedly processing the bad message.

Amazon SQS dead letter queues enable you to configure your application so that if it can’t successfully process a problematic message and remove it from the queue, that message will be automatically removed from your queue and delivered to a different SQS queue that you’ve designated as a dead letter queue. Another part of your application can then periodically monitor the dead letter queue and alert you if it contains any messages, which you can debug separately.

Using Amazon SQS dead letter queues is easy. You just need to configure a RedrivePolicy on your queue to specify when messages are delivered to a dead letter queue and to which dead letter queue they should be delivered. You can use the AWS Management Console, or you can access the Amazon SQS API directly with the AWS SDK for .NET.

// First, we'll need an Amazon SQS client object.
IAmazonSQS sqs = new AmazonSQSClient(RegionEndpoint.USWest2);

// Create two new queues:
//     one main queue for our application messages
//     and another to use as our dead letter queue
string qUrl = sqs.CreateQueue(new CreateQueueRequest()
{
    QueueName = "MyApplicationQueue"
}).QueueUrl;

string dlqUrl = sqs.CreateQueue(new CreateQueueRequest()
{
    QueueName = "MyDeadLetterQueue"
}).QueueUrl;

// Next, we need to get the ARN (Amazon Resource Name) of our dead
// letter queue so we can configure our main queue to deliver messages to it.
IDictionary<string, string> attributes = sqs.GetQueueAttributes(new GetQueueAttributesRequest()
{
    QueueUrl = dlqUrl,
    AttributeNames = new List<string>() { "QueueArn" }
}).Attributes;

string dlqArn = attributes["QueueArn"];

// The last step is setting a RedrivePolicy on our main queue to configure
// it to deliver messages to our dead letter queue if they haven't been
// successfully processed after five attempts.
string redrivePolicy = string.Format(
    "{{\"maxReceiveCount\":\"{0}\", \"deadLetterTargetArn\":\"{1}\"}}",
    5, dlqArn);

sqs.SetQueueAttributes(new SetQueueAttributesRequest()
{
    QueueUrl = qUrl,
    Attributes = new Dictionary<string, string>()
    {
        {"RedrivePolicy", redrivePolicy}
    }
});

There’s also a new operation in the Amazon SQS API to help you identify which of your queues are set up to deliver messages to a specific dead letter queue. If you want to know what queues are sending messages to a dead letter queue, just use the IAmazonSQS.ListDeadLetterSourceQueues operation.

IList<string> sourceQueues = sqs.ListDeadLetterSourceQueues(
    new ListDeadLetterSourceQueuesRequest()
    {
        QueueUrl = dlqUrl
    }).QueueUrls;

Console.WriteLine("Source Queues Delivering to " + qUrl);
foreach (string queueUrl in sourceQueues)
{
    Console.WriteLine(" * " + queueUrl);
}

Dead letter queues are a great way to add more resiliency to your queue-based applications. Have you set up any dead letter queues in Amazon SQS yet?

AWS at PHP Conferences in Spring 2014

by Jeremy Lindblom | in PHP

This spring, I’ll be traveling to Dallas and New York City to represent the AWS SDK for PHP team and meet fellow PHP developers. I hope to see you there!

In late April, I’ll be going to Dallas for Lone Star PHP! I have two talks that I’ll be sharing there: Recursion: Making Big Problems Smaller and Surviving and Thriving in Technical Interviews. I will not be speaking specifically about AWS or the AWS SDK for PHP at Lone Star, but if you want to chat with me about AWS, then definitely come find me. Looking at the other speakers that are going to be there, I can tell this will be a really great conference.

I have another great opportunity in May to speak at Laracon in New York City, a conference for developers (i.e., "artisans") using the Laravel Framework. My talk is titled AWS for Artisans, and I’ll be talking about the services that AWS provides and the ways Laravel artisans can use AWS. I’ll also showcase some of the integrations that exist between the Laravel Framework and the AWS SDK for PHP, including the AWS Service Provider for Laravel 4 and the Laravel Queue component’s Amazon SQS driver.

I look forward to meeting new people and seeing old friends at both of these conferences. Make sure to come introduce yourself if you see me there. If you haven’t bought tickets for either of these events yet, there is still time. Do it!

Using Amazon SQS Dead Letter Queues

by Jason Fulghum | in Java

Amazon SQS recently introduced support for dead letter queues. This feature is an important tool to help your applications consume messages from SQS queues in a more resilient way.

Dead letter queues allow you to set a limit on the number of times a message in a queue is processed. Consider an application that consumes messages from a queue and does some sort of processing based on the message. A bug in your application may only be triggered by certain types of messages or when working with certain data in your application. If your application receives one of these messages, it won’t be able to successfully process it and remove it from the queue. Instead, your application will continue to try to process the message again and again. While this message is being continually retried, your queue is likely filling up with other messages, which your application is unable to process because it’s stuck repeatedly processing the bad message.

Amazon SQS dead letter queues enable you to configure your application so that if it can’t successfully process a problematic message and remove it from the queue, that message will be automatically removed from your queue and delivered to a different SQS queue that you’ve designated as a dead letter queue. Another part of your application can then periodically monitor the dead letter queue and alert you if it contains any messages, which you can debug separately.
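
For example, a monitor along these lines could poll the dead letter queue and surface anything it finds. This snippet is not from the original post; it is a minimal sketch that assumes the sqs client and the dlqUrl variable created in the example that follows.

// Not part of the original post: a minimal sketch of a dead letter queue monitor.
ReceiveMessageResult result = sqs.receiveMessage(
    new ReceiveMessageRequest()
        .withQueueUrl(dlqUrl)               // URL of the dead letter queue
        .withMaxNumberOfMessages(10)        // read up to 10 dead-lettered messages at a time
        .withWaitTimeSeconds(20));          // use long polling while waiting

for (Message message : result.getMessages()) {
    // Alert, log, or page someone so the failed message can be debugged separately.
    System.out.println("Dead-lettered message " + message.getMessageId()
        + ": " + message.getBody());
}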

Using Amazon SQS dead letter queues is easy. You just need to configure a RedrivePolicy on your queue to specify when messages are delivered to a dead letter queue and to which dead letter queue they should be delivered. You can use the AWS Management Console, or you can access the Amazon SQS API directly with the AWS SDK for Java.

// First, we'll need an Amazon SQS client object.
AmazonSQSClient sqs = new AmazonSQSClient(myCredentials);

// Create two new queues:
//     one main queue for our application messages
//     and another to use as our dead letter queue
String qUrl = sqs.createQueue("MyApplicationQueue").getQueueUrl();
String dlqUrl = sqs.createQueue("MyDeadLetterQueue").getQueueUrl();

// Next, we need to get the ARN (Amazon Resource Name) of our dead
// letter queue so we can configure our main queue to deliver messages to it.
Map<String, String> attributes = sqs.getQueueAttributes(new GetQueueAttributesRequest(dlqUrl)
    .withAttributeNames(QueueAttributeName.QueueArn)).getAttributes();
String dlqArn = attributes.get(QueueAttributeName.QueueArn.toString());

// The last step is setting a RedrivePolicy on our main queue to configure
// it to deliver messages to our dead letter queue if they haven't been
// successfully processed after five attempts.
String redrivePolicy = String.format(
    "{\"maxReceiveCount\":\"%d\", \"deadLetterTargetArn\":\"%s\"}",
    5, dlqArn);

sqs.setQueueAttributes(new SetQueueAttributesRequest()
    .withQueueUrl(qUrl)
    .addAttributesEntry(QueueAttributeName.RedrivePolicy.toString(),
                        redrivePolicy));

There’s also a new operation in the Amazon SQS API to help you identify which of your queues are set up to deliver messages to a specific dead letter queue. If you want to know what queues are sending messages to a dead letter queue, just use the AmazonSQS#listDeadLetterSourceQueues operation.

List<String> sourceQueues = sqs.listDeadLetterSourceQueues(
    new ListDeadLetterSourceQueuesRequest()
        .withQueueUrl(dlqUrl)).getQueueUrls();
System.out.println("Source Queues Delivering to " + dlqUrl);
for (String queueUrl : sourceQueues) {
    System.out.println(" * " + queueUrl);
}

Dead letter queues are a great way to add more resiliency to your queue-based applications. Have you set up any dead letter queues in Amazon SQS yet?

Performing Conditional Writes Using the Amazon DynamoDB Transaction Library

by Wade Matveyenko | in Java

Today we’re lucky to have another guest post by David Yanacek from the Amazon DynamoDB team. David is sharing his deep knowledge of the DynamoDB transaction library to explain how to use it together with the conditional writes feature of Amazon DynamoDB.


The DynamoDB transaction library provides a convenient way to perform atomic reads and writes across multiple DynamoDB items and tables. The library does all of the nuanced item locking, commits, applies, and rollbacks for you, so that you don’t have to worry about building your own state machines or other schemes to make sure that writes eventually happen across multiple items. In this post, we demonstrate how to use the read-modify-write pattern with the transaction library to accomplish the same atomic checks you were used to getting by using conditional writes with the vanilla DynamoDB API.

The transaction library exposes as much of the low-level Java API as possible, but it does not support conditional writes out of the box. Conditional writes are a way of asking DynamoDB to perform a write operation like PutItem, UpdateItem, or DeleteItem, but only if certain attributes of the item still have the values that you expect, right before the write goes through. Instead of exposing conditional writes directly, the transaction library enables the read-modify-write pattern—just like the pattern you’re used to with transactions in an RDBMS. The idea is to start a transaction, read items using that transaction, validate that those items contain the values you expect to start with, write your changes using that same transaction, and then commit the transaction.  If the commit() call succeeds, it means that the changes were written atomically, and none of the items in the transaction were modified by any other transaction in the meantime, starting from the time when each item was read by your transaction.

Transaction library recap

Let’s say you’re implementing a tic-tac-toe game. You have an item in a DynamoDB table representing a single match of the game, with an attribute for each position on the board (Top-Left, Bottom-Right, etc.). Also, to make this into a multi-item transaction, let’s add two more items—one per player in the game, each with an attribute saying whether it is currently that player’s turn or not. The items might look something like this:

Games table item:

{
  "GameId": "cf3df",
  "Turn": "Bob",
  "Top-Right": "O"
}

Users table items:

{
  "UserId": "Alice",
  "IsMyTurn": 0
}

{
  "UserId": "Bob",
  "IsMyTurn": 1
}

Now when Bob plays his turn in the game, all three items need to be updated:

  1. The Bob record needs to be marked as "Not my turn anymore."
  2. The Alice record needs to be marked as "It’s my turn now."
  3. The Game record needs to be marked as "It’s Alice’s turn, and also the Top-Left has an X in it."

If you write your application so that it performs three UpdateItem operations in a row, a few problems could occur. For example, your application could crash after completing only one of the writes, and something else in your application would then need to notice this and pick up where it left off before anything else happens in the game. Fortunately, the transaction library can make these three separate operations happen together as a single transaction: either all of the writes go through together, or, if another transaction overlaps with yours at the same time, only one of those transactions succeeds.

The code for doing this in a transaction looks like this:

// Start a new transaction
Transaction t = txManager.newTransaction();
 
// Update Alice's record to let her know that it is now her turn.
t.updateItem(
  new UpdateItemRequest()
    .withTableName("Users")
    .addKeyEntry("UserId", new AttributeValue("Alice"))
    .addAttributeUpdatesEntry("IsMyTurn",
            new AttributeValueUpdate(new AttributeValue("1"), AttributeAction.PUT)));
 
// Update Bob's record to let him know that it is not his turn anymore.
t.updateItem(
  new UpdateItemRequest()
    .withTableName("Users")
    .addKeyEntry("UserId", new AttributeValue("Bob"))
    .addAttributeUpdatesEntry("IsMyTurn",
            new AttributeValueUpdate(new AttributeValue("0"), AttributeAction.PUT)));
 
// Update the Game item to mark the spot that was played, and make it Alice's turn now.
t.updateItem(
  new UpdateItemRequest()
    .withTableName("Games")
    .addKeyEntry("GameId", new AttributeValue("cf3df"))
    .addAttributeUpdatesEntry("Top-Left", 
            new AttributeValueUpdate(new AttributeValue("X"), AttributeAction.PUT))
    .addAttributeUpdatesEntry("Turn",
            new AttributeValueUpdate(new AttributeValue("Alice"), AttributeAction.PUT)));
 
// If no exceptions are thrown by this line, it means that the transaction was committed.
t.commit();

What about conditional writes?

The preceding code makes sure that the writes go through atomically, but that’s not enough logic for making a move in the game. We need to make sure that, when the transaction goes through, there wasn’t a transaction right before it where Bob already played his turn. In other words, how do we make sure that Bob doesn’t play twice in a row—for example, by trying to sneak in two turns before Alice has a chance to move? If there was only a single item involved, say the "Games" item, we could accomplish this by using conditional writes (the Expected clause), like so:

// An example of a conditional update using the DynamoDB client (not the transaction library)
dynamodb.updateItem(
  new UpdateItemRequest()
    .withTableName("Games")
    .addKeyEntry("GameId", new AttributeValue("cf3df"))
    .addAttributeUpdatesEntry("Top-Left", 
    		new AttributeValueUpdate(new AttributeValue("X"), AttributeAction.PUT))
    .addAttributeUpdatesEntry("Turn",
    		new AttributeValueUpdate(new AttributeValue("Alice"), AttributeAction.PUT))
    .addExpectedEntry("Turn", new ExpectedAttributeValue(new AttributeValue("Bob"))) // A condition to ensure it's still Bob's turn
    .addExpectedEntry("Top-Left", new ExpectedAttributeValue(false)));               // A condition to ensure the Top-Left hasn't been played

This code now correctly updates the single Game item. However, conditional writes in DynamoDB can only refer to the single item the operation is updating, and our transaction contains three items that need to be updated together, only if the Game is still in the right state. Therefore, we need some way of mixing the original transaction code with these “conditional check” semantics.

Conditional writes with the transaction library

We started off with code for a transaction that coordinated the writes to all three items atomically, but it didn’t ensure that it was still Bob’s turn when it played Bob’s move. Fortunately, adding that check is easy: it’s simply a matter of adding a read to the transaction, and then performing the verification on the client-side. This is sometimes referred to as a "read-modify-write" pattern:

// Start a new transaction, just like before.
Transaction t = txManager.newTransaction();
 
// First, read the Game item.
Map<String, AttributeValue> game = t.getItem(
    new GetItemRequest()
        .withTableName("Games")
        .addKeyEntry("GameId", new AttributeValue("cf3df"))).getItem();
 
// Now check the Game item to ensure it's in the state you expect, and bail out if it's not.
// These checks serve as the "expected" clause.  
if (! "Bob".equals(game.get("Turn").getS())) {
    t.rollback();
    throw new ConditionalCheckFailedException("Bob can only play when it's Bob's turn!");
}
 
if (game.containsKey("Top-Left")) {
    t.rollback();
    throw new ConditionalCheckFailedException("Bob cannot play in the Top-Left because it has already been played.");
}
 
// Again, update Alice's record to let her know that it is now her turn.
t.updateItem(
    new UpdateItemRequest()
        .withTableName("Users")
        .addKeyEntry("UserId", new AttributeValue("Alice"))
        .addAttributeUpdatesEntry("IsMyTurn",
            new AttributeValueUpdate(new AttributeValue("1"), AttributeAction.PUT)));
 
// And again, update Bob's record to let him know that it is not his turn anymore.
t.updateItem(
    new UpdateItemRequest()
        .withTableName("Users")
        .addKeyEntry("UserId", new AttributeValue("Bob"))
        .addAttributeUpdatesEntry("IsMyTurn",
            new AttributeValueUpdate(new AttributeValue("0"), AttributeAction.PUT)));
 
// Finally, update the Game item to mark the spot that was played and make it Alice's turn now.
t.updateItem(
    new UpdateItemRequest()
        .withTableName("Games")
        .addKeyEntry("GameId", new AttributeValue("cf3df"))
        .addAttributeUpdatesEntry("Top-Left", 
            new AttributeValueUpdate(new AttributeValue("X"), AttributeAction.PUT))
        .addAttributeUpdatesEntry("Turn",
            new AttributeValueUpdate(new AttributeValue("Alice"), AttributeAction.PUT)));
 
// If no exceptions are thrown by this line, it means that the transaction was committed without interference from any other transactions.
try {
    t.commit();
} catch (TransactionRolledBackException e) {
    // If any of the items in the transaction were changed or read in the meantime by a different transaction, then this will be thrown.
    throw new RuntimeException("The game was changed while this transaction was happening. You probably want to refresh Bob's view of the game.", e);
}

There are two main differences from the first approach.

  • First, the code calls GetItem on the transaction and checks to make sure the item is in the state your application expects it to be in. If not, it rolls back the transaction and returns an error to the caller. This is done in the same transaction as the subsequent updates. When you read an item in a transaction, the transaction library locks the item in the same way as when you modify it in the transaction. Your application can still read an item without interfering with it while it is locked, but it must do so outside of a transaction, using one of the read isolation levels on the TransactionManager (a sketch follows this list). More about read isolation levels is available in the design document for the transaction library.
  • Next, the code checks for TransactionRolledBackException. This check could have been done in the first example as well, but it’s called out in this example to show what will happen if another transaction either reads or writes any of the items involved in the transaction while yours was going on. When this happens, you might want to retry the whole transaction (start from the beginning—don’t skip any steps), or refresh your client’s view so that they can re-evaluate their move, since the state of the game may have changed.
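
For illustration only, a read at the COMMITTED isolation level might look something like the following. This snippet is not from the original post; it assumes the TransactionManager getItem overload that takes an isolation level and the Transaction.IsolationLevel enum described in the library's design document.

// Not from the original post: a sketch of reading the Game item outside of any
// transaction, assuming TransactionManager#getItem(GetItemRequest, IsolationLevel)
// and the Transaction.IsolationLevel enum from the transaction library.
Map<String, AttributeValue> game = txManager.getItem(
    new GetItemRequest()
        .withTableName("Games")
        .addKeyEntry("GameId", new AttributeValue("cf3df")),
    Transaction.IsolationLevel.COMMITTED).getItem();  // read committed state without taking a lock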

While the preceding code doesn’t literally use the conditional writes API in DynamoDB (through the Expected parameter), it functionally does the same atomic validation—except with the added capability of performing that check and write atomically across multiple items.

More info

You can find the DynamoDB transaction library in the AWS Labs repository on GitHub. You’ll also find a more detailed write-up describing the algorithms it uses. You can find more usage information about the transaction library in the blog post that announced the library. And if you want to see some working code that uses transactions, check out TransactionExamples.java in the same repo.

For a recap on conditional writes, see part of a talk called Amazon DynamoDB Design Patterns for Ultra-High Performance Apps from the 2013 AWS re:Invent conference. You may find the rest of the talk useful as well, but the segment on conditional writes is only five minutes long.

Two New Amazon RDS Database Engines in Eclipse

We’re excited to announce support for two more Amazon RDS database engines in the AWS Toolkit for Eclipse. You can now configure connections to PostgreSQL and Microsoft SQL Server RDS database instances directly from within Eclipse by opening the AWS Explorer view and double-clicking on your RDS database instance.

The first time you select your RDS database instance, you’ll be asked for some basic information about connecting to it, such as your database password, which JDBC driver to use, and whether you want Eclipse to automatically open permissions in your security group to allow database connections.
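
Under the hood, these are ordinary JDBC connections. As a point of reference, connecting to the same kind of instance from plain Java code might look roughly like the sketch below; the endpoint, database name, user, and password are hypothetical, and the snippet assumes the java.sql imports and a PostgreSQL JDBC driver on the classpath.

// Hypothetical endpoint and credentials; copy the real endpoint from the RDS console.
String url = "jdbc:postgresql://mydbinstance.abc123xyz.us-east-1.rds.amazonaws.com:5432/mydb";

try (Connection connection = DriverManager.getConnection(url, "masteruser", "mypassword");
     Statement statement = connection.createStatement();
     ResultSet results = statement.executeQuery("SELECT version()")) {
    while (results.next()) {
        // Print the PostgreSQL server version to confirm the connection works.
        System.out.println(results.getString(1));
    }
}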

Once you’ve configured a connection to your database, you can use all the tools from the Eclipse Data Tools Platform. You can browse your schemas, export data, run queries in SQL Scrapbook, and more.

If you don’t have any Amazon RDS database instances yet, you can go to the Amazon RDS console and launch a new database instance. With just a few clicks, you can launch a fully managed MySQL, Oracle, PostgreSQL, or Microsoft SQL Server database.

Are you using any of the database tools in Eclipse to work with your RDS databases?

Using New Regions and Endpoints

by Jeremy Lindblom | in PHP

Last week, a customer asked us how they could configure the AWS SDK for PHP to use Amazon SES with the EU (Ireland) Region. SES had just released support for the EU Region, but there was no tagged version of the SDK that supported it yet.

Our typical process is to push new support for regions to the master branch of the AWS SDK for PHP repository as soon as possible after they are announced. In fact, at the time the customer asked us about EU Region support in SES, we had already pushed out support for it. However, if you use only tagged versions of the SDK, as you should with production code, then you may have to wait a week or two until a new version of the SDK is released.

Configuring the base URL of your client

Fortunately, there is a way to use new regions and endpoints, even if the SDK does not yet support a new region for a service. You can manually configure the base_url of a client when you instantiate it. For example, to configure an SES client to use the EU Region, do the following:

$ses = Aws\Ses\SesClient::factory(array(
    'key'      => 'YOUR_AWS_ACCESS_KEY_ID',
    'secret'   => 'YOUR_AWS_SECRET_KEY',
    'region'   => 'eu-west-1',
    'base_url' => 'https://email.eu-west-1.amazonaws.com',
));

Remember, you only need to specify the base_url if the SDK doesn’t already support the region. For regions that the SDK does support, the endpoint is automatically determined.

To find the correct URL to use for your desired service and region, see the Regions and Endpoints page of the AWS General Reference documentation.

Using the base_url for other reasons

The base_url option can be used for more than just accessing new regions. It can be used to allow the SDK to send requests to any endpoint compatible with the API of the service you are using (e.g., mock/test services, private beta endpoints).

An example of this is the DynamoDB Local tool that acts as a small client-side database and server that mimics Amazon DynamoDB. You can easily configure a DynamoDB client to work with DynamoDB Local by using the base_url option (assuming you have correctly installed and started DynamoDB Local).

$dynamodb = Aws\DynamoDb\DynamoDbClient::factory(array(
    'key'      => 'YOUR_AWS_ACCESS_KEY_ID',
    'secret'   => 'YOUR_AWS_SECRET_KEY',
    'region'   => 'us-east-1',
    'base_url' => 'http://localhost:8000',
));

For more information, see Setting a custom endpoint in the AWS SDK for PHP User Guide.

Using the latest SDK via Composer

If you are using Composer with the SDK, then you have another option for picking up new features, like newly supported regions, without modifying your code. If you need to use a new feature or bugfix that is not yet in a tagged release, you can do so by adjusting the SDK dependency in your composer.json file to use our development alias 2.5.x-dev.

{
    "require": {
        "aws/aws-sdk-php": "2.5.x-dev"
    }
}

Using the development alias, instead of dev-master, is ideal, because if you have other dependencies that require the SDK, version constraints like "2.5.*" will still resolve correctly. Remember that relying on a non-tagged version of the SDK is not recommended for production code.

Release: AWS SDK for PHP – Version 2.5.2

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.5.2 of the AWS SDK for PHP. This release adds support for dead letter queues to the Amazon Simple Queue Service client. Please see the official release notes or the release CHANGELOG for a complete list of changes.

Install the SDK

Steve Roberts Interviewed in Episode 255 of the PowerScripting Podcast

A few weeks ago, Steve Roberts, from the AWS SDK and Tools team for .NET, was pleased to be invited to take part in an episode of the PowerScripting Podcast, chatting with fellow developers about PowerShell here at AWS, the AWS SDK for .NET, and other general topics (including his choice of superhero!). The recording of the event has now been published and can be accessed here.

As mentioned in the podcast, a new book has also just been published about using PowerShell with AWS. More details can be found on the publisher’s website at Pro PowerShell for Amazon Web Services.