AWS Developer Blog

DynamoDB Series Kickoff

by Pavel Safronov | in .NET

Last week, Amazon DynamoDB added support for JSON document data structures. With this update, DynamoDB now supports nested data in the form of lists (L type) and maps (M type). Also part of this update was native support for booleans (BOOL type) and nulls (NULL type).

This week, we will be running a series of daily blog posts that will explain the new changes and how they relate to the AWS SDK for .NET, and we will see how you can take advantage of these new types to work with complex objects in all three .NET SDK DynamoDB APIs. In this, the first blog post of the series, we will see how the low-level API has changed. In the following days, we will cover the Document Model, Conversion Schemas, Object Persistence Model, and finally Expressions.

New types

Until now, DynamoDB had only six data types:

  • Scalars N, S, and B that represent number, string, and binary data.
  • Sets NS, SS, and BS that represent number set, string set, and binary set.
    Sets have the limitation that the data they store has to be homogeneous (e.g., SS could only contain S elements) and unique (no two elements could be the same).

This release expands the possible data types with four new additions:

  • BOOL represents boolean data.
  • NULL represents null values.
  • L type represents a list of elements.
  • M type represents a string-to-element map.

The key point about L and M types is that they can contain any DynamoDB type. This allows you to create, for example, lists of maps of lists, which in turn can contain a mix of numbers, strings, bools, and nulls, or any other conceivable combination of attributes.
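Conceptually, an item with L and M attributes is a tree whose branches are lists and maps and whose leaves are scalars. As a rough plain-Java model of that nesting (illustrative only: these are ordinary collections, not the SDK's AttributeValue types, and the sample values are hypothetical):

```java
import java.util.List;
import java.util.Map;

public class NestedItem {
    // Model an M attribute as a Map, an L attribute as a List;
    // leaves are ordinary scalars (String, Number, Boolean).
    public static Map<String, Object> sampleItem() {
        return Map.of(
            "Id", 1,
            "Product", "DataWriter",
            "Metadata", Map.of(
                "InternalVersion", 1.2,
                "Developers", List.of("Alan", "Franko"),
                "SampleInput", List.of(true, 42, List.of("apple", "orange"))));
    }

    // Walk a path of map keys and list indexes down the tree.
    public static Object get(Object node, Object... path) {
        for (Object step : path) {
            if (step instanceof Integer) {
                node = ((List<?>) node).get((Integer) step);
            } else {
                node = ((Map<?, ?>) node).get(step);
            }
        }
        return node;
    }

    public static void main(String[] args) {
        // Descend Metadata -> SampleInput -> index 2 -> index 0.
        System.out.println(get(sampleItem(), "Metadata", "SampleInput", 2, 0)); // prints "apple"
    }
}
```

Navigating by a path of map keys and list indexes like this is essentially what DynamoDB's document paths do for nested attributes.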


Low-level API changes

The low-level API changes are straightforward: the new DynamoDB types are now supported in all data calls. Here's a sample that shows how both old and new types can be used in a PutItem call.

// Put item
client.PutItem("SampleTable", new Dictionary<string, AttributeValue>
{
    { "Id", new AttributeValue { N = "1" } },
    { "Product", new AttributeValue { S = "DataWriter" } },
    { "Aliases", new AttributeValue {
        SS = new List<string> { "Prod", "1.0" } } },
    { "IsPublic", new AttributeValue { BOOL = false } },
    { "Metadata", new AttributeValue {
        M = new Dictionary<string, AttributeValue>
        {
            { "InternalVersion", new AttributeValue { N = "1.2" } },
            { "Developers", new AttributeValue {
                SS = new List<string> { "Alan", "Franko" } } },
            { "SampleInput", new AttributeValue {
                L = new List<AttributeValue>
                {
                    new AttributeValue { BOOL = true },
                    new AttributeValue { N = "42" },
                    new AttributeValue { NULL = true },
                    new AttributeValue {
                        SS = new List<string> { "apple", "orange" } }
                } } }
        } } }
});

As you can see, the new M and L AttributeValue types may contain AttributeValues, allowing complex, nested data to be stored in a single DynamoDB record. In the above example, the item we just stored into DynamoDB will have an attribute of type M named "Metadata". This attribute will in turn contain three other attributes: N (number), SS (string set), and L (list). The list contains four more attributes, which in turn can be other M and L types, though in our example they are not.

Tomorrow, we will take a look at how the new additions can be used with the Document Model API.

Introducing DynamoDB Document API (Part 2)

by Hanson Char | in Java

In the previous blog, Introducing DynamoDB Document API (Part 1), we saw how to program against the DynamoDB Document API and produce code that is both easy to write and read.  But why is the API called the Document API, and how are JSON-style documents supported?

This perhaps can best be explained, well, with code! Using the same Game table from the previous blog, let’s start with a game object directly represented by a JSON document:

        {
            "Status" : "IN_PROGRESS",
            "GameId" : "abc",
            "Player1-Position" : 15,
            "Player2-Position" : 12
        }

First, we put this game to Amazon DynamoDB as a structured document:

AmazonDynamoDBClient client = new AmazonDynamoDBClient(...);
DynamoDB dynamodb = new DynamoDB(client);
Table table = dynamodb.getTable("Game");
String json = "{"
                    + "\"Status\" : \"IN_PROGRESS\","
                    + "\"GameId\" : \"abc\","
                    + "\"Player1-Position\" : 15,"
                    + "\"Player2-Position\" : 12"
                    + "}";
Item jsonItem = Item.fromJSON(json);
table.putItem(jsonItem);

Suppose we need to update the game, changing the status to "SUSPENDED", and adding 1 to the first player’s position, but only if both players’ positions are less than 20 and if the current status is "IN_PROGRESS":

UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
            .withPrimaryKey("GameId", "abc")
            .withAttributeUpdate(
                new AttributeUpdate("Player1-Position").addNumeric(1),
                new AttributeUpdate("Status").put("SUSPENDED"))
            .withExpected(
                new Expected("Player1-Position").lt(20),
                new Expected("Player2-Position").lt(20),
                new Expected("Status").eq("IN_PROGRESS")));

Finally, let’s get back the updated document as JSON:

Item itemUpdated = outcome.getItem();
String jsonUpdated = itemUpdated.toJSONPretty();

Here is the output in JSON:

      {
          "Status" : "SUSPENDED",
          "GameId" : "abc",
          "Player1-Position" : 16,
          "Player2-Position" : 12
      }

As you can see, saving JSON as a structured document in Amazon DynamoDB, or updating, retrieving, and converting the document back into JSON, is as easy as 1-2-3. :)  You can find more examples in the A-Z Document API quick-start folder on GitHub. Happy coding until next time!

Getting Ready for re:Invent

by Norm Johanson | in .NET

AWS re:Invent 2014 is coming up fast. Steve and I are heading to Las Vegas and will be presenting a session where we discuss some of the latest features of the AWS SDK for .NET. We’ll also be hanging out in the Expo area at the AWS Developer Resources booth, so please drop by and say hi. We would love to hear how you use our .NET tooling.

It’s not too late to register for AWS re:Invent 2014! See you in Las Vegas!

Deploying Ruby on Rails Applications to AWS OpsWorks

by Alex Wood | in Ruby

To begin our series on using Ruby on Rails with Amazon Web Services, we are going to start at the beginning: deploying our application. Today, we will be deploying our application to AWS OpsWorks.

Following along with this post, you should be able to deploy our "Todo Sample App" to AWS using OpsWorks, with your application and database running on different machine instances.

Getting Your Application Ready to Deploy

You can deploy the Todo sample application to OpsWorks directly from its public GitHub repo, using the ‘opsworks’ branch. If you explore the repo, you will notice that we’ve made a few design choices:

  • Our secrets file at config/secrets.yml expects the RAILS_SECRET_TOKEN environment variable to be set on our application servers.
  • We have required the mysql2 gem, to interface with a MySQL database.
  • We have required the unicorn gem, and will use unicorn as our app server.
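For reference, wiring that environment variable into the secrets file typically looks something like the following (an illustrative sketch; check the repo's actual config/secrets.yml for the exact contents):

```yaml
# config/secrets.yml
production:
  secret_key_base: <%= ENV["RAILS_SECRET_TOKEN"] %>
```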

Creating an OpsWorks Stack

Log in to the AWS Console and navigate to the AWS OpsWorks Console. Click Add Stack and fill out the form like so:

Add Stack Screen

Don’t worry about the "Advanced" settings for now – we won’t need them during this part of the tutorial. Once you’ve filled out the form, just press Create Stack and you’re done.

Creating the Rails App Server Layer

After creating a stack, you’ll find yourself at a page prompting you to create a layer, an instance, and an app. To start, click Add a layer.

We are making a few changes to the default options here. They are:

  • Using Ruby version 2.1.
  • Using "nginx and Unicorn" instead of "Apache2 and Passenger".
  • Using RubyGems version 2.2.1.

Once you’re all done, click Add Layer. You’ll be redirected to the "Layers" screen.

Creating the Database Layer

Next, we’re going to create our database layer. On the layers screen, click + Layer.

Add MySQL Layer

Choose "MySQL" from the drop-down box, and leave everything else as-is. Of course, if you're taking your own screenshots, avoid publishing the password you choose!

Click Add Layer and you’re done with this step.

MySQL Layer vs. Amazon RDS Layer

When creating your stack, you can choose to use an OpsWorks-managed EC2 instance running MySQL, called a "MySQL" layer, or you can create a layer that points to an existing Amazon RDS instance.

For this example, we are going to use a MySQL layer. You could substitute an RDS layer if you so chose. In future posts, we may explore this option in depth.

Adding Instances

We’ve made layers for our application servers and database, but we do not yet have application servers or a database. We will next create an instance of each.

Create an App Server Instance

From the "Layers" screen, click Add instance in the "Rails App Server" layer.

Add Rails Instance

We’re creating a t2.micro instance to optimize for cost (this is a demo after all). You may also want to create an SSH key and specify it here in order to be able to log in to your host for debugging purposes, but we don’t strictly need it so we are going to skip that for now.

Click Add Instance once you're done, then click start to begin the instance setup process. While that runs, we are going to make our database instance.

One quick aside about using a t2.micro instance: you can only create them in a VPC. We have created a VPC in this example, but if you were creating a stack without a VPC, t2.micro instances would not be available to you. Other instance types will, of course, work for this example.

Create a Database Instance

You’ll note that, if you’re following along with each step, you’re now at the "Instances" page. From either here or the layers page, under "MySQL", click Add an instance.

Add MySQL Instance

As before, we are creating a t2.micro instance. Click Add Instance to create the instance, and click start to begin instance setup.

Adding the Application

While our instances are being set up, let's add our application. Click the Apps link on the sidebar, then click Add an app.

Add App

For this example, we're using the sample app's public Git repository as our Application Source, and using the opsworks branch to ensure that you're deploying the same code that existed when this post was written. You can name the app whatever you'd like, but the "TodoApp" name will match fields we will fill out later, so if you do change the name, make sure to use that new name going forward wherever we use "TodoApp".

Add App

To generate a value for the RAILS_SECRET_TOKEN environment variable, you can run the command rake secret within your copy of the repo. Just remember to set this as a "Protected value", and if you're taking screenshots of your process, this is a good time to remove the placeholder value you used for the screenshot and add a new value generated with rake secret.

Click Add App when you are done.

Deploying the Application

It is likely that your instances are done being created and set up by now, but double check that they are both online before continuing to this step. If by chance they are not quite set up, by the time you prepare a cup of tea and come back, they should be ready.

Click the Deployments link on the sidebar, then click the Deploy an App button.

Deploy App

Since we have not done so yet, remember to check "Yes" for the "Migrate database" setting. We will also need this custom JSON to ensure the "mysql2" adapter is used as intended:

        {
          "deploy": {
            "todoapp": {
              "database": {
                "adapter": "mysql2"
              }
            }
          }
        }

Click Deploy, and grab another cup of tea. You’ve now deployed the Ruby on Rails "Todo" sample app to AWS OpsWorks!

Use Custom JSON for All Deployments

You probably don’t want to be filling in the custom JSON for your adapter choice with every deployment. Fortunately, you can move this custom JSON into your stack settings to have it go with every deployment.

Click the Stack link on the sidebar, open Stack Settings, and click Edit.

Stack Settings

Add the custom JSON you used for your deployment earlier, and click Save.

Try It Out

To view the app in action, click on your app server’s name on the deployment screen to go to the server’s info page. Click the link next to "Public DNS", and you should see the front page of the application:

You can add tasks, mark them complete, and delete them as you like. In short, your application is running and performing database transactions.

Hello TodoApp!


In this post, we started with a Ruby on Rails application, and went step-by-step through the process to get it up and running on AWS with OpsWorks. Now, you can follow this same process to get your own Rails application running on AWS.

Now that we can deploy our application, we will begin to explore ways to make our app scale, improve availability, and optimize some common speed bottlenecks.

Have any questions, comments, or problems getting the app up and running? Suggestions for topics you would like to see next? Please reach out to us in the comments!

Version 3 Preview of the AWS SDK

by Jeremy Lindblom | in PHP

We’re excited to introduce you to the preview release of Version 3 of the AWS SDK for PHP! As of today, the preview release of Version 3 (V3) is available on GitHub and via Composer.

Two years ago, we released Version 2 (V2) of the SDK. Since then, thousands of developers and companies have adopted it. We are sincerely grateful to all of our users and contributors. We have been constantly collecting your feedback and ideas, and continually watching the evolution of PHP, AWS, and the Guzzle library.

Earlier this year, we felt we could make significant improvements to the SDK, but only if we could break a few things. Since receiving a unanimously positive response to our blog post about updating to the latest version of Guzzle a few months ago, we’ve been working hard on V3, and we’re ready to share it with you.

What’s new?

The new version of the SDK provides a number of important benefits to AWS customers. It is smaller and faster, with improved performance for both serial and concurrent requests. It has several new features based on its use of the new Guzzle 5 library (which also includes the new features from Guzzle 4). The SDK will also, starting from V3, follow the official SemVer spec, so you can have complete confidence when setting version constraints in your projects’ composer.json files.

Let’s take a quick look at some of the new features.

Asynchronous requests

With V3, you can perform asynchronous operations, which allow you to more easily send requests concurrently. To achieve this, the SDK returns future result objects when you specify the @future parameter, which block only when they are accessed. For managing more robust asynchronous workflows, you can retrieve a promise from the future result, to perform logic once the result becomes available or an exception is thrown.


// Upload a file to your bucket in Amazon S3.
// Use '@future' to make the operation complete asynchronously.
$result = $s3Client->putObject([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
    '@future' => true,
]);

After creating a result using the @future attribute, you now have a future result object. You can use the data stored in the future in a blocking (or synchronous) manner by just using the result as normal (i.e., like a PHP array).

// Wait until the response has been received before accessing its data.
echo $result['ObjectURL'];

If you want to allow your requests to complete asynchronously, then you should use the promise API of the future result object. To retrieve the promise, you must use the then() method of the future result, and provide a callback to be completed when the promise is fulfilled. Promises allow you to more easily compose pipelines when dealing with asynchronous results. For example, we could use promises to save the Amazon S3 object’s URL to an item in an Amazon DynamoDB table, once the upload is complete.

// Note: $result is the result of the preceding example's PutObject operation.
$result->then(
    function ($s3Result) use ($ddbClient) {
        $ddbResult = $ddbClient->putItem([
            'TableName' => 'your-table',
            'Item' => [
                'topic' => ['S' => 'docs'],
                'time'  => ['N' => (string) time()],
                'url'   => ['S' => $s3Result['ObjectURL']],
            ],
            '@future' => true,
        ]);

        // Don't break promise chains; return a value. In this case, we are returning
        // another promise, so the PutItem operation can complete asynchronously too.
        return $ddbResult->promise();
    }
)->then(
    function ($result) {
        echo "SUCCESS!\n";
        return $result;
    },
    function ($error) {
        echo "FAILED. " . $error->getMessage() . "\n";
        // Forward the rejection by re-throwing it.
        throw $error;
    }
);

The SDK uses the React/Promise library to provide the promise functionality, allowing for additional features such as joining and mapping promises.
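If this promise style is new to you, the shape of the pipeline may be familiar from other languages. Here is a rough cross-language analogy using Java's CompletableFuture (illustrative only: the stand-in operations below are hypothetical, and none of this is SDK code):

```java
import java.util.concurrent.CompletableFuture;

public class PromisePipeline {
    // Stand-in for the asynchronous S3 upload: resolves to an object URL.
    static CompletableFuture<String> putObjectAsync() {
        return CompletableFuture.supplyAsync(() -> "https://example.com/docs/file.pdf");
    }

    // Stand-in for the asynchronous DynamoDB write that records the URL.
    static CompletableFuture<String> putItemAsync(String url) {
        return CompletableFuture.supplyAsync(() -> "saved:" + url);
    }

    public static CompletableFuture<String> pipeline() {
        return putObjectAsync()
            // Like returning $ddbResult->promise() from the first callback:
            // thenCompose keeps the second operation asynchronous too.
            .thenCompose(url -> putItemAsync(url))
            // Like the fulfillment callback: pass the value along the chain.
            .thenApply(result -> result)
            // Like the rejection callback: re-throwing forwards the failure.
            .exceptionally(error -> { throw new RuntimeException(error); });
    }

    public static void main(String[] args) {
        System.out.println(pipeline().join());
    }
}
```

The key parallel is thenCompose: like returning another promise from a then() callback, it chains the second operation without blocking on the first.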

JMESPath querying of results

The result object also has a new search() method that allows you to query the result data using JMESPath, a query language for JSON (or PHP arrays, in our case).


$result = $ec2Client->describeInstances();
print_r($result->search('Reservations[].Instances[].InstanceId'));


Example output:

    [0] => i-xxxxxxxx
    [1] => i-yyyyyyyy
    [2] => i-zzzzzzzz
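Under the hood, an expression such as Reservations[].Instances[].InstanceId is nested projection plus flattening. A hand-rolled Java sketch of that one query shape over plain maps and lists (hypothetical sample data; not a general JMESPath engine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FlattenQuery {
    // Evaluate the fixed projection Reservations[].Instances[].InstanceId
    // over a describeInstances-shaped result of nested maps and lists.
    public static List<String> instanceIds(Map<String, ?> result) {
        List<String> ids = new ArrayList<>();
        for (Object reservation : (List<?>) result.get("Reservations")) {
            for (Object instance : (List<?>) ((Map<?, ?>) reservation).get("Instances")) {
                ids.add((String) ((Map<?, ?>) instance).get("InstanceId"));
            }
        }
        return ids;
    }

    // Hypothetical sample data shaped like a DescribeInstances result:
    // two reservations, three instances in total.
    public static Map<String, ?> sampleResult() {
        return Map.of("Reservations", List.of(
            Map.of("Instances", List.of(
                Map.of("InstanceId", "i-xxxxxxxx"),
                Map.of("InstanceId", "i-yyyyyyyy"))),
            Map.of("Instances", List.of(
                Map.of("InstanceId", "i-zzzzzzzz")))));
    }
}
```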

Swappable and custom HTTP adapters

In V3, cURL is no longer required, but is still used by the default HTTP adapter. However, you can use other HTTP adapters, like the one shipped with Guzzle that uses PHP’s HTTP stream wrapper. You can also write custom adapters, which opens up the possibility of creating an adapter that integrates with a non-blocking event loop like ReactPHP.


Paginators

Paginators are a new feature in V3 that come as an addition to the Iterators from V2. Paginators are similar to Iterators, except that they yield Result objects instead of the items within a result. This is nice, because the paginator handles the tokens/markers for you, fetching multiple pages of results, while giving you the flexibility to extract whatever data you want.

// List all "directories" and "files" in the bucket.
$paginator = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',
    'Delimiter' => '/'
]);

foreach ($paginator as $result) {
    $jmespathExpr = '[CommonPrefixes[].Prefix, Contents[].Key][]';
    foreach ($result->search($jmespathExpr) as $item) {
        echo $item . "\n";
    }
}

Example output:

    [0] => dir1/
    [1] => dir2/
    [2] => file1
    [3] => file2
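The marker bookkeeping that getPaginator() hides can be sketched in plain Java (the Page and token shapes below are hypothetical, not the SDK's types):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class Paginator implements Iterable<List<String>> {
    // One "page" of results plus the marker for the next page (null = last page).
    public static class Page {
        final List<String> items;
        final String nextToken;
        public Page(List<String> items, String nextToken) {
            this.items = items;
            this.nextToken = nextToken;
        }
    }

    private final Function<String, Page> fetchPage;

    public Paginator(Function<String, Page> fetchPage) {
        this.fetchPage = fetchPage;
    }

    @Override
    public Iterator<List<String>> iterator() {
        return new Iterator<List<String>>() {
            private String token = "";   // "" means "first request, no marker yet"
            public boolean hasNext() { return token != null; }
            public List<String> next() {
                Page page = fetchPage.apply(token);
                token = page.nextToken;  // carry the marker into the next request
                return page.items;
            }
        };
    }

    public static List<String> collectAll(Paginator p) {
        List<String> all = new ArrayList<>();
        for (List<String> page : p) {
            all.addAll(page);
        }
        return all;
    }

    // A fake two-page listing, standing in for consecutive ListObjects calls.
    public static Paginator sample() {
        return new Paginator(token -> token.isEmpty()
            ? new Page(List.of("dir1/", "dir2/"), "page2")
            : new Page(List.of("file1", "file2"), null));
    }
}
```

The consumer just iterates pages; all the continuation-token plumbing stays inside the iterator.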

New event system

Version 3 features a new and improved event system. Command objects now have their own event emitter that is decoupled from the HTTP request events. There is also a new request "progress" event that can be used for tracking upload and download progress.

use GuzzleHttp\Event\ProgressEvent;

$s3->getHttpClient()->getEmitter()->on('progress', function (ProgressEvent $e) {
    echo 'Uploaded ' . $e->uploaded . ' of ' . $e->uploadSize . "\n";
});

$s3->putObject([
   'Bucket' => $bucket,
   'Key'    => 'docs/file.pdf',
   'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

Example output:

Uploaded 0 of 5299866
Uploaded 16384 of 5299866
Uploaded 32768 of 5299866
Uploaded 5275648 of 5299866
Uploaded 5292032 of 5299866
Uploaded 5299866 of 5299866

New client options

For V3, we changed some of the options you provide when instantiating a client, and we added a few new options that may help you work with services more easily.

  • "debug" – Set to true to print out debug information as requests are being made. You’ll see how the Command and Request objects are affected during each event, and an adapter-specific wire log of the request.
  • "retries" – Set the maximum number of retries the client will perform on failed and throttled requests. The default has always been 3, but now it is easy to configure.

These options can be set when instantiating a client.


$s3 = (new Aws\Sdk)->getS3([
    // Exists in Versions 2 and 3
    'profile'  => 'my-credential-profile',
    'region'   => 'us-east-1',
    'version'  => 'latest',

    // New in Version 3
    'debug'    => true,
    'retries'  => 5,
]);
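As a mental model, the "retries" option wraps each request in a loop along these lines (an illustrative sketch with hypothetical helpers; the SDK's actual retry logic also special-cases throttled requests):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Try the operation up to (retries + 1) times, sleeping with exponential
    // backoff (100ms, 200ms, 400ms, ...) between failed attempts.
    public static <T> T withRetries(Callable<T> operation, int retries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                last = (e instanceof RuntimeException)
                    ? (RuntimeException) e : new RuntimeException(e);
                try {
                    Thread.sleep(100L << attempt);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last;  // every attempt failed
    }

    // A hypothetical flaky operation that fails twice before succeeding.
    public static Callable<String> flaky() {
        final int[] calls = {0};
        return () -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("transient failure");
            }
            return "ok";
        };
    }
}
```

With the default of 3 retries, a request like flaky() that fails twice still succeeds transparently; only persistent failures surface to the caller.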

What has changed?

To make all of these improvements for V3, we needed to make some backward-incompatible changes. However, the changes from Version 2 to Version 3 are much fewer than the changes from Version 1 to Version 2. In fact, much of the way you use the SDK will remain the same. For example, the following code for writing an item to an Amazon DynamoDB table looks exactly the same in both V2 and V3 of the SDK.

$result = $dynamoDbClient->putItem([
    'TableName' => 'Contacts',
    'Item'      => [
        'FirstName' => ['S' => 'Jeremy'],
        'LastName'  => ['S' => 'Lindblom'],
        'Birthday'  => ['M' => [
            'Month' => ['N' => '11'],
            'Date'  => ['N' => '24'],
        ]],
    ],
]);

There are two important changes, though, that you should be aware of up front:

  1. V3 requires PHP 5.5 or higher and requires the use of Guzzle 5.
  2. You must now specify the API version (via the "version" client option) when you instantiate a client. This is important, because it allows you to lock in to the API versions of the services you are using. This helps both us and you maintain backward compatibility between future SDK releases, because you will be in charge of the API versions you are using. Your code will never be impacted by new service API versions until you update your version setting. If this is not a concern for you, you can default to the latest API version by setting 'version' to 'latest' (this is essentially the default behavior of V2).
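Because V3 follows SemVer, version constraints in composer.json carry real guarantees; for example, a hypothetical project could allow any backward-compatible 3.x release once V3 is stable:

```json
{
    "require": {
        "aws/aws-sdk-php": "~3.0"
    }
}
```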

What next?

We hope you are excited for Version 3 of the SDK!

We look forward to your feedback as we continue to work towards a stable release. Please reach out to us in the comments, on GitHub, or via Twitter (@awsforphp). We plan to publish more blog posts in the near future to explain some of the new features in more detail. We have already published the API docs for V3, but we’ll be working on improving all the documentation for V3, including creating detailed migration and user guides. We’ll also be speaking about V3 in our session at AWS re:Invent.

We will continue updating and making regular releases for V2 on the "master" branch of the SDK’s GitHub repository. Our work on V3 will happen on a separate "v3" branch until we are ready for a stable release.

Version 3 can be installed via Composer by requiring version 3.0.0-beta.1, or you can download the aws.phar file or get the source on GitHub.

Introducing DynamoDB Document API (Part 1)

by Hanson Char | in Java

Amazon DynamoDB recently announced support for storing entire JSON-style documents as single DynamoDB items. What is just as exciting is that the AWS SDK for Java has come up with a new Document API that makes it easy and simple to access all the features of Amazon DynamoDB, including the latest document support, but with less code!

The new Document API is designed from the ground up to be the next generation of API for accessing DynamoDB. It has an object-oriented API that provides full access to all the DynamoDB features including JSON data support, use of Document Path to access part of a document, new data types such as Map, List, etc.  The best part is, the resultant code is a lot less verbose, and is therefore both easier to write and read.

Alright, enough talking.  Perhaps the new API can best be illustrated with an example. Here I took the liberty of borrowing the code from a previous blog post, Using Improved Conditional Writes in DynamoDB, and rewriting it using the new API. To begin with, the original code is copied here:

public static void main(String[] args) {
    // To run this example, first initialize the client, and create a table
    // named 'Game' with a primary key of type hash / string called 'GameId'.
    AmazonDynamoDB dynamodb; // initialize the client
    try {
        // First set up the example by inserting a new item
        // To see different results, change either player's
        // starting positions to 20, or set player 1's location to 19.
        Integer player1Position = 15;
        Integer player2Position = 12;
        dynamodb.putItem(new PutItemRequest()
                .withTableName("Game")
                .addItemEntry("GameId", new AttributeValue("abc"))
                .addItemEntry("Player1-Position",
                    new AttributeValue().withN(player1Position.toString()))
                .addItemEntry("Player2-Position",
                    new AttributeValue().withN(player2Position.toString()))
                .addItemEntry("Status", new AttributeValue("IN_PROGRESS")));

        // Now move Player1 for game "abc" by 1,
        // as long as neither player has reached "20".
        UpdateItemResult result = dynamodb.updateItem(new UpdateItemRequest()
            .withTableName("Game")
            .addKeyEntry("GameId", new AttributeValue("abc"))
            .addAttributeUpdatesEntry(
                 "Player1-Position", new AttributeValueUpdate()
                     .withValue(new AttributeValue().withN("1"))
                     .withAction(AttributeAction.ADD))
            .addExpectedEntry(
                 "Player1-Position", new ExpectedAttributeValue()
                     .withValue(new AttributeValue().withN("20"))
                     .withComparisonOperator(ComparisonOperator.LT))
            .addExpectedEntry(
                 "Player2-Position", new ExpectedAttributeValue()
                     .withValue(new AttributeValue().withN("20"))
                     .withComparisonOperator(ComparisonOperator.LT))
            .addExpectedEntry(
                 "Status", new ExpectedAttributeValue()
                     .withValue(new AttributeValue().withS("IN_PROGRESS"))
                     .withComparisonOperator(ComparisonOperator.EQ))
            .withReturnValues(ReturnValue.ALL_NEW));

        if ("20".equals(
            result.getAttributes().get("Player1-Position").getN())) {
            System.out.println("Player 1 wins!");
        } else {
            System.out.println("The game is still in progress: "
                + result.getAttributes());
        }
    } catch (ConditionalCheckFailedException e) {
        System.out.println("Failed to move player 1 because the game is over");
    }
}

Now, let’s rewrite the same code using the DynamoDB Document API:

public static void main(String[] args) {
    // Initialize the client and DynamoDB object
    AmazonDynamoDBClient client = new AmazonDynamoDBClient(...);
    DynamoDB dynamodb = new DynamoDB(client);

    try {
        Table table = dynamodb.getTable("Game");
        table.putItem(new Item()
            .withPrimaryKey("GameId", "abc")
            .withInt("Player1-Position", 15)
            .withInt("Player2-Position", 12)
            .withString("Status", "IN_PROGRESS"));

        UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
            .withPrimaryKey("GameId", "abc")
            .withAttributeUpdate(
                new AttributeUpdate("Player1-Position").addNumeric(1))
            .withExpected(
                new Expected("Player1-Position").lt(20),
                new Expected("Player2-Position").lt(20),
                new Expected("Status").eq("IN_PROGRESS")));

        Item item = outcome.getItem();
        if (item.getInt("Player1-Position") == 20) {
            System.out.println("Player 1 wins!");
        } else {
            System.out.println("The game is still in progress: " + item);
        }
    } catch (ConditionalCheckFailedException e) {
        System.out.println("Failed to move player 1 because the game is over");
    }
}
As you can see, the new Document API allows the direct use of plain old Java data types and involves less boilerplate.  In fact, the DynamoDB Document API can entirely subsume what you can do with the low-level client (i.e., AmazonDynamoDBClient), but with a much cleaner programming model and less code.

I hope this has whetted your appetite for harnessing the power of Amazon DynamoDB using the Document API.  To see more examples, feel free to play with the code samples in the A-Z Document API quick-start folder on GitHub, or check out the AWS blog.

Blog Series: Ruby on Rails on Amazon Web Services

by Alex Wood | in Ruby

Welcome to a series on how to integrate Ruby on Rails apps with Amazon Web Services. In this series, we’re going to start from scratch with a simple app, and show you how to make it scalable, highly available, and fault tolerant.

The Sample App

For this blog series, we have built a sample app in Ruby on Rails. You can find it on GitHub as awslabs/todo-sample-app.

The app itself is designed to be simple to follow. It is a very basic todo list, where you can add tasks, mark them complete, or delete them, all on a single page. In this way, we can focus on the code changes needed to integrate the app with Amazon Web Services, without worrying about confusion over what the app itself does.

There’s no hand-waving here: all the code you need to do this is in this repo, and all the setup you need to do in AWS is in the posts.

What the Series Will Cover

We’re going to start by covering how to deploy the TodoApp to the cloud using AWS OpsWorks. Then, we will talk about speeding up your app by caching your static assets with Amazon CloudFront. We will go on to discuss other scaling and performance improvements you can make to solve real-world problems in the cloud.

Have a topic you’d love for us to cover? Let us know in the comments!

Up Next

The first post, showing you how to deploy the Todo Sample App to AWS OpsWorks, will be out soon. Stay tuned!

AWS re:Invent 2014

by Jason Fulghum | in Java

Just a little over a month from now, AWS re:Invent 2014 kicks off! This year, you'll have over 20 different tracks to choose from, from Big Data to Web Development and everything in between.

The AWS SDK for Java team will be at AWS re:Invent this year, and presenting a session on using the AWS SDK for Java. In particular, we’ll be highlighting many enhancements we’ve built into the AWS SDK for Java in the past year.

We all look forward to AWS re:Invent every year because it’s such a great opportunity to connect with you, hear about what you need, and teach you about some of the new stuff we’ve been working on.

It’s not too late to register for AWS re:Invent 2014! See you in Las Vegas!

Amazon Cognito Credentials Provider

by Norm Johanson | in .NET

Amazon Cognito helps you create unique identifiers for your end users that are kept consistent across devices and platforms. Cognito also delivers temporary, limited-privilege credentials to your app to access AWS resources. With Amazon Cognito, your app can support unauthenticated guest users as well as users authenticated through an identity provider, such as Facebook, Google, or Login with Amazon, or through developer authenticated identity providers.

A recent release of the AWS SDK for .NET added Amazon.CognitoIdentity.CognitoAWSCredentials, a credentials object that uses Cognito and the Security Token Service to retrieve credentials in order to make AWS calls.

The first step in setting up CognitoAWSCredentials is to create an "identity pool". (An identity pool is a store of user identity information specific to your account. The information is retrievable across client platforms, devices, and operating systems, so that if a user starts using the app on a phone and later switches to a tablet, the persisted app information is still available for that user.) You can create a new identity pool from the Amazon Cognito management console. If you are using the console, it will also provide you with the other pieces of information we will need:

  • Your account number: this is a 12-digit number, such as 123456789012, that is unique to your account.
  • The unauthenticated role ARN: this is a role that unauthenticated users will assume. For instance, this role can provide read-only permissions to your data.
  • The authenticated role ARN: authenticated users will assume this role. This role can have more extensive permissions to your data.


Here’s a simple code sample illustrating how this information is used to set up CognitoAWSCredentials, which can then be used to make a call to Amazon S3 as an unauthenticated user.

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    accountId,        // account number
    identityPoolId,   // identity pool id
    unAuthRoleArn,    // role for unauthenticated users
    null,             // role for authenticated users, not set
    region);          // the region of the identity pool, e.g., RegionEndpoint.USEast1

using (var s3Client = new AmazonS3Client(credentials))
{
    s3Client.ListBuckets();
}


As you can see, we are able to make calls with just the minimum amount of data required to identify the user. User permissions are controlled by the role, so you are free to configure access as you see fit.
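For instance, the unauthenticated role might grant read-only access to a single bucket of public content. A sketch of what such a role's access policy could look like (the bucket name is hypothetical, and your own policy will depend on what your app needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-app-public-content",
        "arn:aws:s3:::my-app-public-content/*"
      ]
    }
  ]
}
```

The authenticated role would typically carry a broader policy, for example one that also allows writes to user-specific paths.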

The next example shows how you can start using AWS as an unauthenticated user, then authenticate through Facebook and update the credentials to use Facebook credentials. Using this approach, you can grant different capabilities to authenticated users via the authenticated role. For instance, you might have a Windows Phone application that permits users to view content anonymously, but allows them to post if they are logged on with one or more of the configured providers.

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    accountId, identityPoolId,
    unAuthRoleArn,    // role for unauthenticated users
    authRoleArn,      // role for authenticated users
    region);          // the region of the identity pool, e.g., RegionEndpoint.USEast1

using (var s3Client = new AmazonS3Client(credentials))
{
    // Initial use will be unauthenticated
    s3Client.ListBuckets();

    // Authenticate the user through Facebook
    string facebookToken = GetFacebookAuthToken();

    // Add the Facebook login to the credentials. This will clear the current
    // AWS credentials and retrieve new AWS credentials using the authenticated role.
    credentials.AddLogin("graph.facebook.com", facebookToken);

    // This call will be performed with the authenticated role and credentials
    s3Client.ListBuckets();
}

This new credentials object provides even more functionality if used with the AmazonCognitoSyncClient that is part of the .NET SDK: if you are using both AmazonCognitoSyncClient and CognitoAWSCredentials, you don’t have to specify the IdentityPoolId and IdentityId properties when making calls with the AmazonCognitoSyncClient. These properties are automatically filled in from CognitoAWSCredentials. Our final example illustrates this, as well as an event that notifies us whenever the IdentityId for CognitoAWSCredentials changes. (The IdentityId can change in some cases, such as going from an unauthenticated user to an authenticated one.)

CognitoAWSCredentials credentials = GetCognitoAWSCredentials();

// Log identity changes
credentials.IdentityChangedEvent += (sender, args) =>
    Console.WriteLine("Identity changed: [{0}] => [{1}]", args.OldIdentityId, args.NewIdentityId);

using (var syncClient = new AmazonCognitoSyncClient(credentials))
{
    var result = syncClient.ListRecords(new ListRecordsRequest
    {
        DatasetName = datasetName
        // No need to specify these properties
        //IdentityId = "...",
        //IdentityPoolId = "..."
    });
}

For more information on Amazon Cognito, including use-cases and sample policies, visit the official Amazon Cognito page or the Cognito section of the Mobile Development blog.

AWS SDK for Ruby V2 Preview Release

by Trevor Rowe | on | in Ruby | Permalink | Comments |  Share

Version 2 of the AWS SDK for Ruby is available now as a preview release. If you use Bundler with some standard best-practices, you should be unaffected by the v2 release of the aws-sdk gem. This blog post highlights a few things you might want to be aware of.

Installing V2 Preview Release

To install the v2 preview release, use the --pre flag:

$ gem install aws-sdk --pre

If you are using Bundler, you must specify the full version until the preview status is removed:

gem 'aws-sdk', '2.0.0.pre'

Lock your Dependencies

The V2 Ruby SDK is not backward compatible with the V1 Ruby SDK. If you have a Bundler dependency on aws-sdk and do not specify a version, you will run into problems when the 2.0 final version is released.
To ensure you are unaffected by the major version bump, specify a version dependency in your Gemfile:

gem 'aws-sdk', '< 2.0'

Alternatively, you can change your gem dependency from aws-sdk to aws-sdk-v1:

gem 'aws-sdk-v1'

The AWS SDK for Ruby follows semantic versioning (semver). This allows users to update within the same major version with confidence that there are no backward-incompatible changes; if any are found, they will be treated as bugs.
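Because the SDK follows semver, Bundler's pessimistic operator (~>) is a convenient way to stay within a major version. A small sketch of how such constraints behave, using RubyGems' Gem::Requirement (the version numbers are illustrative):

```ruby
require 'rubygems'

# '~> 1.5' means ">= 1.5, < 2.0": minor and patch updates are allowed,
# but the next major version is excluded.
requirement = Gem::Requirement.new('~> 1.5')

puts requirement.satisfied_by?(Gem::Version.new('1.5.0'))  # true
puts requirement.satisfied_by?(Gem::Version.new('1.9.9'))  # true
puts requirement.satisfied_by?(Gem::Version.new('2.0.0'))  # false
```

In a Gemfile, writing gem 'aws-sdk', '~> 1.0' would similarly keep you on the V1 line.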

Use Both Versions in One Application

The V1 and V2 Ruby SDKs use different root namespaces: V1 uses AWS, while V2 uses Aws, to avoid conflicts. Because Ruby can load only one version of a single gem at a time, we now publish the V1 SDK as a separate gem, aws-sdk-v1, so that both versions can be loaded in the same application.

# in your Gemfile
gem 'aws-sdk-v1'
gem 'aws-sdk', '2.0.0.pre'

And then in your application:

require 'aws-sdk-v1'
require 'aws-sdk'

# v1 uses the AWS module, v2 uses the Aws module
s3_v1 = AWS::S3.new
s3_v2 = Aws::S3::Client.new

Links of Interest

Happy coding, and as always, feedback is welcomed!