AWS Developer Blog

Getting Ready for re:Invent

by Norm Johanson | in .NET

AWS re:Invent 2014 is coming up fast. Steve and I are heading to Las Vegas and will be presenting a session where we discuss some of the latest features of the AWS SDK for .NET. We’ll also be hanging out in the Expo area at the AWS Developer Resources booth, so please drop by and say hi. We would love to hear how you use our .NET tooling.

It’s not too late to register for AWS re:Invent 2014! See you in Las Vegas!

Deploying Ruby on Rails Applications to AWS OpsWorks

by Alex Wood | in Ruby

To begin our series on using Ruby on Rails with Amazon Web Services, we are going to start at the beginning: deployment. Today, we will be deploying our application to AWS OpsWorks.

By following along with this post, you will be able to deploy our "Todo Sample App" to AWS using OpsWorks, with your application and database running on separate instances.

Getting Your Application Ready to Deploy

You can deploy the Todo sample application to OpsWorks directly from its public GitHub repo, using the ‘opsworks’ branch. If you explore the repo, you will notice that we’ve made a few design choices:

  • Our secrets file at config/secrets.yml expects the RAILS_SECRET_TOKEN environment variable to be set on our application servers (a minimal sketch of such a file follows this list).
  • We have required the mysql2 gem, to interface with a MySQL database.
  • We have required the unicorn gem, and will use unicorn as our app server.
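For reference, here is a minimal sketch of what a secrets file wired to that environment variable might look like; the actual file in the repo may differ slightly:

# config/secrets.yml
production:
  secret_key_base: <%= ENV["RAILS_SECRET_TOKEN"] %>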

Creating an OpsWorks Stack

Log in to the AWS Console and navigate to the AWS OpsWorks Console. Click Add Stack and fill out the form like so:

Add Stack Screen

Don’t worry about the "Advanced" settings for now – we won’t need them during this part of the tutorial. Once you’ve filled out the form, just press Create Stack and you’re done.

Creating the Rails App Server Layer

After creating a stack, you’ll find yourself at a page prompting you to create a layer, an instance, and an app. To start, click Add a layer.

We are making a few changes to the default options here. They are:

  • Using Ruby version 2.1.
  • Using "nginx and Unicorn" instead of "Apache2 and Passenger".
  • Using RubyGems version 2.2.1.

Once you’re all done, click Add Layer. You’ll be redirected to the "Layers" screen.

Creating the Database Layer

Next, we’re going to create our database layer. On the layers screen, click + Layer.

Add MySQL Layer

Choose "MySQL" from the drop down box, and leave everything else as-is. Of course, if you’re taking your own screenshots, it is best to avoid sharing your passwords of choice as well!

Click Add Layer and you’re done with this step.

MySQL Layer vs. Amazon RDS Layer

When creating your stack, you can choose to use an OpsWorks-managed EC2 instance running MySQL, called a "MySQL" layer, or you can create a layer that points to an existing Amazon RDS instance.

For this example, we are going to use a MySQL layer. You could substitute an RDS layer if you so chose. In future posts, we may explore this option in depth.

Adding Instances

We’ve made layers for our application servers and database, but we do not yet have application servers or a database. We will next create an instance of each.

Create an App Server Instance

From the "Layers" screen, click Add instance in the "Rails App Server" layer.

Add Rails Instance

We’re creating a t2.micro instance to optimize for cost (this is a demo, after all). You may also want to create an SSH key and specify it here so that you can log in to your host for debugging purposes, but we don’t strictly need it, so we are going to skip that for now.

Click Add Instance once you’re done, then click start to begin the instance setup process. While that runs, we are going to create our database instance.

One quick aside about using a t2.micro instance: these are only available within a VPC. We have created a VPC in this example, but if you create a stack without a VPC, t2.micro instances will not be available to you. Other instance types will, of course, work for this example.

Create a Database Instance

You’ll note that, if you’re following along with each step, you’re now at the "Instances" page. From either here or the layers page, under "MySQL", click Add an instance.

Add MySQL Instance

As before, we are creating a t2.micro instance. Click Add Instance to create the instance, then click start to begin instance setup.

Adding the Application

While our instances are being set up, let’s add our application. Click the Apps link on the sidebar, then click Add an app.

Add App

For this example, we’re using the Git repository at https://github.com/awslabs/todo-sample-app.git as our Application Source, with the opsworks branch, to ensure that you’re deploying the same code that existed when this post was written. You can name the app whatever you’d like, but the name "TodoApp" will match fields we fill out later, so if you do change it, make sure to use your new name wherever we use "TodoApp" going forward.

Add App

To generate a value for the RAILS_SECRET_TOKEN environment variable, you can run the command rake secret within your copy of the repo. Just remember to set this as a "Protected value". And if you’re taking screenshots of your process, this is a good time to remove the placeholder value you used for the screenshot and add a new value generated with rake secret.
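For example, from the root of your copy of the repo (the output shown is an illustrative placeholder, not a real token):

$ rake secret
a1b2c3d4...   # a long hex string; use this as the RAILS_SECRET_TOKEN value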

Click Add App when you are done.

Deploying the Application

It is likely that your instances are done being created and set up by now, but double check that they are both online before continuing to this step. If by chance they are not quite set up, by the time you prepare a cup of tea and come back, they should be ready.

Click the Deployments link on the sidebar, then click the Deploy an App button.

Deploy App

Since we have not yet run our database migrations, remember to check "Yes" for the "Migrate database" setting. We will also need this custom JSON to ensure the "mysql2" adapter is used as intended:

{
  "deploy": {
    "todoapp": {
      "database": {
        "adapter": "mysql2"
      }
    }
  }
}

Click Deploy, and grab another cup of tea. You’ve now deployed the Ruby on Rails "Todo" sample app to AWS OpsWorks!

Use Custom JSON for All Deployments

You probably don’t want to be filling in the custom JSON for your adapter choice with every deployment. Fortunately, you can move this custom JSON into your stack settings to have it go with every deployment.

Click the Stack link on the sidebar, open Stack Settings, and click Edit.

Stack Settings

Add the custom JSON you used for your deployment earlier, and click Save.

Try It Out

To view the app in action, click on your app server’s name on the deployment screen to go to the server’s info page. Click the link next to "Public DNS", and you should see the front page of the application:

You can add tasks, mark them complete, and delete them as you like. In short, your application is running and performing database transactions.

Hello TodoApp!

Wrap-Up

In this post, we started with a Ruby on Rails application, and went step-by-step through the process to get it up and running on AWS with OpsWorks. Now, you can follow this same process to get your own Rails application running on AWS.

Now that we can deploy our application, we will begin to explore ways to make our app scale, improve availability, and optimize some common speed bottlenecks.

Have any questions, comments, or problems getting the app up and running? Suggestions for topics you would like to see next? Please reach out to us in the comments!

Version 3 Preview of the AWS SDK

by Jeremy Lindblom | in PHP

We’re excited to introduce you to the preview release of Version 3 of the AWS SDK for PHP! As of today, the preview release of Version 3 (V3) is available on GitHub and via Composer.

Two years ago, we released Version 2 (V2) of the SDK. Since then, thousands of developers and companies have adopted it. We are sincerely grateful to all of our users and contributors. We have been constantly collecting your feedback and ideas, and continually watching the evolution of PHP, AWS, and the Guzzle library.

Earlier this year, we felt we could make significant improvements to the SDK, but only if we could break a few things. Since receiving a unanimously positive response to our blog post about updating to the latest version of Guzzle a few months ago, we’ve been working hard on V3, and we’re ready to share it with you.

What’s new?

The new version of the SDK provides a number of important benefits to AWS customers. It is smaller and faster, with improved performance for both serial and concurrent requests. It has several new features based on its use of the new Guzzle 5 library (which also includes the new features from Guzzle 4). The SDK will also, starting from V3, follow the official SemVer spec, so you can have complete confidence when setting version constraints in your projects’ composer.json files.
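For example, once V3 reaches a stable release, SemVer means a constraint like the following in your composer.json will pull in new features and bug fixes but never a breaking change (the constraint shown is illustrative):

{
    "require": {
        "aws/aws-sdk-php": "~3.0"
    }
}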

Let’s take a quick look at some of the new features.

Asynchronous requests

With V3, you can perform asynchronous operations, which allow you to more easily send requests concurrently. To achieve this, the SDK returns future result objects when you specify the @future parameter, which block only when they are accessed. For managing more robust asynchronous workflows, you can retrieve a promise from the future result, to perform logic once the result becomes available or an exception is thrown.

<?php

// Upload a file to your bucket in Amazon S3.
// Use '@future' to make the operation complete asynchronously.
$result = $s3Client->putObject([
    'Bucket' => 'your-bucket',
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
    '@future' => true,
]);

After creating a result using the @future attribute, you now have a future result object. You can use the data stored in the future in a blocking (or synchronous) manner by just using the result as normal (i.e., like a PHP array).

// Wait until the response has been received before accessing its data.
echo $result['ObjectURL'];

If you want to allow your requests to complete asynchronously, then you should use the promise API of the future result object. To retrieve the promise, you must use the then() method of the future result, and provide a callback to be completed when the promise is fulfilled. Promises allow you to more easily compose pipelines when dealing with asynchronous results. For example, we could use promises to save the Amazon S3 object’s URL to an item in an Amazon DynamoDB table, once the upload is complete.

// Note: $result is the result of the preceding example's PutObject operation.
$result->then(
    function ($s3Result) use ($ddbClient) {
        $ddbResult = $ddbClient->putItem([
            'TableName' => 'your-table',
            'Item' => [
                'topic' => ['S' => 'docs'],
                'time'  => ['N' => (string) time()],
                'url'   => ['S' => $s3Result['ObjectURL']],
            ],
            '@future' => true,
        ]);

        // Don't break promise chains; return a value. In this case, we are returning
        // another promise, so the PutItem operation can complete asynchronously too.
        return $ddbResult->promise();
    }
)->then(
    function ($result) {
        echo "SUCCESS!n";
        return $result;
    },
    function ($error) {
        echo "FAILED. " . $error->getMessage() . "n";
        // Forward the rejection by re-throwing it.
        throw $error;
    }
);

The SDK uses the React/Promise library to provide the promise functionality, allowing for additional features such as joining and mapping promises.

JMESPath querying of results

The result object also has a new search() method that allows you to query the result data using JMESPath, a query language for JSON (or PHP arrays, in our case).

<?php

$result = $ec2Client->describeInstances();

print_r($result->search('Reservations[].Instances[].InstanceId'));

Example output:

Array
(
    [0] => i-xxxxxxxx
    [1] => i-yyyyyyyy
    [2] => i-zzzzzzzz
)

Swappable and custom HTTP adapters

In V3, cURL is no longer required, but is still used by the default HTTP adapter. However, you can use other HTTP adapters, like the one shipped with Guzzle that uses PHP’s HTTP stream wrapper. You can also write custom adapters, which opens up the possibility of creating an adapter that integrates with a non-blocking event loop like ReactPHP.

Paginators

Paginators are a new feature in V3 that complement the Iterators from V2. Paginators are similar to Iterators, except that they yield Result objects instead of items within a result. This is nice, because the paginator handles the pagination tokens/markers for you as it fetches multiple pages of results, while giving you the flexibility to extract whatever data you want.

// List all "directories" and "files" in the bucket.
$paginator = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',
    'Delimiter' => '/'
]);
foreach ($paginator as $result) {
    $jmespathExpr = '[CommonPrefixes[].Prefix, Contents[].Key][]';
    foreach ($result->search($jmespathExpr) as $item) {
        echo $item . "\n";
    }
}

Example output:

dir1/
dir2/
file1
file2
...

New event system

Version 3 features a new and improved event system. Command objects now have their own event emitter that is decoupled from the HTTP request events. There is also a new request "progress" event that can be used for tracking upload and download progress.

use GuzzleHttp\Event\ProgressEvent;

$s3->getHttpClient()->getEmitter()->on('progress', function (ProgressEvent $e) {
    echo 'Uploaded ' . $e->uploaded . ' of ' . $e->uploadSize . "\n";
});

$s3->putObject([
    'Bucket' => $bucket,
    'Key'    => 'docs/file.pdf',
    'Body'   => fopen('/path/to/file.pdf', 'r'),
]);

Example output:

Uploaded 0 of 5299866
Uploaded 16384 of 5299866
Uploaded 32768 of 5299866
...
Uploaded 5275648 of 5299866
Uploaded 5292032 of 5299866
Uploaded 5299866 of 5299866

New client options

For V3, we changed some of the options you provide when instantiating a client, and we also added a few new options that may help you work with services more easily.

  • "debug" – Set to true to print out debug information as requests are being made. You’ll see how the Command and Request objects are affected during each event, and an adapter-specific wire log of the request.
  • "retries" – Set the maximum number of retries the client will perform on failed and throttled requests. The default has always been 3, but now it is easy to configure.

These options can be set when instantiating a client.

<?php

$s3 = (new Aws\Sdk)->getS3([
    // These options exist in both Version 2 and 3
    'profile'  => 'my-credential-profile',
    'region'   => 'us-east-1',
    'version'  => 'latest',

    // New in Version 3
    'debug'    => true,
    'retries'  => 5,
]);

What has changed?

To make all of these improvements for V3, we needed to make some backward-incompatible changes. However, the changes from Version 2 to Version 3 are much fewer than the changes from Version 1 to Version 2. In fact, much of the way you use the SDK will remain the same. For example, the following code for writing an item to an Amazon DynamoDB table looks exactly the same in both V2 and V3 of the SDK.

$result = $dynamoDbClient->putItem([
    'TableName' => 'Contacts',
    'Item'      => [
        'FirstName' => ['S' => 'Jeremy'],
        'LastName'  => ['S' => 'Lindblom'],
        'Birthday'  => ['M' => [
            'Month' => ['N' => '11'],
            'Date'  => ['N' => '24'],
        ]],
    ],
]);

There are two important changes, though, that you should be aware of upfront:

  1. V3 requires PHP 5.5 or higher and requires the use of Guzzle 5.
  2. You must now specify the API version (via the "version" client option) when you instantiate a client. This is important, because it allows you to lock in to the API versions of the services you are using, as shown in the sketch after this list. This helps both us and you maintain backward compatibility between future SDK releases, because you will be in charge of the API versions you are using. Your code will never be impacted by new service API versions until you update your version setting. If this is not a concern for you, you can default to the latest API version by setting 'version' to 'latest' (this is essentially the default behavior of V2).
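For example, here is a sketch of locking an Amazon S3 client to a specific API version instead of 'latest' (2006-03-01 is the S3 API version):

<?php

// Lock the client to a specific S3 API version.
$s3 = (new Aws\Sdk)->getS3([
    'region'  => 'us-east-1',
    'version' => '2006-03-01',
]);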

What next?

We hope you are excited for Version 3 of the SDK!

We look forward to your feedback as we continue to work towards a stable release. Please reach out to us in the comments, on GitHub, or via Twitter (@awsforphp). We plan to publish more blog posts in the near future to explain some of the new features in more detail. We have already published the API docs for V3, but we’ll be working on improving all the documentation for V3, including creating detailed migration and user guides. We’ll also be speaking about V3 in our session at AWS re:Invent.

We will continue updating and making regular releases for V2 on the "master" branch of the SDK’s GitHub repository. Our work on V3 will happen on a separate "v3" branch until we are ready for a stable release.

Version 3 can be installed via Composer using version 3.0.0-beta.1, or you can download the aws.phar or aws.zip on GitHub.

Introducing DynamoDB Document API (Part 1)

by Hanson Char | in Java

Amazon DynamoDB recently announced support for storing entire JSON-style documents as single DynamoDB items. What is just as exciting is that the AWS SDK for Java has introduced a new Document API that makes it easy and simple to access all the features of Amazon DynamoDB, including the latest document support, but with less code!

The new Document API is designed from the ground up to be the next generation of API for accessing DynamoDB. It is an object-oriented API that provides full access to all the DynamoDB features, including JSON data support, the use of Document Path to access part of a document, and new data types such as Map and List. The best part is that the resultant code is a lot less verbose, and therefore easier to both write and read.

Alright, enough talking. Perhaps the new API can best be illustrated with an example. Here I took the liberty of borrowing the code from a previous blog post, Using Improved Conditional Writes in DynamoDB, and rewriting it using the new API. To begin with, the original code is copied here:

public static void main(String[] args) {
    // To run this example, first initialize the client, and create a table
    // named 'Game' with a primary key of type hash / string called 'GameId'.
    AmazonDynamoDB dynamodb = new AmazonDynamoDBClient(); // initialize the client
    try {
        // First set up the example by inserting a new item         
        // To see different results, change either player's
        // starting positions to 20, or set player 1's location to 19.
        Integer player1Position = 15;
        Integer player2Position = 12;
        dynamodb.putItem(new PutItemRequest()
                .withTableName("Game")
                .addItemEntry("GameId", new AttributeValue("abc"))
                .addItemEntry("Player1-Position",
                    new AttributeValue().withN(player1Position.toString()))
                .addItemEntry("Player2-Position",
                    new AttributeValue().withN(player2Position.toString()))
                .addItemEntry("Status", new AttributeValue("IN_PROGRESS")));
        // Now move Player1 for game "abc" by 1,
        // as long as neither player has reached "20".
        UpdateItemResult result = dynamodb.updateItem(new UpdateItemRequest()
            .withTableName("Game")
            .withReturnValues(ReturnValue.ALL_NEW)
            .addKeyEntry("GameId", new AttributeValue("abc"))
            .addAttributeUpdatesEntry(
                 "Player1-Position", new AttributeValueUpdate()
                     .withValue(new AttributeValue().withN("1"))
                     .withAction(AttributeAction.ADD))
            .addExpectedEntry(
                 "Player1-Position", new ExpectedAttributeValue()
                     .withValue(new AttributeValue().withN("20"))
                     .withComparisonOperator(ComparisonOperator.LT))
            .addExpectedEntry(
                 "Player2-Position", new ExpectedAttributeValue()
                     .withValue(new AttributeValue().withN("20"))
                     .withComparisonOperator(ComparisonOperator.LT))
            .addExpectedEntry(
                 "Status", new ExpectedAttributeValue()
                     .withValue(new AttributeValue().withS("IN_PROGRESS"))
                     .withComparisonOperator(ComparisonOperator.EQ))  
        );
        if ("20".equals(
            result.getAttributes().get("Player1-Position").getN())) {
            System.out.println("Player 1 wins!");
        } else {
            System.out.println("The game is still in progress: "
                + result.getAttributes());
        }
    } catch (ConditionalCheckFailedException e) {
        System.out.println("Failed to move player 1 because the game is over");
    }
}

Now, let’s rewrite the same code using the DynamoDB Document API:

public static void main(String[] args) {
    // Initialize the client and DynamoDB object
    AmazonDynamoDBClient client = new AmazonDynamoDBClient(...);
    DynamoDB dynamodb = new DynamoDB(client);

    try {
        Table table = dynamodb.getTable("Game");
        table.putItem(new Item()
            .withPrimaryKey("GameId", "abc")
            .withInt("Player1-Position", 15)
            .withInt("Player2-Position", 12)
            .withString("Status", "IN_PROGRESS"));
         
        UpdateItemOutcome outcome = table.updateItem(new UpdateItemSpec()
            .withReturnValues(ReturnValue.ALL_NEW)
            .withPrimaryKey("GameId", "abc")
            .withAttributeUpdate(
                new AttributeUpdate("Player1-Position").addNumeric(1))
            .withExpected(
                new Expected("Player1-Position").lt(20),
                new Expected("Player2-Position").lt(20),
                new Expected("Status").eq("IN_PROGRESS")));

        Item item = outcome.getItem();
        if (item.getInt("Player1-Position") == 20) {
            System.out.println("Player 1 wins!");
        } else {
            System.out.println("The game is still in progress: " + item);
        }
    } catch (ConditionalCheckFailedException e) {
        System.out.println("Failed to move player 1 because the game is over");
    }
}

As you can see, the new Document API allows the direct use of plain old Java data types and involves less boilerplate. In fact, the DynamoDB Document API can entirely subsume what you can do with the low-level client (i.e., AmazonDynamoDBClient), but with a much cleaner programming model and less code.
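The JSON document support mentioned at the start is just as concise. Here is a minimal sketch, assuming a hypothetical "People" table with a hash key named "PersonId", an arbitrary JSON string jsonDocument, and the DynamoDB object from the example above:

// Store an entire JSON document as a single DynamoDB item.
Table people = dynamodb.getTable("People");
people.putItem(Item.fromJSON(jsonDocument).withPrimaryKey("PersonId", "p-123"));

// Read the item back and render it as JSON.
String json = people.getItem("PersonId", "p-123").toJSON();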

I hope this has whetted your appetite for harnessing the power of Amazon DynamoDB using the Document API. To see more examples, feel free to play with the code samples in the A-Z Document API quick-start folder on GitHub, or check out the AWS blog.

Blog Series: Ruby on Rails on Amazon Web Services

by Alex Wood | in Ruby

Welcome to a series on how to integrate Ruby on Rails apps with Amazon Web Services. In this series, we’re going to start from scratch with a simple app, and show you how to make it scalable, highly available, and fault tolerant.

The Sample App

For this blog series, we have built a sample app in Ruby on Rails. You can find it on GitHub as awslabs/todo-sample-app.

The app itself is designed to be simple to follow. It is a very basic todo list, where you can add tasks, mark them complete, or delete them, all on a single page. In this way, we can focus on the code changes needed to integrate the app with Amazon Web Services, without worrying about confusion over what the app itself does.

There’s no hand-waving here: all the code you need to do this is in this repo, and all the setup you need to do in AWS is in the posts.

What the Series Will Cover

We’re going to start by covering how to deploy the TodoApp to the cloud using AWS OpsWorks. Then, we will talk about speeding up your app by caching your static assets with Amazon CloudFront. We will go on to discuss other scaling and performance improvements you can make to solve real-world problems in the cloud.

Have a topic you’d love for us to cover? Let us know in the comments!

Up Next

The first post, showing you how to deploy the Todo Sample App to AWS OpsWorks, will be out soon. Stay tuned!

AWS re:Invent 2014

by Jason Fulghum | in Java

Just a little over a month from now, AWS re:Invent 2014 kicks off! This year, you’ll have over 20 different tracks to choose from – from Big Data to Web Development, and everything in between.

The AWS SDK for Java team will be at AWS re:Invent this year, presenting a session on using the AWS SDK for Java. In particular, we’ll be highlighting many of the enhancements we’ve built into the AWS SDK for Java over the past year.

We all look forward to AWS re:Invent every year because it’s such a great opportunity to connect with you, hear about what you need, and teach you about some of the new stuff we’ve been working on.

It’s not too late to register for AWS re:Invent 2014! See you in Las Vegas!

Amazon Cognito Credentials Provider

by Norm Johanson | in .NET

Amazon Cognito helps you create unique identifiers for your end users that are kept consistent across devices and platforms. Cognito also delivers temporary, limited-privilege credentials to your app so it can access AWS resources. With Amazon Cognito, your app can support unauthenticated guest users as well as users authenticated through an identity provider such as Facebook, Google, or Login with Amazon, or through developer authenticated identity providers.

Version 2.3.1.0 of the AWS SDK for .NET adds Amazon.CognitoIdentity.CognitoAWSCredentials, a credentials object that uses Cognito and the AWS Security Token Service to retrieve credentials in order to make AWS calls.

The first step in setting up CognitoAWSCredentials is to create an "identity pool". (An identity pool is a store of user identity information specific to your account. The information is retrievable across client platforms, devices, and operating systems, so that if a user starts using the app on a phone and later switches to a tablet, the persisted app information is still available for that user.) You can create a new identity pool from the Amazon Cognito management console. If you are using the console, it will also provide you with the other pieces of information we will need:

  • Your account number: this is a 12-digit number, such as 123456789012, that is unique to your account.
  • The unauthenticated role ARN: this is a role that unauthenticated users will assume. For instance, this role can provide read-only permissions to your data.
  • The authenticated role ARN: authenticated users will assume this role. This role can have more extensive permissions to your data.

Here’s a simple code sample illustrating how this information is used to set up CognitoAWSCredentials, which can then be used to make a call to Amazon S3 as an unauthenticated user.

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    accountId,        // account number
    identityPoolId,   // identity pool id
    unAuthRoleArn,    // role for unauthenticated users
    null,             // role for authenticated users, not set
    region);
using (var s3Client = new AmazonS3Client(credentials))
{
    s3Client.ListBuckets();
}

As you can see, we are able to make calls with just a minimum amount of data required to authenticate the user. User permissions are controlled by the role, so you are free to configure access as you see fit.

The next example shows how you can start using AWS as an unauthenticated user, then authenticate through Facebook and update the credentials to use Facebook credentials. Using this approach, you can grant different capabilities to authenticated users via the authenticated role. For instance, you might have a Windows Phone application that permits users to view content anonymously, but allows them to post if they are logged on with one or more of the configured providers.

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    accountId, identityPoolId,
    unAuthRoleArn,    // role for unauthenticated users
    authRoleArn,      // role for authenticated users
    region);
using (var s3Client = new AmazonS3Client(credentials))
{
    // Initial use will be unauthenticated
    s3Client.ListBuckets();
    
    // Authenticate user through Facebook
    string facebookAccessToken = GetFacebookAuthToken();
    
    // Add Facebook login to credentials. This will clear the current AWS credentials
    // and new AWS credentials using the authenticated role will be retrieved.
    credentials.AddLogin("graph.facebook.com", facebookAccessToken);

    // This call will be performed with the authenticated role and credentials
    s3Client.ListBuckets();
}

This new credentials object provides even more functionality if used with the AmazonCognitoSyncClient that is part of the .NET SDK: if you are using both AmazonCognitoSyncClient and CognitoAWSCredentials, you don’t have to specify the IdentityPoolId and IdentityId properties when making calls with the AmazonCognitoSyncClient. These properties are automatically filled in from CognitoAWSCredentials. Our final example illustrates this, as well as an event that notifies us whenever the IdentityId for CognitoAWSCredentials changes. (The IdentityId can change in some cases, such as going from an unauthenticated user to an authenticated one.)

CognitoAWSCredentials credentials = GetCognitoAWSCredentials();

// Log identity changes
credentials.IdentityChangedEvent += (sender, args) =>
{
    Console.WriteLine("Identity changed: [{0}] => [{1}]", args.OldIdentityId, args.NewIdentityId);
};

using(var syncClient = new AmazonCognitoSyncClient(credentials))
{
    var result = syncClient.ListRecords(new ListRecordsRequest
    {
        DatasetName = datasetName
        // No need to specify these properties
        //IdentityId = "...",
        //IdentityPoolId = "..."        
    });
}

For more information on Amazon Cognito, including use-cases and sample policies, visit the official Amazon Cognito page or the Cognito section of the Mobile Development blog.

AWS SDK for Ruby V2 Preview Release

by Trevor Rowe | in Ruby

Version 2 of the AWS SDK for Ruby is available now as a preview release. If you use Bundler with some standard best-practices, you should be unaffected by the v2 release of the aws-sdk gem. This blog post highlights a few things you might want to be aware of.

Installing V2 Preview Release

V2 of the AWS SDK for Ruby is available now as a preview release. To install it, use the --pre flag:

$ gem install aws-sdk --pre

If you are using Bundler, you must specify the full version until the preview status is removed:

gem 'aws-sdk', '2.0.0.pre'

Lock your Dependencies

The V2 Ruby SDK is not backwards compatible with the V1 Ruby SDK. If you have a Bundler dependency on aws-sdk and do not specify a version, you will run into problems when the 2.0 final version is released. To ensure you are unaffected by the major version bump, specify a version dependency in your Gemfile:

gem 'aws-sdk', '< 2.0'

Alternatively, you can change your gem dependency from aws-sdk to aws-sdk-v1:

gem 'aws-sdk-v1'

The AWS SDK for Ruby follows semantic versioning (SemVer). This allows you to update within the same major version with confidence that there are no backwards-incompatible changes; if any slip through, they will be treated as bugs.
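For example, once 2.0 final ships, the pessimistic version operator lets you accept all compatible 2.x updates while excluding a future major version bump:

# in your Gemfile
gem 'aws-sdk', '~> 2.0'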

Use Both Versions in One Application

The V1 and V2 Ruby SDKs use different namespaces. Because you can load only one version of a given gem, we now publish the V1 Ruby SDK as a separate gem, aws-sdk-v1, so that both versions can be loaded side by side. Additionally, the V2 SDK uses a different root namespace to avoid conflicts.

# in your Gemfile
gem 'aws-sdk-v1'
gem 'aws-sdk', '2.0.0.pre'

And then in your application:

require 'aws-sdk-v1'
require 'aws-sdk'

# v1 uses the AWS module, v2 uses the Aws module
s3_v1 = AWS::S3::Client.new
s3_v2 = Aws::S3::Client.new

Happy coding, and as always, feedback is welcomed!

Object Lifecycles

by Pavel Safronov | in .NET

When using the AWS SDK for .NET, you may find yourself wondering how to use various components of the SDK and, more importantly, how to properly dispose of resources once you are done with them. This blog post reviews the lifecycles of various SDK objects and the best practices for using them.

Lifecycles

There are three basic lifecycle concerns in the AWS SDK for .NET:

  • Thread safety – Some objects can be used across multiple threads without worrying about errors or data corruption.
  • Disposability – Some objects should be disposed of, either by calling .Dispose() on the object or with a using block.
  • Cacheability – Some objects should be cached or reused, usually because they are expensive to recreate.

Clients

The best-known aspects of the AWS SDK for .NET are the various service clients that you can use to interact with AWS. Client objects are thread safe, disposable, and can be reused. (Client objects are inexpensive, so you do not incur a large overhead by constructing multiple instances, but it’s not a bad idea to create and reuse a client.)

Here’s a simple example of using an Amazon DynamoDB client to list all the tables in your account. Note that we wrap the client in a using block to make sure it is disposed of, either after the last line of the using block is executed or when an exception is thrown. The last point is why it is a good idea, whenever possible, to wrap disposable objects in a using block.

using(var client = new AmazonDynamoDBClient())
{
    var tableNames = client.ListTables().TableNames;
    Console.WriteLine("My DynamoDB tables: " + string.Join(", ", tableNames));
}

High-level objects

The SDK has a number of high-level abstractions built on top of the various service clients. These helper classes provide extra functionality, such as the Amazon S3 TransferUtility class, which automatically handles multi-part uploads, or the DynamoDBContext class, which allows you to store and load .NET objects in Amazon DynamoDB.

  • Amazon.DynamoDBv2.DocumentModel.Table and Amazon.DynamoDBv2.DataModel.DynamoDBContext – These classes are thread safe, disposable, and cacheable. Both Table and DynamoDBContext should be reused as much as possible, as these objects create and use caches that are populated either from DynamoDB or through reflection, operations that can severely degrade performance if performed often (for instance, by recreating a table or a context for every operation). See the sketch after this list.
  • Amazon.S3.Transfer.TransferUtility and Amazon.Glacier.Transfer.ArchiveTransferManager – These classes are thread safe and disposable. They can be treated like client objects.
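As an illustration, here is a minimal sketch of the create-once, reuse-everywhere pattern for these objects (the "Books" table and the Book class are hypothetical):

// Create these once (for example, at application startup) and reuse them,
// so their internal caches are populated only once.
var client = new AmazonDynamoDBClient();
var booksTable = Table.LoadTable(client, "Books"); // caches table metadata from DynamoDB
var context = new DynamoDBContext(client);         // caches type information via reflection

// Reuse the same instances for every subsequent operation.
var item = booksTable.GetItem("my-book-id");
var book = context.Load<Book>("my-book-id");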

Response objects

Most SDK response objects are simple classes, but in a few cases the response objects that you receive from a service call will be disposable. It is important to be aware of what these results are and to properly dispose of them.

The best-known example of this is GetObjectResponse, the response returned from an Amazon S3 GetObject call. It is disposable because it exposes to the caller a stream of the object’s content from Amazon S3. Here’s an example of how this object should be used and disposed of.

MemoryStream ms = new MemoryStream();
using(var client = new Amazon.S3.AmazonS3Client())
{
    var request = new GetObjectRequest
    {
        BucketName = bucketName, Key = key
    };
    using(var response = client.GetObject(request))
    {
        response.ResponseStream.CopyTo(ms);
    }
}

As you can see, we wrapped both the client and the response in using blocks. This makes sure that we dispose of the underlying .NET web streams. (If these are not properly disposed of, you may eventually not be able to make any service calls at all.)

There are two disposable response objects like this in the SDK: Amazon.S3.Model.GetObjectResponse and Amazon.S3.Model.GetObjectTorrentResponse.

Keeping Up with the Latest Release

by Jeremy Lindblom | in PHP

In the past, we’ve used various means to announce new releases of the AWS SDK for PHP. We’ve recently evaluated our options to decide which tools work best with our release process, and are easiest for our users to consume.

The best way to track releases of the SDK is to use the "Releases" page of our GitHub repo. This page shows links to all of the releases; if you navigate to a specific release’s page, you can see the excerpt of the CHANGELOG for that release, and download the aws.phar and aws.zip. GitHub allows you to link directly to the latest release (i.e., https://github.com/aws/aws-sdk-php/releases/latest) and also provides a Releases atom feed, which gets updated each time we tag a release.

We also recommend that you follow @awsforphp on Twitter. We use this account to make announcements about new releases, blog posts, etc., and often tweet and retweet other things related to AWS and PHP. We also occasionally like to ask questions, answer questions, and post tips about the AWS SDK for PHP.

Note: If you are currently subscribed to our PEAR channel’s RSS feed, you should know that we are no longer making updates to the PEAR channel as of 9/15 (see End of Life of PEAR Channel for more details).

So, subscribe to the Releases atom feed and follow us on Twitter to stay up-to-date with the SDK and make sure you don’t miss out on any new features or announcements.