AWS Developer Blog

Announcing the Amazon S3 Managed Uploader in the AWS SDK for JavaScript

Today’s release of the AWS SDK for JavaScript (v2.1.0) contains support for a new uploading abstraction in the AWS.S3 service that allows large buffers, blobs, or streams to be uploaded more easily and efficiently, both in Node.js and in the browser. We’re excited to share some details on this new feature in this post.

The new AWS.S3.upload() function intelligently detects when a buffer or stream can be split up into multiple parts and sent to S3 as a multipart upload. This provides a number of benefits:

  1. It enables much more robust upload operations, because if uploading of a single part fails, that individual part can be retried separately without requiring the whole payload to be resent.
  2. Multiple parts can be queued and sent in parallel, allowing for much faster uploads when enough bandwidth is available.
  3. Most importantly, due to the way the managed uploader buffers data in memory using multiple parts, this abstraction does not need to know the full size of the stream, even though Amazon S3 typically requires a content length when uploading data. This enables many common Node.js streaming workflows (like piping a file through a compression stream) to be used natively with the SDK.

A Node.js Example

Let’s see how you can leverage the managed upload abstraction by uploading a stream of unknown size. In this case, we will be piping contents from a file (bigfile) on disk through a gzip compression stream, effectively sending compressed bytes to S3. This can be done easily in the SDK using upload():

// Load the stream
var AWS = require('aws-sdk');
var fs = require('fs'), zlib = require('zlib');
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());

// Upload the stream
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}, function(err, data) {
  if (err) {
    console.log("An error occurred", err);
  } else {
    console.log("Uploaded the file at", data.Location);
  }
});

A Browser Example

Important Note: In order to support large file uploads in the browser, you must ensure that your CORS configuration exposes the ETag header; otherwise, your multipart uploads will not succeed. See the guide for more information on how to expose this header.
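For reference, a minimal bucket CORS configuration along these lines might look like the following sketch (the origin is a placeholder, and you should adjust the allowed methods and headers for your own application):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin></AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <!-- The key part: expose ETag so multipart uploads can complete -->
    <ExposeHeader>ETag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
```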

All of this works in the browser too! The only difference here is that, in the browser, we will probably be dealing with File objects instead:

// Get our File object
var file = $('#file-chooser')[0].files[0];

// Upload the File
var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
var params = {Key:, ContentType: file.type, Body: file};
bucket.upload(params, function (err, data) {
  $('#results').html(err ? 'ERROR!' : 'UPLOADED.');
});

Tracking Total Progress

In addition to simply uploading files, the managed uploader can also keep track of total progress across all parts. This is done by listening to the httpUploadProgress event, similar to the way you would do it with normal request objects:

var AWS = require('aws-sdk');
var fs = require('fs');
var zlib = require('zlib');

var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}).
  on('httpUploadProgress', function(evt) {
    console.log('Progress:', evt.loaded, '/',;
  }).
  send(function(err, data) { console.log(err, data); });

Note that might be undefined for streams of unknown size until the entire stream has been chunked and the last parts are being queued.

Configuring Concurrency and Part Size

Since the uploader provides concurrency and part size management, these values can also be configured to tune performance. In order to do this, you can provide an options map containing the queueSize and partSize to control these respective features. For example, if you wanted to buffer 10 megabyte chunks and reduce concurrency down to 2, you could specify it as follows:

var opts = {queueSize: 2, partSize: 1024 * 1024 * 10};
s3obj.upload({Body: body}, opts).send(callback);

You can read more about controlling these values in the API documentation.
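To build intuition for how these two options interact, here is a hypothetical helper of our own (not part of the SDK) that estimates how a payload of known size could be split, assuming S3's documented multipart limits of a 5 MB minimum part size and 10,000 parts maximum:

```javascript
// S3's documented multipart upload limits.
var MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MB minimum part size
var MAX_PARTS = 10000;               // maximum parts per upload

// Sketches how a payload might be split: the effective part size is the
// requested size, raised if needed to satisfy the minimum size and part count.
function planUpload(totalBytes, partSize, queueSize) {
  var size = Math.max(partSize, MIN_PART_SIZE, Math.ceil(totalBytes / MAX_PARTS));
  return {
    partSize: size,
    partCount: Math.ceil(totalBytes / size),
    maxBufferedBytes: size * queueSize // worst-case bytes buffered in flight
  };
}

// A 100 MB upload with 10 MB parts and a queue size of 2 buffers
// at most two 10 MB parts in memory at once.
console.log(planUpload(100 * 1024 * 1024, 10 * 1024 * 1024, 2));
```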

Handling Failures

Finally, you can also control how parts are cleaned up in the managed uploader when a failure occurs using the leavePartsOnError option. By default, the managed uploader will attempt to abort a multipart upload if any individual part fails to upload, but if you would prefer to handle this failure manually (i.e., by attempting to recover more aggressively), you can set leavePartsOnError to true:

s3obj.upload({Body: body}, {leavePartsOnError: true}).send(callback);

This option is also documented in the API documentation.

Give It a Try!

We would love to see what you think about this new feature, so give the new AWS SDK for JavaScript v2.1.0 a try and provide your feedback in the comments below or on GitHub!

Taming client-side key rotation with the Amazon S3 encryption client

by Hanson Char | in Java

As mentioned in an earlier blog, encrypting data using the Amazon S3 encryption client is one way you can provide an additional layer of protection for sensitive information you store in Amazon S3. Under the hood, the Amazon S3 encryption client randomly generates a one-time data encryption key per S3 object, encrypts the key using your client-side master key, and stores the encrypted data key as metadata in S3 alongside the encrypted data. In particular, one interesting property of such client-side encryption is that the client-side master key is always present only locally on the client side, is never sent to AWS, and therefore enables a high level of security control by our customers.

Every now and then, however, an interesting question arises: How can a user of the Amazon S3 encryption client perform key rotation on the client-side master key? Indeed, for security-conscious customers, rotating a client-side master key from one version to the next can sometimes be a desirable feature, if not a strict security requirement. On the other hand, due to the immutability of the S3 metadata, which is where the encrypted data key is stored by default, it may seem necessary to copy the entire S3 object just to allow re-encryption of the data key. And for large S3 objects, that seems rather inefficient and expensive!

In this blog, we will introduce an existing feature of the Amazon S3 encryption client that makes client-side master key rotation feasible in practice. The feature is related to the use of CryptoStorageMode.InstructionFile. In a nutshell, if you explicitly select InstructionFile as the mode of storage for the meta information of an encrypted S3 object, you would then be able to perform key rotation via the instruction file efficiently without ever touching the encrypted S3 object.

The key idea is that, by using an instruction file, you can perform efficient key rotation from one client-side master key to a different client-side master key. The only requirement is that each of the client-side master keys must have a 1:1 mapping with a unique set of identifying information. In the Amazon S3 encryption client, this unique set of identifying information for the client-side master key is called the "material description."

A code sample could be worth a thousand words. :)  To begin with, let’s construct an instance of the Amazon S3 encryption client with the following configuration:

  • An encryption material provider with a v1.0 client-side master key for S3 encryption
  • CryptoStorageMode.InstructionFile
  • Not to ignore any missing instruction file of an encrypted S3 object. (More on this below.)

// Configures a material provider for a v1.0 client-side master key
SecretKey v1ClientSideMasterKey = ...;
SimpleMaterialProvider origMaterialProvider = new SimpleMaterialProvider().withLatest(
    new EncryptionMaterials(v1ClientSideMasterKey).addDescription("version", "v1.0"));

// Configures to use InstructionFile storage mode and not to
// ignore any missing instruction file
CryptoConfiguration config = new CryptoConfiguration()
    .withStorageMode(CryptoStorageMode.InstructionFile)
    .withIgnoreMissingInstructionFile(false);

final AmazonS3EncryptionClient s3v1 = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                origMaterialProvider, config);

Now, we are ready to use this encryption client to encrypt and persist objects to Amazon S3. With the above configuration, instead of persisting the encrypted data key in the metadata of an encrypted S3 object, the encryption client persists the encrypted data key into a separate S3 object called an "instruction file." Under the hood, the instruction file defaults to the same name as the original S3 object, but with an additional ".instruction" suffix.

So why do we need to explicitly set IgnoreMissingInstructionFile to false? This has to do with the eventual consistency model of Amazon S3: there is a small probability of a momentary delay before an instruction file becomes available for reading after it has been persisted to S3. For such edge cases, we’d rather fail fast than return the raw ciphertext without decryption (which is the default behavior for legacy and backward-compatibility reasons). The eventual consistency model of Amazon S3 also means there are some edge cases to watch out for when updating the S3 data object, but we’ll cover that in an upcoming post.

To continue,

// Encrypts and saves the data under the name "sensitive_data.txt"
// to S3. Under the hood, the v1.0 client-side master key is used
// to encrypt the randomly generated data key which gets automatically
// saved in a separate "instruction file".
byte[] plaintext = "Hello S3 Client-side Master Key Rotation!".getBytes(Charset.forName("UTF-8"));
ObjectMetadata metadata = new ObjectMetadata();
String bucket = ...;
PutObjectResult putResult = s3v1.putObject(bucket, "sensitive_data.txt", new ByteArrayInputStream(plaintext), metadata);

// Retrieves and decrypts the data.
S3Object s3object = s3v1.getObject(bucket, "sensitive_data.txt");
System.out.println("Encrypt/Decrypt using the v1.0 client-side master key: "
                + IOUtils.toString(s3object.getObjectContent()));

Now, to update the client-side master key from v1.0 to v2.0, we simply specify a different EncryptionMaterials in a PutInstructionFileRequest. In this example, the encryption client would proceed to do the following:

  1. Decrypt the encrypted data-key using the original v1.0 client-side master key.
  2. Re-encrypt the data-key using a v2.0 client-side master key specified by the EncryptionMaterials in the PutInstructionFileRequest.
  3. Re-persist the instruction file that contains the newly re-encrypted data key along with other meta information to S3.

// Time to rotate to v2.0 client-side master key, but we still need access
// to the v1.0 client-side master key until the key rotation is complete.
SecretKey v2ClientSideMasterKey = ...;
SimpleMaterialProvider materialProvider = 
            new SimpleMaterialProvider()
                .withLatest(new EncryptionMaterials(v2ClientSideMasterKey)
                                .addDescription("version", "v2.0"))
                .addMaterial(new EncryptionMaterials(v1ClientSideMasterKey)
                                .addDescription("version", "v1.0"));

final AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                materialProvider, config);

// Decrypts the data-key using v1.0 client-side master key
// and re-encrypts the data-key using v2.0 client-side master key,
// overwriting the "instruction file"
PutObjectResult result = s3.putInstructionFile(new PutInstructionFileRequest(
            new S3ObjectId(bucket, "sensitive_data.txt"),
            materialProvider.getEncryptionMaterials(), "instruction"));

// Retrieves and decrypts the S3 object using v2.0 client-side master key
s3object = s3.getObject(bucket, "sensitive_data.txt");
System.out.println("Client-side master key rotated from v1.0 to v2.0: "
                + IOUtils.toString(s3object.getObjectContent()));
// Key rotation success!

Once the key rotation is finished, you can use the v2.0 client-side master key exclusively, without the v1.0 client-side master key. For example:

// Once the key rotation is complete, you need only the v2.0 client-side
// master key. Note the absence of the v1.0 client-side master key.
SimpleMaterialProvider v2materialProvider =
            new SimpleMaterialProvider()
                .withLatest(new EncryptionMaterials(v2ClientSideMasterKey)
                                .addDescription("version", "v2.0"));
final AmazonS3EncryptionClient s3v2 = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                v2materialProvider, config);

// Retrieves and decrypts the S3 object using v2.0 client-side master key
s3object = s3v2.getObject(bucket, "sensitive_data.txt");
System.out.println("Decrypt using v2.0 client-side master key: "
                + IOUtils.toString(s3object.getObjectContent()));

In conclusion, we have demonstrated how you can efficiently rotate your client-side master key for Amazon S3 client-side encryption without the need to modify the existing data keys, or mutate the ciphertext of your existing S3 data objects.

We hope you find this useful.  For more information about S3 encryption, see Amazon S3 client-side encryption and Amazon S3 Encryption with AWS Key Management Service.

Using Amazon RDS with Ruby on Rails and AWS OpsWorks

by Alex Wood | in Ruby

Earlier in this blog series, we showed you how to deploy a Ruby on Rails application to Amazon Web Services using AWS OpsWorks. In that example, we used an OpsWorks-managed MySQL database run on an Amazon EC2 instance. One common piece of feedback on that post was a desire to see how you can set up your stack with Amazon RDS. Today, we are going to show you how.


We are going to assume you’re familiar with our earlier post on how to deploy with OpsWorks in general. For this tutorial you can take one of the following approaches:

  • Take your existing stack from following along with that post, delete your database and MySQL layers, then follow this tutorial.
  • Clone your existing stack (in case you don’t want to lose your work), create a new app server instance, delete the new stack’s MySQL layer, then follow along.
  • Start from a brand new stack, and go between both tutorials, replacing the DB steps from the previous tutorial with the DB steps here. If you’re going to go that route, we would recommend reading this tutorial first to understand the differences in approach.

Whichever approach you choose, you can be up and running in just a few minutes!

Create an RDS Instance

AWS OpsWorks will not create Amazon RDS instances on your behalf. You will need to create an instance and link it to OpsWorks.

  1. Open up the RDS console, navigate to Instances, and click Launch DB Instance.
  2. To be able to use our Rails example code unaltered, choose the MySQL engine (you can choose another engine if you like; see the next section for details).
  3. On the next screen, you can choose either a Multi-AZ or a Single-AZ deployment; if you want to stay within the RDS Free Usage Tier, you should not choose a Multi-AZ deployment.
  4. Make sure that you keep track of your Master Username, Master Password, and the Database Name from the rest of the form.
  5. Within Configure Advanced Settings, make sure you set your VPC Security Groups to include AWS-OpsWorks-DB-Master-Server and AWS-OpsWorks-Rails-App-Server. You can set Publicly Accessible to "No" as well.
  6. Once you’ve completed the forms, click Launch DB Instance. It may take a few minutes for your instance to launch, so now is not a bad time for tea or coffee.

Don’t Want MySQL?

You do not need to use MySQL as your database backend; that is just what we have chosen for this example. Want to use a different database backend? Just do the following:

  1. Replace the mysql2 gem with the adapter gem appropriate to your DB engine selection.
  2. Make sure you also use your adapter selection in your custom deployment JSON, found in your stack settings if you used our exact tutorial steps.
  3. Of course, select that database engine when creating your RDS instance.

Create an RDS Layer

Now that you have an RDS Instance, you need to register it with OpsWorks.

  1. Navigate to the OpsWorks console, to the Layers section.
  2. Click + Layer, and select the RDS tab.
  3. Your RDS instance should appear on this screen. Select it, then enter your User and Password from the RDS instance creation step.
  4. Click Register with Stack.

This registers your database with OpsWorks, and next you need to connect your app.

Attach Your RDS Instance to Your App

  1. From the Apps section, click on Edit for TodoApp (or whatever app you are developing for).
  2. Under the Data Sources section, select RDS.
  3. It should auto-select your instance; select it if not. Then, fill in your Database name from the RDS instance creation process.
  4. Click Save to save your changes.

Note that this should work for any of the topic branches we’ve made for TodoApp, as your choice of database host is transparent to your app so long as you have the correct database adapter gem in your Gemfile.

Deploy the App

To use your new database, simply run an app deployment, making sure you select the Migrate database option so that Rails can set up your database.

Once that’s done, navigate to your app server host and play around with your app. You’re now running on RDS!

AWS re:Invent 2014 Recap

by David Murray | in Java

I’m almost done getting readjusted to regular life after AWS re:Invent! It was really awesome to meet so many developers building cool stuff on AWS, and to talk about how we can make your lives easier with great SDKs and tools.

If you didn’t make it to re:Invent this year, or if you opted for a session about one of the exciting new services we announced instead of my session about the AWS SDK for Java, the video and slides of my talk are available online. If you’re interested in going deeper, here are some links to more info on all of the topics I covered:

As always, check out the AWS SDK for Java and all the AWS Labs projects on GitHub, and help us make it even easier for you to build really cool applications on AWS. And don’t forget to follow along on Twitter for all the latest news and information!

AWS re:Invent 2014 Recap

AWS re:Invent 2014 concluded on Friday, November 14, 2014; here is a summary of an action-packed, fun-filled week!

New features and services

There were four new services and features announced at the Day 1 Keynote, two of which are available to all customers effective November 12, 2014. These include:

  • Amazon RDS for Aurora (a service in preview), a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
  • AWS CodeDeploy, a service that automates code deployments to Amazon EC2 instances, thereby eliminating the need for error-prone manual operations.
  • AWS Key Management Service, a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys.
  • AWS Config (a service in preview) is a managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.

Additionally, AWS CodeCommit, AWS CodePipeline, and AWS Service Catalog are other new services that were preannounced on the first day. These will be available in early 2015.

There were more new features and services announced at the Day 2 Keynote. These include:

  • Amazon EC2 Container Service (a service in preview), a high-performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of EC2 instances.
  • AWS Lambda (a service in preview), a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.
  • New Event Notifications for Amazon S3: With this feature, notification messages can be sent through either Amazon SNS or Amazon SQS, or delivered directly to AWS Lambda to trigger Lambda functions.

The AWS SDK for JavaScript supports all the new services and features as of version 2.0.27.

Breakout sessions

AWS re:Invent is a learning conference at its core, and there were over 200 sessions spread over three days, on a variety of topics ranging from application deployment and management and architecture, to education, gaming, and financial services.

I had the privilege of presenting one such session, “Building Cross-Platform Applications Using the AWS SDK for JavaScript.” The goal of this talk was to highlight the portability of the AWS SDK for JavaScript and how the SDK’s features support cross-platform application development. You can watch the talk, as well as refer to the slides posted.

There is a common set of differences involved in porting applications between platforms. The most important of these is the way you include the SDK itself in your application, and configure the SDK with credentials and other settings. Our developer guide describes many ways in which you can configure credentials for Node.js, and manage credentials and identity in the browser.

Other differences include working with web standards like CORS, and keeping your users notified of application state.

Separating the application’s business logic from presentation promotes code reuse, and allows you to easily port the application to multiple platforms by changing only the presentation layer.

To demonstrate this, I wrote an application in Node.js and ported it to a Google Chrome extension, as well as a Windows RT application. The source code for this application is available here.

There were many more fantastic presentations at re:Invent 2014, for which the videos, slides, and audio podcasts are available online.

See you again next year

We hope re:Invent 2014 presented an opportunity for you to learn from the many talks and workshops that happened over the course of a week. We will continue innovating on your behalf and will have more exciting stuff to show you, so stay tuned by subscribing to email updates on the latest AWS re:Invent announcements!

AWS re:Invent 2014 Ruby Recap

by Alex Wood | in Ruby

Last week, we had a great time meeting with AWS customers using Ruby at AWS re:Invent! We appreciate the feedback we received, and the discussions that we had with you.

AWS SDK for Ruby Presentation Notes

At AWS re:Invent this year I took many of you on a tour of version 2 of the AWS SDK for Ruby. We were thrilled to have such a great audience, and we had a great time meeting with many of you both before and after the talk.

In the presentation, we talked about the architecture and new features available in version 2 of the AWS SDK for Ruby, including resources, pagination, and waiters. We also walked through an end-to-end example using Ruby on Rails.

If you didn’t get a chance to make it to my talk, I encourage you to check it out. If you did, you still might find it worthwhile to code along with our Rails example. You can skip ahead to that here if you like.

The presentation also had a lot of links that you might want to check out. For your convenience, we’ve compiled them here:

See You Next Year

AWS re:Invent 2015 is October 6–9, once again at The Venetian in Las Vegas, NV. Let us know what you’d like to see next on Twitter @awsforruby, and we hope to see you there!

Authentication with Amazon Cognito in the Browser

Amazon Cognito is a great new service that enables a much easier workflow for authenticating with your AWS resources in the browser. Although web identity federation still works directly with identity providers, using the new AWS.CognitoIdentityCredentials gives you the ability to provide access to customers through any identity provider using the same simple workflow and fewer roles. In addition, you can also now provide access to unauthenticated users without having to embed read-only credentials into your application. Let’s look at how easy it is to get set up with Cognito!

Setting Up Identity Providers

Cognito still relies on third-party identity providers to authenticate the identity of the user logging into your site. Currently, Cognito supports Login with Amazon, Facebook, and Google, though you can also set up developer authenticated identities or OpenID Connect. You can visit the respective links to set up applications with these providers, which you will then use to get tokens for your application as normal. After you have created the applications with the respective providers, you will want to create an identity pool with Amazon Cognito.

Creating an Identity Pool

Cognito’s “identity pool” is the core resource that groups the users in your application. Effectively, each user logging into your application will get a Cognito ID; these IDs are all created and managed from your identity pool. The identity pool needs to know which identity providers it can hand out IDs to, so this is something we configure when creating the pool.

The easiest way to create an identity pool is through the Amazon Cognito console, which offers a good walkthrough of the necessary parameters (the name, your identity provider application IDs, and whether or not you want to support unauthenticated login). In our example we will use the SDK to create an identity pool with support for Amazon logins and unauthenticated access only. We will call the pool “MyApplicationPool”:

// Cognito Identity is currently only available in us-east-1
var ci = new AWS.CognitoIdentity({region: 'us-east-1'});

var params = {
  AllowUnauthenticatedIdentities: true,
  IdentityPoolName: 'MyApplicationPool',
  SupportedLoginProviders: {
    '': 'amzn1.application.1234567890abcdef'
};
ci.createIdentityPool(params, console.log);

This will print an IdentityPoolId, which we can now use to get a Cognito ID for our user when logging in.

Authenticated vs. Unauthenticated Roles

Now that we’ve configured our pool, we can start talking about logging the user in. There are two ways for a user to log in to your application when you enable unauthenticated access:

  1. Authenticated mode — this happens when a user goes through the regular Amazon, Facebook, or Google login flow. This type of login is considered authenticated because the user has proven their identity to the respective provider.
  2. Unauthenticated mode — this happens when a user attempts to get a Cognito ID for your pool but without having authenticated with an identity provider. If you enabled AllowUnauthenticatedIdentities, this will be allowed in your application. This is useful when your application has features you want to expose to users without requiring login, like, say, being able to view comments in a blog.

The way we separate these two “modes” in AWS is through IAM roles. We are going to create two roles for our application matching these two types of users. First, the authenticated role:

Authenticated Role

The easiest way to create a role is through the IAM Console (click “Create Role”). Let’s create a role called “MyApplication-CognitoAuthenticated”, choosing a “Role for Identity Provider Access” and granting access to web identity providers. You should now be in “Step 3: Establish Trust”. This is the most important part; it is where we limit this role to authenticated Cognito users only. Here’s how we do it:

  1. Select Amazon Cognito and enter the Identity Pool Id
  2. Be sure to click Add Conditions to add an extra condition.
  3. Set the following fields:
    • Condition: “StringEquals”
    • Key: “”
    • Condition Prefix: “For any value”
    • Value: “authenticated”

This will permit only authenticated users to use this role. The following example is what your trust policy should look like after clicking Next:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": ""
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "": "us-east-1:1234-abcd-dcba-1234-4321"
        },
        "ForAnyValue:StringLike": {
          "": "authenticated"
        }
      }
    }
  ]
}
Next, set the correct role permissions for this authenticated user. This step will depend heavily on your application and the actual resources you want to expose to authenticated users. You can use the policy generator here to help you create a policy that is scoped to specific actions and resources.

This should give you the correct authenticated role for use in your application.

Unauthenticated Role

This role should be configured almost like the authenticated role except for a few differences:

  1. We give it a separate name like “MyApplication-CognitoUnauthenticated”
  2. We set the Trust Policy condition value to “unauthenticated” instead of authenticated.
  3. The permissions on the role should be much more restrictive, typically exposing only read-only operations (like reading comments in a blog).

Once you’ve created these roles you should be able to copy down both role ARNs. These two ARNs will be used when logging into the application under the respective “modes”.

Logging In

We’ve now created all the resources we need to login to our application, so let’s see the code used in the AWS SDK for JavaScript to enable this login flow.

Initially, you will likely want all users to log in under “unauthenticated” mode. This can be done up front when configuring your application:

AWS.config.update({
  region: 'us-east-1',
  credentials: new AWS.CognitoIdentityCredentials({
    AccountId: '1234567890', // your AWS account ID
    RoleArn: 'arn:aws:iam::1234567890:role/MyApplication-CognitoUnauthenticated',
    IdentityPoolId: 'us-east-1:1234-abcd-dcba-1234-4321'
  })
});

Eventually your customer will authenticate with the associated Amazon application. In that case, you will want to update your credentials to reflect the new role and provide the web token. A portion of the Login with Amazon code might look like:

amazon.Login.authorize({scope: "profile"}, function(resp) {
  if (!resp.error) { // logged in
    var creds = AWS.config.credentials;
    creds.params.RoleArn =
      'arn:aws:iam::1234567890:role/MyApplication-CognitoAuthenticated';
    creds.params.Logins = {
      '': resp.access_token
    };

    // manually expire credentials so next request will fire a refresh()
    creds.expired = true;
  }
});

Getting the Identity ID

Occasionally, you will need the user’s Cognito identity ID to provide to requests. Since this ID uniquely identifies the user, you will likely use it as the user’s identifier in your application. In general, you can get the ID from the identityId property on the credentials object, but you should make sure the credentials have been refreshed recently before accessing the property. To verify that the ID is up to date, wrap the access inside the get() call on the credentials object, for example:

AWS.config.credentials.get(function(err) {
  if (!err) {
    var id = AWS.config.credentials.identityId;
    console.log("Cognito Identity Id:", id);
  }
});
Wrapping Up

That’s all there is to using Amazon Cognito for authenticated and unauthenticated access inside your application. Using Cognito gives your application plenty of other benefits, such as the ability to merge a user’s identity if they choose to log in with multiple identity providers (a common feature request), as well as the ability to take advantage of the Amazon Cognito Sync Manager to easily sync any user data inside of your application.

Take a look at Cognito login in the AWS SDK for JavaScript and let us know what you think!

Client Response Stubs

by Trevor Rowe | in Ruby

We recently added client response stubs to the aws-sdk-core gem. Response stubbing disables network traffic and causes a client to return fake or stubbed data.

# no API calls are made
s3 = Aws::S3::Client.new(stub_responses: true)
s3.list_buckets.buckets.map(&:name)
#=> []

Custom Response Data

By default, stubbed responses return empty lists, empty maps, and placeholder scalars. These empty responses can be useful at times, but often you want to control the data returned.

s3.stub_responses(:list_buckets, buckets:[{name:'aws-sdk'}])
s3.list_buckets.buckets.map(&:name)
#=> ['aws-sdk']

Safe Stubbing

One of the common risks when writing tests with stub data is that the stub doesn’t match the shape of the actual response. You risk coding against stubs that provide methods that won’t exist outside of your tests.

We resolve this issue by validating your stub data hash against the model of the API. An ArgumentError is raised when calling #stub_responses with invalid data.

s3.stub_responses(:list_buckets, buckets:['aws-sdk'])
#=> raises ArgumentError, "expected params[:buckets][0] to be a hash"

Stubbing Multiple Calls

When you call #stub_responses with an operation name and stub data, the client serves that data for every call to the operation. You can also specify multiple responses, and they will be used in sequence.

s3.stub_responses(:list_buckets,
  { buckets:[{name:'aws-sdk'}] },
  { buckets:[{name:'aws-sdk'}, {name:'aws-sdk-2'}] }
)

s3.list_buckets.buckets.map(&:name)
#=> ['aws-sdk']

s3.list_buckets.buckets.map(&:name)
#=> ['aws-sdk', 'aws-sdk-2']

Stubbing Errors

In addition to stubbing response data, you can configure errors to raise. You can specify a service error by name, or you can provide an error object or class to raise.

# raises Aws::S3::Errors::NotFound
s3.stub_responses(:head_bucket, 'NotFound')
s3.head_bucket(bucket:'aws-sdk')

# raises a new Timeout::Error
s3.stub_responses(:get_object, Timeout::Error)
s3.get_object(bucket:'aws-sdk', key:'object-key')

# raises RuntimeError.new('oops')
s3.stub_responses(:put_object, RuntimeError.new('oops'))
s3.put_object(bucket:'aws-sdk', key:'object-key', body:'data')

You can mix stubbed response data and errors. This approach is great when you want to test how well your code recovers from errors. 

Stubbing All Clients

The default config can be used to enable client stubbing globally. This is especially useful in tests: enable stubbing once in your test helper, and every client is stubbed.

# stub everything
Aws.config[:stub_responses] = true

Give it a try and let us know what you think.

DynamoDB JSON Support

by Pavel Safronov | in .NET

The latest Amazon DynamoDB update added support for JSON data, making it easy to store JSON documents in a DynamoDB table while preserving their complex and possibly nested shape. Now, the AWS SDK for .NET has added native JSON support, so you can use raw JSON data when working with DynamoDB. This is especially helpful if your application needs to consume or produce JSON—for instance, if your application is talking to a client-side component that uses JSON to send and receive data—as you no longer need to manually parse or compose this data.

Using the new features

The new JSON functionality is exposed in the AWS SDK for .NET through the Document class:

  • ToJson – This method converts a given Document to its JSON representation
  • FromJson – This method creates a Document for a given JSON string

Here’s a quick example of this feature in action.

// Create a Document from JSON data
var jsonDoc = Document.FromJson(json);

// Use the Document as an attribute
var doc = new Document();
doc["Id"] = 123;
doc["NestedDocument"] = jsonDoc;

// Put the item
table.PutItem(doc);

// Load the item
doc = table.GetItem(123);

// Convert the Document to JSON
var jsonText = doc.ToJson();
var jsonPrettyText = doc["NestedDocument"].AsDocument().ToJsonPretty();

This example shows how a JSON-based Document can be used as an attribute, but you can also use the converted Document directly, provided that it has the necessary key attributes.
Also note that we have introduced the methods ToJson and ToJsonPretty. The difference between the two is that the latter will produce indented JSON that is easier to read.

JSON types

DynamoDB data types are a superset of JSON data types. This means that all JSON data can be represented as DynamoDB data, while the opposite isn’t true.

So if you perform the conversion JSON -> Document -> JSON, the starting and final JSON will be identical (except for formatting). However, since not all DynamoDB data types can be converted to JSON, the conversion Document -> JSON -> Document may result in a different representation of your data.

The differences between DynamoDB and JSON are:

  • JSON has no sets, just arrays, so DynamoDB sets (SS, NS, and BS types) will be converted to JSON arrays.
  • JSON has no binary representation, so DynamoDB binary scalars and sets (B and BS types) will be converted to base64-encoded JSON strings or lists of strings.

If you do end up with a Document instance that has base64-encoded data, we have provided a method on the Document object to decode this data and replace it with the correct binary representation. Here is a simple example:

doc.DecodeBase64Attributes("Data", "DataSet");

After executing the above code, the "Data" attribute will contain binary data, while the "DataSet" attribute will contain a list of binary data.

I hope you find this feature a useful addition to the AWS SDK for .NET. Please give it a try and let us know what you think on GitHub or here in the comments!

AWS re:Invent 2014 Recap

by Norm Johanson | in .NET

Another AWS re:Invent has come and gone. Steve and I were lucky enough to be there and meet many developers using AWS in such interesting ways. We also gave a talk showing off some of the new features the team added to the SDK this year. The talk has been made available online.

In our talk, we showed demos of several of these new features.

We hope to hear from more .NET developers at next year’s re:Invent. Until then, feel free to contact us either in our forums or on GitHub.