Announcing CORS Support for Amazon EC2

We are pleased to announce that Amazon EC2 now supports CORS requests, which means you can now use the AWS SDK for JavaScript in the Browser to access your Amazon EC2 resources.

The following example code snippet shows how to make requests to Amazon EC2:

In your HTML file:

<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1.34.min.js"></script>

In your JavaScript file:

var ec2 = new AWS.EC2({region: 'us-west-2'});

ec2.describeInstances(function(err, data) {
  if (err) {
    console.log(err);
  } else {
    data.Reservations.forEach(function(reservation) {
      reservation.Instances.forEach(function(instance) {
        console.log(instance.InstanceId);
      });
    });
  }
});
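
The example above assumes the SDK has already been configured with credentials. In the browser, you would typically supply these with Amazon Cognito before constructing service clients; a minimal sketch, assuming you have already created an identity pool (the pool ID below is a placeholder):

// Placeholder identity pool ID -- substitute your own
AWS.config.update({
  region: 'us-west-2',
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-west-2:00000000-0000-0000-0000-000000000000'
  })
});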

With Amazon EC2 support for CORS requests, you can now build rich two-tier web applications to manage your instances, VPCs, and more using the AWS SDK for JavaScript in the Browser. Check out our API documentation for details about how to use the API.

We hope you are excited to use the Amazon EC2 API directly from the browser. We’re eager to know what you think, so leave us a comment or tweet about it @awsforjs.

Creating Amazon CloudFront Signed URLs in Node.js

Amazon CloudFront allows you to use signed URLs to restrict access to content. This lets you securely serve private content, or content intended only for selected users, through CloudFront. Read more about how CloudFront signed URLs work.

This article describes how to generate Amazon CloudFront signed URLs in Node.js.

To generate signed URLs, you can use the aws-cloudfront-sign npm module.

Installing the module

npm install aws-cloudfront-sign

Using the module in your code

We recommend that you restrict direct access to your bucket, and require that users access content only through CloudFront. Read more about using an origin access identity to restrict access to Amazon S3 content.

To create a signed URL, you first need to configure your distribution to specify which AWS accounts can create signed URLs (trusted signers). You then need to create a CloudFront key pair for your trusted signer. Once you’ve downloaded your private key for the key-pair ID (Access Key ID), you can use it in your code to generate signed URLs.

The following code shows how to generate signed URLs for web distributions:

var cfsign = require('aws-cloudfront-sign');

var signingParams = {
  keypairId: process.env.PUBLIC_KEY,
  privateKeyString: process.env.PRIVATE_KEY,
  // Optional - this can be used as an alternative to privateKeyString
  privateKeyPath: '/path/to/private/key',
  // Expiration time as an epoch timestamp in milliseconds
  expireTime: 1426625464599
};

// Generating a signed URL
var signedUrl = cfsign.getSignedUrl(
  'http://example.cloudfront.net/path/to/s3/object', 
  signingParams
);
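
Rather than hard-coding the expiration, you can compute expireTime relative to the current time; a minimal sketch (one hour from now, in epoch milliseconds):

var signingParams = {
  keypairId: process.env.PUBLIC_KEY,
  privateKeyString: process.env.PRIVATE_KEY,
  // Expire one hour from now
  expireTime: Date.now() + 60 * 60 * 1000
};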

This module can also be used to generate signed URLs for RTMP distributions:

var signedRTMPUrlObj = cfsign.getSignedRTMPUrl(
  'example.cloudfront.net', 
  // Must not contain prefixes like mp3: or mp4:
  'path/to/s3/object', 
  signingParams
);

This generated URL can now be served to users who are entitled to access the content. We hope this simplifies creating signed URLs for Amazon CloudFront in Node.js.

Authentication in the Browser with Amazon Cognito and Public Identity Providers

Our earlier blog post introduced authentication with Amazon Cognito in the browser.

Amazon Cognito has since simplified the authentication workflow. This article describes authenticating the SDK in the browser using Amazon Cognito and supported public identity providers like Google, Facebook, and Amazon.

Step 1 and Step 2 outline registering your application with a public identity provider, and creating a Cognito identity pool. These steps typically need to be performed only once.

(Diagram: one-time setup)

Step 3 and Step 4 describe the authentication workflow of a client application using a public identity provider with Amazon Cognito.

(Diagram: client application workflow)

Step 1: Set up a public identity provider

Amazon Cognito supports Facebook, Google, Amazon, and any other OpenID Connect-compliant provider. As a first step, you will have to register your application with a public identity provider. Here is a list of popular providers:

  1. Facebook
  2. Google
  3. Login with Amazon

You can then use the corresponding provider’s SDK in your web application to allow users to authenticate with the provider. Listed below are the developer guides for the providers listed above:

  1. Facebook login for the Web with the Facebook JavaScript SDK
  2. Google+ Sign-In
  3. Login with Amazon – Getting Started for Web

Step 2: Create a Cognito Identity Pool

To begin using Amazon Cognito you will need to set up an identity pool. An identity pool is a store of user identity data specific to your account. The easiest way to set up an identity pool is to use the Amazon Cognito console.

The New Identity Pool wizard will guide you through the configuration process. When creating your identity pool, make sure that you enable access to unauthenticated identities. At this time, you can also configure any public identity providers that you have set up in Step 1.

The wizard will then create authenticated and unauthenticated roles for you with very limited permissions. You can edit these roles later using the IAM console. Note that Amazon Cognito will use these roles to grant authenticated and unauthenticated access to your resources, so scope them accordingly.

Step 3: Starting with Unauthenticated Access to Resources

You may want to grant unauthenticated users read-only access to some resources. These permissions should be configured in the IAM role for unauthenticated access (the role created in Step 2).
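
For example, if unauthenticated users should only be able to read objects from a single bucket, the unauthenticated role's permissions policy might look like the following sketch (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-public-content/*"
    }
  ]
}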

Configuring the SDK

To configure the SDK to work with unauthenticated roles, simply omit the Logins property of the AWS.CognitoIdentityCredentials provider.

Because your identity pool is already configured to use authenticated and unauthenticated IAM roles, you need not set the RoleArn parameter when constructing your provider.

// Identity pool already configured to use roles
var creds = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:1699ebc0-7900-4099-b910-2df94f52a030'
});

AWS.config.update({
    region: 'us-east-1',
    credentials: creds
});

Making requests

Having configured a credential provider, you can now make requests with the SDK.

var s3 = new AWS.S3({region: 'us-west-2'});
s3.listObjects({Bucket: 'bucket'}, function(err, data) {
    if (err) console.log(err);
    else console.log(data);
});

Step 4: Switching to Authenticated Access

You can also use the public identity provider configured in Step 1 to provide authenticated access to your resources.

When a user of your application authenticates with a public identity provider, the response contains a login token that must be supplied to Amazon Cognito in exchange for temporary credentials.

For Facebook and Amazon, this token is available at the access_token property of the response data. For Google and any other OpenID provider, this token is available at the id_token property of the response.

Refreshing credentials

After a user has authenticated with a public identity provider, you will need to update your credential provider with the login token from the authentication response.

// access_token received in the authentication response
// from Facebook
creds.params.Logins = {};
creds.params.Logins['graph.facebook.com'] = access_token;

// Explicitly expire credentials so they are refreshed
// on the next request.
creds.expired = true;
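
For Google (or another OpenID provider), you would supply the id_token under the provider's key instead; for Google the key is accounts.google.com:

// id_token received in the authentication response from Google
creds.params.Logins['accounts.google.com'] = id_token;
creds.expired = true;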

The credential provider will refresh credentials when the next request is made.
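
If you would rather refresh immediately instead of waiting for the next request, you can call get() on the credentials object; a minimal sketch:

creds.get(function(err) {
  if (err) console.log(err);
  else console.log('Credentials refreshed for the authenticated role');
});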

Persisting authentication tokens

In most cases, the SDKs of public identity providers have built-in mechanisms for caching the login tokens for the duration of the session. For example, the Login with Amazon SDK for JavaScript will cache the access_token and subsequent amazon.Login.authorize() calls will return the cached token as long as the session is valid.

The Facebook SDK for JavaScript exposes a FB.getLoginStatus() method which allows you to check the status of the login session.
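
For example, you might check for a cached Facebook session on page load and reuse its token with Amazon Cognito; a sketch, assuming the Facebook SDK has already been initialized:

FB.getLoginStatus(function(response) {
  if (response.status === 'connected') {
    // Reuse the cached token instead of prompting the user to log in again
    creds.params.Logins = creds.params.Logins || {};
    creds.params.Logins['graph.facebook.com'] =
      response.authResponse.accessToken;
    creds.expired = true;
  }
});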

The Google APIs Client Library for JavaScript automatically sets the OAuth 2.0 token for your application with the gapi.auth.setToken() method. This token can be retrieved using the gapi.auth.getToken() method.

You can also implement your own caching mechanism for login tokens, if these default mechanisms are insufficient for your use case.

Wrapping up

This article describes how to grant access to your AWS resources by using Amazon Cognito with public identity providers. We hope this article helps you easily authenticate users in your web applications with Amazon Cognito. We’d love to hear more about how you’re using the SDK in your browser applications, so leave us a comment or tweet about it @awsforjs.

The AWS SDK for JavaScript now supports Amazon S3 Requester Pays buckets

The AWS SDK for JavaScript now has support for Amazon S3 Requester Pays buckets.

With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data. This allows bucket owners to share the operational cost of their buckets.

Support for Requester Pays buckets was recently added in version 2.1.19 of the SDK. This article describes how to use Requester Pays buckets with the AWS SDK for JavaScript.

Setting up a Requester Pays bucket

The easiest way to set up a Requester Pays bucket is to use the Amazon S3 console.

You can also configure a Requester Pays bucket programmatically using the SDK.

var AWS = require('aws-sdk');
var s3 = new AWS.S3({region: 'us-west-2'});

var callback = function(err, data) {
    if (err) console.log(err);
    else console.log(data);
};

s3.putBucketRequestPayment({
    Bucket: 'bucket',
    RequestPaymentConfiguration: {
        Payer: 'Requester'
    }
}, callback);

Accessing objects in Requester Pays buckets

To access objects in Requester Pays buckets, requests made using the SDK must include the RequestPayer parameter. This parameter is translated by the SDK to the x-amz-request-payer header.

Setting this parameter confirms that the requester knows that they will be charged for the request.

Example

var s3 = new AWS.S3({region: 'us-west-2'});

s3.getObject({
    Bucket: 'bucket',
    Key: 'key',
    RequestPayer: 'requester'   
}, callback);

The only valid value for the RequestPayer parameter is requester. If the RequestPayer parameter is not set for a request made on a Requester Pays bucket, then Amazon S3 returns a 403 error and the bucket owner is charged for the request.

Conclusion

The AWS SDK for JavaScript supports Requester Pays for all operations on objects. We hope this feature in the SDK allows you to more easily manage your bucket permissions and cost. We’re eager to know what you think, so leave us a comment or tweet about it @awsforjs.

Using the AWS SDK for JavaScript from Behind a Proxy

The AWS SDK for JavaScript can be configured to work from behind a network proxy. In browsers, proxy connections are transparently managed, and the SDK works out of the box without any additional configuration. This article focuses on using the SDK in Node.js from behind a proxy.

Node.js itself has no low-level support for proxies, so in order to configure the SDK to work with a proxy, you will need to override the default http.Agent.

This article shows you how to override the default http.Agent with the proxy-agent npm module. Note that some http.Agent modules may not support all proxies. You can visit npmjs.com for a list of available http.Agent libraries that support proxies.

Installation

npm install proxy-agent --save

In your code

The proxy-agent module automatically maps proxy protocols to agent instances. Currently, the supported protocols are HTTP(S), SOCKS, and Proxy Auto-Config (PAC).

var AWS = require('aws-sdk');
var proxy = require('proxy-agent');

AWS.config.update({
  httpOptions: { 
    agent: proxy('http://user:password@internal.proxy.com') 
  }
});

var s3 = new AWS.S3({region: 'us-west-2'});
s3.getObject({Bucket: 'bucket', Key: 'key'}, function (err, data) {
  console.log(err, data);
});

Overriding the default http.Agent is simple, and allows you to configure proxy settings that the SDK can use. We hope you find this information useful and are able to easily use the SDK in Node.js from behind a proxy!

Announcing the Amazon S3 Managed Uploader in the AWS SDK for JavaScript

Today’s release of the AWS SDK for JavaScript (v2.1.0) contains support for a new uploading abstraction in the AWS.S3 service that allows large buffers, blobs, or streams to be uploaded more easily and efficiently, both in Node.js and in the browser. We’re excited to share some details on this new feature in this post.

The new AWS.S3.upload() function intelligently detects when a buffer or stream can be split up into multiple parts and sent to S3 as a multipart upload. This provides a number of benefits:

  1. It enables much more robust upload operations, because if uploading of a single part fails, that individual part can be retried separately without requiring the whole payload to be resent.
  2. Multiple parts can be queued and sent in parallel, allowing for much faster uploads when enough bandwidth is available.
  3. Most importantly, due to the way the managed uploader buffers data in memory using multiple parts, this abstraction does not need to know the full size of the stream, even though Amazon S3 typically requires a content length when uploading data. This enables many common Node.js streaming workflows (like piping a file through a compression stream) to be used natively with the SDK.

A Node.js Example

Let’s see how you can leverage the managed upload abstraction by uploading a stream of unknown size. In this case, we will be piping contents from a file (bigfile) from disk through a gzip compression stream, effectively sending compressed bytes to S3. This can be done easily in the SDK using upload():

// Load the stream
var AWS = require('aws-sdk');
var fs = require('fs'), zlib = require('zlib');
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());

// Upload the stream
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}, function(err, data) {
  if (err) console.log("An error occurred", err);
  else console.log("Uploaded the file at", data.Location);
});

A Browser Example

Important Note: In order to support large file uploads in the browser, you must ensure that your CORS configuration exposes the ETag header; otherwise, your multipart uploads will not succeed. See the guide for more information on how to expose this header.

All of this works in the browser too! The only difference here is that, in the browser, we will probably be dealing with File objects instead:

// Get our File object
var file = $('#file-chooser')[0].files[0];

// Upload the File
var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
var params = {Key: file.name, ContentType: file.type, Body: file};
bucket.upload(params, function (err, data) {
  $('#results').html(err ? 'ERROR!' : 'UPLOADED.');
});

Tracking Total Progress

In addition to simply uploading files, the managed uploader can also keep track of total progress across all parts. This is done by listening to the httpUploadProgress event, similar to the way you would do it with normal request objects:

var AWS = require('aws-sdk');
var fs = require('fs');
var zlib = require('zlib');

var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}).
  on('httpUploadProgress', function(evt) {
    console.log('Progress:', evt.loaded, '/', evt.total); 
  }).
  send(function(err, data) { console.log(err, data) });

Note that evt.total might be undefined for streams of unknown size until the entire stream has been chunked and the last parts are being queued.

Configuring Concurrency and Part Size

Since the uploader provides concurrency and part size management, these values can also be configured to tune performance. In order to do this, you can provide an options map containing the queueSize and partSize to control these respective features. For example, if you wanted to buffer 10 megabyte chunks and reduce concurrency down to 2, you could specify it as follows:

var opts = {queueSize: 2, partSize: 1024 * 1024 * 10};
s3obj.upload({Body: body}, opts).send(callback);

You can read more about controlling these values in the API documentation.

Handling Failures

Finally, you can also control how parts are cleaned up in the managed uploader when a failure occurs using the leavePartsOnError option. By default, the managed uploader will attempt to abort a multipart upload if any individual part fails to upload, but if you would prefer to handle this failure manually (i.e., by attempting to recover more aggressively), you can set leavePartsOnError to true:

s3obj.upload({Body: body}, {leavePartsOnError: true}).send(callback);

This option is also documented in the API documentation.

Give It a Try!

We would love to see what you think about this new feature, so give the new AWS SDK for JavaScript v2.1.0 a try and provide your feedback in the comments below or on GitHub!

AWS re:Invent 2014 Recap

AWS re:Invent 2014 concluded on Friday, November 14, 2014; here is a summary of an action-packed, fun-filled week!

New features and services

There were four new services and features announced at the Day 1 Keynote, two of which are available to all customers effective November 12, 2014. These include:

  • Amazon RDS for Aurora (a service in preview), a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
  • AWS CodeDeploy, a service that automates code deployments to Amazon EC2 instances, thereby eliminating the need for error-prone manual operations.
  • AWS Key Management Service, a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys.
  • AWS Config (a service in preview), a managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.

Additionally, three more new services — AWS CodeCommit, AWS CodePipeline, and AWS Service Catalog — were preannounced on the first day. These will be available in early 2015.

There were more new features and services announced at the Day 2 Keynote. These include:

  • Amazon EC2 Container Service (a service in preview), a high-performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of EC2 instances.
  • AWS Lambda (a service in preview), a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.
  • New Event Notifications for Amazon S3: With this feature, notification messages can be sent through either Amazon SNS or Amazon SQS, or delivered directly to AWS Lambda to trigger Lambda functions.

The AWS SDK for JavaScript supports all the new services and features as of version 2.0.27.

Breakout sessions

AWS re:Invent is a learning conference at its core, and there were over 200 sessions spread over three days, on a variety of topics ranging from application deployment, management, and architecture to education, gaming, and financial services.

I had the privilege of presenting one such session, “Building Cross-Platform Applications Using the AWS SDK for JavaScript.” The goal of this talk was to highlight the portability of the AWS SDK for JavaScript and how the SDK’s features support cross-platform application development. You can watch the talk, as well as refer to the slides posted.

There is a common set of differences involved in porting applications between platforms. The most important of these is the way you include the SDK itself in your application, and configure the SDK with credentials and other settings. Our developer guide describes many ways in which you can configure credentials for Node.js, and manage credentials and identity in the browser.

Other differences include working with web standards like CORS, and keeping your users notified of application state.

Separating the application’s business logic from presentation promotes code reuse, and allows you to easily port the application to multiple platforms by changing only the presentation layer.

To demonstrate this, I wrote an application in Node.js and ported it to a Google Chrome extension, as well as a Windows RT application. The source code for this application is available here.

There were many more fantastic presentations at re:Invent 2014, for which the videos, slides, and audio podcasts are available online.

See you again next year

We hope re:Invent 2014 presented an opportunity for you to learn from the many talks and workshops that happened over the course of a week. We will continue innovating on your behalf and will have more exciting stuff to show you, so stay tuned by subscribing to email updates on the latest AWS re:Invent announcements!

Authentication with Amazon Cognito in the Browser

Amazon Cognito is a great new service that enables a much easier workflow for authenticating with your AWS resources in the browser. Although web identity federation still works directly with identity providers, using the new AWS.CognitoIdentityCredentials gives you the ability to provide access to customers through any identity provider using the same simple workflow and fewer roles. In addition, you can also now provide access to unauthenticated users without having to embed read-only credentials into your application. Let’s look at how easy it is to get set up with Cognito!

Setting Up Identity Providers

Cognito still relies on third-party identity providers to authenticate the identity of the user logging into your site. Currently, Cognito supports Login with Amazon, Facebook, and Google, though you can also set up developer authenticated identities or OpenID Connect. You can visit the respective links to set up applications with these providers, which you will then use to get tokens for your application as normal. After you have created the applications with the respective providers, you will want to create an identity pool with Amazon Cognito.

Creating an Identity Pool

Cognito’s “identity pool” is the core resource that groups the users in your application. Effectively, each user logging into your application will get a Cognito ID; these IDs are all created and managed from your identity pool. The identity pool needs to know which identity providers it can hand out IDs to, so this is something we configure when creating the pool.

The easiest way to create an identity pool is through the Amazon Cognito console, which offers a good walkthrough of the necessary parameters (the name, your identity provider application IDs, and whether or not you want to support unauthenticated login). In our example we will use the SDK to create an identity pool with support for Amazon logins and unauthenticated access only. We will call the pool “MyApplicationPool”:

// Cognito Identity is currently only available in us-east-1
var ci = new AWS.CognitoIdentity({region: 'us-east-1'});

var params = {
  AllowUnauthenticatedIdentities: true,
  IdentityPoolName: 'MyApplicationPool',
  Logins: {
    'www.amazon.com': 'amzn1.application.1234567890abcdef'
  }
};
ci.createIdentityPool(params, console.log);

This will print an IdentityPoolId, which we can now use to get a Cognito ID for our user when logging in.

Authenticated vs. Unauthenticated Roles

Now that we’ve configured our pool, we can start talking about logging the user in. There are two ways for a user to log in to your application when you enable unauthenticated access:

  1. Authenticated mode — this happens when a user goes through the regular Amazon, Facebook, or Google login flow. This type of login is considered authenticated because the user has proven their identity to the respective provider.
  2. Unauthenticated mode — this happens when a user attempts to get a Cognito ID for your pool but without having authenticated with an identity provider. If you enabled AllowUnauthenticatedIdentities, this will be allowed in your application. This is useful when your application has features you want to expose to users without requiring login, like, say, being able to view comments in a blog.

The way we separate these two “modes” in AWS is through IAM roles. We are going to create two roles for our application matching these two types of users. First, the authenticated role:

Authenticated Role

The easiest way to create a role is through the IAM Console (click “Create Role”). Let’s create a role called “MyApplication-CognitoAuthenticated” and choose a “Role for Identity Provider Access”, granting access to web identity providers. You should now be in “Step 3: Establish Trust”. This is the most important part; it is where we limit this role to authenticated Cognito users only. Here’s how we do it:

  1. Select Amazon Cognito and enter the Identity Pool Id
  2. Be sure to click Add Conditions to add an extra condition.
  3. Set the following fields:
    • Condition: “StringEquals”
    • Key: “cognito-identity.amazonaws.com:amr”
    • Condition Prefix: “For any value”
    • Value: “authenticated”

This will permit only authenticated users to use this role. The following example shows what your trust policy should look like after clicking Next:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-east-1:1234-abcd-dcba-1234-4321"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}

Next, set the correct role permissions for this authenticated user. This step will depend heavily on your application and the actual resources you want to expose to authenticated users. You can use the policy generator here to help you create a policy that is scoped to specific actions and resources.
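
For instance, a policy that lets authenticated users read and write only their own folder in an S3 bucket might look like the following sketch (the bucket name is a placeholder; the policy variable scopes access by Cognito identity):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-data/${cognito-identity.amazonaws.com:sub}/*"
    }
  ]
}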

This should give you the correct authenticated role for use in your application.

Unauthenticated Role

This role should be configured almost like the authenticated role except for a few differences:

  1. We give it a separate name like “MyApplication-CognitoUnauthenticated”
  2. We set the Trust Policy condition value to “unauthenticated” instead of “authenticated”.
  3. The permissions on the role should be much more restrictive, typically exposing only read-only operations (like reading comments in a blog).

Once you’ve created these roles you should be able to copy down both role ARNs. These two ARNs will be used when logging into the application under the respective “modes”.

Logging In

We’ve now created all the resources we need to login to our application, so let’s see the code used in the AWS SDK for JavaScript to enable this login flow.

Initially, you will likely want all users to log in under “unauthenticated” mode. This can be done up front when configuring your application:

AWS.config.update({
  region: 'us-east-1',
  credentials: new AWS.CognitoIdentityCredentials({
    AccountId: '1234567890', // your AWS account ID
    RoleArn: 'arn:aws:iam::1234567890:role/MyApplication-CognitoUnauthenticated',
    IdentityPoolId: 'us-east-1:1234-abcd-dcba-1234-4321'
  })
});

Eventually your customer will authenticate with the associated Amazon application. In that case, you will want to update your credentials to reflect the new role and provide the web token. A portion of the Login with Amazon code might look like:

amazon.Login.authorize({scope: "profile"}, function(resp) {
  if (!resp.error) { // logged in
    var creds = AWS.config.credentials;
    creds.params.RoleArn =
      'arn:aws:iam::1234567890:role/MyApplication-CognitoAuthenticated';
    creds.params.Logins = {
      'www.amazon.com': resp.access_token
    };

    // manually expire credentials so next request will fire a refresh()
    creds.expired = true;
  }
});

Getting the Identity ID

Occasionally, you will need the user’s Cognito identity ID to provide to requests. Since this ID is unique per user, you will likely use it as the user’s identifier in your application. In general, you can get the ID from the identityId property on the credentials object, but you will want to make sure the credentials have been recently refreshed before accessing the property. To verify that the ID is up to date, wrap the access inside of the get() call on the credentials object, for example:

AWS.config.credentials.get(function(err) {
  if (!err) {
    var id = AWS.config.credentials.identityId;
    console.log("Cognito Identity Id:", id);
  }
});

Wrapping Up

That’s all there is to using Amazon Cognito for authenticated and unauthenticated access inside of your application. Using Cognito gives your application plenty of other benefits, such as the ability to merge a user’s identity if they choose to log in with multiple identity providers (a common feature request), as well as the ability to take advantage of the Amazon Cognito Sync Manager to easily sync any user data inside of your application.

Take a look at Cognito login in the AWS SDK for JavaScript and let us know what you think!

Come see us at re:Invent 2014!

AWS re:Invent is just around the corner, and we are excited to meet you.

I will be presenting DEV 306 – Building Cross-Platform Applications Using the AWS SDK for JavaScript on November 13, 2014. This talk will introduce you to building portable applications using the SDK and outline some differences in porting your application to multiple platforms. You can learn more about the talk here. Come check it out!

We will also be at the AWS Booth in the Expo Hall (map). Come talk to us about how you’re using AWS services, ask us a question, and learn about how to use our many AWS SDKs and tools.

Hope to see you there!

Introducing the AWS SDK for JavaScript Blog

Today we’re announcing a new blog for the AWS SDK for JavaScript. On this blog, we will be sharing the latest tips, tricks, and best practices when using the SDK. We will also keep you up to date on new developments in the SDK and share information on upcoming features. Ultimately, this blog is a place for us to reach out and get feedback from you, our developers, in order to make our SDK even better.

We’re excited to finally start writing about the work we’ve done and will be doing in the future; there’s a lot of content to share. In the meantime, here’s a little primer on the AWS SDK for JavaScript, if you haven’t had the chance to kick its tires.

Works in Node.js and Modern Browsers

The SDK is designed to work seamlessly across Node.js and browser environments. With the exception of a few environment-specific integration points (like streams and file access in Node.js, and Blob support in the browser), we attempt to make all SDK API calls work the same way across all of your different applications. One of my favorite features is the ability to take snippets of SDK code and move them from Node.js to the browser and back with at most a few changes in code.

In Node.js you can install the SDK as the aws-sdk npm package:

$ npm install aws-sdk --save

In the browser, you can use a hosted script tag to install the SDK or build your own version. More details on this can be found in our guide.
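
To illustrate the portability mentioned above, the following call runs unchanged in Node.js and in the browser once the SDK is loaded and credentials are configured (the bucket name is a placeholder):

// In Node.js: var AWS = require('aws-sdk');
// In the browser: AWS is a global provided by the script tag
var s3 = new AWS.S3({region: 'us-west-2'});
s3.listObjects({Bucket: 'my-bucket'}, function(err, data) {
  if (err) console.log(err);
  else console.log('Found ' + data.Contents.length + ' objects');
});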

Full Service Coverage

The SDK has support for all the AWS services you want to use, and we keep it up to date with new API updates as they are released. Note that although some services in the browser require CORS to work over the web, we are continually working to expand the list of CORS-supported services. You can also develop AWS-backed applications in local JavaScript environments that do not enforce CORS (Chrome and Firefox extensions; iOS, Android, WinRT, and other mobile applications) using a custom build of the SDK today.

Open Source

Finally, the thing about the SDK that excites me the most is the fact that the entire SDK is openly developed and shared on GitHub. Feel free to check out the SDK code, post issue reports, and even submit pull requests with fixes or new features. Our SDK depends on feedback from our developers, so we love to get reports and pull requests. Send away!

More to Come

We will be posting much more information about the SDK on this blog. We have plenty of exciting things to share with you about new features and improvements to the library. Bookmark this blog and check back soon as we publish more information!