AWS Developer Blog

Scripting your EC2 Windows fleet using Windows PowerShell and Windows Remote Management

by Steve Roberts, in .NET

Today we have a guest post by one of our AWS Solutions Architects, James Saull, discussing how to take advantage of Windows PowerShell and Windows Remote Management (WinRM) to script your Windows fleet.

One of the advantages of using AWS is on-demand access to an elastic fleet of machines—continuously adjusting in response to demand and ranging, potentially, from zero machines to thousands. This presents a couple of challenges: within your infrastructure, how might you identify and run your script against a large and varying number of machines at the same time? In this post, we take a look at how to use EC2 tags for targeting and Windows Remote Management to simultaneously run PowerShell scripts.
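Before diving into the PowerShell, it helps to picture the targeting step on its own. Here is a minimal, language-neutral sketch in Python; the inventory data and function name are invented for illustration, and the real work is done with the AWS Tools for Windows PowerShell:

```python
# Hypothetical inventory; in practice this data comes from the EC2 API.
instances = [
    {"private_ip": "10.0.0.10", "tags": {"Role": "WebServer"}},
    {"private_ip": "10.0.0.11", "tags": {"Role": "Database"}},
    {"private_ip": "10.0.0.12", "tags": {"Role": "WebServer"}},
]

def select_by_tag(instances, key, value):
    """Return the private IPs of instances whose tag `key` equals `value`."""
    return [i["private_ip"] for i in instances if i["tags"].get(key) == value]

web_servers = select_by_tag(instances, "Role", "WebServer")
print(web_servers)  # ['10.0.0.10', '10.0.0.12']
```

The tag filter gives you a current list of targets no matter how many instances have come or gone since the last run.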

Launching an AWS EC2 Windows instance from the console and connecting via RDP is a simple matter. You can even do it directly from within Visual Studio as recently documented here. From the RDP session, you might perform tasks such as updating the assets of an ASP.NET web application. If you had a second machine, you could open a second RDP session and repeat those tasks. Alternatively, if you are running in an Amazon VPC, you could avoid opening additional RDP sessions and just use PowerShell's Enter-PSSession to connect to the second machine. This does require that all instances are members of security groups that allow Windows Remote Management traffic.

Below is an example of connecting to another host in a VPC and issuing a simple command (notice the date time stamps are different on the second host):

However, as the number of machines grows, you will quickly want the ability to issue a command once and have it run against the whole fleet simultaneously. To do this, we can use PowerShell’s Invoke-Command. Let’s take a look at how we might instruct a fleet of Windows EC2 hosts to all download the latest version of my web application assets from Amazon S3.

First, using EC2 tags, we will identify which machines are web servers, as only they should be downloading these files. The example below uses the cmdlets Get-EC2Instance and Read-S3Object, which are part of the AWS Tools for Windows PowerShell and are installed by default on AWS Windows Machine Images:

$privateIp = ((Get-EC2Instance -Region eu-west-1).RunningInstance `
            | Where-Object {
                $_.Tag.Count -gt 0 `
                -and $_.Tag.Key -eq "Role" `
                -and $_.Tag.Value -match "WebServer"}).PrivateIpAddress

Establish a session with each of the web servers:

$s = New-PSSession -ComputerName $privateIp 

Invoke the command that will now simultaneously run on each of the web servers:

Invoke-Command -Session $s -ScriptBlock {
    Read-S3Object   -BucketName mysourcebucket `
                    -KeyPrefix /path/towebassets/ `
                    -Directory z:\webassets `
                    -Region eu-west-1 } 

This works well, but what if I want to run something that is individualized to the instance? There are many possible ways, but here is one example:

$scriptBlock = {
    param (
        [int] $clusterPosition,
        [int] $numberOfWebServers
    )
    "I am Web Server $clusterPosition out of $numberOfWebServers" | Out-File z:\afile.txt
}

$position = 1
foreach ($machine in $privateIp)
{
    Invoke-Command  -ComputerName $machine `
                    -ScriptBlock $scriptBlock `
                    -ArgumentList $position, ($privateIp.Length) `
                    -AsJob -JobName DoSomethingDifferent
    $position++
}


This post showed how using EC2 tags can make scripting a fleet of instances via Windows Remote Management very convenient. We hope you find these tips helpful, and as always, let us know what other .NET or PowerShell information would be most valuable to you.

Release of the AWS SDK V2.0 for .NET

by Norm Johanson, in .NET

We have just released a new version of the AWS SDK V2.0 for .NET. You can download the latest version of the SDK here.

This release adds support for Amazon SNS mobile push notifications and fixes an issue with uploading large objects to Amazon S3 using the .NET 4.5 Framework version of the SDK.

Please let us know what you think of this latest version of the AWS SDK V2.0 for .NET. You can contact us through our GitHub repository or our forums.

AWS SDK for Ruby v1.15.0

by Alex Wood, in Ruby

Yesterday morning, we released a new version of the AWS SDK for Ruby (aws-sdk gem). This release adds mobile push support for Amazon Simple Notification Service. The release also includes API updates for Amazon Redshift, adding snapshot identifiers to the AWS::Redshift::Client#copy_cluster_snapshot and AWS::Redshift::Client#delete_cluster_snapshot operations, and enabling better status reporting for restoring from snapshots.

You can view the release notes here.

Release: AWS SDK for PHP 2.4.3

by Michael Dowling, in PHP

We would like to announce the release of version 2.4.3 of the AWS SDK for PHP. This release adds support for the Amazon Simple Notification Service mobile push API, adds progress reporting on snapshot restore operations for Amazon Redshift, and addresses an issue with directories and the Amazon S3 stream wrapper.


  • Updated the Amazon SNS client API to support mobile push
  • Updated the Amazon Redshift client API to support progress reporting on snapshot restore operations
  • Updated the Amazon Elastic MapReduce client to now use JSON serialization and AWS Signature V4 to securely sign requests
  • AWS SDK for PHP clients now throw Aws\Common\Exception\TransferException exceptions when a network error occurs instead of a Guzzle\Http\Exception\CurlException. The TransferException class, however, extends from Guzzle\Http\Exception\CurlException. You can continue to catch the Guzzle CurlException or catch Aws\Common\Exception\AwsException to catch any exception that can be thrown by an AWS client.
  • Fixed an issue with the Amazon S3 stream wrapper where trailing slashes were being added when listing directories

Install/Download the Latest SDK

Quick Tips: Managing Amazon S3 Data in Eclipse

No matter what type of application you’re developing, it’s a safe bet that it probably needs to save or load data from a central data store, such as Amazon S3. During development, you can take advantage of the Amazon S3 management tools provided by the AWS Toolkit for Eclipse, all without ever leaving your IDE.

To start, find your Amazon S3 buckets in the AWS Explorer view.

From the AWS Explorer view, you can create and delete buckets, or double-click on one of your buckets to open it in the Bucket Editor.

Once you’re in the Bucket Editor, you can delete objects in your bucket, edit the permissions for objects or the bucket itself, and generate pre-signed URLs that you can safely pass around to give other people access to the data stored in your account without ever having to give away your AWS security credentials.
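Pre-signed URLs deserve a quick aside. The toolkit generates them for you, but conceptually a pre-signed URL is just a link carrying an expiry time and a signature computed with a secret the recipient never sees. Here is an illustrative Python sketch of that idea; this is not Amazon S3's actual signing algorithm, and the bucket name and key are made up:

```python
import hashlib
import hmac

SECRET = b"account-secret-key"  # stands in for the AWS secret key (illustrative)

def presign(path, expires_at):
    """Build an expiring link whose signature covers the path and expiry."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://example-bucket.s3.amazonaws.com{path}?Expires={expires_at}&Signature={sig}"

def verify(path, expires_at, sig, now):
    """Recompute the signature server-side and check the expiry."""
    expected = hmac.new(SECRET, f"{path}:{expires_at}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < expires_at

url = presign("/reports/q3.pdf", expires_at=1700000000)
```

Anyone holding the URL can fetch the object until the expiry passes; tampering with the path or the expiry invalidates the signature.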

One of the most useful features is the ability to drag and drop files into your Amazon S3 buckets directly from your OS. In the following screenshot, I've selected a file in the Mac Finder and dragged it into a virtual folder in the object listing in the Bucket Editor. To download one of your objects from Amazon S3, just drag it to a directory in a view such as Eclipse's Package Explorer.

The AWS Toolkit for Eclipse has many features that facilitate development and deployment of AWS applications. For more information, check out some of our other Eclipse blog posts:

Using Amazon CloudFront with ASP.NET Apps

by Steve Roberts, in .NET

Today, I’m going to talk about using Amazon CloudFront to boost the performance of ASP.NET web apps that are deployed to Amazon Web Services. CloudFront is a content delivery service that can cache content in edge locations across the world to give users low-latency access to static content and relieve some of the pressure from web servers.

The main entity that is created in CloudFront is a distribution. A CloudFront distribution contains all the configuration of how content will be cached and the domain name that users will use to access the content. You can create distributions using many different tools like the AWS Management Console or the AWS Explorer in the AWS Toolkit for Visual Studio. I'm going to show how to create a distribution using AWS CloudFormation to script the creation of our CloudFront distribution so it can be easily reproduced in other web applications. Then I'll show how to use the AWS Toolkit to deploy it.

Deploying an App

First, I’m going to deploy an application to AWS using AWS Elastic Beanstalk. To keep things simple, I’m going to create a new project in Visual Studio, select ASP.NET MVC 4 Web Application, and then select Internet Application. To keep the focus on CloudFront, I’m going to only lightly cover Elastic Beanstalk deployments. For more in-depth information on deployments, please review our Elastic Beanstalk user guide.

The first step in deploying is to right-click on our project and select Publish to AWS.

Then walk through the wizard using the following instructions.

Template Page

  • Select Account and Region to deploy to
  • Select Deploy new application with template
  • Select AWS Elastic Beanstalk
  • Click Next

Application Page

  • Leave values at the default and click Next

Environment Page

  • Enter a name for the environment
  • Verify the environment URL is unique
  • Click Next

AWS Options Page

  • Select a key pair or create a new one
  • Click Next

Application Options Page

  • Click Next

Amazon RDS Database Security Group Page

  • Click Next

Review Page

  • Click Deploy

After you click Deploy, the application will be built and deployed to Elastic Beanstalk. The AWS Explorer will be refreshed showing the new environment and the Environment view will be displayed as well.

Creating the AWS CloudFormation Template

CloudFormation uses templates—which are JSON text files—to script the creation of AWS resources. I’m going to create a template that will create my CloudFront distribution using the CloudFormation editor that is part of the Visual Studio Toolkit. To get started, I’m going to right-click on the solution, select Add New Project, and then select the AWS CloudFormation project.

In the project wizard, I’m going to select Create with empty template and then click Finish.

Once the project is created, I can use the following template to create the distribution. In the CloudFormation editor, you can hover over any of the keys to get a description of what they mean.

{
    "AWSTemplateFormatVersion" : "2010-09-09",

    "Description" : "",

The only parameter needed is the domain name of our application. In this case, it will be the URL of the Elastic Beanstalk environment. In other examples, this could be the DNS name of an Elastic Load Balancer or EC2 instance.

    "Parameters" : {
        "CloudFrontDomain" : {
            "Type" : "String",
            "Description" : "The domain of the website"
        }
    },

    "Resources" : {

Define the CloudFront distribution.

        "Distribution" : {
            "Type" : "AWS::CloudFront::Distribution",
            "Properties" : {
                "DistributionConfig" : {
                    "DefaultRootObject" : "/",

An origin is the source of content for a distribution. In this case, there is only one origin, which is the Elastic Beanstalk environment. In advanced situations, there could be multiple origins. One use case for having multiple origins would be having an Elastic Beanstalk environment to serve up the dynamic content and the static content coming from an Amazon S3 bucket. For this advanced case, refer to the CloudFront documentation on setting up multiple cache behaviors.

                    "Origins" : [
                        {
                            "DomainName" : { "Ref" : "CloudFrontDomain" },
                            "Id" : "webapp-dns",
                            "CustomOriginConfig" : {
                                "HTTPPort" : "80",
                                "HTTPSPort" : "443",
                                "OriginProtocolPolicy" : "match-viewer"
                            }
                        }
                    ],

All distributions have a default cache behavior that tells which origin to use. The query string needs to be forwarded since the application is serving up dynamic content based on the query string.

                    "DefaultCacheBehavior" : {
                        "ForwardedValues" : {
                            "QueryString" : true
                        },
                        "TargetOriginId"  : "webapp-dns",
                        "ViewerProtocolPolicy" : "allow-all"
                    },
                    "Enabled" : true,

This section enables CloudFront access logging. The logs are similar to IIS logs and are great for understanding the requests coming into your site.

                    "Logging" : {
                        "Bucket" : { "Fn::GetAtt" : [ "LoggingBucket", "DomainName" ] },
                        "Prefix" : "cflogs/"
                    }
                }
            }
        },

Create an Amazon S3 bucket for the CloudFront logs to be delivered to.

        "LoggingBucket" : {
            "Type" : "AWS::S3::Bucket",
            "Properties" : {
            }
        }
    },

    "Outputs" : {

Output the URL to access our web application through CloudFront.

        "CloudFrontDomainName" : {
            "Value" : { "Fn::Join" : [ "", [ "http://", { "Fn::GetAtt" : [ "Distribution", "DomainName" ] }, "/" ] ] },
            "Description" : "Use this URL to access your website through CloudFront"
        },

Output the name of the Amazon S3 bucket created for the CloudFront logs to be delivered to.

        "LoggingBucket" : {
            "Value" : { "Ref" : "LoggingBucket" },
            "Description" : "Bucket where CloudFront logs will be written to"
        }
    }
}

Deploying the AWS CloudFormation Template

With the template done, the next step is to deploy to CloudFormation, which will create a stack that represents all the actual AWS resources defined in the template. To deploy this template, I right-click on the template in Solution Explorer and then click Deploy to AWS CloudFormation.

On the first page of the wizard, I’ll enter cloudfrontdemo for the name of the stack that is going to be created. Then I click Next.

On the second page, which is for filling out any parameters defined in the template, I enter the Elastic Beanstalk environment DNS name, then click Next for the review page, and then click Finish.

Now CloudFormation is creating my stack with my distribution. Once the status of the stack transitions to CREATE_COMPLETE, I can check the Outputs tab to get the URL of the CloudFront distribution.


When users hit my application using the distribution's URL, they are directed to the edge location nearest to them. The edge location returns its cached value for the request; if it doesn't have one, it reaches back to my Elastic Beanstalk environment to fetch the latest value.

How long CloudFront caches values is controlled by the Cache-Control headers returned by the origin, in this case my Elastic Beanstalk environment. If there is no Cache-Control header, CloudFront defaults to 24 hours, which will be the case for all my static content such as images and JavaScript. By default, ASP.NET returns a Cache-Control header with a value of private for dynamic content, which indicates that all or part of the response is intended for a single user and must not be cached by a shared cache. This way, the content coming from my views and controllers will always be fresh, whereas my static content can be cached. If the dynamic content can safely be cached for periods of time, I can indicate that to CloudFront using the HttpResponse object. For example, the code snippet below lets CloudFront know that this content can be cached for 30 minutes.

protected void Page_Load(object sender, EventArgs e)
{
    // Allow shared caches (such as CloudFront) to cache this response for 30 minutes.
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(30));
}

For more information on controlling the length of caches, review the CloudFront documentation on expiration.
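The caching decision described above can be sketched in a few lines. This is a simplified, illustrative Python model of how a shared cache picks a TTL; real CloudFront honors additional directives such as s-maxage:

```python
DEFAULT_TTL_SECONDS = 24 * 60 * 60  # CloudFront's default when no header is present

def cache_ttl(cache_control):
    """Simplified sketch of how a shared cache interprets Cache-Control."""
    if cache_control is None:
        return DEFAULT_TTL_SECONDS
    directives = [d.strip() for d in cache_control.lower().split(",")]
    if {"private", "no-cache", "no-store"} & set(directives):
        return 0  # must not be served from a shared cache
    for d in directives:
        if d.startswith("max-age="):
            return int(d.split("=", 1)[1])
    return DEFAULT_TTL_SECONDS

print(cache_ttl(None))                    # 86400: static content with no header
print(cache_ttl("private"))               # 0: ASP.NET's default for dynamic content
print(cache_ttl("public, max-age=1800"))  # 1800: content marked cacheable for 30 minutes
```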

Iterating through Amazon DynamoDB Results

by Jeremy Lindblom, in PHP

The AWS SDK for PHP has a feature called "iterators" that allows you to retrieve an entire result set without manually handling pagination tokens or markers. The iterators in the SDK implement PHP’s Iterator interface, which allows you to easily enumerate or iterate through resources from a result set with foreach.
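The idea behind these iterators can be sketched with a generator that hides a paginated API behind a single iterable. The `scan_page` function below is a hypothetical stand-in for a paginated service call, invented for illustration:

```python
def scan_page(table, start_key=None):
    """Hypothetical paginated API: returns (items, last_evaluated_key)."""
    data = {"MyTable": ["a", "b", "c", "d", "e"]}
    page_size = 2
    start = start_key or 0
    items = data[table][start:start + page_size]
    next_key = start + page_size if start + page_size < len(data[table]) else None
    return items, next_key

def scan_iterator(table):
    """Yield every item, chasing pagination tokens internally."""
    start_key = None
    while True:
        items, start_key = scan_page(table, start_key)
        yield from items
        if start_key is None:
            break

print(list(scan_iterator("MyTable")))  # ['a', 'b', 'c', 'd', 'e']
```

The caller simply loops; the token bookkeeping stays inside the iterator, which is exactly the convenience the SDK's iterators provide.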

The Amazon DynamoDB client has iterators available for all of the operations that return sets of resources, including Query, Scan, BatchGetItem, and ListTables. Let's take a look at how we can use the iterators feature with the DynamoDB client in order to iterate through items in a result.

Specifically, let’s look at an example of how to create and use a Scan iterator. First, let’s create a client object to use throughout the rest of the example code.


<?php

require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = DynamoDbClient::factory(array(
    'key'    => '[aws access key]',
    'secret' => '[aws secret key]',
    'region' => '[aws region]' // (e.g., us-west-2)
));

Next, we’ll create a normal Scan operation without an iterator. A DynamoDB Scan operation is used to do a full table scan on a DynamoDB table. We want to iterate through all the items in the table, so we will just provide the TableName as a parameter to the operation without a ScanFilter.

$result = $client->scan(array(
    'TableName' => 'TheNameOfYourTable',
));

foreach ($result['Items'] as $item) {
    // Do something with the $item
}
The $result variable will contain a Guzzle\Service\Resource\Model object, which is an array-like object structured according to the description in the API documentation for the scan method. However, DynamoDB will only return up to 1 MB of results per Scan operation, so if your table is larger than 1 MB and you want to retrieve the entire result set, you will need to perform subsequent Scan operations that include the ExclusiveStartKey parameter. The following example shows how to do this:

$startKey = array();

do {
    $args = array('TableName' => 'TheNameOfYourTable') + $startKey;
    $result = $client->scan($args);

    foreach ($result['Items'] as $item) {
        // Do something with the $item
    }

    $startKey['ExclusiveStartKey'] = $result['LastEvaluatedKey'];
} while ($startKey['ExclusiveStartKey']);

Using an iterator to perform the Scan operation makes this much simpler.

$iterator = $client->getScanIterator(array(
    'TableName' => 'TheNameOfYourTable'
));

foreach ($iterator as $item) {
    // Do something with the $item
}
Using the iterator allows you to get the full result set, regardless of how many MB of data there are, and still be able to use a simple syntax to iterate through the results. The actual object returned by getScanIterator(), or any get*Iterator() method, is an instance of the Aws\Common\Iterator\AwsResourceIterator class.

Warning: Doing a full table scan on a large table may consume a lot of provisioned throughput and, depending on the table’s size and throughput settings, can take time to complete. Please be cautious before running the examples from this post on your own tables.

Iterators also allow you to put a limit on the maximum number of items you want to iterate through.

$iterator = $client->getScanIterator(array(
    'TableName' => 'TheNameOfYourTable'
), array(
    'limit' => 20
));

$count = 0;
foreach ($iterator as $item) {
    $count++;
}
echo $count;
#> 20

Now that you know how iterators work, let’s work through another example. Let’s say you have a DynamoDB table named "Contacts" with the following simple schema:

  • Id (Number)
  • FirstName (String)
  • LastName (String)

You can display the full name of each contact with the following code:

$contacts = $client->getScanIterator(array(
    'TableName' => 'Contacts'
));

foreach ($contacts as $contact) {
    $firstName = $contact['FirstName']['S'];
    $lastName = $contact['LastName']['S'];
    echo "{$firstName} {$lastName}\n";
}
Item attribute values in your DynamoDB result are keyed by both the attribute name and attribute type. In many cases, especially when using a loosely typed language like PHP, the type of the item attribute may not be important, and a simple associative array might be more convenient. The SDK (as of version 2.4.1) includes the Aws\DynamoDb\Iterator\ItemIterator class, which you can use to decorate a Scan, Query, or BatchGetItem iterator object in order to enumerate the items without the type information.

use Aws\DynamoDb\Iterator\ItemIterator;

$contacts = new ItemIterator($client->getScanIterator(array(
    'TableName' => 'Contacts'
)));

foreach ($contacts as $contact) {
    echo "{$contact['FirstName']} {$contact['LastName']}\n";
}
The ItemIterator also has two more features that can be useful for certain schemas.

  1. If you have attributes of the binary (B) or binary set (BS) type, the ItemIterator will automatically apply base64_decode() to the values for you.
  2. The item will actually be enumerated as a Guzzle\Common\Collection object. A Collection behaves like an array (i.e., it implements the ArrayAccess interface) and has some additional convenience methods. Additionally, it returns null instead of triggering notices for undefined indices. This is useful for working with items, since the NoSQL nature of DynamoDB does not restrict you to following a fixed schema with all of your items.
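What the first feature does is easy to picture. Here is an illustrative Python sketch of a decorating iterator that strips the type codes and base64-decodes binary values; it is a conceptual stand-in, not the PHP ItemIterator itself:

```python
import base64

def plain_items(typed_items):
    """Yield items with DynamoDB-style type codes stripped; base64-decode
    binary (B) and binary set (BS) values."""
    for item in typed_items:
        plain = {}
        for name, typed_value in item.items():
            (type_code, value), = typed_value.items()
            if type_code == "B":
                value = base64.b64decode(value)
            elif type_code == "BS":
                value = [base64.b64decode(v) for v in value]
            plain[name] = value
        yield plain

typed = [{"FirstName": {"S": "Jane"}, "Avatar": {"B": base64.b64encode(b"img").decode()}}]
print(list(plain_items(typed)))  # [{'FirstName': 'Jane', 'Avatar': b'img'}]
```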

We hope that using iterators makes working with the AWS SDK for PHP easier and reduces the amount of code you have to write. You can use the ItemIterator class to get even easier access to the data in your Amazon DynamoDB tables.

AWS SDK ZF2 Module 1.1.0

by Jeremy Lindblom, in PHP

We would like to announce the availability of version 1.1.0 of the AWS SDK ZF2 Module. This version includes a session save handler for Amazon DynamoDB, so that you can use DynamoDB as a session store for your Zend Framework 2 applications.

Web Identity Federation using the AWS SDK for .NET

by Steve Roberts, in .NET

Today's post is about web identity federation. AWS Security Token Service (STS) has introduced this new feature, which allows customers to give constrained, time-limited access to their AWS resources to users who identify themselves via popular third-party identity providers (IdPs). AWS currently supports Amazon, Facebook, and Google as IdPs whose tokens can be used to gain access to AWS resources. This feature enables scenarios where app developers can give their customers controlled access to AWS resources under the developers' own accounts, using the customers' existing accounts with any of the IdPs. With this approach, developers don't need to distribute their AWS credentials with their applications or manage accounts for their customers. If you are interested in using this feature in your Windows Phone or Windows Store apps, check out the Developer Preview of the next version of the AWS SDK for .NET. The Developer Preview adds support for .NET Framework 4.5 and the Windows Phone and WinRT platforms.

We’ll now look at the steps required for you to use web identity federation and a few C# code snippets that will show you how to get temporary access tokens and access AWS resources after authenticating with an IdP. We are using Facebook as the IdP in the sample below. For details on using other IdPs, check this link.

Setting up an IAM role

We start off by creating an IAM role (this is a one-time activity). This is the role that your users will assume when they successfully authenticate through an IdP. When you create this role, you need to specify two policies: the trust policy, which specifies who can assume the role (the trusted entity, or principal), and the access policy, which describes the privileges associated with this role. Below is an example of a trust policy using Facebook as the IdP.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "graph.facebook.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": { "graph.facebook.com:app_id": "MY_APP_ID" }
      }
    }
  ]
}

You’ll need to replace the string MY_APP_ID with your Facebook app ID. This policy allows the users authenticated through Facebook IdP to use the web identity federation API (AssumeRoleWithWebIdentity operation), which grants the users temporary AWS credentials. We also have a condition in the policy that the Facebook app ID should match the specified one. This policy also makes use of policy variables, which are discussed in more detail here.

When creating your IAM role via the AWS Management Console, the Role Creation wizard will walk you through the process of creating the trust policy, but you will need to supply the access policy by hand. Below is the access policy that specifies the privileges associated with this role. In this sample, we will provide access to S3 operations on a bucket designated for the Facebook app. You’ll need to replace MY_APPS_BUCKET_NAME with the bucket name for your app.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::MY_APPS_BUCKET_NAME/${graph.facebook.com:id}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MY_APPS_BUCKET_NAME",
      "Condition": { "StringLike": { "s3:prefix": "${graph.facebook.com:id}/*" } }
    }
  ]
}

The first statement in this policy allows each user of the app to Get, Put, or Delete objects in the specified S3 bucket with a prefix containing their Facebook user ID. This has the effect of creating "folders" for each user under which their objects will reside. The second statement allows users to list only their own objects by enforcing the prefix condition on the specified bucket.
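The net effect of the access policy is simple to state: a user may only touch keys under a "folder" named after their own user ID. An illustrative Python sketch of that check follows; the names are invented, and the real enforcement happens inside IAM, not in your code:

```python
def allowed(user_id, action, key):
    """Sketch of the access policy's net effect: object actions are
    permitted only on keys under the caller's own user-ID prefix."""
    permitted_actions = ("GetObject", "PutObject", "DeleteObject")
    return action in permitted_actions and key.startswith(user_id + "/")

print(allowed("100001", "PutObject", "100001/photo.jpg"))  # True
print(allowed("100001", "GetObject", "100002/photo.jpg"))  # False
```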

Connecting with the identity provider

Now that the IAM role is in place, you can call the STS AssumeRoleWithWebIdentity API, specifying the ARN of the IAM role that you created and a token provided by the IdP. In this example using Facebook, the token would be the access token that Facebook login provides in response to an authentication request (details of how to get the Facebook access token are not covered in this post; this link is a good starting point for understanding the Facebook login process). Here is the C# snippet for calling AssumeRoleWithWebIdentity. Notice that we pass in an AnonymousAWSCredentials object for the credentials parameter when constructing the STS client, as we do not need AWS credentials to make this call.

var stsClient = new AmazonSecurityTokenServiceClient(new AnonymousAWSCredentials());

// Assume the role using the token provided by Facebook.
var assumeRoleResult = stsClient.AssumeRoleWithWebIdentity(new AssumeRoleWithWebIdentityRequest
{
    WebIdentityToken = "FACEBOOK_ACCESS_TOKEN",
    ProviderId = "graph.facebook.com",
    RoleArn = "ROLE_ARN",
    RoleSessionName = "MySession",
    DurationSeconds = 3600
});
Here are the parameters we pass to the API.

  • WebIdentityToken – the token received from the IdP after a user authenticates with it.
  • ProviderId – the name of the IdP. The supported values are www.amazon.com, graph.facebook.com, and accounts.google.com.
  • RoleArn – the Amazon Resource Name of the role the user will assume. The ARN is of the format arn:aws:iam::123456789012:role/RoleForFacebook.
  • RoleSessionName – the name to give to this specific session. This name is used to identify the session.
  • DurationSeconds – the duration for which the security token that is returned will be valid, in seconds. The default value is 1 hour (3600 seconds).

Accessing AWS resources

The AssumeRoleWithWebIdentity API returns a session token that your application can use to access any resource mapped to the role. This is done by constructing a SessionAWSCredentials object and using it for subsequent calls to access resources and perform actions permitted by the assumed role. The sample code below accesses the objects in the app's S3 bucket and performs operations on them. Remember that in the access policy you provided when creating the role, the user was restricted to accessing only S3 objects whose paths are prefixed with their Facebook user ID. Here, assumeRoleResult.SubjectFromWebIdentityToken is the Facebook-provided user ID of the customer, and objectName is the name of the S3 object being created.

// Create an S3 client using session credentials returned by STS
var credentials = assumeRoleResult.Credentials;
SessionAWSCredentials sessionCreds = new SessionAWSCredentials(credentials.AccessKeyId, credentials.SecretAccessKey, credentials.SessionToken);
// s3Config is an S3 client configuration object (for example, holding your region settings) created earlier.
var s3Client = new AmazonS3Client(sessionCreds, s3Config);

var key = string.Format("{0}/{1}", assumeRoleResult.SubjectFromWebIdentityToken, objectName);

// Put an object in the user's "folder".
s3Client.PutObject(new PutObjectRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Key = key,
    ContentBody = content
});

// List objects in the user's "folder".
var listObjectResponse = s3Client.ListObjects(new ListObjectsRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Prefix = assumeRoleResult.SubjectFromWebIdentityToken + "/"
});

// Get the object with the specified key.
var getObjectResponse = s3Client.GetObject(new GetObjectRequest
{
    BucketName = "MY_APPS_BUCKET_NAME",
    Key = key
});


In this post, we saw how web identity federation can be used to give access to AWS resources to customers who authenticate through one of the supported IdPs. We also walked through the steps and code snippets to use this feature.

The DynamoDBMapper, Local Secondary Indexes, and You!

Earlier this year, Amazon DynamoDB released support for local secondary indexes. At that time, the AWS SDK for Java added support for LSIs in both the low-level (AmazonDynamoDBClient) and high-level (DynamoDBMapper) APIs. Since then, I have seen a few questions on how to use the DynamoDBMapper with local secondary indexes. In this post, I will build on the Music Collection sample that is included in the Amazon DynamoDB documentation.

The example table uses a String hash key (Artist), a String range key (SongTitle), and a local secondary index on the AlbumTitle attribute (also a String). I created the table used in this example with the DynamoDB support that is part of the AWS Toolkit for Eclipse, but you could use the code included in the documentation or the AWS Management Console. I also used the Eclipse Toolkit to populate the table with some sample data. Next, I created a POJO to represent an item in the MusicCollection table. The code for MusicCollectionItem is shown below.

@DynamoDBTable(tableName = "MusicCollection")
public class MusicCollectionItem {

    private String artist;
    private String songTitle;
    private String albumTitle;
    private String genre;
    private String year;

    @DynamoDBHashKey(attributeName = "Artist")
    public String getArtist() { return artist; }
    public void setArtist(String artist) { this.artist = artist; }

    @DynamoDBRangeKey(attributeName = "SongTitle")
    public String getSongTitle() { return songTitle; }
    public void setSongTitle(String songTitle) { this.songTitle = songTitle; }

    @DynamoDBIndexRangeKey(attributeName = "AlbumTitle")
    public String getAlbumTitle() { return albumTitle; }
    public void setAlbumTitle(String albumTitle) { this.albumTitle = albumTitle; }

    @DynamoDBAttribute(attributeName = "Genre")
    public String getGenre() { return genre; }
    public void setGenre(String genre) { this.genre = genre; }

    @DynamoDBAttribute(attributeName = "Year")
    public String getYear() { return year; }
    public void setYear(String year) { this.year = year; }
}

As you can see, MusicCollectionItem has the hash key and range key annotations, but also a new annotation DynamoDBIndexRangeKey. You can find the documentation for that annotation here. The DynamoDBIndexRangeKey marks the property as an alternate range key to be used in a local secondary index. Since Amazon DynamoDB can support up to five local secondary indexes, I can also have up to five attributes annotated with the DynamoDBIndexRangeKey. Also note in the code above, since the documentation sample uses PascalCase, I needed to include the attributeName='X' in each of the annotations. If you were starting from scratch, you could make this code simpler by using attribute names that match your instance variable names.

So now that you have both a table and a corresponding POJO using a local secondary index, how do you use it with the DynamoDBMapper? Using a local secondary index with the mapper is pretty straightforward. You create the mapper the same way as before:

dynamoDB = Region.getRegion(Regions.US_WEST_2)
           .createClient(AmazonDynamoDBClient.class, new ClasspathPropertiesFileCredentialsProvider(), null);
mapper = new DynamoDBMapper(dynamoDB);

Next, you can query the range key in the same manner as you would a table without a local secondary index:

String artist = "The Okee Dokee Brothers";
MusicCollectionItem musicKey = new MusicCollectionItem();
musicKey.setArtist(artist);
DynamoDBQueryExpression<MusicCollectionItem> queryExpression = new DynamoDBQueryExpression<MusicCollectionItem>()
    .withHashKeyValues(musicKey);
List<MusicCollectionItem> myCollection = mapper.query(MusicCollectionItem.class, queryExpression);

This code looks up my kids' new favorite artist and returns all the song titles that are in my Amazon DynamoDB table. I could add a Condition to limit the song titles, but I wanted to get a list of all of them.

But what if I want to know which songs are on The Okee Dokee Brothers' latest album, Can You Canoe? Luckily, I have a local secondary index on the AlbumTitle attribute. Before local secondary indexes, I could only do a Scan operation, which would have scanned the entire table; with a local secondary index, I can easily do a Query operation. The code for using the index is:

rangeKeyCondition = new Condition()
     .withComparisonOperator(ComparisonOperator.EQ)
     .withAttributeValueList(new AttributeValue().withS("Can You Canoe?"));
queryExpression = new DynamoDBQueryExpression<MusicCollectionItem>()
     .withHashKeyValues(musicKey)
     .withRangeKeyCondition("AlbumTitle", rangeKeyCondition);
myCollection = mapper.query(MusicCollectionItem.class, queryExpression);

As you can see, doing a query on a local secondary index with the DynamoDBMapper is exactly the same as doing a range key query.
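To see why the index helps, here is a toy Python model of a table with a hash + range primary key and one alternate range key. Querying by album is a direct lookup within the hash key rather than a full-table scan; the class and names are illustrative, not an AWS API:

```python
from collections import defaultdict

class MusicTable:
    """Toy model of a hash + range table with one local secondary index."""

    def __init__(self):
        self.primary = defaultdict(dict)      # Artist -> SongTitle -> item
        self.album_index = defaultdict(list)  # (Artist, AlbumTitle) -> items

    def put(self, item):
        self.primary[item["Artist"]][item["SongTitle"]] = item
        self.album_index[(item["Artist"], item["AlbumTitle"])].append(item)

    def query_by_song(self, artist):
        return list(self.primary[artist].values())

    def query_by_album(self, artist, album):
        # Same hash key, alternate range key: a direct lookup, no table scan.
        return self.album_index[(artist, album)]

table = MusicTable()
table.put({"Artist": "A", "SongTitle": "Song 1", "AlbumTitle": "X"})
table.put({"Artist": "A", "SongTitle": "Song 2", "AlbumTitle": "Y"})
print(len(table.query_by_song("A")))        # 2
print(len(table.query_by_album("A", "X")))  # 1
```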

Now that I have shown how easy it is to use a local secondary index with the DynamoDBMapper, how will you use them? Let us know in the comments!