AWS Developer Blog

Metric Configuration in AWS SDK for Java

by Hanson Char | in Java

As we mentioned in an earlier blog post, you can now enable the automatic generation of performance metrics when using the AWS SDK for Java and have them automatically uploaded to Amazon CloudWatch for monitoring purposes. Sometimes, however, you may want to generate more fine-grained metrics, such as per-host and per-JVM metrics, that are not enabled by default. Or you may want to customize the namespace for the metrics uploaded to Amazon CloudWatch to something more meaningful for your use case. This is where the metric configuration options can help.

What options are available?

Here is a quick summary of five metric configuration options that you may find useful:

  • metricNameSpace: the metric namespace. Default: "AWSSDK/Java".
  • includePerHostMetrics: if specified, additional metrics are generated on a per-host basis. Default: per-host metrics are disabled.
  • jvmMetricName: if specified, additional metrics are generated on a per-JVM basis, using the given value as the JVM's name. Default: per-JVM metrics are disabled.
  • credentialFile: specifies an AWS credential property file to use when uploading metrics to Amazon CloudWatch. Default: the DefaultAWSCredentialsProviderChain is used.
  • cloudwatchRegion: the Amazon CloudWatch region to which the metrics are uploaded. Default: "us-east-1".

Are there any sample metric configurations?

Here are three sample metric configurations via system properties.

Sample 1: How to enable per-host metrics

Suppose you are running the same application on multiple hosts, and you want to

  1. specify your own metric namespace of "MyApp"
  2. generate additional metrics on a per-host basis
  3. make use of an AWS credential property file that resides at "/path/cred.property" on each host
  4. upload the metrics to the Amazon CloudWatch region "us-west-2"

You can do so by specifying the system property:

-Dcom.amazonaws.sdk.enableDefaultMetrics=metricNameSpace=MyApp,
  includePerHostMetrics,credentialFile=/path/cred.property,
  cloudwatchRegion=us-west-2

(All on a single line, with no spaces.)

That's it. Per-host metrics will be enabled once the JVM is started.
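For reference, here is how that same property might be passed when launching a typical Java application from the command line (a sketch; myapp.jar stands in for your own application):

java -Dcom.amazonaws.sdk.enableDefaultMetrics=metricNameSpace=MyApp,includePerHostMetrics,credentialFile=/path/cred.property,cloudwatchRegion=us-west-2 -jar myapp.jar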

Sample 2: How to enable both per-host and per-JVM metrics

This is similar to Sample 1, but suppose your application runs two JVM instances on each host, both accessing AWS. Now you may want to generate metrics not only on a per-host basis, but also on a per-JVM basis. You can do so by giving the two JVMs different names. Let's name the first JVM "Gamma" and the second "Delta". For the first JVM, this translates to specifying the system property:

-Dcom.amazonaws.sdk.enableDefaultMetrics=metricNameSpace=MyApp,
  includePerHostMetrics,credentialFile=/path/cred.property,
  cloudwatchRegion=us-west-2,jvmMetricName=Gamma

Similarly, for the second JVM:

-Dcom.amazonaws.sdk.enableDefaultMetrics=metricNameSpace=MyApp,
  includePerHostMetrics,credentialFile=/path/cred.property,
  cloudwatchRegion=us-west-2,jvmMetricName=Delta

(All on a single line, with no spaces.)

Note the two specifications above differ only in the value of jvmMetricName. You should then be able to visualize the metrics aggregated at the respective levels in the Amazon CloudWatch console.

Sample 3: How to enable per-JVM but not per-host metrics

This is almost the same as Sample 2. All you need to do is remove the includePerHostMetrics option, like so for the first JVM:

-Dcom.amazonaws.sdk.enableDefaultMetrics=metricNameSpace=MyApp,
  credentialFile=/path/cred.property,
  cloudwatchRegion=us-west-2,jvmMetricName=Gamma 

For the second JVM:

-Dcom.amazonaws.sdk.enableDefaultMetrics=metricNameSpace=MyApp,
  credentialFile=/path/cred.property,
  cloudwatchRegion=us-west-2,jvmMetricName=Delta 

(All on a single line, with no spaces.)

More details are available in the javadoc for AwsSdkMetrics and in the metric package summary. That's all for now. We hope you find these options useful. Happy monitoring!

Creating Amazon DynamoDB Tables with PowerShell

by Steve Roberts | in .NET

Version 2.0 of the AWS Tools for Windows PowerShell contains new cmdlets that allow you to manage tables in Amazon DynamoDB. The cmdlets all share the same noun prefix, DDB, and can be discovered using Get-Command:

PS C:\> Get-Command -Module AWSPowerShell -Noun DDB*

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Cmdlet          Add-DDBIndexSchema                                 AWSPowerShell
Cmdlet          Add-DDBKeySchema                                   AWSPowerShell
Cmdlet          Get-DDBTable                                       AWSPowerShell
Cmdlet          Get-DDBTables                                      AWSPowerShell
Cmdlet          New-DDBTable                                       AWSPowerShell
Cmdlet          New-DDBTableSchema                                 AWSPowerShell
Cmdlet          Remove-DDBTable                                    AWSPowerShell
Cmdlet          Update-DDBTable                                    AWSPowerShell

This post looks at the New-DDBTable cmdlet and the schema builder cmdlets — New-DDBTableSchema, Add-DDBKeySchema, and Add-DDBIndexSchema — that you can use in a pipeline to make table definition and creation simple and fluent.

Defining Schema

The schema builder cmdlets allow you to define the schema for your table and can be used in a PowerShell pipeline to incrementally refine and extend the schema you require. The schema object is then passed to New-DDBTable (either in the pipeline or as the value for the -Schema parameter) to create the table you need. Behind the scenes, these cmdlets and New-DDBTable infer and wire up the correct settings for your table with respect to hash keys (on the table itself or in the indexes) without you needing to manually add this information.

Let’s take a look at the syntax for the schema builder cmdlets (parameters inside [] are optional; for parameters that accept a range of values, the allowable values are shown in {} separated by |):

# takes no parameters, returns a new Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema object
New-DDBTableSchema

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
Add-DDBKeySchema -KeyName "keyname" 
                 -KeyDataType { "N" | "S" | "B" }
                 [ -KeyType { "hash" | "range" } ]
                 -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
Add-DDBIndexSchema -IndexName "indexName"
                   -RangeKeyName "keyName"
                   -RangeKeyDataType { "N" | "S" | "B" }
                   [ -ProjectionType { "keys_only" | "include" | "all" } ]
                   [ -NonKeyAttribute @( "attrib1", "attrib2", ... ) ]
                   -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema 

Not all of the parameters for each cmdlet are required as the cmdlets accept certain defaults. For example, the default key type for Add-DDBKeySchema is "hash". For Add-DDBIndexSchema, -ProjectionType is optional (and -NonKeyAttribute is needed only if -ProjectionType is set to "include"). If you’re familiar with the Amazon DynamoDB API, you’ll probably recognize the type codes used with -KeyDataType and -RangeKeyDataType. You can find the API reference for the CreateTable operation here.

Using the Create a Table example shown on the CreateTable API reference page, here’s how we can easily define the schema using these cmdlets in a pipeline:

PS C:\> New-DDBTableSchema `
            | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
            | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S" `
            | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                 -RangeKeyName "LastPostDateTime" `
                                 -RangeKeyDataType "S" `
                                 -ProjectionType "keys_only"

AttributeSchema                  KeySchema                        LocalSecondaryIndexSchema        GlobalSecondaryIndexSchema
---------------                  ---------                        -------------------------        --------------------------
{ForumName, Subject, LastPost... {ForumName, Subject}             {LastPostIndex}                  {}

PS C:\>

As you can see from the output, the cmdlets took the empty schema object created by New-DDBTableSchema and extended it with the data that New-DDBTable will need. One thing to note is that, apart from New-DDBTableSchema, the cmdlets can be run in any order, any number of times. This gives you complete freedom to experiment at the console without needing to define all the keys up front and then define the index schema and so on. You can also clone the schema object and stash away a basic template that you can then further refine for multiple different tables (the Clone() method on the schema object makes a deep copy of the data it contains).
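For example, here is a sketch of that template approach (the key name, schema variable names, and extra range key are made up for illustration):

# Build a base schema once...
$template = New-DDBTableSchema | Add-DDBKeySchema -KeyName "Id" -KeyDataType "S"

# ...then deep-copy it and refine each copy independently
$usersSchema  = $template.Clone()
$ordersSchema = $template.Clone() | Add-DDBKeySchema -KeyName "OrderDate" -KeyType "range" -KeyDataType "S"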

Creating the Table

Once the schema is defined, it can be passed to New-DDBTable to request that the table be created. The schema can be passed into New-DDBTable using a pipeline or by passing the schema object to the -Schema parameter. Here is the syntax for New-DDBTable:

# The schema definition object may be piped to the cmdlet or passed as the value for -Schema
New-DDBTable -TableName "tableName"
             -Schema Amazon.PowerShell.Cmdlets.DDB.Model.TableSchema 
             -ReadCapacity  value
             -WriteCapacity value

As you can see, it’s pretty simple. To use the previous example schema definition—but this time actually create the table—we can extend our pipeline like this:

PS C:\> New-DDBTableSchema `
            | Add-DDBKeySchema -KeyName "ForumName" -KeyDataType "S" `
            | Add-DDBKeySchema -KeyName "Subject" -KeyType "range" -KeyDataType "S" `
            | Add-DDBIndexSchema -IndexName "LastPostIndex" `
                                 -RangeKeyName "LastPostDateTime" `
                                 -RangeKeyDataType "S" `
                                 -ProjectionType "keys_only" `
            | New-DDBTable "Threads" -ReadCapacity 10 -WriteCapacity 5

AttributeDefinitions : {ForumName, LastPostDateTime, Subject}
TableName            : Threads
KeySchema            : {ForumName, Subject}
TableStatus          : CREATING
CreationDateTime     : 11/29/2013 5:47:31 PM
ProvisionedThroughput: Amazon.DynamoDBv2.Model.ProvisionedThroughputDescription
TableSizeBytes       : 0
ItemCount            : 0
LocalSecondaryIndexes: {LastPostIndex}
GlobalSecondaryIndexes: {}

PS C:\>

By default, Add-DDBIndexSchema constructs local secondary index entries. To have the cmdlet construct a global secondary index schema entry instead, you simply add the -Global switch plus the -ReadCapacity and -WriteCapacity provisioning values the index requires. You can also optionally specify -HashKeyName and -HashKeyDataType instead of, or in addition to, the range key parameters:

    ...
    | Add-DDBIndexSchema -Global `
                         -IndexName "myGlobalIndex" `
                         -HashKeyName "hashKeyName" `
                         -HashKeyDataType "N" `
                         -RangeKeyName "rangeKeyName" `
                         -RangeKeyDataType "S" `
                         -ProjectionType "keys_only" `
                         -ReadCapacity 5 `
                         -WriteCapacity 5 `
                         ...

Let us know in the comments what you think about the fluent-style cmdlet piping, or how well these DynamoDB cmdlets fit your scripting needs.

Using IAM Users (Access Key Management for .NET Applications – Part 2)

by Milind Gokarn | in .NET

In the previous post about access key management, we covered the different methods to provide AWS access keys to your .NET applications. We also talked about a few best practices, one of which is to use IAM users to access AWS instead of the root access keys of your AWS account. In this post, we’ll see how to create IAM users and set up different options for them, using the AWS SDK for .NET.

The root access keys associated with your AWS account should be safely guarded, as they have full privileges over AWS resources belonging to your account and access to your billing information. Therefore, instead of using the root access keys in applications or providing them to your team/organization, you should create IAM users for individuals or applications. IAM users can make API calls, use the AWS Management Console, and have their access limited by IAM policies. Let’s see the steps involved to start using IAM users.

Create an IAM user

For this example, we are going to use the following policy, which gives access to a specific bucket. You’ll need to replace BUCKET_NAME with the name of the bucket you want to use.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket","s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject","s3:GetObject","s3:DeleteObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}

In cases where you are creating a policy on the fly or you want a strongly typed mechanism to create policies, you can use the Policy class found in the Amazon.Auth.AccessControlPolicy namespace to construct a policy. For more details, check Creating Access Policies in Code.
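As a rough sketch only (the statement below covers just one part of the JSON document above, the bucket ARN is a placeholder, and the class names come from the Amazon.Auth.AccessControlPolicy namespace), building a policy in code might look like this:

var bucketArn = "arn:aws:s3:::BUCKET_NAME";
var policy = new Policy
{
  Statements = new List<Statement>
  {
    new Statement(Statement.StatementEffect.Allow)
    {
      Actions = new List<ActionIdentifier> { new ActionIdentifier("s3:ListBucket") },
      Resources = new List<Resource> { new Resource(bucketArn) }
    }
  }
};

// The resulting JSON string can be used wherever a policy document is expected,
// for example as the PolicyDocument value in the calls below.
string s3AccessPolicy = policy.ToJson();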

var iamClient = new AmazonIdentityManagementServiceClient(ACCESS_KEY, SECRET_KEY, RegionEndpoint.USWest2);

// Create an IAM user
var userName = "Alice";
iamClient.CreateUser(new CreateUserRequest
{
  UserName = userName,
  Path = "/developers/"
});

// Add a policy to the user
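// allowS3BucketAccess is the policy name (for example, "AllowS3BucketAccess"), and
// s3AccessPolicy is the JSON policy document shown above; both are plain strings.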
iamClient.PutUserPolicy(new PutUserPolicyRequest
{
  UserName = userName,
  PolicyName = allowS3BucketAccess,
  PolicyDocument = s3AccessPolicy
});

The Path parameter in the CreateUser call is optional and can be used to give the user a path. In this example, the Amazon Resource Name (ARN) for the user created above will be arn:aws:iam::account-number-without-hyphens:user/developers/Alice. The path is part of the user's ARN and is a simple but powerful mechanism for organizing users and creating policies that apply to a subset of your users.

Use IAM groups

Instead of assigning permissions to an IAM user, we can create an IAM group with the relevant permissions and then add the user to the group. The group’s permissions are then applicable to all users belonging to it. With this approach, we don’t have to manage permissions for each user.

// Create an IAM group
var groupName = "DevGroup";
iamClient.CreateGroup(new CreateGroupRequest
{
  GroupName = groupName
});

// Add a policy to the group
iamClient.PutGroupPolicy(new PutGroupPolicyRequest
{
  GroupName = groupName,
  PolicyName = allowS3BucketAccess,
  PolicyDocument = s3AccessPolicy
});

// Add the user to the group
iamClient.AddUserToGroup(new AddUserToGroupRequest
{
  UserName = userName,
  GroupName = groupName
});

The preceding code creates an IAM group, assigns a policy, and then adds a user to the group. If you are wondering how the permissions are evaluated when a group has multiple policies or a user belongs to multiple groups, IAM Policy Evaluation Logic explains this in detail.

Generate access key for an IAM user

To access AWS using the API or command line interface (CLI), the IAM user needs an access key that consists of the access key ID and secret access key.

// Create an access key for the IAM user
AccessKey accessKey = iamClient.CreateAccessKey(new CreateAccessKeyRequest
{
  UserName = userName
}).AccessKey;

The CreateAccessKey method returns an instance of the AccessKey class that contains the access key ID (AccessKey.AccessKeyId) and the secret access key (AccessKey.SecretAccessKey). You will need to save the secret key or securely distribute it to the user, since you will not be able to retrieve it again. If you lose it, you can always create a new access key and delete the old one (using the DeleteAccessKey method).
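For example, deleting a lost or compromised key and issuing a replacement might look like this (a sketch that reuses the iamClient, userName, and accessKey variables from above):

// Delete the old access key
iamClient.DeleteAccessKey(new DeleteAccessKeyRequest
{
  UserName = userName,
  AccessKeyId = accessKey.AccessKeyId
});

// Create a replacement access key
AccessKey newAccessKey = iamClient.CreateAccessKey(new CreateAccessKeyRequest
{
  UserName = userName
}).AccessKey;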

Enable access to the AWS Management Console

IAM users can access the AWS Management Console to administer the resources to which they have permissions. To enable access to the AWS Management Console, you need to create a login profile for the user and then provide them with the URL of your account’s sign-in page.

// Allow the IAM user to access AWS Console
iamClient.CreateLoginProfile(new CreateLoginProfileRequest
{
  UserName = userName,
  Password = "" // Put the user's console password here.
});

In this post we saw how to use IAM users for accessing AWS instead of the root access keys of your AWS account. In the next post in this series, we’ll talk about rotating credentials.

Configuring DynamoDB Tables for Development and Production

by Norm Johanson | in .NET

The Object Persistence Model API in the SDK uses annotated classes to tell the SDK which table to store objects in. For example, the DynamoDBTable attribute on the Users class below tells the SDK to store instances of the Users class in the "Users" table.

[DynamoDBTable("Users")]
public class Users
{
    [DynamoDBHashKey]
    public string Id { get; set; }

    public string FirstName { get; set; }

    public string LastName { get; set; }
	
    ...
}

A common scenario is to have a different set of tables for production and development. To handle this scenario, the SDK supports setting a table name prefix in the application's app.config file with the AWS.DynamoDBContext.TableNamePrefix app setting. The following app.config file indicates that all the tables used by the Object Persistence Model should have the "Dev_" prefix.

<appSettings>
  ...
  <add key="AWSRegion" value="us-west-2" />
  <add key="AWS.DynamoDBContext.TableNamePrefix" value="Dev_"/>
  ...
</appSettings>

The prefix can also be modified at run time by setting either the global property AWSConfigs.DynamoDBContextTableNamePrefix or the TableNamePrefix property for the DynamoDBContextConfig used to store the objects.
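For example, a minimal sketch of both approaches (the client and context construction shown here is just one way to wire things up):

// Globally, for all contexts created afterwards
AWSConfigs.DynamoDBContextTableNamePrefix = "Dev_";

// Or per context, via DynamoDBContextConfig
var config = new DynamoDBContextConfig { TableNamePrefix = "Dev_" };
var context = new DynamoDBContext(new AmazonDynamoDBClient(), config);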

 

New, Simplified Method Forms in the AWS SDK for Java

by Jason Fulghum | in Java

We’re always looking for new ways to improve the tools our team builds, like the AWS SDK for Java and the AWS Toolkit for Eclipse. Sometimes those improvements come as brand new functionality, such as Amazon CloudWatch Metrics for the AWS SDK for Java, and sometimes they’re small tweaks to make the tools faster or easier to use.

Today, I want to show off a small tweak that we added recently to make invoking requests with the SDK a little bit easier and more concise. We’ve tweaked the AmazonDynamoDBClient and AmazonSNSClient classes so that lots of common operations are even easier and more succinct to invoke.

For some of the most commonly used operations with Amazon DynamoDB and Amazon SNS, you can now skip constructing request objects and just pass in your request parameters directly. The request objects are still useful in many cases, such as providing access to less common parameters, or allowing you to build your request parameters in one part of your code and pass the request objects to another part of your code to be executed. These new method forms simply provide an alternate way to quickly invoke operations with common parameter combinations.

Here’s an example of using a few of the new, simplified method forms in the Amazon SNS client:

AmazonSNSClient sns = new AmazonSNSClient(myCredentials);
String topicArn = sns.createTopic("myNewTopic").getTopicArn();
String subscriptionArn = sns.subscribe(topicArn, "email", "me@email.com").getSubscriptionArn();
sns.publish(topicArn, "hello SNS world!");
sns.unsubscribe(subscriptionArn);
sns.deleteTopic(topicArn);

We’ll be adding more methods like these to other service clients in the SDK, too. What clients would you like to see updated next? Are there other common request parameter combinations that you’d like us to add?

Using AWS CloudTrail in PHP – Part 1

by Jeremy Lindblom | in PHP

AWS CloudTrail is a new service that was announced at AWS re:Invent 2013.

CloudTrail provides a history of AWS API calls for your account, delivered as log files to one of your Amazon S3 buckets. The AWS API call history includes API calls made via the AWS Management Console, AWS SDKs, command line interface, and higher-level AWS services like AWS CloudFormation. Using CloudTrail can help you with security analysis, resource change tracking, and compliance auditing.

Today, I want to show you how to create a trail and start logging API calls using the AWS SDK for PHP. The CloudTrail client is available as of version 2.4.10 of the SDK.

Creating a trail for logging

The easiest way to create a trail is through the AWS Management Console (see Creating and Updating Your Trail), but if you need to create a trail through your PHP code (e.g., automation), you can use the SDK.

Setting up the log file destination

CloudTrail creates JSON-formatted log files containing your AWS API call history and stores them in the Amazon S3 bucket you choose. Before you set up your trail, you must first set up an Amazon S3 bucket with an appropriate bucket policy.

First, create an Amazon S3 client object (e.g., $s3Client).
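For example (a sketch; the credential and region values are placeholders, and you can omit the key and secret to let the SDK use its default credential resolution):

use Aws\S3\S3Client;

$s3Client = S3Client::factory(array(
    'key'    => 'YOUR_AWS_ACCESS_KEY_ID',
    'secret' => 'YOUR_AWS_SECRET_KEY',
    'region' => 'us-east-1',
));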

Creating the Amazon S3 bucket

Use the Amazon S3 client to create a bucket. (Remember, bucket names must be globally unique.)

$bucket = 'YOUR_BUCKET_NAME';

$s3Client->createBucket(array(
    'Bucket' => $bucket
));

$s3Client->waitUntilBucketExists(array(
    'Bucket' => $bucket
));

Creating the bucket policy

Once the bucket is available, you need to create a bucket policy. This policy should grant the CloudTrail service the access it needs to upload log files into your bucket. The CloudTrail documentation has an example of a bucket policy that we will use in the next code example. You will need to substitute a few of your own values into the example policy, including:

  • Bucket Name: The name of the Amazon S3 bucket where your log files should be delivered.
  • Account Number: This is your AWS account ID, which is the 12-digit number found on the Account Identifiers section of the AWS Security Credentials page.
  • Log File Prefix: An optional key prefix you specify when you create a trail that is prepended to the object keys of your log files.

The following code prepares the policy document and applies the policy to the bucket.

$prefix = 'YOUR_LOG_FILE_PREFIX';
$account = 'YOUR_AWS_ACCOUNT_ID';
$policy = <<<POLICY
"Version": "2012-10-17",
"Statement": [
  {
    "Sid": "AWSCloudTrailAclCheck20131101",
    "Effect": "Allow",
    "Principal": {
      "AWS":[
        "arn:aws:iam::086441151436:root",
        "arn:aws:iam::113285607260:root"
      ]
    },
    "Action": "s3:GetBucketAcl",
    "Resource": "arn:aws:s3:::{$bucket}"
  },
  {
    "Sid": "AWSCloudTrailWrite20131101",
    "Effect": "Allow",
    "Principal": {
      "AWS": [
        "arn:aws:iam::086441151436:root",
        "arn:aws:iam::113285607260:root"
      ]
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::{$bucket}/{$prefix}/AWSLogs/{$account}/*",
    "Condition": {
      "StringEquals": {
        "s3:x-amz-acl": "bucket-owner-full-control"
      }
    }
  }
]
}
POLICY;

$s3Client->putBucketPolicy(array(
    'Bucket' => $bucket,
    'Policy' => $policy,
));

Creating the trail

Now that the bucket has been set up, you can create a trail. Instantiate a CloudTrail client object, then use the createTrail() method of the client to create the trail.

use Aws\CloudTrail\CloudTrailClient;

$cloudTrailClient = CloudTrailClient::factory(array(
    'key'    => 'YOUR_AWS_ACCESS_KEY_ID',
    'secret' => 'YOUR_AWS_SECRET_KEY',
    'region' => 'us-east-1', // or us-west-2
));

$trailName = 'YOUR_TRAIL_NAME';
$cloudTrailClient->createTrail(array(
    'Name'         => $trailName,
    'S3BucketName' => $bucket,
));

NOTE: Currently, the CloudTrail service allows only one trail at a time.

Start logging

After creating a trail, you can use the SDK to turn on logging via the startLogging() method.

$cloudTrailClient->startLogging(array(
    'Name' => $trailName
));

Your log files are published to your bucket approximately every 5 minutes and contain JSON-formatted data about your AWS API calls. Log files written to your bucket will persist forever by default. However, you can alter your bucket’s lifecycle rules to automatically delete files after a certain retention period or archive them to Amazon Glacier.

Turning it off

If you want to turn off logging, you can use the stopLogging() method.

$cloudTrailClient->stopLogging(array(
    'Name' => $trailName
));

Disabling logging does not delete your trail or log files. You can resume logging by calling the startLogging() method.

In some cases (e.g., during testing) you may want to remove your trail and log files completely. You can delete your trail and bucket using the SDK as well.

Deleting the trail

To delete a trail, use the deleteTrail() method.

$cloudTrailClient->deleteTrail(array(
    'Name' => $trailName
));

Deleting your log files and bucket

To delete the log files and your bucket, you can use the Amazon S3 client.

// Delete all the files in the bucket
$s3Client->clearBucket($bucket);

// Delete the bucket
$s3Client->deleteBucket(array(
    'Bucket' => $bucket
));

Look for Part 2

In the next part of Using AWS CloudTrail in PHP, I’ll show you how you can read your log files and iterate over individual log records using the SDK.

In the meantime, check out the AWS CloudTrail User Guide to learn more about the service.

Using SimpleCov with Multiple Test Suites

by Trevor Rowe | in Ruby

It can be helpful to generate coverage reports when testing software. While coverage reports do not guarantee well-tested software, they can highlight where test coverage is lacking. This is especially true for legacy or untested projects.

Recently I ran into a situation where I wanted to generate a coverage report, but the project used multiple test frameworks. One framework is used for unit testing, the other for integration testing. Fortunately, SimpleCov makes it easy to merge coverage reports.

Basic SimpleCov Usage

To use SimpleCov, you normally require simplecov and then call SimpleCov.start with configuration options. Here is a typical configuration from an RSpec test helper:

require 'simplecov'
SimpleCov.start do
  # configure SimpleCov
end

require 'rspec'
require 'my-library'

With multiple test frameworks, you might find yourself wanting to duplicate this configuration code. Instead, move the shared SimpleCov configuration to a .simplecov file, which SimpleCov loads automatically and evaluates as Ruby. I like to run SimpleCov conditionally when testing, based on an environment variable. Here is an example:

# in .simplecov
if ENV['COVERAGE']
  SimpleCov.start do
    # configure SimpleCov
  end
end

With this file, my test helpers are reduced to:

require 'simplecov'
require 'rspec'
require 'my-library'
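If the unit and integration suites run as separate processes, SimpleCov can still merge their results into a single report, as long as each suite identifies itself with a distinct command name. Here is a sketch of what that could look like in the respective test helpers (the names are arbitrary):

# in the unit (RSpec) test helper
require 'simplecov'
SimpleCov.command_name 'Unit Tests'

# in the integration test helper
require 'simplecov'
SimpleCov.command_name 'Integration Tests'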

Lastly, when using Rake, I like to make it possible to generate coverage reports for unit tests, integration tests, or both. I accomplish this by grouping my test tasks:

desc 'Runs unit tests'
task 'test:unit' do
   # ...
end

desc 'Runs integration tests'
task 'test:integration' do
   # ...
end

desc 'Runs unit and integration tests'
task 'test' => ['test:unit', 'test:integration']

desc 'Generate coverage report'
task 'coverage' do
  ENV['COVERAGE'] = 'true'
  rm_rf "coverage/"
  task = Rake::Task['test']
  task.reenable
  task.invoke
end

Good luck and have fun testing!

Credentials Best Practices

by David Murray | in Java

Introduction

Your Amazon Web Services account is (we hope!) pretty important to you. Whether you’re running mission-critical applications that need to be protected from malicious interlopers, or you simply want to ensure that only the people you specify can bill resources to your AWS account, it is vital that you keep your account and its associated AWS resources secure.
 
The client libraries in the AWS SDK for Java almost universally require you to specify a set of AWS security credentials to use when making service requests. These credentials prove that your application has permission to perform any requested actions, keeping your account safe from attackers who do not know these credentials.
 
This blog post is the first in a series in which we will discuss best practices around managing and securing your security credentials and explore features of the AWS SDK for Java that help you do so easily. In this post, I’ll quickly lay out an easy way to securely use credentials with the AWS SDK for Java from applications running on Amazon EC2. In subsequent posts, we’ll dive a bit deeper into some of the principles that make this approach secure and discuss some of the more advanced scenarios enabled by the SDK.
 

Identity and Access Management

Credentials for programmatic access to the AWS APIs come in the form of an Access Key ID (or "access key") and a Secret Access Key (or "secret key"). Similar to the familiar concepts of a username and password, the access key identifies who is making the call, while the secret key proves that the caller is actually who they say they are.
 
Identity — who your callers are — and access management — what those callers are allowed to do — for your account are managed through the appropriately-named AWS Identity and Access Management (IAM) service. This service lets you define "users" (representing either actual human users or autonomous software applications), configure policies granting different permissions to individual users or groups of users, and manage sets of credentials that can be used to authenticate as a particular user.
 

Configuring Credentials Using the AWS SDK for Java

Alright, let’s see some code! In the AWS SDK for Java, the client objects you use to interact with individual services each get credentials to sign their requests using an implementation of the AWSCredentialsProvider interface. When the client makes a request, it internally calls the getCredentials() method on its AWSCredentialsProvider instance to retrieve an appropriate set of credentials to use to sign the request. The SDK provides a number of different implementations of this interface, which attempt to retrieve credentials from various different places.
 
To set the credentials provider used by a client instance, just pass it to the constructor:
 
AWSCredentialsProvider provider = ...;

// This client will authenticate using credentials from the given provider.
AmazonDynamoDBClient client = new AmazonDynamoDBClient(provider);
 

IAM Roles for Amazon EC2

If your application runs on Amazon EC2 instances, a really great way to get a set of credentials to use is via IAM Roles for Amazon EC2. A "role" is an IAM concept similar to a user, but without permanent credentials associated with it. Instead, a user or application can be given permission to "assume" a role, retrieving a temporary set of credentials that allow it to perform actions that the role’s policy allows.
 
When launching an EC2 instance, you can choose to associate it with an IAM role. Any application running on that EC2 instance is then allowed to assume the associated role. Amazon EC2 handles all the legwork of securely authenticating instances to the IAM service to assume the role and periodically refreshing the retrieved role credentials, keeping your application super-secure with almost no work on your part.
 
Using credentials for a role associated with your EC2 Instance from the AWS SDK for Java is super easy — just use the InstanceProfileCredentialsProvider:
 
AWSCredentialsProvider provider = new InstanceProfileCredentialsProvider();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(provider);
 
In fact, if you use one of the client constructors that do not take an AWSCredentialsProvider, the client will also use an InstanceProfileCredentialsProvider (after first checking for overrides specified via environment variables or Java system properties).
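For example, creating a client with the no-argument constructor relies on that default behavior:

// No credentials provider specified: the SDK falls back to the default behavior
// described above, including InstanceProfileCredentialsProvider when running on EC2.
AmazonDynamoDBClient client = new AmazonDynamoDBClient();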
 

Associating an EC2 Instance with a Role

It's really easy to create a role and associate it with your EC2 instances via the AWS Management Console (there's a nice walkthrough here), but if you want to do so programmatically, that's no sweat either. First, we'll create a new role. We configure it to allow Amazon EC2 to assume the role on our instances' behalf, and give the role permission to access a particular S3 bucket (for the sake of example). This probably only needs to be done once when initially setting up your application, after which we can keep reusing the same role for any new EC2 instances we launch.
 
// Requires credentials for an administrative user.

AmazonIdentityManagement iam = new AmazonIdentityManagementClient(...);

iam.createRole(new CreateRoleRequest()
    .withRoleName("test-role")
    .withAssumeRolePolicyDocument(new Policy()
        .withStatements(new Statement(Effect.Allow)
            .withActions(STSActions.AssumeRole)
            .withPrincipals(new Principal(Services.AmazonEC2)))
        .toJson()));

iam.putRolePolicy(new PutRolePolicyRequest()
    .withRoleName("test-role")
    .withPolicyName("allow-s3-read")
    .withPolicyDocument(new Policy()
        .withStatements(new Statement(Effect.Allow)
        .withActions(S3Actions.GetObject)
        .withResources(new S3ObjectResource("top-secret-bucket", "*")))
    .toJson()));

Next, we create an instance profile and add the role to it. An instance profile allows you to associate multiple roles with a particular instance, which is useful in some advanced scenarios. For now, we'll just add a single role. Like creating a role, this probably only needs to be done once when initially configuring your application.

iam.createInstanceProfile(new CreateInstanceProfileRequest()
    .withInstanceProfileName("test-instance-profile"));

iam.addRoleToInstanceProfile(new AddRoleToInstanceProfileRequest()
    .withInstanceProfileName("test-instance-profile")
    .withRoleName("test-role"));

Once our role and instance profile are set up, we can launch new Amazon EC2 instances associated with our newly-created instance profile, making credentials available to any applications running on the instance via InstanceProfileCredentialsProvider.
 
// Requires credentials for a user (or role) with permission to launch EC2
// instances AND pass roles. See http://docs.aws.amazon.com/IAM/latest/UserGuide/
// role-usecase-ec2app.html#role-usecase-ec2app-permissions for an example.

AmazonEC2 ec2 = new AmazonEC2Client(...);

ec2.runInstances(new RunInstancesRequest()
    .withInstanceType(InstanceType.T1Micro)
    .withImageId("ami-d03ea1e0")    // 64-bit Amazon Linux AMI 2013.09
    .withIamInstanceProfile(new IamInstanceProfileSpecification()
        .withName("test-instance-profile"))
    .withMinCount(1)
    .withMaxCount(1));
 
Instance profiles can also be associated with EC2 instances created indirectly via Auto Scaling, AWS Elastic Beanstalk, AWS CloudFormation, and AWS OpsWorks!
 

Conclusion

That’s all for now! Stay tuned till next time, when we’ll talk in a bit more depth about the principles behind some of the important things that IAM roles for EC2 Instances takes care of for you under the covers, and discuss ways to apply those same principles to securely handle credentials for applications that aren’t running on EC2 instances.
 
Already using roles for EC2? Let us know what you think in the comments!
 

AWS re:Invent .NET Recap

by Norm Johanson | in .NET

Jim and I had a great time at re:Invent this year talking to all the AWS users. It was really interesting to hear all the different ways our SDK and tools are being used. We got some great feature requests and now we are excited to be back in the office to start working on them.

The video and slides of our talk on building scalable .NET apps on AWS are now online.

The topics we cover in our talk are:

  • Amazon DynamoDB Object Persistence Model
  • Getting and Putting objects into Amazon S3
  • Using Amazon SQS to manage background processing
  • Using AWS Elastic Beanstalk customization to install a Windows service
  • Using Web Identity Federation to get credentials securely to our Windows Store App

If you weren’t able to come to re:Invent but have ideas for our SDK and tools you want to share, you can always reach us through comments here, through our forums, and through GitHub.

Running Your Minitest Unit Test Suite

by Trevor Rowe | in Ruby

I have blogged a few times recently about Minitest. With Minitest, you need to choose how you will execute your tests. Other tools, like RSpec, come with a bundled test runner.

$ rspec
............

Finished in 0.03324 seconds
12 examples, 0 failures

Minitest does not provide a test runner as a command line script. One common workaround is to use minitest/autorun.

# inside test/my_class_test.rb
require 'minitest/autorun'

class MyClassTest < Minitest::Test
  ...
end

Now you can execute your tests using the ruby command:

$ ruby test/my_class_test.rb

Minitest uses very little Ruby magic, but this is one case where it indulges. minitest/autorun uses #at_exit to execute tests, which makes it possible for you to specify many test files and have them all run at the end.
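For example, one way to run several test files in a single process is a one-liner that requires each file passed on the command line (a sketch; the file names are placeholders):

$ ruby -Itest -e 'ARGV.each { |f| require File.expand_path(f) }' test/my_class_test.rb test/my_other_test.rb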

Instead of supplying a list of test files at the command line, I prefer to set up a Rake task that executes my test files. Here is a simple Rake task file that creates two tasks. The first task executes my unit tests. The second task executes my unit tests while also generating a coverage report.

# inside tasks/test.rake
require 'rake/testtask'

Rake::TestTask.new do |t|
  t.libs.push 'test'
  t.pattern = 'test/**/*_test.rb'
  t.warning = true
  t.verbose = true
end

task :default => :test

desc 'Generates a coverage report'
task :coverage do
  ENV['COVERAGE'] = 'true'
  Rake::Task['test'].execute
end

Next, I place the following line at the top of each test file. This allows me to still run the test files from the command line, while loading any shared testing helpers.

require 'test_helper'

Finally my test/test_helper.rb file looks like this:

if ENV['COVERAGE']
  require 'simplecov'
  SimpleCov.start do
    add_filter 'test'
    command_name 'Minitest'
  end
end

require 'minitest/autorun'
require 'my-library'

Running rake from the command line will now run my unit tests. rake coverage will generate a coverage report. This setup is simple and the tests run fast. I hope this helps.

Have fun and keep on testing!