AWS Developer Blog

IAM Roles for Amazon EC2 Instances (Credential Management Part 4)

by Trevor Rowe | in Ruby

This is the fourth and final part (part 1, part 2, part 3) in a series on how to securely manage your AWS access credentials.

This week I am focusing on using AWS Identity and Access Management (IAM) roles for Amazon EC2 instances with the AWS SDK for Ruby (aws-sdk gem). Simply put, IAM roles for EC2 instances remove the need to bootstrap your instances with credentials. Example:

# look ma! No credentials (when run from EC2)!
require 'aws-sdk'

s3 = AWS::S3.new
s3.buckets['my-bucket'].objects.each do |obj|
  puts obj.key
end

You can stop worrying about how to get credentials onto a box that Auto Scaling just spun-up on your behalf. Your cron scripts on instances no longer have to search around for credentials on disk or worry about when they get rotated.

One of the best features of IAM roles for EC2 instances is that the credentials are auto-rotated for you! Instances started with an IAM instance profile get temporary credentials deployed on a regular basis to the EC2 instance metadata service.

My favorite feature? The aws-sdk gem just works when running on an instance with an IAM profile.

How it Works

  • Create an IAM instance profile
  • Start one or more instances with the instance profile
  • Upon boot, session credentials are made available on your instance(s)
  • Credentials are rotated regularly (before they expire)

You can reuse an instance profile as many times as you like. The aws-sdk gem will automatically attempt to load credentials from the metadata service when no other credentials are provided.
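Under the hood, the SDK reads these session credentials from the instance metadata service. A rough sketch of that lookup follows; the HTTP fetch is injected as a callable so the logic can run off-instance, and the role name and response in any real call would be whatever your instance actually has:

```ruby
require 'json'

# Sketch of the metadata lookup the aws-sdk gem performs on EC2.
# `get` is any callable that fetches a URL body; injecting it lets
# us exercise the logic without being on an instance.
def metadata_credentials(get)
  base = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
  role = get.call(base).lines.first.strip  # first (usually only) role name
  JSON.parse(get.call(base + role))        # credentials document for that role
end
```

On an instance, `get` would wrap Net::HTTP; the returned document includes AccessKeyId, SecretAccessKey, Token, and an Expiration timestamp, which is how the SDK knows when to re-read the credentials.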

Create an Instance Profile with the aws-sdk Gem

An instance profile contains a role, and the role contains a policy. Each of these has a name. I am going to use the aws-sdk gem to create a sample profile, starting with the policy.

For this example, I’ll be building an instance profile that has limited permissions. I only want applications on EC2 to be able to read from Amazon S3 (list and get buckets/objects). First, I need to build the policy.

require 'aws-sdk'

AWS.config(:access_key_id => '...', :secret_access_key => '...')

# the role, policy and profile all have names, pick something descriptive
role_name = 's3-read-only'
policy_name = 's3-read-only'
profile_name = 's3-read-only'

# required so that Amazon EC2 can generate session credentials on your behalf
assume_role_policy_document = '{"Version":"2008-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["ec2.amazonaws.com"]},"Action":["sts:AssumeRole"]}]}'

# build a custom policy
policy = AWS::IAM::Policy.new
policy.allow(:actions => ["s3:Get*","s3:List*"], :resources => '*')

Now that I have a policy built, I am going to build a role and add the policy to the role. I am going to use the policy and role names I chose above.

iam = AWS::IAM.new

# create the role
iam.client.create_role(
  :role_name => role_name,
  :assume_role_policy_document => assume_role_policy_document)

# add the policy to the role
iam.client.put_role_policy(
  :role_name => role_name,
  :policy_name => policy_name,
  :policy_document => policy.to_json)

The last step is to create an instance profile and add the role to it.

resp = iam.client.create_instance_profile(
  :instance_profile_name => profile_name)

# this may be handy later
profile_arn = resp[:instance_profile][:arn]

iam.client.add_role_to_instance_profile(
  :instance_profile_name => profile_name,
  :role_name => role_name)

Using an IAM Instance Profile

You can now use the instance profile name (or the ARN we captured above) to launch instances with your new profile.

# you can use the profile name or ARN as the :iam_instance_profile option
ec2 = AWS::EC2.new
ec2.instances.create(:image_id => "ami-12345678", :iam_instance_profile => profile_name)

That's it! Your new instance will boot with session credentials available, which the aws-sdk gem can consume with zero configuration. Happy computing!

Credential Providers (Credential Management Part 3)

by Trevor Rowe | in Ruby

In part 1 of this series, I wrote about how to configure your access credentials with the AWS SDK for Ruby (aws-sdk gem). In part 2 we learned how to rotate your access credentials using the aws-sdk gem.

This week we explore credential providers and how they can help you keep your secrets safe and fresh.

Credential Providers

A credential provider is an object that responds to the following methods:

  • #access_key_id
  • #secret_access_key
  • #session_token
  • #refresh
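For illustration, here is about the smallest object that satisfies this interface. It wraps static credentials, so #refresh has nothing to fetch; the class name is ours, not part of the SDK:

```ruby
# Minimal credential provider: static credentials, no-op refresh.
class StaticCredentialProvider
  def initialize(access_key_id, secret_access_key, session_token = nil)
    @access_key_id     = access_key_id
    @secret_access_key = secret_access_key
    @session_token     = session_token
  end

  attr_reader :access_key_id, :secret_access_key, :session_token

  # called by the SDK when it suspects the credentials are stale;
  # nothing to do for static credentials
  def refresh; end
end
```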

Internally, the aws-sdk gem uses a chain of credential providers to load credentials from various locations including:

  • AWS.config
  • ENV (from multiple key prefixes)
  • EC2 instance metadata service
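Conceptually, the chain just asks each source in turn and takes the first non-empty answer. A simplified sketch, with lambdas standing in for the real provider classes:

```ruby
# Simplified credential chain: the first provider that returns
# credentials wins.
def resolve_credentials(providers)
  providers.each do |provider|
    creds = provider.call
    return creds unless creds.nil? || creds.empty?
  end
  {}  # no credentials found anywhere
end

static_config = lambda { {} }  # e.g. values passed to AWS.config
environment   = lambda do
  key = ENV['AWS_ACCESS_KEY_ID']
  key ? { :access_key_id => key,
          :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'] } : {}
end
ec2_metadata  = lambda { {} }  # would query the metadata service on EC2

resolve_credentials([static_config, environment, ec2_metadata])
```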

You can fully customize this behavior by configuring your own custom credential provider, as follows.

AWS.config(:credential_provider => CustomCredentialProvider.new)

Why Use a Custom Credential Provider?

In the previous post in this series we discussed rotating credentials. It can be painful to build logic that restarts processes or applications that are using stale or soon-to-be removed/expired credentials.

If your application uses a custom credential provider, the application does not need to be restarted. The SDK automatically calls #refresh on the credential provider when it receives a response from AWS that indicates its credentials are expired.

In-house hosted applications and utility scripts can use a custom provider to load credentials from a simple web service that vends current credentials. This web service could even go as far as vending session credentials that auto-expire and are limited to specific AWS operations. This greatly reduces exposure if these credentials are ever leaked.
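For example, such a service might respond with a JSON document whose keys match what a custom provider expects. A sketch of building that response body; the key names mirror what the provider shown below symbolizes, and everything else here is hypothetical:

```ruby
require 'json'

# Hypothetical vending endpoint's response body: session credentials
# serialized under the key names a custom provider will symbolize.
def vend_credentials(access_key_id, secret_access_key, session_token)
  JSON.dump(
    'access_key_id'     => access_key_id,
    'secret_access_key' => secret_access_key,
    'session_token'     => session_token
  )
end
```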

Build a Custom Credential Provider

Here is a really simple custom credential provider that makes an HTTPS request to https://internal.domain/ and expects a JSON response of credentials.

require 'json'
require 'net/https'

class CustomCredentialProvider

  include AWS::Core::CredentialProviders::Provider

  def get_credentials
    begin
      http = Net::HTTP.new('internal.domain', 443)
      http.use_ssl = true
      http.verify_mode = OpenSSL::SSL::VERIFY_PEER
      response = http.request(Net::HTTP::Get.new('/'))

      # symbolize keys to :access_key_id, :secret_access_key
      if response.code == '200'
        JSON.load(response.body).inject({}) {|h,(k,v)| h.merge(k.to_sym => v) }
      else
        {}
      end
    rescue StandardError
      {}
    end
  end

end

The include statement in the example above does much of the heavy lifting. It defines all of the public methods required for a credential provider. It also caches the credentials until #refresh is called. We only have to define #get_credentials and return a hash with symbol keys (or an empty hash if we fail).

You can make this example more robust if you:

  • Set network timeouts
  • Rescue Exceptions raised by HTTP that do not extend StandardError
  • Add basic retry logic to handle transient network errors

In the next (and last) post in this series I will explore how the aws-sdk gem uses the EC2 metadata service.

AWS Java Meme Generator Sample Application

If you couldn’t make it to AWS re:Invent this year, you can watch all of the presentations on the AWS YouTube channel. My talk was about using the AWS Toolkit for Eclipse to develop and deploy a simple meme generation app.

The application uses a common AWS architectural design pattern to process its workload and serve content. All the binary image data is stored in an Amazon S3 bucket; the image metadata is stored in Amazon DynamoDB; and the image processing jobs are managed using an Amazon SQS queue.

Here’s what happens when a customer creates a new meme image:

  1. The JSP page running in AWS Elastic Beanstalk asks Amazon S3 for a set of all the images in the bucket, and displays them to the customer.
  2. The customer selects their image and a caption to write onto it, then initiates a post.
  3. The JSP page inserts a new item into DynamoDB containing the customer’s choices, such as the S3 key of the blank image and the caption to write onto it.
  4. The JSP page inserts a message into the SQS queue containing the ID of the DynamoDB item inserted in the previous step.
  5. The JSP page polls the DynamoDB item periodically, waiting for the state to become “DONE”.
  6. A back-end image processing node on Amazon EC2 polls the SQS queue for work to do and finds the message inserted by the JSP page.
  7. The back-end worker loads the appropriate item from DynamoDB, downloads the blank macro image from Amazon S3, writes the caption onto the image, then uploads it back to the bucket.
  8. The back-end worker marks the DynamoDB item as “DONE”.
  9. The JSP page notices the work is done and displays the finished image to the customer.

Several customers in attendance expressed interest in the source code for the application, so we have released it on GitHub. It takes a little work to set up, mostly because you need to add the SDK and its third-party libraries to the project’s classpath. Follow the instructions in the README file, and please let us know how we can improve them!

Rotating Credentials (Credential Management Part 2)

by Trevor Rowe | in Ruby

In a previous blog post I wrote about ways to securely configure your AWS access credentials when using the aws-sdk gem. This week I want to talk about a security best practice, credential rotation.

Did you know that AWS recommends that you rotate your access keys every 90 days?

Even if you are very careful with your AWS access credentials, you may find yourself in a situation where someone has gained access to your secrets. If you build your applications with a regular key-rotation solution, then an out-of-band replacement of keys can be painless. In the heat of the moment, when you are scrambling to replace compromised keys, this can be a life saver.

Rotating Credentials

The process for rotating credentials boils down to the following steps:

  • Generate new keys
  • Securely distribute keys to your applications
  • Ensure the applications refresh their keys
  • Disable the old access keys
  • Ensure everything still works
  • Delete the old access keys

For best effect, you should automate this process. If you have to do it by hand, the process will be much more error-prone and you will likely do it less often. You can use the aws-sdk gem to do much of the work for you.
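The steps above can be sketched as a small orchestration, with each step injected as a callable. All the names here are ours for illustration, not an aws-sdk API:

```ruby
# Rotation sketch: each step is a lambda so the flow stays explicit.
def rotate_keys(create, deploy, verify, deactivate, delete)
  new_keys = create.call
  deploy.call(new_keys)          # distribute + refresh applications
  deactivate.call                # old keys disabled, not yet gone
  unless verify.call             # everything still works?
    raise 'rotation aborted: applications unhealthy on new keys'
  end
  delete.call                    # now safe to remove the old keys
  new_keys
end
```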

This simple example demonstrates how to generate a new key pair, disable old keys and then eventually delete the old keys. I inserted placeholders for where you should distribute your new keys and refresh your applications with the new keys.

iam = AWS::IAM.new

# create new set of access credentials
new_keys = iam.access_keys.create

# you should persist the new key pair somewhere secure
new_keys.id     # access key id
new_keys.secret # secret access key

## deploy the new keys to your applications now, make
## sure they pick up the new keys

# deactivate the old keys
old_keys = iam.access_keys['AKID12346789…'] # old access key id
old_keys.deactivate!

## the old keys still exist, they are temporarily disabled, use
## this time to test your applications to ensure they are working

# if you are confident your applications are using the new keys
# you can then safely delete the old key pair
old_keys.delete

How you distribute your keys and refresh your application is going to be very specific to your own needs. Just be certain to test your applications before you delete your disabled keys. You cannot restore them once they have been deleted.

For the next post in this series, I will write about credential providers and how the aws-sdk makes it easy for your applications to pick up new credentials without restarts or downtime. This can be very useful when you are rotating credentials.

Iterating Over Your Objects with Amazon S3

by Jason Fulghum | in Java

There are a lot of hidden gems inside the AWS SDK for Java, and we’ll be highlighting as many as we can through this blog.

Today, we look at how to interact with paginated object and version listings from Amazon S3. Normally, when you list the contents of your Amazon S3 bucket, you’re responsible for understanding that Amazon S3 returns paginated results. This means that you get a page of results (not necessarily the entire result set), and then have to use a pagination marker to request the next page of results, and repeat this process until you’ve read the complete data set.

Fortunately, the AWS SDK for Java provides some utilities to automatically handle these paginated result sets for you. The S3Objects and S3Versions classes allow you to easily iterate over objects and object versions in your Amazon S3 buckets, without having to explicitly deal with pagination.

Using these iterators to traverse objects in your bucket is easy. Instead of calling s3.listObjects(...) directly, just use one of the static methods in S3Objects, such as withPrefix(...) to get back an iterable list. This allows you to easily traverse all the object summaries for the objects in your bucket, without ever having to explicitly deal with pagination.

AmazonS3Client s3 = new AmazonS3Client(myCredentials);
for ( S3ObjectSummary summary : S3Objects.withPrefix(s3, "my-bucket", "photos/") ) {
    System.out.printf("Object with key '%s'\n", summary.getKey());
}

If you’ve enabled object versioning for your buckets, then you can use the S3Versions class in exactly the same way to iterate through all the object versions in your buckets.

AmazonS3Client s3 = new AmazonS3Client(myCredentials);
for ( S3VersionSummary summary : S3Versions.forPrefix(s3, "my-bucket", "photos/") ) {
    System.out.printf("Version '%s' of key '%s'\n",
                      summary.getVersionId(), summary.getKey());
}

Sending Email with JavaMail and AWS

by Jason Fulghum | in Java

The Amazon Simple Email Service and the JavaMail API are a natural match for each other. Amazon Simple Email Service (Amazon SES) provides a highly scalable and cost-effective solution for bulk and transactional email-sending. JavaMail provides a standard and easy-to-use API for sending mail from Java applications. The AWS SDK for Java brings these two together with an AWS JavaMail provider, which gives developers the power of Amazon SES, combined with the ease of use and standard interface of the JavaMail API.

Using the AWS JavaMail provider from the SDK is easy. The following code shows how to set up a JavaMail session and send email using the AWS JavaMail transport.

/*
 * Setup JavaMail to use Amazon SES by specifying
 * the "aws" protocol and our AWS credentials.
 */
Properties props = new Properties();
props.setProperty("mail.transport.protocol", "aws");
props.setProperty("mail.aws.user", credentials.getAWSAccessKeyId());
props.setProperty("mail.aws.password", credentials.getAWSSecretKey());

Session session = Session.getInstance(props);

// Create a new Message
Message msg = new MimeMessage(session);
msg.setFrom(new InternetAddress(""));
msg.addRecipient(Message.RecipientType.TO, new InternetAddress(""));
msg.setSubject("Hello AWS JavaMail World");
msg.setText("Sending email with the AWS JavaMail provider is easy!");

// Reuse one Transport object for sending all your messages
// for better performance
Transport t = new AWSJavaMailTransport(session, null);
t.connect();
t.sendMessage(msg, null);

// Close your transport when you're completely done sending
// all your messages.
t.close();

You can find the complete source code for this sample in the samples directory of the SDK, or go directly to it on GitHub.

The full sample also demonstrates how to verify email addresses using Amazon SES, a necessary prerequisite for sending email to those addresses until you request full production access to Amazon SES. See the Amazon Simple Email Service Developer Guide for more information on Verifying Email Addresses and Requesting Production Access.

Running the AWS SDK for Android S3Uploader sample with Eclipse

As we announced previously, the AWS Toolkit for Eclipse now supports creating AWS-enabled Android projects, making it easier to get started talking to AWS services from your Android app. The Toolkit will also optionally create a sample Android application that talks to S3. Let’s walk through creating a new AWS Android project and running the sample.

First, make sure that you have the newest AWS Toolkit for Eclipse, available at

To create a new AWS-enabled Android project, choose File > New > Project… and find the AWS Android Project wizard.

The wizard will ask you to choose a project name and an Android target. If you haven’t set up your Android SDK yet, you’ll be able to do so from this wizard. Also make sure the option to create a sample application is checked.

That’s it! The newly created project is configured with the AWS SDK for Android and the sample application. You’ll want to edit the file to fill in your AWS credentials and choose an S3 bucket name before running the application.

If this is your first time using the Android Eclipse plug-in, you may need to create an Android Virtual Device at this point using the AVD Manager view. On Windows 7, I found that I couldn’t start the emulator with the default memory settings, as referenced in this Stack Overflow question, so I had to change them:

With this change, the emulator started right up for me, and I was able to see the S3Uploader application in the device’s application list.

Finally, there’s one last trick you might find useful in using the sample application: it relies on images in the Android image gallery of the emulated device. If you can’t be bothered with mounting a file system, a simple way to get some images in there is to save them from the web browser. Just start the web browser, then tap-hold on an image and choose “Save Image”.

We’re excited by how much easier it is to get this sample running now that Eclipse does most of the setup for you. Give it a try, and let us know how it works for you!

Configuring SDK Download Behavior in the AWS Toolkit for Eclipse

The AWS Toolkit for Eclipse will automatically download new releases of the AWS SDK for Java, ensuring that you always have the most recent version of the service clients and productivity libraries. Some customers with slow network connections told us that the automatic downloads were sometimes triggered when they didn’t want to wait. In response, we made a couple of small changes to make this process more predictable and easier to manage.

First, we changed the directory where we download the SDKs. In previous releases of the Toolkit, the SDKs were stored in a directory specific to your eclipse workspace, so you would get a new SDK downloaded every time you started a new workspace. To eliminate this duplication, we consolidated all SDKs, for all workspaces, into one directory. It defaults to your home directory, but you can configure it to be wherever you want via a new preference page.

We also added a preference setting to not automatically check for and download new releases, so that customers adversely impacted by downloading every release of the SDK can opt out of this behavior. Even if you decide to manage your SDK releases manually, you can always update to the latest version using the Check for updates now button in the preferences.

As a final note, the same preferences can be configured for the AWS SDK for Android.

Faster AWS Tests with VCR

by Loren Segal | in Ruby

Performing integration testing against a live AWS service can often be inefficient since you are forced to make requests across the network for every test. Fortunately, there are tools that you can use to help reduce the amount of network traffic your tests require and make them run faster in the process. We will look at how to do this with the VCR gem available for Ruby.

What is VCR?

VCR is a library that is able to record HTTP network traffic over the wire and then replay those requests locally. Since recording occurs only once, all subsequent executions of your tests do not need to hit the network, and will likely be much faster.

Installing and Using VCR

To install VCR, you need to install two gems: the VCR gem, and the FakeWeb HTTP mocking library:

gem install vcr fakeweb

To use VCR, you simply configure the library with the path on disk to store fixture data (the recorded network traffic), as well as the HTTP mocking library to use (we use fakeweb as the HTTP mocking library):

require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'fixtures/vcr_cassettes'
  c.hook_into :fakeweb
end

After you’ve configured VCR, you can use it with a simple VCR.use_cassette method that wraps and records a block of arbitrary network code:

VCR.use_cassette('some_cassette') do
  # Perform some network transfer
end

On subsequent executions of this code, VCR uses the data loaded from the fixtures on disk located in fixtures/vcr_cassettes/some_cassette.yaml.

Testing AWS with VCR

Now that you know how to configure and use VCR, it’s easy to plug this library into your existing tests that use the AWS SDK for Ruby to record and replay requests over the network. In our case, we will use the RSpec testing framework, but you can use any testing framework you like.

Test Setup

The top of your test file or helper file requires a bit of configuration. For one, you need to configure VCR (as shown previously), but you also need to configure RSpec to wrap all of the test blocks inside of the VCR.use_cassette method. If you are using RSpec for your testing, you can add the following helper/configuration code to any test suite to get VCR to automatically record and replay your tests for improved speed:

require 'rspec'
require 'aws-sdk'
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'fixtures/vcr_cassettes'
  c.hook_into :fakeweb
end

# Fixes a missing attribute in Fakeweb's stubbed HTTP client
class FakeWeb::StubSocket; attr_accessor :read_timeout end

RSpec.configure do |c|
  c.around(:each) do |example|
    VCR.use_cassette(example.metadata[:full_description]) do
      example.call
    end
  end
end

Test Example

Now that you’ve set up your test environment, let’s see how it affects your tests. Say, for example, you had a test that uploaded a bunch of files to Amazon S3 and tried to read them back out. The test might look something like this:

describe 'Uploading files to S3' do
  let(:s3) { AWS::S3.new }
  let(:bucket) { s3.buckets['test_s3_upload'] }
  before { s3.buckets.create(bucket.name) }

  it 'uploads multiple files to S3 and reads them out' do
    # Uploads items
    25.times do |i|
      bucket.objects["file#{i}.txt"].write("DATA")
    end

    # Reads items back
    25.times do |i|
      bucket.objects["file#{i}.txt"].read.should == "DATA"
    end
  end
end

Depending on latency, running this test the first time (when VCR is recording) could take anywhere from 5 to 15 seconds. For instance:

$ rspec test_s3.rb 

Finished in 14.18 seconds
1 example, 0 failures

But VCR has now recorded our data inside fixtures/vcr_cassettes, and your subsequent execution will be much faster:

$ rspec test_s3.rb

Finished in 0.48604 seconds
1 example, 0 failures

Using VCR, this test just went from taking 14 seconds down to about 0.5 seconds, MUCH faster. This is because VCR loaded the server-side response from a local file on disk instead of sending the request over the wire.

Clear Your Fixture Cache Often

Although VCR can make your tests much faster, you do lose out on some of the accuracy provided by testing directly against live AWS services. VCR is useful for temporarily caching test results in fast development cycles, but it should not be a complete replacement for full-on integration testing.

One easy way to use the power of a library like VCR and still have the accuracy of full integration testing is to delete your fixtures/vcr_cassettes cache often. You may even want to exclude it from your repository to emphasize that the data is, in fact, temporary. That way you will still have fast tests most of the time but still get up-to-date responses from the service at regular intervals during development.
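One way to automate that cleanup is to purge cassettes older than some threshold before the suite runs. A sketch, where the one-day threshold and helper name are arbitrary choices of ours:

```ruby
# Delete recorded cassettes older than max_age (seconds) so the next
# run re-records fresh responses from the live service.
def purge_stale_cassettes(dir, max_age = 24 * 60 * 60)
  Dir.glob(File.join(dir, '**', '*.yaml')).each do |path|
    File.delete(path) if Time.now - File.mtime(path) > max_age
  end
end
```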


Testing with VCR can make your tests much faster and reduce your costs when testing against AWS services. Best of all, adding this performance optimization is almost fully transparent to your tests, uses actual data from the AWS services, and is simpler than manually creating fixtures that mock the server responses yourself. Try it out and see how much time you can shave off of your tests!

Android Support in the AWS Toolkit for Eclipse

We’ve launched a new feature in the AWS Toolkit for Eclipse to make it even easier to create AWS-enabled Android applications.

The New AWS Android Project wizard creates an Android project preconfigured with the AWS SDK for Android in the classpath. The project can also optionally include a sample application demonstrating how to upload images to Amazon S3 and view them in a mobile browser.

For more information, see the full release notes.