AWS Developer Blog

AWS re:Invent PHP Presentation Video Posted

by Michael Dowling | in PHP

The AWS SDK for PHP team attended AWS re:Invent this year to give our presentation titled Mastering the AWS SDK for PHP. Jeremy and I enjoyed talking with other PHP developers during our PHP office hours, and we got some great feedback on the AWS SDK for PHP.

In case you weren’t able to attend, the video of our presentation has been posted online.

We fielded a lot of questions after the talk, many of which revolved around our Amazon S3 stream wrapper and our Amazon S3 directory upload and download abstractions. We’re happy to see so much interest in these features! But don’t let the questions stop there; send us any questions you have about the presentation or the SDK in general in the comments.

Don’t forget: you can view a list of all of the available AWS re:Invent presentation videos on our YouTube channel, and you can find the slides for all of the AWS re:Invent presentations here.

See you next year!

AWS re:Invent 2013 Talk Now Available

by Loren Segal | in Ruby

This week, talks from AWS re:Invent 2013 started to become available through YouTube and SlideShare. If you were at re:Invent this year, you may have seen Trevor and me give a talk on the new AWS SDK for Ruby V2. If you missed it, or if you just want to check it out again, the talk is linked below and ready for viewing.

View Slides on SlideShare

V2 is Moving Forward

We are still working on V2 of the SDK. In fact, we just recently polished and committed pagination support to our GitHub repository.

More Resources for Frontend JavaScript Developers

Finally, if you happen to do any JavaScript development in the frontend of your web applications (and who doesn’t?), I gave a talk highlighting our new AWS SDK for JavaScript, which now runs directly in browsers and on mobile devices. If you are interested, you can see the talk for that below:

View Slides on SlideShare

Enabling Metrics with the AWS SDK for Java

by Hanson Char | in Java
Ever thought about generating metrics that measure your application’s performance when accessing AWS, and then having those metrics uploaded to Amazon CloudWatch for visualization or monitoring purposes? How about generating performance metrics for your JVMs when used against AWS? Wouldn’t it be nice to capture and visualize metrics related to the runtime environment, such as the heap memory, number of threads, and opened file descriptors, all in one place?
 
We are excited to announce a new feature in the AWS SDK for Java that can do exactly that – automatic metric generation, collection, and uploads to CloudWatch.
 
This feature is disabled by default. To enable it, simply include a system property pointing to your AWS security credential file when starting up the JVM. For example:
      -Dcom.amazonaws.sdk.enableDefaultMetrics=credentialFile=/path/aws.properties
And you are all set!
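
For example, a full JVM launch with this feature enabled might look like the following (app.jar here is just a hypothetical application jar):

      # app.jar is a placeholder for your application's jar
      java -Dcom.amazonaws.sdk.enableDefaultMetrics=credentialFile=/path/aws.properties -jar app.jar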
 
"But," you may ask, "why do I need to specify a credential file ?" Great question! Under the hood, the default SDK metrics collection needs the necessary AWS security credentials to access Amazon CloudWatch. That’s what the credentialFile attribute is for. On the other hand, if you are accessing AWS via the more secure and recommended option of Instance profile credentials through the Amazon EC2 instance metadata service, you don’t even need to specify the credential file. In other words, to enable metrics, you need only specify:
      -Dcom.amazonaws.sdk.enableDefaultMetrics
Once you enable this feature, every time there is a service request to AWS via the SDK for Java, metric data points will get generated, queued for statistical summary, and then uploaded asynchronously to Amazon CloudWatch about once every minute.

The default set of metrics is divided into three major categories: AWS Request Metrics, AWS Service Metrics, and Machine Metrics. AWS Request Metrics covers areas such as the latency of the HTTP request/response, number of requests, exceptions, and retries. Examples of AWS Service Metrics include the throughput and byte count metrics for S3 uploads and downloads. Machine Metrics, on the other hand, covers the runtime environment, including heap memory, number of threads, and open file descriptors.

Even though there is little reason to exclude Machine Metrics, you can do so by including excludeMachineMetrics in the very same system property, like so:
      -Dcom.amazonaws.sdk.enableDefaultMetrics=
        credentialFile=/path/aws.properties,excludeMachineMetrics
(All on a single line, with no spaces.)
 
Once you’ve uploaded metrics to Amazon CloudWatch, not only can you visualize them in the AWS Management Console, you can also set alarms on potential problems like memory leaks, file descriptor leaks, and so on. All metrics captured by the SDK for Java are under the namespace AWSSDK/Java, and by default are uploaded to the Amazon CloudWatch default region (us-east-1). If you want to change the region, simply specify the value for the cloudwatchRegion attribute in the system property. For example, the following line overrides the Amazon CloudWatch region for metric uploads to us-west-2:
      -Dcom.amazonaws.sdk.enableDefaultMetrics=
        credentialFile=/path/aws.properties,cloudwatchRegion=us-west-2
(All on a single line, with no spaces.)
 
Following are some sample screenshots of what the metrics look like in the AWS Management Console.
 
Request Metrics showing the client-side HTTP response time (HttpClientReceiveResponseTime) vs. the client-side total request execution time (ClientExecuteTime) when writing to Amazon DynamoDB with PutItemRequest:
 
Request Metrics
 
Machine Metrics sample screenshot, showing the JVM’s heap memory while making those AWS requests:
 
Machine Metrics
 
Service Metrics sample screenshot, showing the S3 download and upload throughput (bytes/second):
 
Service Metrics
 
Please see the package summary for a full list of the predefined core metric types. Additional features, such as the dynamic control of the metrics system via JMX, will likely take up more blog space than we have here. If you can’t wait, however, just fire up jconsole and look for MBeans under the namespace com.amazonaws.management. Let us know what you think!
 

 

From Minitest::Spec to Minitest::Test

by Trevor Rowe | in Ruby

In a previous blog post, I introduced Minitest from the perspective of RSpec. Some Minitest users prefer to avoid the specification style of Minitest::Spec. Instead they use Minitest::Test. It’s closer to the metal and uses a more vanilla Ruby syntax.

Here is an example spec file using Minitest::Spec:

require 'spec_helper'

describe MyClass do
  describe '#some_method' do
    it 'returns a string' do
      MyClass.new.some_method.must_be_kind_of(String)
    end
  end
end

Converting this to use Minitest::Test looks like:

require 'test_helper'

class MyClassTest < Minitest::Test
  def test_some_method_returns_a_string
    assert_kind_of String, MyClass.new.some_method
  end
end

Some key differences:

  • Assertions are instance methods provided by the test class. Instead of calling magic methods added to Object, you pass the object under test into the assertion. Example:

    value.must_be_kind_of(String)
    

    becomes:

    assert_kind_of(String, value) 
    
  • There is no DSL for defining test cases. The it method is removed. Instead, all methods prefixed with test_ are executed as test cases.

  • Nesting describe blocks is a useful technique for grouping specs. You can still do this, but you have to nest test classes.

    class MyClassTest < Minitest::Test
      class SubClassTest < Minitest::Test
        ...
      end
    end
    

I don’t feel as strongly as others about using vanilla Minitest::Test over Minitest::Spec. I personally find the specs easier to read, but that may be due to my experience with RSpec. You may have a different experience based on your testing background.

Happy Testing!

Release: AWS SDK for PHP – Version 2.4.11

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.4.11 of the AWS SDK for PHP. This release updates the Amazon EC2 and Amazon RDS clients to use the latest API versions.

Changelog

  • Added support for copying DB snapshots from one AWS region to another to the Amazon RDS client
  • Added support for pagination of the DescribeInstances and DescribeTags operations to the Amazon EC2 client (see the sketch after this list)
  • Added support for the new C3 instance types and the g2.2xlarge instance type to the Amazon EC2 client
  • Added support for enabling Single Root I/O Virtualization (SR-IOV) support for the new C3 instance types to the Amazon EC2 client
  • Updated the Amazon EC2 client to use the 2013-10-15 API version
  • Updated the Amazon RDS client to use the 2013-09-09 API version
  • Updated the Amazon CloudWatch client to use Signature Version 4
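
As a quick illustration of the new EC2 pagination support, the SDK’s iterators can now page through DescribeInstances results for you. Here is a minimal sketch, assuming $ec2 holds an Aws\Ec2\Ec2Client instance:

$reservations = $ec2->getIterator('DescribeInstances'); // $ec2 is assumed to be an Aws\Ec2\Ec2Client
foreach ($reservations as $reservation) {
    // Each $reservation is one entry from the result's Reservations list;
    // the iterator fetches additional pages behind the scenes.
    echo count($reservation['Instances']) . " instance(s)\n";
}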

Install/Download the Latest SDK

Subscribing an SQS Queue to an SNS Topic

by Norm Johanson | in .NET

In version 2.0.2.3 of the SDK, we added an enhancement that makes it easier to subscribe an Amazon SQS queue to an Amazon SNS topic. You have always been able to subscribe queues to topics using the Subscribe method on the SNS client, but after subscribing your queue to the topic, you also had to set a policy on the queue using the SetQueueAttributes method from the SQS client. The policy gives the topic permission to send messages to the queue.

With this new feature, you can call SubscribeQueue from the SNS client, and it will take care of both the subscription and setting up the policy. This code snippet shows how to create a queue and topic, subscribe the queue, and then send a message.

// Create the queue.
string queueURL = sqsClient.CreateQueue(new CreateQueueRequest
{
    QueueName = "theQueue"
}).QueueUrl;

// Create the topic.
string topicArn = snsClient.CreateTopic(new CreateTopicRequest
{
    Name = "theTopic"
}).TopicArn;

// Subscribe the queue to the topic and set the required queue policy.
snsClient.SubscribeQueue(topicArn, sqsClient, queueURL);

// Sleep to wait for the subscribe to complete.
Thread.Sleep(TimeSpan.FromSeconds(5));

// Publish the message to the topic
snsClient.Publish(new PublishRequest
{
    TopicArn = topicArn,
    Message = "Test Message"
});

// Get the message from the queue.
var messages = sqsClient.ReceiveMessage(new ReceiveMessageRequest
{
    QueueUrl = queueURL,
    WaitTimeSeconds = 20
}).Messages;
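
One detail the snippet above leaves out: after processing the received messages, you would normally delete them from the queue so they are not delivered again. A minimal sketch:

// Delete each processed message so it is not received again.
foreach (var message in messages)
{
    sqsClient.DeleteMessage(new DeleteMessageRequest
    {
        QueueUrl = queueURL,
        ReceiptHandle = message.ReceiptHandle
    });
}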

From RSpec to Minitest

by Trevor Rowe | in Ruby

One of my favorite aspects of working with Ruby is how natural it is to write tests for your code. The Ruby community does an excellent job of encouraging authors to produce well-tested code, and there is a plethora of well-supported tools to choose from. I like to joke that new Ruby developers write micro test frameworks instead of "Hello World!".

Much of the Ruby code I have maintained uses RSpec. Lately I have been spending some time with Minitest. You may have worked with Minitest — it ships as part of the Ruby standard library.

Why bother learning another testing framework when your current tool suits your needs? My answer: why not? It’s always good to expand your horizons. I have found that learning a new testing tool expands my ability to write good tests. I pick up new patterns, and the context switch forces me to question my standard testing approaches. As a result, I tend to write better tests.

Minitest::Spec

Minitest::Spec does a great job of bridging the gap between RSpec-style specifications and Minitest-style unit tests.

Here is an example RSpec test file:

require 'spec_helper'

describe MyClass do
  describe '#some_method' do
    it 'returns a string' do
      MyClass.new.some_method.should be_a(String)
    end
  end
end

And the same thing using Minitest::Spec:

require 'test_helper'

describe MyClass do
  describe '#some_method' do
    it 'returns a string' do
      MyClass.new.some_method.must_be_kind_of(String)
    end
  end
end

Matchers

The primary difference above is how you make assertions. RSpec-style should matchers can be converted to Minitest expectations with ease. The table below gives a few examples.

RSpec Matcher                                 Minitest Matcher
obj.should be(value)                          obj.must_be(value)
obj.should be_empty                           obj.must_be_empty
obj.should be(nil)                            obj.must_be_nil
obj.should equal(value)                       obj.must_equal(value)
lambda { … }.should raise_error(ErrorClass)   lambda { … }.must_raise(ErrorClass)

See the Minitest API documentation for more expectations.

Mocks

Mocks (and stubs) are where the two testing libraries differ the most. RSpec provides doubles; Minitest provides a Mock class.

@mock = Minitest::Mock.new

You can set expectations about what messages the mock should receive. You name the method to expect, what the mock should return, and, optionally, the arguments the method should receive. Given I need a mock for a user, and I expect the user’s delete method to be called, I could do the following:

user = Minitest::Mock.new
user.expect(:delete, true) # returns true, expects no args

UserDestroyer.new.delete_user(user)

assert user.verify

Calling #verify is necessary for the mock to enforce the expectations. RSpec makes this a little easier, but it’s not a huge adjustment.
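
If you also want to assert on arguments, expect takes an optional third parameter: an array of the arguments the method should receive. A quick sketch (the update method here is just illustrative):

user = Minitest::Mock.new
user.expect(:update, true, [{ :name => 'Alice' }]) # returns true, expects one hash argument

user.update(:name => 'Alice') # satisfies the expectation
user.verify                   # => true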

Stubs

Stubs are pretty straightforward. Unlike RSpec, the stub lasts only until the end of the block. You also cannot stub methods that don’t exist yet.

Time.stub :now, Time.at(0) do
  assert obj_under_test.stale?
end

Summary

Minitest works well, and I’m impressed by how fast it runs tests. Coming from RSpec, you may find yourself missing features. Instead of trying to find exact replacements, consider using plain old Ruby solutions instead. You may find you write better tests. I still enjoy working with RSpec and appreciate it for its strengths, but you should also consider giving Minitest a spin.

Using Credentials from AWS Security Token Service

by Jeremy Lindblom | in PHP

A recent post on the AWS PHP Development forum inspired me to write a quick post about how to use credentials vended by AWS Security Token Service with the AWS SDK for PHP.

What is AWS Security Token Service?

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege AWS credentials for AWS Identity and Access Management (AWS IAM) users or for users that you authenticate via identity federation. One common use case for using temporary credentials is to grant mobile or client-side applications access to AWS resources by authenticating users through third-party identity providers (read more about Web Identity Federation).

Getting Temporary Credentials

AWS STS has five operations that return temporary credentials: AssumeRole, AssumeRoleWithWebIdentity, AssumeRoleWithSAML (recently added), GetFederationToken, and GetSessionToken. Using the GetSessionToken operation is easy, so let’s use that one as an example. Assuming you have an instance of Aws\Sts\StsClient stored in the $sts variable, this is how you call the method:

$result = $sts->getSessionToken();

See? I told you it was easy. The result for GetSessionToken and the other AWS STS operations always contains a 'Credentials' value. If you print the result (e.g., print_r($result)), it looks like the following:

Array
(
    ...
    [Credentials] => Array
    (
        [SessionToken] => '<base64 encoded session token value>'
        [SecretAccessKey] => '<temporary secret access key value>'
        [Expiration] => 2013-11-01T01:57:52Z
        [AccessKeyId] => '<temporary access key value>'
    )
    ...
)
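
By default, the credentials returned by GetSessionToken are short-lived. If you need to control their lifetime, the operation accepts an optional DurationSeconds parameter (a quick sketch; the allowed range is bounded by the service):

$result = $sts->getSessionToken(array(
    'DurationSeconds' => 3600, // Request credentials valid for one hour
));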

Using Temporary Credentials

You can use temporary credentials with another AWS client by instantiating the client and passing in the values received from AWS STS directly.

use Aws\S3\S3Client;

$result = $sts->getSessionToken();

$s3 = S3Client::factory(array(
    'key'    => $result['Credentials']['AccessKeyId'],
    'secret' => $result['Credentials']['SecretAccessKey'],
    'token'  => $result['Credentials']['SessionToken'],
));

You can also construct a Credentials object and use that when instantiating the client.

use Aws\Common\Credentials\Credentials;
use Aws\S3\S3Client;

$result = $sts->getSessionToken();

$credentials = new Credentials(
    $result['Credentials']['AccessKeyId'],
    $result['Credentials']['SecretAccessKey'],
    $result['Credentials']['SessionToken']
);

$s3 = S3Client::factory(array('credentials' => $credentials));

However, the best way to provide temporary credentials is to use the createCredentials() helper method included with StsClient. This method extracts the data from an AWS STS result and creates the Credentials object for you.

$result = $sts->getSessionToken();
$credentials = $sts->createCredentials($result);

$s3 = S3Client::factory(array('credentials' => $credentials));

You can also use the same technique when setting credentials on an existing client object.

$credentials = $sts->createCredentials($sts->getSessionToken());
$s3->setCredentials($credentials);

Closing Notes

For information about why you might need to use temporary credentials in your application or project, see Scenarios for Granting Temporary Access in the AWS STS documentation.

If you would like to read more about providing credentials to the SDK, check out one of our other blog posts: Providing Credentials to the AWS SDK for PHP.

AWS re:Invent 2013 Wrap-up

We’re back in Seattle after spending last week in Las Vegas at AWS re:Invent 2013! It was great to meet so many Java developers building applications on AWS. We heard lots of excellent feature requests for all the different tools and projects our team works on, and we’re excited to get started building them!

The slides from my session on the SDK and Eclipse Toolkit are online, and we’ll let you know as soon as the videos from the sessions start appearing online, too.

I’ve also uploaded the latest code for the AWS Meme Generator to GitHub. I used this simple web application in my session to demonstrate a few features in the AWS SDK for Java and the AWS Toolkit for Eclipse. Check out the project on GitHub and try it out yourself!

If you didn’t make it to AWS re:Invent 2013, or if you were there, but just didn’t get a chance to stop by the AWS SDKs and Tools booth, let us know in the comments below what kinds of features you’d like to see in tools like the AWS SDK for Java and the AWS Toolkit for Eclipse.

Amazon S3 Lifecycle Management

by Pavel Safronov | in .NET

Amazon Simple Storage Service (S3) provides a simple method to control the lifecycle of your S3 objects. In this post, we examine how you can easily set up rules to delete or archive old data in S3 using the AWS SDK for .NET.

Lifecycle Rules

Lifecycle configurations are associated with a bucket. A lifecycle configuration consists of a number of rules, with each rule specifying the objects it acts on and the actions to take. Rules specify which objects they act on by defining a prefix. A rule can archive an object to Amazon Glacier, delete an object, or both. Each action carries a time constraint, acting on objects that are either older than a specific number of days or past a particular date. A rule also has a Status, which can be set to Enabled or Disabled; if you don’t set this field, the rule is disabled by default.

For instance, it’s possible to configure a rule specifying that all objects with the prefix "logs/" must be archived to Glacier after one month. Here is a rule that does just that:

var rule1 = new LifecycleRule
{
    Prefix = "logs/", 
    Transition = new LifecycleTransition
    {
        Days = 30,
        StorageClass = S3StorageClass.Glacier
    },
    Status = LifecycleRuleStatus.Enabled
};

Rules can also be configured for a specific date. The following rule is configured to delete all objects with the prefix "june/" on the 1st of August, 2014.

var rule2 = new LifecycleRule
{
    Prefix = "june/", 
    Expiration = new LifecycleRuleExpiration
    {
        Date = new DateTime(2014, 08, 01)
    },
    Status = LifecycleRuleStatus.Enabled
};

Finally, a rule can contain both a transition and an expiration action. The following rule transitions objects to Glacier after 2 months and deletes objects after 1 year. This sample also configures a disabled rule.

var rule3 = new LifecycleRule
{
    Prefix = "user-data/",
    Transition = new LifecycleTransition
    {
        Days = 60,
        StorageClass = S3StorageClass.Glacier
    },
    Expiration = new LifecycleRuleExpiration
    {
        Days = 365
    },
    Status = LifecycleRuleStatus.Disabled
};

Lifecycle Configuration

A lifecycle configuration is simply a list of rules. In the following example, we construct a lifecycle configuration that consists of the rules we created earlier, and then this configuration is applied to our test bucket.

S3Client.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest
{
    BucketName = "sample-bucket",
    Configuration = new LifecycleConfiguration
    {
        Rules = new List<LifecycleRule> { rule1, rule2, rule3 }
    }
});

When dealing with configurations, you must configure all rules on a bucket at once: putting a configuration replaces whatever rules are currently in place. This means that if you wish to modify or add rules, you must first retrieve the current configuration, modify it, and then apply it back to the bucket. The following sample shows how we can enable all disabled rules and remove a specific rule.

// Retrieve the current configuration
var configuration = S3Client.GetLifecycleConfiguration(
    new GetLifecycleConfigurationRequest
    {
        BucketName = "sample-bucket"
    }).Configuration;

// Remove rule with prefix 'june/'
configuration.Rules.Remove(configuration.Rules.Find(r => r.Prefix == "june/"));
// Enable all disabled rules
foreach (var rule in configuration.Rules)
    if (rule.Status == LifecycleRuleStatus.Disabled)
        rule.Status = LifecycleRuleStatus.Enabled;

// Save the updated configuration
S3Client.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest
{
    BucketName = "sample-bucket",
    Configuration = configuration
});

Finally, if you want to turn off all lifecycle rules for a bucket, you must either disable all rules (by setting Status = LifecycleRuleStatus.Disabled) or call the DeleteLifecycleConfiguration method, as follows.

// Remove a bucket's lifecycle configuration
S3Client.DeleteLifecycleConfiguration(new DeleteLifecycleConfigurationRequest
{
    BucketName = "sample-bucket"
});

Summary

In this blog post, we’ve shown how simple it is to configure the lifecycle of your S3 objects. For more information on this topic, see S3 Object Lifecycle Management.