AWS Developer Blog

Using Improved Conditional Writes in DynamoDB

by David Yanacek | in Java

Last month the Amazon DynamoDB team announced a new pair of features: Improved Query Filtering and Conditional Updates.  In this post, we’ll show how to use the new and improved conditional writes feature of DynamoDB to speed up your app.

Let’s say you’re building a racing game, where two players advance in position until they reach the finish line.  To manage the state in DynamoDB, each game could be stored as its own item in a Game table with GameId as the primary key, and each player’s position stored in a separate attribute.  Here’s an example of what a Game item could look like:

    {
        "GameId": "abc",
        "Status": "IN_PROGRESS",
        "Player1-Position": 0,
        "Player2-Position": 0
    }

To make players move, you can use the atomic counters feature of DynamoDB in the UpdateItem API to send requests like, “increase the player position by 1, regardless of its current value”.  To prevent players from advancing before the game starts, you can use conditional writes to make the same request as before, but only “as long as the game status is IN_PROGRESS.”  Conditional writes are a way of instructing DynamoDB to perform a given write request only if certain attribute values in the item match what you expect them to be at the time of the request.

But this isn’t the whole story.  How do you determine the winner of the game, and prevent players from moving once the game is over?  In other words, we need a way to atomically make it so that all players stop once one reaches the end of the race (no ties allowed!).

This is where the new improved conditional writes come in handy.  Before, the conditional writes feature supported tests for equality (attribute “x” equals “20”).  With improved conditions, DynamoDB supports tests for inequality (attribute “x” is less than “20”).  This is useful for the game application, because now the request can be, “increase the player position by 1 as long as the status of the game equals IN_PROGRESS, and the positions of player 1 and player 2 are less than 20.”  During player movement, one player will eventually reach the finish line first, and any future moves after that will be blocked by the conditional writes.  Here’s the code:


    public static void main(String[] args) {

        // To run this example, first initialize the client, and create a table
        // named 'Game' with a primary key of type hash / string called 'GameId'.
        
        AmazonDynamoDB dynamodb = new AmazonDynamoDBClient(); // picks up credentials from the default provider chain
        
        try {
            // First set up the example by inserting a new item
            
            // To see different results, change either player's
            // starting positions to 20, or set player 1's location to 19.
            Integer player1Position = 15;
            Integer player2Position = 12;
            dynamodb.putItem(new PutItemRequest()
                    .withTableName("Game")
                    .addItemEntry("GameId", new AttributeValue("abc"))
                    .addItemEntry("Player1-Position",
                        new AttributeValue().withN(player1Position.toString()))
                    .addItemEntry("Player2-Position",
                        new AttributeValue().withN(player2Position.toString()))
                    .addItemEntry("Status", new AttributeValue("IN_PROGRESS")));
            
            // Now move Player1 for game "abc" by 1,
            // as long as neither player has reached "20".
            UpdateItemResult result = dynamodb.updateItem(new UpdateItemRequest()
                .withTableName("Game")
                .withReturnValues(ReturnValue.ALL_NEW)
                .addKeyEntry("GameId", new AttributeValue("abc"))
                .addAttributeUpdatesEntry(
                     "Player1-Position", new AttributeValueUpdate()
                         .withValue(new AttributeValue().withN("1"))
                         .withAction(AttributeAction.ADD))
                .addExpectedEntry(
                     "Player1-Position", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withN("20"))
                         .withComparisonOperator(ComparisonOperator.LT))
                .addExpectedEntry(
                     "Player2-Position", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withN("20"))
                         .withComparisonOperator(ComparisonOperator.LT))
                .addExpectedEntry(
                     "Status", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withS("IN_PROGRESS"))
                         .withComparisonOperator(ComparisonOperator.EQ))
     
            );
            if ("20".equals(result.getAttributes().get("Player1-Position").getN())) {
                System.out.println("Player 1 wins!");
            } else {
                System.out.println("The game is still in progress: "
                    + result.getAttributes());
            }
        } catch (ConditionalCheckFailedException e) {
            System.out.println("Failed to move player 1 because the game is over");
        }
    }

With this algorithm, player movement now takes only one write operation to DynamoDB.  What would it have taken without improved conditions?  Using only equality conditions, the app would have needed to follow the read-modify-write pattern:

  1. Read the game item, making note of each player’s position, and verify that neither player has already reached the end of the race.
  2. Advance the player’s position by 1, with a condition that both players are still in the positions read in step 1.

Notice that this algorithm requires two round-trips to DynamoDB, whereas with improved conditions, it can be done in only one round-trip.  This reduces both latency and cost.
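
For comparison, here is a minimal sketch of that read-modify-write approach using only equality conditions. It is an illustration rather than code from the original post; it assumes the same "Game" table and the same dynamodb client as the example above, and it omits the retry loop you would need when the conditional check fails:

    // Round trip 1: read the current positions.
    Map<String, AttributeValue> item = dynamodb.getItem(new GetItemRequest()
            .withTableName("Game")
            .addKeyEntry("GameId", new AttributeValue("abc"))
            .withConsistentRead(true))
        .getItem();
    String player1 = item.get("Player1-Position").getN();
    String player2 = item.get("Player2-Position").getN();
    if (Integer.parseInt(player1) >= 20 || Integer.parseInt(player2) >= 20) {
        throw new IllegalStateException("The game is already over");
    }

    // Round trip 2: advance Player1 only if both positions (and the status) are
    // unchanged since the read. A ConditionalCheckFailedException here means
    // another writer got in first; re-read and try again.
    dynamodb.updateItem(new UpdateItemRequest()
        .withTableName("Game")
        .addKeyEntry("GameId", new AttributeValue("abc"))
        .addAttributeUpdatesEntry("Player1-Position", new AttributeValueUpdate()
            .withValue(new AttributeValue().withN("1"))
            .withAction(AttributeAction.ADD))
        .addExpectedEntry("Player1-Position",
            new ExpectedAttributeValue(new AttributeValue().withN(player1)))
        .addExpectedEntry("Player2-Position",
            new ExpectedAttributeValue(new AttributeValue().withN(player2)))
        .addExpectedEntry("Status",
            new ExpectedAttributeValue(new AttributeValue("IN_PROGRESS"))));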

You can find more information about conditional writes in Amazon DynamoDB in the Developer Guide.

Referencing Credentials using Profiles

There are a number of ways to provide AWS credentials to your .NET applications. One approach is to embed your credentials in the appSettings section of your App.config file. While this is easy and convenient, your AWS credentials might end up checked into source control or published somewhere you didn’t intend. A better approach is to use profiles, which were introduced in version 2.1 of the AWS SDK for .NET. Profiles offer an easy-to-use mechanism to safely store credentials in a central location outside your application directory. After setting up your credential profiles once, you can refer to them by name in all of the applications you run on that machine. The App.config file will look similar to this example when using profiles.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

The SDK supports two different profile stores. The first is what we call the SDK store, which stores the profiles encrypted in the C:\Users\<username>\AppData\Local\AWSToolkit folder. This is the same store used by the AWS Toolkit for Visual Studio and the AWS Tools for Windows PowerShell. The second store is the credentials file under C:\Users\<username>\.aws. The credentials file is used by the other AWS SDKs and the AWS Command Line Interface. The SDK always checks the SDK store first and then falls back to the credentials file.

Setting up Profiles with Visual Studio

The AWS Toolkit for Visual Studio lists all the profiles registered in the SDK store in the AWS Explorer. To add a new profile, click the New Account Profile button.

When you create a new project in Visual Studio using one of the AWS project templates, the project wizard lets you pick an existing profile or create a new one. The selected profile is referenced in the App.config of the new project.

 

Setting up Profiles with PowerShell

Profiles can also be set up using the AWS Tools for Windows PowerShell.

PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs development

As with the Toolkit, credentials stored this way are accessible to both the SDK and the Toolkit after running this command. To use the profile in PowerShell, run the following command before using any AWS cmdlets.

PS C:\> Set-AWSCredentials -ProfileName development

Setting up Profiles with the SDK

Profiles can also be managed directly from the AWS SDK for .NET using the Amazon.Util.ProfileManager class. Here is how you can register a profile with the ProfileManager.

Amazon.Util.ProfileManager.RegisterProfile(profileName, accessKey, secretKey);

You can also list the registered profiles and unregister profiles using the ListProfileNames and UnregisterProfile methods.

Getting the SDK from NuGet

If you get the SDK from NuGet, the package’s install script adds an empty AWSProfileName tag to the App.config file if the app setting doesn’t already exist. You can use any of the methods mentioned above for registering profiles. Alternatively, you can use the PowerShell script account-management.ps1 that comes with the NuGet package and is placed in the /packages/AWSSDK-X.X.X.X/tools/ folder. This is an interactive script that lets you register, list, and unregister profiles.

Credentials File Format

The previous methods for adding profiles all put credentials in the SDK store. Because the SDK store encrypts the credentials, adding them there requires one of these tools. The alternative is to use the credentials file. This is a plain-text file similar to an .ini file. Here is an example of a credentials file with two profiles.

[default]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

[development]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

Default Profile

When you create a service client without specifying credentials or a profile name, the SDK searches for a default profile. The default profile is named "default", and it is searched for first in the SDK store and then in the credentials file. When the AWS Tools for Windows PowerShell was released last year, it introduced a default profile called "AWS PS Default". To give all of our tools a consistent experience, we have changed the AWS Tools for PowerShell to use "default" as its default profile as well. To make sure we didn’t break any existing users, the AWS Tools for PowerShell will still try to load the old profile ("AWS PS Default") when "default" is not found, but it will now save credentials to the "default" profile unless otherwise specified.

Credentials Search Path

If an application creates a service client without specifying credentials, the SDK uses the following order to find credentials.

  • Look for AWSAccessKey and AWSSecretKey in App.config.

    • Note that version 2.1 of the SDK did not break existing applications that use the AWSAccessKey and AWSSecretKey app settings; they continue to work.
  • Search the SDK Store

    • If AWSProfileName is set, the SDK looks for a profile with that name in the SDK store. If AWSProfileName is not set, the SDK looks for the profile named "default" in the SDK store.
  • Search the credentials file

    • If AWSProfileName is set, the SDK looks for a profile with that name in the credentials file. If AWSProfileName is not set, the SDK looks for the profile named "default" in the credentials file.
  • Search for Instance Profiles

    • These are credentials available on EC2 instances that were launched with an instance profile.

Setting Profile in Code

It is also possible to specify the profile to use in code, in addition to using App.config. This code shows how to create an Amazon S3 client for the development profile.

Amazon.Runtime.AWSCredentials credentials = new Amazon.Runtime.StoredProfileAWSCredentials("development");
Amazon.S3.IAmazonS3 s3Client = new AmazonS3Client(credentials, Amazon.RegionEndpoint.USWest2);

Alternative Credentials File

Both the SDK store and the credentials file are located under the current user’s home directory. If your application is running under a different user – such as Local System – then the AWSProfilesLocation app setting can be set to use an alternative credentials file. For example, this App.config tells the SDK to look for credentials in the C:\aws_service_credentials\credentials file.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSProfilesLocation" value="C:\aws_service_credentials\credentials"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

Downloading Objects from Amazon S3 using the AWS SDK for Ruby

by Trevor Rowe | in Ruby

The AWS SDK for Ruby provides a few methods for getting objects out of Amazon S3. This blog post focuses on using the v2 Ruby SDK (the aws-sdk-core gem) to download objects from Amazon S3.

Downloading Objects into Memory

For small objects, it can be useful to get an object and have it available in memory in your Ruby process. If you do not specify a :target for the download, the entire object is loaded into memory in a StringIO object.

s3 = Aws::S3::Client.new
resp = s3.get_object(bucket:'bucket-name', key:'object-key')

resp.body
#=> #<StringIO ...> 

resp.body.read
#=> '...'

Call #read or #string on the StringIO to get the body as a String object.

Downloading to a File or IO Object

When downloading large objects from Amazon S3, you typically want to stream the object directly to a file on disk. This avoids loading the entire object into memory. You can specify the :target for any AWS operation as an IO object.

File.open('filename', 'wb') do |file|
  resp = s3.get_object({ bucket:'bucket-name', key:'object-key' }, target: file)
end

The #get_object method still returns a response object, but the #body member of the response will be the file object given as the :target instead of a StringIO object.

You can specify the target as a String or Pathname, and the Ruby SDK will create the file for you.

resp = s3.get_object({ bucket:'bucket-name', key:'object-key' }, target: '/path/to/file')

Using Blocks

You can also use a block for downloading objects. When you pass a block to #get_object, chunks of data are yielded as they are read off the socket.

File.open('filename', 'wb') do |file|
  s3.get_object(bucket: 'bucket-name', key:'object-key') do |chunk|
    file.write(chunk)
  end
end

Please note, when using blocks to download objects, the Ruby SDK will NOT retry failed requests after the first chunk of data has been yielded. Retrying mid-stream could corrupt the file on the client end by starting over from the beginning. For this reason, I recommend using one of the preceding methods to specify the target file path or IO object.

Retries

The Ruby SDK retries failed requests up to 3 times by default. You can override the default using :retry_limit. Setting this value to 0 disables all retries.

If the Ruby SDK encounters a network error after the download has started, it attempts to retry the request. It first checks to see if the IO target responds to #truncate. If it does not, the SDK disables retries.

If you prefer to disable this default behavior, you can either use the block mode or set :retry_limit to 0 for your S3 client.

Range GETs

For very large objects, consider using the :range option and downloading the object in parts. Currently there are no helper methods for this in the Ruby SDK, but if you are interested in submitting something, we accept pull requests!

Happy downloading.

Amazon S3 Client-Side Authenticated Encryption

by Hanson Char | in Java

Encrypting data using the Amazon S3 encryption client is one way you can provide an additional layer of protection for sensitive information you store in Amazon S3. Now the Amazon S3 encryption client provides you with the ability to use authenticated encryption for your stored data via the new CryptoMode.AuthenticatedEncryption option. The Developer Preview of this client-side encryption option utilizes AES-GCM – a standard authenticated encryption algorithm recommended by NIST.

When CryptoMode.AuthenticatedEncryption is in use, an improved key wrapping algorithm is applied to the envelope key, which is a one-time key randomly generated per S3 object. One of two key wrapping algorithms is used, depending on the encryption material you supply: "AESWrap" is applied if the client-supplied encryption material contains a symmetric key; "RSA/ECB/OAEPWithSHA-256AndMGF1Padding" is used if the encryption material contains a key pair. Both key wrapping algorithms improve the protection of the envelope key by adding an integrity check, rather than relying on encryption alone.
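
As an illustration (not code from the original post), here is a minimal sketch of constructing both kinds of encryption materials with the standard JCE key generators; the key sizes are just example choices:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.KeyGenerator;

import com.amazonaws.services.s3.model.EncryptionMaterials;

public class EncryptionMaterialsExample {
    public static void main(String[] args) throws Exception {
        // Symmetric client-side master key: the one-time envelope key is
        // wrapped with "AESWrap" when authenticated encryption is enabled.
        KeyGenerator aes = KeyGenerator.getInstance("AES");
        aes.init(256);
        EncryptionMaterials symmetricMaterials =
            new EncryptionMaterials(aes.generateKey());

        // Asymmetric client-side master key pair: the envelope key is wrapped
        // with "RSA/ECB/OAEPWithSHA-256AndMGF1Padding".
        KeyPairGenerator rsa = KeyPairGenerator.getInstance("RSA");
        rsa.initialize(2048);
        KeyPair keyPair = rsa.generateKeyPair();
        EncryptionMaterials asymmetricMaterials = new EncryptionMaterials(keyPair);
    }
}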

Enabling Authenticated Encryption

This new mode of authenticated encryption is disabled by default. This means the Amazon S3 encryption client will continue to function as before unless explicitly configured otherwise.

To enable the use of client-side authenticated encryption, two steps are required:

  1. Include the latest Bouncy Castle jar in the classpath; and
  2. Explicitly specify the cryptographic mode of authenticated encryption when instantiating an S3 encryption client
new AmazonS3EncryptionClient(...,
  new CryptoConfiguration(CryptoMode.AuthenticatedEncryption));

Once enabled, all new S3 objects will be encrypted using AES-GCM before being stored in S3. Otherwise, everything remains the same as described in the Getting Started guide at Client-Side Data Encryption with the AWS SDK for Java and Amazon S3. In other words, all APIs of the S3 encryption client including Range-Get and Multipart Upload will work the same way regardless of the selected cryptographic mode.
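
For instance, end-to-end usage of a client configured for authenticated encryption might look like the following sketch. The bucket name, key, and file are placeholders, and the symmetric master key is generated on the fly purely for illustration; in practice you would load a persistent master key so that stored objects remain decryptable later.

import java.io.File;
import javax.crypto.KeyGenerator;

import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.CryptoConfiguration;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.S3Object;

public class AuthenticatedEncryptionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder client-side master key; use a persistent key in real code.
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        EncryptionMaterials materials = new EncryptionMaterials(generator.generateKey());

        // Credentials come from the default provider chain.
        AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
            materials,
            new CryptoConfiguration(CryptoMode.AuthenticatedEncryption));

        // New objects are encrypted client-side with AES-GCM before upload.
        s3.putObject("my-bucket", "my-key", new File("document.txt"));

        // Retrieving the whole object decrypts it and verifies its integrity.
        S3Object object = s3.getObject("my-bucket", "my-key");
        object.getObjectContent().close();
    }
}

Swapping CryptoMode.StrictAuthenticatedEncryption into the CryptoConfiguration is all that changes for the strict mode described below.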

How CryptoMode.AuthenticatedEncryption Works

Storage

If CryptoMode.AuthenticatedEncryption is not enabled, the default behavior of the S3 encryption client will persist S3 objects using the same cryptographic algorithm as before, which is encryption-only.

However, if CryptoMode.AuthenticatedEncryption has been enabled, new S3 objects will be encrypted using the standard authenticated encryption algorithm, AES-GCM. Furthermore, the generated one-time envelope key will be protected using a new key-wrapping algorithm.

Retrieval

Existing S3 objects that have been encrypted using the default encryption-only scheme, CryptoMode.EncryptionOnly, will continue to work as before with no behavior changes regardless of whether CryptoMode.AuthenticatedEncryption is enabled or not.

However, if an S3 object that has been encrypted under CryptoMode.AuthenticatedEncryption is retrieved in its entirety, not only is the object automatically decrypted when retrieved, but its integrity is also verified (via AES-GCM). If for any reason the object fails the integrity check, a SecurityException is thrown. A sample exception message:

java.lang.SecurityException: javax.crypto.BadPaddingException: mac check in GCM failed

Note, however, that if only part of an object is retrieved from S3 via the Range-Get operation, only decryption is applied and not authentication, since the entire object is required for authentication.

Two Modes of Authenticated Encryption Available

There are actually two authenticated encryption modes available: CryptoMode.AuthenticatedEncryption and CryptoMode.StrictAuthenticatedEncryption.

CryptoMode.StrictAuthenticatedEncryption is a variant of CryptoMode.AuthenticatedEncryption that enforces a strict use of authenticated encryption. Specifically, the S3 encryption client running in CryptoMode.StrictAuthenticatedEncryption will only accept retrieval of S3 objects protected via authenticated encryption. Retrieving S3 objects stored in plaintext or encrypted using encryption-only mode will cause a SecurityException to be thrown under the strict mode. A sample exception message:

java.lang.SecurityException: S3 object [bucket: mybucket, key: mykey] not encrypted using authenticated encryption

Furthermore, attempts to perform a Range-Get operation in strict authenticated encryption mode will also cause a SecurityException to be thrown, since Range-Get provides no authentication of the data retrieved. A sample exception message:

java.lang.SecurityException: Range get is not allowed in strict crypto mode

The purpose of CryptoMode.StrictAuthenticatedEncryption is to eliminate the possibility of an attacker hypothetically forcing a downgrade to bypass authentication. In other words, running in CryptoMode.StrictAuthenticatedEncryption would provide the highest level of security but potentially at the cost of restricted operations. This strict use of authenticated encryption is meant only for highly security-sensitive applications where there is no need to retrieve S3 objects that have not been previously encrypted using authenticated encryption.

Migrating to Authenticated Encryption

It’s worth pointing out that older versions of the AWS SDK for Java are not equipped with authenticated encryption and therefore will not be able to decrypt objects encrypted with authenticated encryption. Therefore, before enabling CryptoMode.AuthenticatedEncryption, you should upgrade all instances of the AWS SDK for Java in your application to the latest version. With no configuration necessary, the latest version of the Java SDK is able to retrieve and decrypt S3 objects that were originally encrypted either in encryption-only mode (AES-CBC) or authenticated encryption mode (AES-GCM). Once all instances of the SDK are upgraded, you can then safely enable CryptoMode.AuthenticatedEncryption to start writing new S3 objects using authenticated encryption. Here is a summary table.

Java SDK  | CryptoMode                    | Encrypt | Decrypt          | Range Get | Multipart Upload | Max Size
1.7.8.1+  | AuthenticatedEncryption       | AES-GCM | AES-GCM, AES-CBC | Yes       | Yes              | ~64 GB
1.7.8.1+  | StrictAuthenticatedEncryption | AES-GCM | AES-GCM          | No        | Yes              | ~64 GB
1.7.8.1+  | EncryptionOnly                | AES-CBC | AES-GCM, AES-CBC | Yes       | Yes              | 5 TB
pre-1.7.8 | (Not Applicable)              | AES-CBC | AES-CBC          | Yes       | Yes              | 5 TB

New Runtime Dependency on Bouncy Castle Library

You may wonder why we do not statically include the Bouncy Castle crypto library jar as a direct dependency. First, by not having a static dependency on the Bouncy Castle Crypto APIs, we believe users can take advantage of the latest releases from Bouncy Castle in a more timely and flexible manner. This is especially relevant should there be security fixes to the library. The other reason is that only users who decide to make use of authenticated encryption would need to depend on the Bouncy Castle library. We therefore do not want to force everyone else to pull in a copy of Bouncy Castle unless they need to.

Authenticated Encryption or Not?

If the protection of S3 objects in your application requires not only confidentiality but also integrity and authenticity, and the size of each object is less than 64 GB, then CryptoMode.AuthenticatedEncryption may be just the option you have been looking for. Why 64GB? It is a limiting factor of the standard AES-GCM. More details can be found in the NIST GCM spec.

Does your application require storing S3 objects with authenticated encryption? Let us know what you think!

Release: AWS SDK for PHP – Version 2.6.2

by Michael Dowling | in PHP

We would like to announce the release of version 2.6.2 of the AWS SDK for PHP.

  • Added support for Amazon SQS message attributes.
  • Fixed Amazon S3 multi-part uploads so that manually set ContentType values are not overwritten.
  • No longer recalculating file sizes when an Amazon S3 socket timeout occurs.
  • Added better environment variable detection.

Install the SDK

IAM Roles for Amazon EC2 instances (Access Key Management for .NET Applications – Part 4)

by Milind Gokarn | in .NET

In this post, we’ll see how to use Identity and Access Management (IAM) roles for Amazon EC2 instances. With IAM roles for EC2 instances, you don’t need to manage or distribute the credentials that your application needs. Instead, credentials are automatically delivered to EC2 instances and picked up by the AWS SDK for .NET. Here are the advantages of using this approach.

  • No need to distribute and manage credentials for your application
  • Credentials are periodically auto rotated and distributed to EC2 instances
  • The credentials are transparently available to your application through the SDK

Before we go further and look at code snippets, let’s talk about IAM roles and related concepts in a little more detail. A role lets you define a set of permissions to access resources that your application needs. This is specified using an access policy. A role also contains information about who can assume the role. This is specified using a trust policy. To use roles with EC2 instances, we need an instance profile. An instance profile is a container for roles and is used to pass role information to EC2 instances when they are launched. When you launch an EC2 instance with an instance profile, your application can make requests to AWS resources using the role credentials for the role associated with the instance profile.

In the rest of this post, we will perform the steps required to use IAM roles using the AWS SDK for .NET. Please note that all of these steps can be performed using the AWS Management Console as well.

Create an IAM Role

We start by creating an IAM role. As I mentioned before, you need to provide two pieces of information here: the access policy that will contain the permissions your application needs, and the trust policy that will specify that EC2 can assume this role. The trust policy is required so that EC2 can assume the role and fetch the temporary role credentials.

This is the trust policy that allows EC2 to assume the role.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal":{"Service":["ec2.amazonaws.com"]},
    "Action": "sts:AssumeRole"
  }]
}

This is a sample access policy that gives restricted access to a bucket by allowing the ListBucket, PutObject and GetObject actions.

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListBucket"
      ],
      "Resource":"arn:aws:s3:::MyApplicationBucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::MyApplicationBucket/*"
    }
  ]
}

The following code creates a role with the given trust and access policy.

var roleName = "S3Access";
var profileName = "S3Access";
var iamClient = new AmazonIdentityManagementServiceClient();
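// trustPolicy and accessPolicy are string variables containing the JSON
// policy documents shown above.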

// Create a role with the trust policy
var role = iamClient.CreateRole(new CreateRoleRequest
{
   RoleName = roleName,
   AssumeRolePolicyDocument = trustPolicy
});

// Add the access policy to the role
iamClient.PutRolePolicy(new PutRolePolicyRequest
{
    RoleName = roleName,
    PolicyName = "S3Policy",
    PolicyDocument = accessPolicy                
});

Create an instance profile

Now we create an instance profile for the role.

// Create an instance profile
iamClient.CreateInstanceProfile(new CreateInstanceProfileRequest
{
    InstanceProfileName = profileName                
});

// Add the role to the instance profile
iamClient.AddRoleToInstanceProfile(new AddRoleToInstanceProfileRequest
{
    InstanceProfileName = profileName,
    RoleName = roleName
});

Launch EC2 instance(s) with the instance profile

We can now launch EC2 instances with the instance profile that we created. Notice that we use the Amazon.EC2.Util.ImageUtilities helper class to retrieve the image identifier.

var ec2Client = new AmazonEC2Client();
            
// Find an image using ImageUtilities helper class
var image = Amazon.EC2.Util.ImageUtilities.FindImage(
    ec2Client,
    Amazon.EC2.Util.ImageUtilities.WINDOWS_2012_BASE);

//Launch an EC2 instance with the instance profile
var instance = ec2Client.RunInstances(new RunInstancesRequest
{
    ImageId = image.ImageId,
    IamInstanceProfile = new IamInstanceProfileSpecification
    {
        Name = profileName
    },
    MinCount = 1, MaxCount = 1
});

Access AWS Resources from your application code deployed on EC2

You don’t need to make any changes to your application code to use IAM roles. Your application code should construct service clients without specifying any explicit credentials, as in the code below, and without any credentials in the application configuration file. Behind the scenes, the Amazon.Runtime.InstanceProfileAWSCredentials class fetches the credentials from the EC2 instance metadata service and automatically refreshes them when a new set of credentials is available.

// Create an S3 client with the default constructor,
// this will use the role credentials to access resources.
var s3Client = new AmazonS3Client();
var s3Objects = s3Client.ListObjects(new ListObjectsRequest 
{
    BucketName = "MyApplicationBucket" 
}).S3Objects;

In this post, we saw how IAM roles can greatly simplify and secure access key management for applications on Amazon EC2. We highly recommend that you use this approach for all applications that are run on Amazon EC2.

Release: AWS SDK for PHP – Version 2.6.1

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.1 of the AWS SDK for PHP. This release adds support for the latest features in Amazon DynamoDB, Amazon ElastiCache, and Auto Scaling; introduces support for a new INI-formatted credentials file (more information about this will be coming in a future blog post); and fixes a few issues in the Amazon S3 Stream Wrapper.

Install the SDK

Develop, Deploy, and Manage for Scale with AWS Elastic Beanstalk and AWS CloudFormation

by Jason Fulghum | in Java

Evan Brown is doing a great five part series on the AWS Application Management Blog on developing, deploying, and managing for scale with Elastic Beanstalk and CloudFormation. In each of his five blog posts, Evan breaks down a different topic and explains best practices as well as practical tips and tricks for working with applications deployed using CloudFormation and Elastic Beanstalk.

Plus, each Thursday at 9 a.m. PDT, during the five part series, Evan and the CloudFormation team host a Google Hangout to discuss the topics in the blog.

This is week three of the five part series, so head over and check out the latest blog post.

Then, this Thursday at 9 a.m. PDT, and the two following Thursdays, head over to the AWS CloudFormation Google Hangout to discuss the post and ask questions of the engineers from the AWS CloudFormation team.

Don’t miss this great opportunity to discuss developing, deploying, and managing applications on AWS with CloudFormation engineers!

Overriding Endpoints in the AWS SDK for .NET

by Jim Flanagan | in .NET

Sometimes, when sending requests using the AWS SDK for .NET, you are required to explicitly specify an endpoint URL for a service. One such scenario is when you use an older version of the SDK to send requests to a particular service and that service is introduced in a new region. To access the service in the new region without upgrading the SDK, set the ServiceURL property on the client configuration object. Here’s an example with Amazon S3:

var config = new AmazonS3Config { ServiceURL = myUrl };
var s3client = new AmazonS3Client(config);

This technique overrides the default endpoint for a single instance of the service client. It requires a code change to modify the URL for a region, and the override must be applied everywhere in the code where a service client is created.

We recently added a feature to the AWS SDK for .NET version 2 (2.0.7.0 onwards) that allows developers to specify their own mapping of service and region to endpoint URL, which can vary from environment to environment while the code stays the same. A default mapping is baked into the SDK, but it can be overridden either in the App.config or in code.

To point to the override mapping in your App.config, set the AWSEndpointDefinition appSetting:

<appSettings>
   ...
   <add key="AWSEndpointDefinition" value="c:\path\to\endpoints.xml"/>
   ...
</appSettings>

To set the override in code, you can use the AWSConfigs.EndpointDefinition property:

AWSConfigs.EndpointDefinition = @"c:\path\to\endpoints.xml";

You can find the most up-to-date version of this file in the GitHub repository for the SDK. It’s a good idea to start with this file and then make the needed modifications. It’s also important to note that you need the whole file, not just the endpoints that are different.

When new services and regions are announced, we will update this file along with the SDK.

Testing Webhooks Locally for Amazon SNS

by Jeremy Lindblom | in PHP

In a recent post, I talked about Receiving Amazon SNS Messages in PHP. I showed you how to use the SNS Message and MessageValidator classes in the AWS SDK for PHP to handle incoming SNS messages. The PHP code for the webhook is easy to write, but can be difficult to test properly, since it must be deployed to a server in order to be accessible to Amazon SNS. I’ll show you how you can actually test your code locally with the help of a few simple tools.

Testing Tools

To test the code I wrote for the blog post, I used PHP’s built-in web server (available in PHP 5.4 and later) to serve the code locally. I used another tool called ngrok to expose the locally running PHP server to the public internet. Ngrok does this by creating a tunnel to a specified port on your local machine.

You can use PHP’s built-in web server and ngrok on Windows, Linux, and Mac OS X. If you have PHP 5.4+ installed, then the built-in server is ready to use. To install ngrok, use the simple instructions on the ngrok website. I work primarily in OS X, so you may need to modify the commands I use in the rest of this post if you are using another platform.

Setting Up the PHP Code

First, you’ll need the PHP code that will handle the incoming messages. My post about receiving SNS messages provides a complete code example for doing this.

Let’s create a new folder in your home directory to use for this test. We’ll also install Composer and the AWS SDK for PHP, create a directory for the webroot, and create files for the PHP code and a log.

mkdir ~/sns-message-test && cd ~/sns-message-test
curl -sS https://getcomposer.org/installer | php
php composer.phar require aws/aws-sdk-php:~2.6.0
touch messages.log
mkdir web && touch web/index.php

Now take the PHP code from the other blog post and put it in index.php. Here is that same code, but with the require statement needed to load the SDK with our current file structure. I am also going to update the code to log the incoming messages to a file so we can easily see that the messages are being handled correctly.

<?php

require __DIR__ . '/../vendor/autoload.php';

use Aws\Sns\MessageValidator\Message;
use Aws\Sns\MessageValidator\MessageValidator;
use Guzzle\Http\Client;

// Make sure the request is POST
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    die;
}

try {
    // Create a message from the post data and validate its signature
    $message = Message::fromRawPostData();
    $validator = new MessageValidator();
    $validator->validate($message);
} catch (Exception $e) {
    // Pretend we're not here if the message is invalid
    http_response_code(404);
    die;
}

if ($message->get('Type') === 'SubscriptionConfirmation') {
    // Send a request to the SubscribeURL to complete subscription
    (new Client)->get($message->get('SubscribeURL'))->send();
}

// Log the message
$file = new SplFileObject(__DIR__ . '/../messages.log', 'a');
$file->fwrite($message->get('Type') . ': ' . $message->get('Message') . "\n");

Creating an Amazon SNS Topic

Before you can perform any tests, you must set up an Amazon SNS topic. You can do this easily in the AWS Management Console by following the Getting Started with Amazon Simple Notification Service guide. This guide also shows how to subscribe to a topic and publish a message, which you will also need to do in a moment.

Setting Up the Server

OK, we have an Amazon SNS topic ready and all of the files we need in place. Now we need to start up the server and make it accessible to Amazon SNS. To do this, create 3 separate terminal windows or tabs, which we will use for 3 separate long-running processes: the server, ngrok, and tailing the messages log.

Launching the PHP Built-in Server

In the first terminal window, use the following command to start up the PHP built-in web server to serve our little test webhook. (Note: you can use a different port number, just make sure you use the same one with ngrok.)

php -S 127.0.0.1:8000 -t web/

This will create some output that looks something like the following:

PHP 5.4.24 Development Server started at Mon Mar 31 11:02:14 2014
Listening on http://127.0.0.1:8000
Document root is /Users/your-user/sns-message-test/web
Press Ctrl-C to quit.

If you access http://127.0.0.1:8000 from your web browser, you will likely see a blank page, but that request will show up in this terminal window. Since our code is set up to respond only to POST requests, we will see the expected behavior of a 405 HTTP code in the response.

[Mon Mar 31 11:02:44 2014] 127.0.0.1:61409 [405]: /

Creating a Tunnel with ngrok

In the second terminal window, use the following command to create an ngrok tunnel to the PHP server. Use the same port as you did in the previous section.

ngrok 8000

That was easy! The output of this command will contain a publicly accessible URL that forwards to your localhost.

Tunnel Status                 online
Version                       1.6/1.5
Forwarding                    http://58565ed9.ngrok.com -> 127.0.0.1:8000
Forwarding                    https://58565ed9.ngrok.com -> 127.0.0.1:8000
Web Interface                 127.0.0.1:4040
# Conn                        1
Avg Conn Time                 36.06ms

ngrok also provides a small web app running on localhost:4040 that displays all of the incoming requests through the tunnel. It also allows you to click a button to replay a request, which is really helpful for testing and debugging your webhooks.

Tailing the Message Logs

Let’s use the third terminal window to tail the log file that our PHP code writes the incoming messages to.

tail -f messages.log

This won’t show anything yet, but once we start publishing Amazon SNS messages to our topic, they should be printed out in this window.

Testing the Incoming SNS Messages

Now that everything is running and wired up, head back to the Amazon SNS console and subscribe the URL provided by ngrok as an HTTP endpoint for your SNS topic.

If all goes well, you should see output similar to the following on each of the 3 terminal windows.

PHP Server:

[Tue Apr  1 08:51:13 2014] 127.0.0.1:50190 [200]: /

ngrok:

POST /                        200 OK

Log:

SubscriptionConfirmation: You have chosen to subscribe to the topic arn:aws:sns:us-west-2:01234567890:sdk-test. To confirm the subscription, visit the SubscribeURL included in this message.

Back in the SNS console, you should see that the subscription has been confirmed. Next, publish a message to the topic to test that normal messages are processed correctly. The output should be similar:

PHP Server:

[Tue Apr  1 10:08:14 2014] 127.0.0.1:51235 [200]: /

ngrok:

POST /                        200 OK

Log:

Notification: THIS IS MY TEST MESSAGE!

Nice work!

Cleaning Up

Now that we are done, be sure to shut down (Ctrl+C) ngrok, tail, and the local PHP server. Unsubscribe the defunct endpoint you used for this test, or just delete the SNS topic entirely if you aren’t using it for anything else.

With these tools, you can now test webhooks in your applications locally and interact with Amazon SNS more easily.