AWS Developer Blog

Monitoring Your Estimated Costs with Windows PowerShell

by Steve Roberts | in .NET

The documentation for Amazon CloudWatch contains this sample scenario for setting up alarms to monitor your estimated charges. Apart from a one-time operation to enable billing alerts for your account, the same capability can be set up and maintained using the AWS Tools for Windows PowerShell.

Enabling Alerts

The first step is to enable billing alerts for your account. To do this one-time operation, you need to use the AWS Billing console.

Important Note: This is a one-way step! Once you enable alerts for an account, you cannot turn them off.

  1. Once you are logged into the console, click Preferences and then select the Receive Billing Alerts check box.
  2. Click the Save preferences button and then log out of the console.

It can take around 15 minutes after enabling this option before you can view billing data and set alarms—plenty of time to read the rest of this post!

The remainder of this post assumes you are working in a PowerShell console (or an environment like the PowerShell ISE), have the AWSPowerShell module loaded, and that your environment is configured to default to the account for which you just enabled billing alerts. If you’re not sure how to do this, check out this post on configuring accounts for PowerShell. In addition to setting the account, we also need to use the US East (N. Virginia) region for the cmdlets we run, since this is where all billing metric data is held. We could add a -Region us-east-1 parameter to each cmdlet, but it’s simpler in this case to set a default for the current shell or script:

PS C:\> Set-DefaultAWSRegion us-east-1

Now all cmdlets that we run in the current shell or script will operate by default against this region.

Setting Up the Billing Alarm and Notification

Once we’ve enabled billing alerts, we can start to construct alarm notifications. Just as in the Amazon CloudWatch sample, we’ll create an alarm that will trigger an Amazon SNS topic to send an email notification when our total estimated charges for the period exceed $200.

We’ll first set up the email notification topic, and then use the topic as the alarm action later when we create the alarm.

Creating the Notification Topic

To create a new topic and subscribe an email endpoint to it, we can run this pipeline (indentation used for clarity):

PS C:\> ($topicARN = New-SNSTopic -Name BillingAlarmNotifications) | 
                 Connect-SNSNotification -Protocol email `
                                         -Endpoint email@address.com
pending confirmation

The output from the pipeline, pending confirmation, signals that we need to go to our email and confirm the subscription. Once we do this, our topic is all set up to send notifications to the specified email. Notice that we capture the Amazon Resource Name (ARN) of the new topic into the variable $topicARN. We’ll need this when creating the subsequent alarm.

Creating the Alarm

Now that we have the notification topic in place, we can perform the final step to create the alarm.

To do this, we’ll use the Write-CWMetricAlarm cmdlet. For readers who know the underlying Amazon CloudWatch API, this cmdlet maps to the PutMetricAlarm operation and is used to both create and update alarms. Before creating an alarm, we need to know the namespace and the name of the metric it should be associated with. We can get a list of available metrics by using the Get-CWMetrics cmdlet:

PS C:\> Get-CWMetrics

Namespace           MetricName                  Dimensions
---------           ----------                  ----------
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {Currency}

At first glance, this looks like a set of duplicated metrics, but by examining the Dimensions for each object we see the following:

PS C:\> (Get-CWMetrics).Dimensions

Name                    Value
----                    -----
ServiceName             AmazonEC2
Currency                USD
ServiceName             AmazonSimpleDB
Currency                USD
ServiceName             AWSQueueService
Currency                USD
ServiceName             AWSDataTransfer
Currency                USD
ServiceName             AmazonSNS
Currency                USD
ServiceName             AmazonS3
Currency                USD
Currency                USD

Now we can see that what initially looked like duplicate metrics are in fact separate metrics for 6 services (in this example) plus one extra that only has a Dimension of Currency—this is the Total Estimated Charge metric we’re interested in for this post. If you wanted to set up billing alerts for, say, Amazon EC2 usage only, then you would simply use that specific dimension when creating the alarm.

Alarms need to have a name that is unique within your account. This, plus the namespace, metric name, and dimension, is all we need to create the alarm for the metric, which will be evaluated periodically. In this example, our alarm threshold (-Threshold parameter) is $200. We want to check every six hours, which we specify using the -Period parameter (the value is in seconds; 21600 seconds is 6 hours). We want the alarm to fire the first time the metric breaches the threshold, so the value for our -EvaluationPeriods parameter will be 1.

Write-CWMetricAlarm -AlarmName "My Estimated Charges" `
                    -AlarmDescription "Estimated Monthly Charges" `
                    -Namespace "AWS/Billing" `
                    -MetricName EstimatedCharges `
                    -Dimensions @{ Name="Currency"; Value="USD" } `
                    -AlarmActions $topicARN `
                    -ComparisonOperator GreaterThanOrEqualToThreshold `
                    -EvaluationPeriods 1 `
                    -Period 21600 `
                    -Statistic Maximum `
                    -Threshold 200

Note that Amazon CloudWatch returns no response output from the call. If we want to look at the alarm we just created, we can use the Get-CWAlarm cmdlet:

PS C:\> Get-CWAlarm "My Estimated Charges"
AlarmName                          : My Estimated Charges
AlarmArn                           : arn:aws:cloudwatch:us-east-1:123412341234:alarm:My Estimated Charges
AlarmDescription                   : Estimated Monthly Charges
AlarmConfigurationUpdatedTimestamp : 3/27/2014 9:41:57 AM
ActionsEnabled                     : True
OKActions                          : {}
AlarmActions                       : {arn:aws:sns:us-east-1:123412341234:BillingNotification}
InsufficientDataActions            : {}
StateValue                         : OK
StateReason                        : Threshold Crossed: 1 datapoint (1.38) was not greater than or equal to the threshold (200.0).
StateReasonData                    : {"version":"1.0","queryDate":"2014-03-27T16:41:58.550+0000","startDate":"2014-03-27T10:41:00.0
                                     00+0000","statistic":"Maximum","period":21600,"recentDatapoints":[1.38],"threshold":20.0}
StateUpdatedTimestamp              : 3/27/2014 9:41:58 AM
MetricName                         : EstimatedCharges
Namespace                          : AWS/Billing
Statistic                          : Maximum
Dimensions                         : {Currency}
Period                             : 21600
Unit                               :
EvaluationPeriods                  : 1
Threshold                          : 200
ComparisonOperator                 : GreaterThanOrEqualToThreshold
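
If you instead wanted an alarm scoped to a single service, as mentioned earlier, the only change is the dimensions. Here is a sketch for Amazon EC2 charges only (the alarm name and $50 threshold are hypothetical, and it reuses the $topicARN we captured above):

Write-CWMetricAlarm -AlarmName "My Estimated EC2 Charges" `
                    -AlarmDescription "Estimated Monthly EC2 Charges" `
                    -Namespace "AWS/Billing" `
                    -MetricName EstimatedCharges `
                    -Dimensions @( @{ Name="ServiceName"; Value="AmazonEC2" },
                                   @{ Name="Currency"; Value="USD" } ) `
                    -AlarmActions $topicARN `
                    -ComparisonOperator GreaterThanOrEqualToThreshold `
                    -EvaluationPeriods 1 `
                    -Period 21600 `
                    -Statistic Maximum `
                    -Threshold 50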

All that remains is to wait for the alarm to fire (or, depending on your reasons for wanting to set up the alarm, to not fire!).
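
If you’d rather not wait, you can exercise the notification path by temporarily forcing the alarm into the ALARM state with the Set-CWAlarmState cmdlet (a sketch; CloudWatch re-evaluates the metric and moves the alarm back to its real state on the next evaluation):

PS C:\> Set-CWAlarmState -AlarmName "My Estimated Charges" `
                         -StateValue ALARM `
                         -StateReason "Testing the SNS notification"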

Using Improved Conditional Writes in DynamoDB

by David Yanacek | in Java

Last month the Amazon DynamoDB team announced a new pair of features: Improved Query Filtering and Conditional Updates.  In this post, we’ll show how to use the new and improved conditional writes feature of DynamoDB to speed up your app.

Let’s say you’re building a racing game, where two players advance in position until they reach the finish line.  To manage the state in DynamoDB, each game could be stored in its own Item in DynamoDB, in a Game table with GameId as the primary key, and each player position stored in a different attribute.  Here’s an example of what a Game item could look like:

    {
        "GameId": "abc",
        "Status": "IN_PROGRESS",
        "Player1-Position": 0,
        "Player2-Position": 0
    }

To make players move, you can use the atomic counters feature of DynamoDB in the UpdateItem API to send requests like, “increase the player position by 1, regardless of its current value”.  To prevent players from advancing before the game starts, you can use conditional writes to make the same request as before, but only “as long as the game status is IN_PROGRESS.”  Conditional writes are a way of instructing DynamoDB to perform a given write request only if certain attribute values in the item match what you expect them to be at the time of the request.
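
For illustration, here is a rough sketch of that unconditional atomic increment using the same Java API as the full example below (imports and client setup omitted; the table and attribute names match the Game item above):

    // "Increase Player1's position by 1, regardless of its current value."
    dynamodb.updateItem(new UpdateItemRequest()
        .withTableName("Game")
        .addKeyEntry("GameId", new AttributeValue("abc"))
        .addAttributeUpdatesEntry(
            "Player1-Position", new AttributeValueUpdate()
                .withValue(new AttributeValue().withN("1"))
                .withAction(AttributeAction.ADD)));

Restricting the move to games that are still in progress then just means attaching an Expected entry for the Status attribute, as the full example shows.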

But this isn’t the whole story.  How do you determine the winner of the game, and prevent players from moving once the game is over?  In other words, we need a way to atomically make it so that all players stop once one reaches the end of the race (no ties allowed!).

This is where the new improved conditional writes come in handy.  Before, the conditional writes feature supported tests for equality (attribute “x” equals “20”).  With improved conditions, DynamoDB supports tests for inequality (attribute “x” is less than “20”).  This is useful for the game application, because now the request can be, “increase the player position by 1 as long as the status of the game equals IN_PROGRESS, and the positions of player 1 and player 2 are less than 20.”  During player movement, one player will eventually reach the finish line first, and any future moves after that will be blocked by the conditional writes.  Here’s the code:


    public static void main(String[] args) {

        // To run this example, first initialize the client, and create a table
        // named 'Game' with a primary key of type hash / string called 'GameId'.
        
        AmazonDynamoDB dynamodb = new AmazonDynamoDBClient(); // initialize the client (default credential chain)
        
        try {
            // First set up the example by inserting a new item
            
            // To see different results, change either player's
            // starting positions to 20, or set player 1's location to 19.
            Integer player1Position = 15;
            Integer player2Position = 12;
            dynamodb.putItem(new PutItemRequest()
                    .withTableName("Game")
                    .addItemEntry("GameId", new AttributeValue("abc"))
                    .addItemEntry("Player1-Position",
                        new AttributeValue().withN(player1Position.toString()))
                    .addItemEntry("Player2-Position",
                        new AttributeValue().withN(player2Position.toString()))
                    .addItemEntry("Status", new AttributeValue("IN_PROGRESS")));
            
            // Now move Player1 for game "abc" by 1,
            // as long as neither player has reached "20".
            UpdateItemResult result = dynamodb.updateItem(new UpdateItemRequest()
                .withTableName("Game")
                .withReturnValues(ReturnValue.ALL_NEW)
                .addKeyEntry("GameId", new AttributeValue("abc"))
                .addAttributeUpdatesEntry(
                     "Player1-Position", new AttributeValueUpdate()
                         .withValue(new AttributeValue().withN("1"))
                         .withAction(AttributeAction.ADD))
                .addExpectedEntry(
                     "Player1-Position", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withN("20"))
                         .withComparisonOperator(ComparisonOperator.LT))
                .addExpectedEntry(
                     "Player2-Position", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withN("20"))
                         .withComparisonOperator(ComparisonOperator.LT))
                .addExpectedEntry(
                     "Status", new ExpectedAttributeValue()
                         .withValue(new AttributeValue().withS("IN_PROGRESS"))
                         .withComparisonOperator(ComparisonOperator.EQ))
     
            );
            if ("20".equals(result.getAttributes().get("Player1-Position").getN())) {
                System.out.println("Player 1 wins!");
            } else {
                System.out.println("The game is still in progress: "
                    + result.getAttributes());
            }
        } catch (ConditionalCheckFailedException e) {
            System.out.println("Failed to move player 1 because the game is over");
        }
    }

With this algorithm, player movement now takes only one write operation to DynamoDB.  What would it have taken without improved conditions?  Using only equality conditions, the app would have needed to follow the read-modify-write pattern:

  1. Read each item, making note of each player’s position, and verify that neither player already reached the end of the race.
  2. Advance the player’s position by 1, with a condition that both players are still in the positions we read in step 1.

Notice that this algorithm requires two round-trips to DynamoDB, whereas with improved conditions, it can be done in only one round-trip.  This reduces both latency and cost.
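
For comparison, a sketch of that read-modify-write approach with equality-only conditions might look like the following (same table and item as the example above; imports and exception handling omitted). If another move slips in between the read and the write, the equality check fails with a ConditionalCheckFailedException and the whole sequence has to be retried:

    static void movePlayer1WithEqualityConditions(AmazonDynamoDB dynamodb) {
        // Step 1: read the item and note both players' positions.
        Map<String, AttributeValue> item = dynamodb.getItem(new GetItemRequest()
            .withTableName("Game")
            .addKeyEntry("GameId", new AttributeValue("abc"))
            .withConsistentRead(true)).getItem();
        String p1 = item.get("Player1-Position").getN();
        String p2 = item.get("Player2-Position").getN();

        if (Integer.parseInt(p1) >= 20 || Integer.parseInt(p2) >= 20) {
            System.out.println("The game is already over");
            return;
        }

        // Step 2: advance Player1 by 1, conditioned on both positions still
        // being exactly what we read in step 1.
        dynamodb.updateItem(new UpdateItemRequest()
            .withTableName("Game")
            .addKeyEntry("GameId", new AttributeValue("abc"))
            .addAttributeUpdatesEntry(
                "Player1-Position", new AttributeValueUpdate()
                    .withValue(new AttributeValue().withN("1"))
                    .withAction(AttributeAction.ADD))
            .addExpectedEntry(
                "Player1-Position", new ExpectedAttributeValue()
                    .withValue(new AttributeValue().withN(p1)))
            .addExpectedEntry(
                "Player2-Position", new ExpectedAttributeValue()
                    .withValue(new AttributeValue().withN(p2))));
    }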

You can find more information about conditional writes in Amazon DynamoDB in the Developer Guide.

Referencing Credentials using Profiles

There are a number of ways to provide AWS credentials to your .NET applications. One approach is to embed your credentials in the appSettings section of your App.config file. While this is easy and convenient, your AWS credentials might end up getting checked into source control or published somewhere you didn’t intend. A better approach is to use profiles, which were introduced in version 2.1 of the AWS SDK for .NET. Profiles offer an easy-to-use mechanism to safely store credentials in a central location outside your application directory. After setting up your credential profiles once, you can refer to them by name in all of the applications you run on that machine. The App.config file will look similar to this example when using profiles.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

The SDK supports two different profile stores. The first is what we call the SDK store, which stores the profiles encrypted in the C:\Users\<username>\AppData\Local\AWSToolkit folder. This is the same store used by the AWS Toolkit for Visual Studio and the AWS Tools for Windows PowerShell. The second store is the credentials file under C:\Users\<username>\.aws. The credentials file is used by the other AWS SDKs and the AWS Command Line Interface. The SDK will always check the SDK store first and then fall back to the credentials file.

Setting up Profiles with Visual Studio

The Visual Studio Toolkit lists all the profiles registered in the SDK store in the AWS Explorer. To add new profiles, click the New Account Profile button.

When you create a new project in Visual Studio using one of the AWS project templates, the project wizard allows you to pick an existing profile or create a new one. The selected profile will be referenced in the App.config of the new project.

 

Setting up Profiles with PowerShell

Profiles can also be set up using the AWS Tools for Windows PowerShell.

PS C:\> Set-AWSCredentials -AccessKey 123MYACCESSKEY -SecretKey 456SECRETKEY -StoreAs development

As with profiles created in the Toolkit, these credentials will be accessible to both the SDK and the Toolkit after running this command. To use the profile in PowerShell, run the following command before using AWS cmdlets.

PS C:\> Set-AWSCredentials -ProfileName development

Setting up Profiles with the SDK

Profiles can also be managed directly from the AWS SDK for .NET using the Amazon.Util.ProfileManager class. Here is how you can register a profile using the ProfileManager.

Amazon.Util.ProfileManager.RegisterProfile(profileName, accessKey, secretKey);

You can also list the registered profiles with the ListProfileNames method and remove profiles with the UnregisterProfile method.
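
For example, a small sketch that lists the profiles currently registered in the SDK store and then removes one (the "old-project" profile name here is hypothetical):

// List all profiles registered in the SDK store.
foreach (var name in Amazon.Util.ProfileManager.ListProfileNames())
{
    Console.WriteLine(name);
}

// Remove a profile that is no longer needed.
Amazon.Util.ProfileManager.UnregisterProfile("old-project");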

Getting the SDK from NuGet

If you get the SDK from NuGet, the package’s install script will add an empty AWSProfileName tag to the App.config file if the app setting doesn’t already exist. You can use any of the already mentioned methods for registering profiles. Alternatively, you can use the PowerShell script account-management.ps1 that comes with the NuGet package and is placed in the /packages/AWSSDK-X.X.X.X/tools/ folder. This is an interactive script that lets you register, list, and unregister profiles.

Credentials File Format

The previous methods for adding profiles all put credentials in the SDK store. Because the credentials in the SDK store are encrypted, you must use one of these tools to write to it. The alternative is to use the credentials file, which is a plain-text file similar to an .ini file. Here is an example of a credentials file with two profiles.

[default]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

[development]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>

Default Profile

When you create a service client without specifying credentials or a profile name, the SDK searches for a default profile. The default profile’s name is "default", and it is searched for first in the SDK store and then in the credentials file. When the AWS Tools for Windows PowerShell were released last year, they introduced a default profile called "AWS PS Default". To give all of our tools a consistent experience, the AWS Tools for PowerShell now use "default" as the default profile name. To make sure we didn’t break any existing users, the AWS Tools for PowerShell will still try to load the old profile ("AWS PS Default") when "default" is not found, but will now save credentials to the "default" profile unless otherwise specified.

Credentials Search Path

If an application creates a service client without specifying credentials, the SDK uses the following order to find credentials.

  • Look for AWSAccessKey and AWSSecretKey in App.config.

    • Note that version 2.1 of the SDK did not break existing applications that use the AWSAccessKey and AWSSecretKey app settings; they are still supported.
  • Search the SDK Store

    • If AWSProfileName is set, the SDK looks for a profile with that name. If no AWSProfileName is specified, it looks for the profile called "default" in the SDK store.
  • Search the credentials file

    • If AWSProfileName is set, the SDK looks for a profile with that name. If no AWSProfileName is specified, it looks for the profile called "default" in the credentials file.
  • Search for Instance Profiles

    • These are credentials available on EC2 instances that were launched with an instance profile.

Setting Profile in Code

It is also possible to specify the profile to use in code, in addition to using App.config. This code shows how to create an Amazon S3 client for the development profile.

Amazon.Runtime.AWSCredentials credentials = new Amazon.Runtime.StoredProfileAWSCredentials("development");
Amazon.S3.IAmazonS3 s3Client = new AmazonS3Client(credentials, Amazon.RegionEndpoint.USWest2);

Alternative Credentials File

Both the SDK store and the credentials file are located under the current user’s home directory. If your application is running under a different user – such as Local System – then the AWSProfilesLocation app setting can be set to point to an alternative credentials file. For example, this App.config tells the SDK to look for credentials in the C:\aws_service_credentials\credentials file.

<configuration>
   <appSettings>
      <add key="AWSProfileName" value="development"/>
      <add key="AWSProfilesLocation" value="C:aws_service_credentialscredentials"/>
      <add key="AWSRegion" value="us-west-2" />
   </appSettings>
</configuration>

Downloading Objects from Amazon S3 using the AWS SDK for Ruby

by Trevor Rowe | in Ruby

The AWS SDK for Ruby provides a few methods for getting objects out of Amazon S3. This blog post focuses on using the v2 Ruby SDK (the aws-sdk-core gem) to download objects from Amazon S3.

Downloading Objects into Memory

For small objects, it can be useful to get an object and have it available in memory in your Ruby process. If you do not specify a :target for the download, the entire object is loaded into memory in a StringIO object.

s3 = Aws::S3::Client.new
resp = s3.get_object(bucket:'bucket-name', key:'object-key')

resp.body
#=> #<StringIO ...> 

resp.body.read
#=> '...'

Call #read or #string on the StringIO to get the body as a String object.

Downloading to a File or IO Object

When downloading large objects from Amazon S3, you typically want to stream the object directly to a file on disk. This avoids loading the entire object into memory. You can specify the :target for any AWS operation as an IO object.

File.open('filename', 'wb') do |file|
  resp = s3.get_object({ bucket:'bucket-name', key:'object-key' }, target: file)
end

The #get_object method still returns a response object, but the #body member of the response will be the file object given as the :target instead of a StringIO object.

You can specify the target as a String or Pathname, and the Ruby SDK will create the file for you.

resp = s3.get_object({ bucket:'bucket-name', key:'object-key' }, target: '/path/to/file')

Using Blocks

You can also use a block for downloading objects. When you pass a block to #get_object, chunks of data are yielded as they are read off the socket.

File.open('filename', 'wb') do |file|
  s3.get_object(bucket: 'bucket-name', key:'object-key') do |chunk|
    file.write(chunk)
  end
end

Please note that when using blocks to download objects, the Ruby SDK will NOT retry failed requests after the first chunk of data has been yielded. Doing so could cause file corruption on the client end by starting over mid-stream. For this reason, I recommend using one of the preceding methods for specifying the target file path or IO object.

Retries

The Ruby SDK retries failed requests up to 3 times by default. You can override the default using :retry_limit. Setting this value to 0 disables all retries.

If the Ruby SDK encounters a network error after the download has started, it attempts to retry the request. It first checks to see if the IO target responds to #truncate. If it does not, the SDK disables retries.

If you prefer to disable this default behavior, you can either use the block mode or set :retry_limit to 0 for your S3 client.
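
For example, a client with retries disabled could be constructed like this (a sketch; credentials and region are resolved from your environment as usual):

s3 = Aws::S3::Client.new(retry_limit: 0)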

Range GETs

For very large objects, consider using the :range option and downloading the object in parts. Currently there are no helper methods for this in the Ruby SDK, but if you are interested in submitting something, we accept pull requests!
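
As a rough sketch of what such a helper could do (the 5 MB part size is arbitrary, and it reuses the client and bucket/key names from the earlier examples), you can issue successive ranged GETs until the whole object has been written:

part_size = 5 * 1024 * 1024
total = s3.head_object(bucket: 'bucket-name', key: 'object-key').content_length

File.open('filename', 'wb') do |file|
  offset = 0
  while offset < total
    last = [offset + part_size, total].min - 1
    resp = s3.get_object(bucket: 'bucket-name', key: 'object-key',
                         range: "bytes=#{offset}-#{last}")
    file.write(resp.body.read)
    offset = last + 1
  end
end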

Happy downloading.

Amazon S3 Client-Side Authenticated Encryption

by Hanson Char | in Java

Encrypting data using the Amazon S3 encryption client is one way you can provide an additional layer of protection for sensitive information you store in Amazon S3. Now the Amazon S3 encryption client provides you with the ability to use authenticated encryption for your stored data via the new CryptoMode.AuthenticatedEncryption option. The Developer Preview of this client-side encryption option utilizes AES-GCM – a standard authenticated encryption algorithm recommended by NIST.

When CryptoMode.AuthenticatedEncryption is in use, an improved key wrapping algorithm will be applied to the envelope key, which is a one-time key randomly generated per S3 object. One of two key wrapping algorithms is used, depending on the encryption material you use: "AESWrap" is applied if the client-supplied encryption material contains a symmetric key; "RSA/ECB/OAEPWithSHA-256AndMGF1Padding" is used if the encryption material contains a key pair. Both key wrapping algorithms improve the protection of the envelope key by adding an integrity check rather than relying on encryption alone.

Enabling Authenticated Encryption

This new mode of authenticated encryption is disabled by default. This means the Amazon S3 encryption client will continue to function as before unless explicitly configured otherwise.

To enable the use of client-side authenticated encryption, two steps are required:

  1. Include the latest Bouncy Castle jar in the classpath; and
  2. Explicitly specify the cryptographic mode of authenticated encryption when instantiating an S3 encryption client
new AmazonS3EncryptionClient(...,
  new CryptoConfiguration(CryptoMode.AuthenticatedEncryption));

Once enabled, all new S3 objects will be encrypted using AES-GCM before being stored in S3. Otherwise, everything remains the same as described in the Getting Started guide at Client-Side Data Encryption with the AWS SDK for Java and Amazon S3. In other words, all APIs of the S3 encryption client including Range-Get and Multipart Upload will work the same way regardless of the selected cryptographic mode.
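
As a concrete illustration of step 2, here is a minimal sketch that constructs the client with an asymmetric key pair (the key-pair generation is an assumption of this sketch and is for illustration only; in practice you would load your own encryption materials, and imports and exception handling are omitted):

// Illustrative only: generate an RSA key pair to act as the client-side
// encryption materials. Real applications should load and manage their own keys.
KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
keyGen.initialize(2048);
EncryptionMaterials materials = new EncryptionMaterials(keyGen.generateKeyPair());

// Construct the S3 encryption client with authenticated encryption enabled.
AmazonS3EncryptionClient s3 = new AmazonS3EncryptionClient(
    materials,
    new CryptoConfiguration(CryptoMode.AuthenticatedEncryption));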

How CryptoMode.AuthenticatedEncryption Works

Storage

If CryptoMode.AuthenticatedEncryption is not enabled, the default behavior of the S3 encryption client will persist S3 objects using the same cryptographic algorithm as before, which is encryption-only.

However, if CryptoMode.AuthenticatedEncryption has been enabled, new S3 objects will be encrypted using the standard authenticated encryption algorithm, AES-GCM. Furthermore, the generated one-time envelope key will be protected using a new key-wrapping algorithm.

Retrieval

Existing S3 objects that have been encrypted using the default encryption-only scheme, CryptoMode.EncryptionOnly, will continue to work as before with no behavior changes regardless of whether CryptoMode.AuthenticatedEncryption is enabled or not.

However, if an S3 object that has been encrypted under CryptoMode.AuthenticatedEncryption is retrieved in its entirety, not only is the object automatically decrypted when retrieved, the integrity of the object is also verified (via AES-GCM). If for any reason the object failed the integrity check, a SecurityException would be thrown. A sample exception message:

java.lang.SecurityException: javax.crypto.BadPaddingException: mac check in GCM failed

Note, however, if only part of an object is retrieved from S3 via the Range-Get operation, then only decryption will apply and not authentication since the entire object is required for authentication.

Two Modes of Authenticated Encryption Available

There are actually two authenticated encryption modes available: CryptoMode.AuthenticatedEncryption and CryptoMode.StrictAuthenticatedEncryption.

CryptoMode.StrictAuthenticatedEncryption is a variant of CryptoMode.AuthenticatedEncryption, but it enforces a strict use of authenticated encryption. Specifically, the S3 encryption client running in CryptoMode.StrictAuthenticatedEncryption will only accept retrieval of S3 objects protected via authenticated encryption. Retrieving S3 objects stored in plaintext or encrypted using encryption-only mode will cause a SecurityException to be thrown under the strict mode. A sample exception message:

java.lang.SecurityException: S3 object [bucket: mybucket, key: mykey] not encrypted using authenticated encryption

Furthermore, attempts to perform a Range-get operation in strict authenticated encryption mode will also cause SecurityException to be thrown, since Range-get has no authentication on the data retrieved. A sample exception message:

java.lang.SecurityException: Range get is not allowed in strict crypto mode

The purpose of CryptoMode.StrictAuthenticatedEncryption is to eliminate the possibility of an attacker hypothetically forcing a downgrade to bypass authentication. In other words, running in CryptoMode.StrictAuthenticatedEncryption would provide the highest level of security but potentially at the cost of restricted operations. This strict use of authenticated encryption is meant only for highly security-sensitive applications where there is no need to retrieve S3 objects that have not been previously encrypted using authenticated encryption.

Migrating to Authenticated Encryption

It’s worth pointing out that older versions of the AWS SDK for Java are not equipped with authenticated encryption and therefore will not be able to decrypt objects encrypted with authenticated encryption. Therefore, before enabling CryptoMode.AuthenticatedEncryption, you should upgrade all instances of the AWS SDK for Java in your application to the latest version. With no configuration necessary, the latest version of Java SDK is able to retrieve and decrypt S3 objects that are originally encrypted either in encryption-only mode (AES-CBC) or authenticated encryption mode (AES-GCM). Once all instances of the SDK are upgraded, you can then safely enable CryptoMode.AuthenticatedEncryption to start writing new S3 objects using authenticated encryption. Here is a summary table.

Java SDK    CryptoMode                      Encrypt   Decrypt            Range Get   Multipart Upload   Max Size
1.7.8.1+    AuthenticatedEncryption         AES-GCM   AES-GCM, AES-CBC   Yes         Yes                ~64 GB
1.7.8.1+    StrictAuthenticatedEncryption   AES-GCM   AES-GCM            No          Yes                ~64 GB
1.7.8.1+    EncryptionOnly                  AES-CBC   AES-GCM, AES-CBC   Yes         Yes                5 TB
pre-1.7.8   (Not Applicable)                AES-CBC   AES-CBC            Yes         Yes                5 TB

New Runtime Dependency on Bouncy Castle Library

You may wonder why we do not statically include the Bouncy Castle crypto library jar as a direct dependency. First, by not having a static dependency on the Bouncy Castle Crypto APIs, we believe users can take advantage of the latest releases from Bouncy Castle in a more timely and flexible manner. This is especially relevant should there be security fixes to the library. The other reason is that only users who decide to make use of authenticated encryption would need to depend on the Bouncy Castle library. We therefore do not want to force everyone else to pull in a copy of Bouncy Castle unless they need to.

Authenticated Encryption or Not?

If the protection of S3 objects in your application requires not only confidentiality but also integrity and authenticity, and the size of each object is less than 64 GB, then CryptoMode.AuthenticatedEncryption may be just the option you have been looking for. Why 64GB? It is a limiting factor of the standard AES-GCM. More details can be found in the NIST GCM spec.

Does your application require storing S3 objects with authenticated encryption? Let us know what you think!

Release: AWS SDK for PHP – Version 2.6.2

by Michael Dowling | in PHP

We would like to announce the release of version 2.6.2 of the AWS SDK for PHP.

  • Added support for Amazon SQS message attributes.
  • Fixed Amazon S3 multi-part uploads so that manually set ContentType values are not overwritten.
  • No longer recalculating file sizes when an Amazon S3 socket timeout occurs.
  • Added better environment variable detection.

Install the SDK

IAM Roles for Amazon EC2 instances (Access Key Management for .NET Applications – Part 4)

by Milind Gokarn | in .NET

In this post, we’ll see how to use Identity and Access Management (IAM) roles for Amazon EC2 instances. With IAM roles for EC2 instances, you don’t need to manage or distribute the credentials that your application needs. Instead, credentials are automatically distributed to EC2 instances and picked up by the AWS SDK for .NET. Here are the advantages of using this approach.

  • No need to distribute and manage credentials for your application
  • Credentials are periodically auto rotated and distributed to EC2 instances
  • The credentials are transparently available to your application through the SDK

Before we go further and look at code snippets, let’s talk about IAM roles and related concepts in a little more detail. A role lets you define a set of permissions to access resources that your application needs. This is specified using an access policy. A role also contains information about who can assume the role. This is specified using a trust policy. To use roles with EC2 instances, we need an instance profile. An instance profile is a container for roles and is used to pass role information to EC2 instances when they are launched. When you launch an EC2 instance with an instance profile, your application can make requests to AWS resources using the role credentials for the role associated with the instance profile.

In the rest of this post, we will perform the steps required to use IAM roles using the AWS SDK for .NET. Please note that all of these steps can be performed using the AWS Management Console as well.

Create an IAM Role

We start by creating an IAM role. As I mentioned before, you need to provide two pieces of information here: the access policy that will contain the permissions your application needs, and the trust policy that will specify that EC2 can assume this role. The trust policy is required so that EC2 can assume the role and fetch the temporary role credentials.

This is the trust policy that allows EC2 to assume the role.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal":{"Service":["ec2.amazonaws.com"]},
    "Action": "sts:AssumeRole"
  }]
}

This is a sample access policy that gives restricted access to a bucket by allowing the ListBucket, PutObject and GetObject actions.

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListBucket"
      ],
      "Resource":"arn:aws:s3:::MyApplicationBucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::MyApplicationBucket/*"
    }
  ]
}

The following code creates a role with the given trust and access policies; the trustPolicy and accessPolicy string variables hold the JSON documents shown above.

var roleName = "S3Access";
var profileName = "S3Access";
var iamClient = new AmazonIdentityManagementServiceClient();

// Create a role with the trust policy
var role = iamClient.CreateRole(new CreateRoleRequest
{
   RoleName = roleName,
   AssumeRolePolicyDocument = trustPolicy
});

// Add the access policy to the role
iamClient.PutRolePolicy(new PutRolePolicyRequest
{
    RoleName = roleName,
    PolicyName = "S3Policy",
    PolicyDocument = accessPolicy                
});

Create an instance profile

Now we create an instance profile for the role.

// Create an instance profile
iamClient.CreateInstanceProfile(new CreateInstanceProfileRequest
{
    InstanceProfileName = profileName                
});

// Add the role to the instance profile
iamClient.AddRoleToInstanceProfile(new AddRoleToInstanceProfileRequest
{
    InstanceProfileName = profileName,
    RoleName = roleName
});

Launch EC2 instance(s) with the instance profile

We can now launch EC2 instances with the instance profile that we created. Notice that we use the Amazon.EC2.Util.ImageUtilities helper class to retrieve the image identifier.

var ec2Client = new AmazonEC2Client();
            
// Find an image using ImageUtilities helper class
var image = Amazon.EC2.Util.ImageUtilities.FindImage(
    ec2Client,
    Amazon.EC2.Util.ImageUtilities.WINDOWS_2012_BASE);

//Launch an EC2 instance with the instance profile
var instance = ec2Client.RunInstances(new RunInstancesRequest
{
    ImageId = image.ImageId,
    IamInstanceProfile = new IamInstanceProfileSpecification
    {
        Name = profileName
    },
    MinCount = 1, MaxCount = 1
});

Access AWS Resources from your application code deployed on EC2

You don’t need to make any changes to your application code to use IAM roles. Your application code should construct service clients without specifying any explicit credentials like the code below (without having any credentials in the application configuration file). Behind the scenes, the Amazon.Runtime.InstanceProfileAWSCredentials class fetches the credentials from EC2 Instance metadata service and automatically refreshes them when a new set of credentials is available.

// Create an S3 client with the default constructor,
// this will use the role credentials to access resources.
var s3Client = new AmazonS3Client();
var s3Objects = s3Client.ListObjects(new ListObjectsRequest 
{
    BucketName = "MyApplicationBucket" 
}).S3Objects;

In this post, we saw how IAM roles can greatly simplify and secure access key management for applications on Amazon EC2. We highly recommend that you use this approach for all applications that are run on Amazon EC2.

Release: AWS SDK for PHP – Version 2.6.1

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.1 of the AWS SDK for PHP. This release adds support for the latest features in Amazon DynamoDB, Amazon ElastiCache, and Auto Scaling; introduces support for a new INI-formatted credentials file (more information about this will be coming in a future blog post); and fixes a few issues in the Amazon S3 Stream Wrapper.

Install the SDK

Develop, Deploy, and Manage for Scale with AWS Elastic Beanstalk and AWS CloudFormation

by Jason Fulghum | in Java

Evan Brown is doing a great five-part series on the AWS Application Management Blog about developing, deploying, and managing for scale with Elastic Beanstalk and CloudFormation. In each of his five blog posts, Evan breaks down a different topic and explains best practices as well as practical tips and tricks for working with applications deployed using CloudFormation and Elastic Beanstalk.

Plus, each Thursday at 9 a.m. PDT during the five-part series, Evan and the CloudFormation team host a Google Hangout to discuss the topics in the blog.

This is week three of the five-part series, so head over and check out the latest blog post.

Then, this Thursday at 9 a.m. PDT, and the two following Thursdays, head over to the AWS CloudFormation Google Hangout to discuss the post and ask questions of the engineers from the AWS CloudFormation team.

Don’t miss this great opportunity to discuss developing, deploying, and managing applications on AWS with CloudFormation engineers!

Overriding Endpoints in the AWS SDK for .NET

by Jim Flanagan | in .NET

Sometimes, when sending requests using the AWS SDK for .NET, you need to explicitly specify an endpoint URL for a service. One such scenario is when a service launches in a new region and the older version of the SDK you are using doesn’t yet know about it. To access the service in the new region without upgrading the SDK, set the ServiceURL property on the client configuration object. Here’s an example with Amazon S3:

var config = new AmazonS3Config { ServiceURL = myUrl };
var s3client = new AmazonS3Client(config);

This technique overrides the default endpoint for a single instance of the service client. Changing the URL for a region requires a code change, and the override has to be applied everywhere in the code where a service client is created.

We recently added a feature to the AWS SDK for .NET version 2 (2.0.7.0 onwards) that allows developers to specify their own mapping of service and region to URL, which can vary from environment to environment while keeping the code the same. A default mapping is baked into the SDK, but it can be overridden either in App.config or in code.

To point to the override mapping in your App.config, set the AWSEndpointDefinition appSetting:

<appSettings>
   ...
   <add key="AWSEndpointDefinition" value="c:pathtoendpoints.xml"
   ...
</appSettings>

To set the override in code, you can use the AWSConfigs.EndpointDefinition property:

AWSConfigs.EndpointDefinition = @"c:\path\to\endpoints.xml";

You can find the most up-to-date version of this file in the GitHub repository for the SDK. It’s a good idea to start with this file and then make the needed modifications. It’s also important to note that you need the whole file, not just the endpoints that are different.

When new services and regions are announced, we will update this file along with the SDK.