AWS Developer Blog

Amazon S3 Server-Side Encryption with Customer-Provided Keys

by Jason Fulghum | in Java

Amazon S3 recently launched a new feature that lets developers take advantage of server-side encryption, but still control their encryption keys. This new server-side encryption mode for Amazon S3 is called Server-Side Encryption with Customer-Provided Keys (SSE-C).

Using server-side encryption in Amazon S3 with your own encryption keys is easy with the AWS SDK for Java: just pass an instance of SSECustomerKey along with your requests to Amazon S3.

The SSECustomerKey class holds your encryption key material for AES-256 encryption and an optional MD5 digest used to verify the integrity of the encryption key as it travels to Amazon S3. You can specify your AES-256 encryption key as a Java SecretKey object, a byte[] of the raw key material, or a base64-encoded string. The MD5 digest is optional because the SDK will automatically generate it for you, ensuring your encryption key is transmitted to Amazon S3 without corruption.
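
For instance, the three forms map directly onto the SSECustomerKey constructor overloads (a quick sketch; the variables are illustrative placeholders):

SSECustomerKey fromSecretKey = new SSECustomerKey(mySecretKey);        // javax.crypto.SecretKey
SSECustomerKey fromRawBytes  = new SSECustomerKey(myRawKeyBytes);      // byte[] of raw key material
SSECustomerKey fromBase64    = new SSECustomerKey(myBase64EncodedKey); // base64-encoded string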

Here’s an example of using server-side encryption with a customer-provided encryption key using the AWS SDK for Java:

AmazonS3 s3 = new AmazonS3Client();
SecretKey secretKey = loadMyEncryptionKey();
SSECustomerKey sseCustomerKey = new SSECustomerKey(secretKey);

// Upload a file that will be encrypted with our key once it gets to S3
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, key, file)
        .withSSECustomerKey(sseCustomerKey);
s3.putObject(putObjectRequest);

// To download data encrypted with SSE-C, you must provide the 
// correct SSECustomerKey, otherwise the request will fail
GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, key)
        .withSSECustomerKey(sseCustomerKey);
S3Object s3Object = s3.getObject(getObjectRequest);
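
The loadMyEncryptionKey() method above is a placeholder. A minimal sketch of such a helper, assuming you simply generate a fresh 256-bit AES key with the JCE (in practice you must persist the key securely yourself, since Amazon S3 never stores it):

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hypothetical helper: generates a new 256-bit AES key.
// You are responsible for keeping this key safe; if you lose it,
// you lose access to any objects encrypted with it.
private static SecretKey loadMyEncryptionKey() throws Exception {
    KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
    keyGenerator.init(256);
    return keyGenerator.generateKey();
}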

You can use server-side encryption with customer-provided keys with the Amazon S3 operations in the AWS SDK for Java that transfer object data, including getObject, getObjectMetadata, putObject, copyObject, copyPart, initiateMultipartUpload, and uploadPart.

You can also take advantage of server-side encryption with customer-provided keys using the Amazon S3 TransferManager API. Just specify your SSECustomerKey in the same way as you do when using AmazonS3Client:

TransferManager tm = new TransferManager();

PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, key, file)
        .withSSECustomerKey(sseCustomerKey);
Upload upload = tm.upload(putObjectRequest);

// TransferManager processes transfers asynchronously
// waitForCompletion will block the current thread until the transfer finishes
upload.waitForCompletion();

GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, key)
        .withSSECustomerKey(sseCustomerKey);
Download download = tm.download(getObjectRequest, myFile);
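
Downloads are also processed asynchronously; since Download implements the same Transfer interface as Upload, you can block until it finishes in the same way:

// Block the current thread until the download finishes
download.waitForCompletion();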

Do you have data that requires being encrypted at rest? How are you planning on using server-side encryption with customer-provided keys?

Release: AWS SDK for PHP – Version 2.6.9

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.9 of the AWS SDK for PHP. This release adds support for uploading document batches and submitting search and suggestion requests to an Amazon CloudSearch domain using the new CloudSearch Domain client. It also adds support for configuring delivery notifications to the Amazon SES client, and updates the Amazon CloudFront client to work with the latest API version.

  • Added support for the CloudSearchDomain client, which allows you to search and upload documents to your CloudSearch domains.
  • Added support for delivery notifications to the Amazon SES client.
  • Updated the CloudFront client to support the 2014-05-31 API.
  • Merged PR #316 as a better solution for issue #309.

Install the SDK

Release: AWS SDK for PHP – Version 2.6.8

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.8 of the AWS SDK for PHP. This release updates the Amazon Elastic Transcoder and Amazon EMR clients to use the latest service descriptions, and fixes a few issues.

  • Added support for closed captions to the Elastic Transcoder client.
  • Added support for IAM roles to the Elastic MapReduce client.
  • Updated the S3 PostObject to ease customization.
  • Fixed an issue in some EC2 waiters by merging PR #306.
  • Fixed an issue with the DynamoDB WriteRequestBatch by merging PR #310.
  • Fixed issue #309, where the url_stat() logic in the S3 Stream Wrapper was affected by a change in the latest versions of PHP. If you are running version 5.4.29+, 5.5.13+, or 5.6.0+ of PHP, and you are using the S3 Stream Wrapper, you need to update your SDK in order to prevent runtime errors.

We also released version 2.6.7 last week, but forgot to mention it on the blog. Here are the changes from 2.6.7:

  • Added support for Amazon S3 server-side encryption using customer-provided encryption keys.
  • Updated the Amazon SNS client to support message attributes.
  • Updated the Amazon Redshift client to support new cluster parameters.
  • Updated PHPUnit dev dependency to 4.* to work around a PHP serializing bug.

Install the SDK

Guzzle 4 and the AWS SDK

by Jeremy Lindblom | in PHP

Since Guzzle 4 was released in March (and even before then), we’ve received several requests for us to update the AWS SDK for PHP to use Guzzle 4. Earlier this month, we tweeted about it too and received some pretty positive feedback about the idea. We wanted to take some time to talk about what upgrading Guzzle would mean for the SDK and solicit your feedback.

The SDK relies heavily on Guzzle

If you didn’t already know, the AWS SDK for PHP relies quite heavily on version 3 of Guzzle. The AWS service clients extend from the Guzzle service clients, and we have formatted the entire set of AWS APIs into Guzzle "service descriptions". Roughly 80 percent of what the SDK does is done with Guzzle. We say all this because we want you to understand that updating the SDK to use Guzzle 4 is potentially a big change.

What does Guzzle 4 offer?

We’ve had several requests for Guzzle 4 support, and we agree that it would be great. But what exactly does Guzzle 4 offer — besides it being the new "hotness" — that makes it worth the effort?

We could mention a few things about the code itself: it’s cleaner, it’s better designed, and it has simpler and smaller interfaces. While those are certainly good things, they’re not strong enough reasons to change the SDK. However, Guzzle 4 also includes some notable improvements and new features, including:

  • It’s up to 30 percent faster and consumes less memory than Guzzle 3 when sending requests serially.
  • It no longer requires cURL, but still uses cURL by default, if available.
  • It supports swappable HTTP adapters, which enables you to provide custom adapters. For example, this opens up the possibility for a non-blocking, asynchronous adapter using ReactPHP.
  • It has improved cURL support, including faster and easier handling of parallel requests using a rolling queue approach instead of batching.

These updates would provide great benefits to SDK users, and would allow even more flexible and efficient communications with AWS services.
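
To give a feel for the new surface area, basic Guzzle 4 usage looks roughly like this (a sketch based on the Guzzle 4 documentation, not on SDK code):

$client = new GuzzleHttp\Client();
$response = $client->get('https://api.example.com/resource');
echo $response->getStatusCode(); // e.g., 200
echo $response->getBody();       // the response body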

Guzzle 4 has already been adopted by Drupal, Laravel, Goutte, and other projects. I expect it to be adopted by even more during the rest of this year, especially as some of the supplementary Guzzle packages reach stable releases. We definitely want users of the AWS SDK for PHP to be able to use the SDK alongside these other packages without causing conflicts or bloat.

Consequences of updating to Guzzle 4

Because the AWS SDK relies so heavily on Guzzle, the changes to Guzzle will require changes to the SDK.

In Guzzle 4, many things have changed. Classes have been renamed or removed, including classes that are used by the current SDK and SDK users. A few notable examples include the removal of the Guzzle\Batch and Guzzle\Iterator namespaces, and how Guzzle\Http\EntityBody has been changed and moved to GuzzleHttp\Stream\Stream.

The event system of Guzzle 4 has also changed significantly. Guzzle has moved away from the Symfony Event Dispatcher, and is now using its own event system, which is pretty nice. This affects any event listeners and subscribers you may have written for Guzzle 3 or the SDK, because they will need a little tweaking to work in Guzzle 4.
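
For example, a listener that injects a header might change roughly like this (a hedged sketch based on the two libraries' documented APIs, not code from the SDK):

// Guzzle 3: Symfony Event Dispatcher style
$client->getEventDispatcher()->addListener('request.before_send', function ($event) {
    $event['request']->setHeader('X-Trace-Id', uniqid());
});

// Guzzle 4: the new built-in event emitter
$client->getEmitter()->on('before', function (GuzzleHttp\Event\BeforeEvent $event) {
    $event->getRequest()->setHeader('X-Trace-Id', uniqid());
});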

Another big change in Guzzle 4 is that it requires PHP 5.4 (or higher). Using Guzzle 4 would mean that the SDK would also require PHP 5.4+.

Most of the changes in Guzzle 4 wouldn’t directly affect SDK users, but there are a few, like the ones just mentioned, that might. Because of this, if the SDK adopted Guzzle 4, it would require a new major version of the SDK: a Version 3.

What are your thoughts?

We think that updating the SDK to use Guzzle 4 is the best thing for the SDK and SDK users. Now that you know the benefits and the consequences, we want to hear from you. Do you have any questions or concerns? What other feedback or ideas do you have? Please join our discussion on GitHub or leave a comment below.

Amazon S3 Requester Pays

by Manikandan Subramanian | in Java

You may have heard about the Requester Pays feature in Amazon S3 that allows bucket owners to pass the data transfer costs to users who download the data. Users can now use the AWS SDK for Java to enable/disable Requester Pays on their buckets.

To enable Requester Pays on an Amazon S3 bucket

// create a new AmazonS3 client.
AmazonS3 s3 = new AmazonS3Client();

// call enableRequesterPays method with the bucket name.
s3.enableRequesterPays(bucketName);

To disable Requester Pays on an Amazon S3 bucket

// create a new AmazonS3 client
AmazonS3 s3 = new AmazonS3Client();

// call disableRequesterPays method with the bucket name
s3.disableRequesterPays(bucketName);
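
The client also exposes a matching method for reading the bucket's current setting (a small sketch):

// returns true if Requester Pays is enabled on the bucket
boolean isEnabled = s3.isRequesterPaysEnabled(bucketName);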

In addition, the AWS SDK for Java also allows users to download data from a Requester Pays bucket. The following example shows how easy it is:

// create a new AmazonS3 client
AmazonS3 s3 = new AmazonS3Client();

// The requester pays flag must be set to true to access an Amazon S3
// bucket that has Requester Pays enabled; otherwise, Amazon S3 responds
// with an Access Denied error. Setting the flag explicitly acknowledges
// that the requester will be charged for the download.

boolean isRequesterPays = true;
S3Object object = s3.getObject(new GetObjectRequest(bucketName,key,isRequesterPays));

Are you using the AWS SDK for Java to access Amazon S3? Let us know your experience.

Secure Local Development with the ProfileCredentialsProvider

We’ve talked in the past about the importance of secure credentials management. When your application is running in production, IAM roles for Amazon EC2 are a great way to securely deliver AWS credentials to your application. However, they’re by definition available only when your application is running on EC2 instances.

If you’re a developer making changes to an application, it’s often convenient to be able to fire up a local instance of the application to see your changes in action without having to spin up a full test environment in the cloud. If your application uses IAM roles for EC2 to pick up credentials when running in the cloud, this means you’ll need an additional way of injecting credentials when running locally on a developer’s box. It’s tempting to simply hardcode a set of credentials into the application for testing purposes, but this makes it distressingly easy to accidentally check those credentials in to source control.

The AWS SDK for Java includes a number of different credential providers that you can use as alternatives to hardcoded credentials. You can easily inject credentials into your application from system properties, environment variables, properties files, and more. All of these choices allow you to keep your credentials separate from your source code and reduce the risk of accidentally checking them in.

We’ve recently added a new credentials provider that loads credentials from a credentials profile file stored in your home directory. This option is particularly exciting because other tools like the AWS CLI and the AWS Toolkit for Eclipse also support reading credentials from and writing credentials to this file. You can configure your credentials in one place, and reuse them whether you’re running one-off CLI commands to check on the state of your resources, browsing around using the Toolkit, or running a local instance of one of your applications.

The default credentials profile file is located at System.getProperty("user.home") + "/.aws/credentials". The format allows you to define multiple “profiles,” which makes it easy to maintain different sets of credentials for different projects with appropriately scoped permissions; this way you don’t have to worry about a bug in the local version of your application accidentally wiping out your production system. Here’s a simple example:

  # Credentials for App-1's production stack (allowing only read-only
  # access for debugging production issues).
  [app-1-production]
  aws_access_key_id={access key id}
  aws_secret_access_key={secret access key}
  aws_session_token={optional session token}

  # Credentials for App-1's development stack, allowing full read-write
  # access.
  [app-1-development]
  aws_access_key_id={another access key id}
  aws_secret_access_key={another secret access key}

  # Default credentials to be used if no profile is specified.
  [default]
  aws_access_key_id=...
  aws_secret_access_key=...

If you’re running a recent version of the AWS CLI, you can set up a file in the correct format by running the aws configure command; you’ll be prompted to enter a set of credentials, which will be stored in the file. Similarly, if you’re running a recent version of the AWS Toolkit for Eclipse, any credentials you configure through its Preferences page will be written into the credentials profile file.
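
The prompts look roughly like this (illustrative values, following the same placeholder style as above):

$ aws configure
AWS Access Key ID [None]: {access key id}
AWS Secret Access Key [None]: {secret access key}
Default region name [None]: us-west-2
Default output format [None]: json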

The AWS Toolkit for Eclipse Preferences Page

To use the ProfileCredentialsProvider when running local integration tests, simply add it to your credentials provider chain:

AmazonDynamoDBClient client = new AmazonDynamoDBClient(
      new AWSCredentialsProviderChain(

          // First we'll check for EC2 instance profile credentials.
          new InstanceProfileCredentialsProvider(),

          // If we're not on an EC2 instance, fall back to checking for
          // credentials in the local credentials profile file.
          new ProfileCredentialsProvider("app-1-development")));

The constructor parameter is the name of the profile to use; if you call the parameterless constructor, it will load the “default” profile. Another constructor overload allows you to override the location of the profiles file to load credentials from (or you can change this by setting the AWS_CREDENTIAL_PROFILES_FILE environment variable).
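
For instance (a brief sketch; the file path is an illustrative placeholder):

// Loads the "default" profile from the default file location
AWSCredentialsProvider defaultProvider = new ProfileCredentialsProvider();

// Loads a named profile from an alternate credentials file
AWSCredentialsProvider customProvider =
        new ProfileCredentialsProvider("/path/to/credentials", "app-1-development");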

Have you already started using the new ProfileCredentialsProvider? Let us know what you think in the comments below!

AWS at Laracon 2014

by Jeremy Lindblom | in PHP

I recently had the pleasure of attending and speaking at Laracon, a conference for users of the Laravel PHP framework.

This is the second year of Laracon. Last year, Laracon (US) was held in Washington, D.C., and this year it was held in New York City. The thing that impressed me most about this conference was how excited everyone was to be there. The Laravel community is very energetic, and it is growing. I definitely felt that energy, and I believe it helped make the event a good experience for all of the attendees.

I was honored to be able to speak to the attendees about Amazon Web Services. My talk was titled AWS for Artisans, and I focused on "The Cloud", AWS in general, and the AWS SDK for PHP. To tie everything together, I walked through the creation of a simple, but scalable, Laravel application, where pictures of funny faces are uploaded and displayed. I showed how the SDK was used and how AWS Elastic Beanstalk and other AWS services fit into the architecture.

There were already many existing AWS customers at Laracon, and it was nice to be able to talk to them, answer their questions, and hear their feedback and ideas. I also enjoyed talking to the developers that had yet to try AWS. Use the AWS credits I gave you to do something awesome! :-) Thank you to everyone that I had conversations with.

Release: AWS SDK for PHP – Version 2.6.6

by Jeremy Lindblom | in PHP

We would like to announce the release of version 2.6.6 of the AWS SDK for PHP. This release, combined with the last few releases that we forgot to blog about, contains the following changes:

  • Added support for the Desired Partition Count scaling option to the CloudSearch client. Hebrew is also now a supported language.
  • Updated the STS service description to the latest version.
  • [Docs] Updated some of the documentation about credential profiles.
  • Fixed an issue with the regular expression in the S3Client::isValidBucketName method. See #298.
  • Added cross-region support for the Amazon EC2 CopySnapshot operation.
  • Added Amazon Relational Database Service (Amazon RDS) support to the AWS OpsWorks client.
  • Added support for tagging environments to the AWS Elastic Beanstalk client.
  • Refactored the signature version 4 implementation to be able to pre-sign most operations.
  • Added support for lifecycles on versioning enabled buckets to the Amazon S3 client.
  • Fixed an Amazon S3 sync issue which resulted in unnecessary transfers when no $keyPrefix argument was utilized.
  • Corrected the CopySourceIfMatch and CopySourceIfNoneMatch parameters for Amazon S3 to not use a timestamp shape.
  • Corrected the sending of Amazon S3 PutBucketVersioning requests that utilize the MFADelete parameter.
  • Added the ability to modify Amazon SNS topic settings to the UpdateStack operation of the AWS CloudFormation client.
  • Added support for the us-west-1, ap-southeast-2, and eu-west-1 regions to the AWS CloudTrail client.
  • Removed no longer utilized AWS CloudTrail shapes from the model.

Install the SDK

Enhancements to the DynamoDB SDK

by Pavel Safronov | in .NET

The release of version 2.1.0 of the AWS SDK for .NET introduced a number of changes to the high-level Amazon DynamoDB classes. Less markup is now required to use classes with DynamoDBContext, as the SDK infers reasonable default behavior. You can customize this behavior through app.config/web.config files and at run time through the SDK. In this blog post, we discuss the impact of this change and the new ways you can customize the behavior of DynamoDBContext.

Attributes

With previous versions of the .NET SDK, classes that were used with DynamoDBContext had to have attributes on them specifying the target table, the hash/range keys, and other data. The classes looked like this:

[DynamoDBTable("Movies")]
public class Movie
{
    [DynamoDBHashKey]
    public string Title { get; set; }

    [DynamoDBRangeKey(AttributeName = "Released")]
    public DateTime ReleaseDate { get; set; }

    public List<string> Genres { get; set; }

    [DynamoDBProperty(Converter = typeof(RatingConverter))]
    public Rating Rating { get; set; }

    [DynamoDBIgnore]
    public string Comment { get; set; }

    [DynamoDBVersion]
    public int Version { get; set; }
}

As of version 2.1.0 of the SDK, some of the information that the attributes provided is now being inferred from the target table and the class. You can also provide this information in the app.config/web.config files. In the following section, we show how it’s possible to remove all markup from our Movie class, either by removing the now-optional attributes or by moving the configuration to app.config files.

First, however, let’s look at the various types of attributes that are available and what it means to remove them.

Table attribute

Removing the DynamoDBTable attribute now forces DynamoDBContext to use the class name as the target table name. So for the class SampleApp.Models.Movie, the target table would be "Movie".

Key attributes

Some attributes, such as DynamoDBHashKey, DynamoDBRangeKey, and various SecondaryIndex attributes, are now inferred from the DynamoDB table. So unless you were using those attributes to specify an alternate property name or a converter, it is now safe to omit those attributes from your class definition.

Client-side attributes

There are also attributes that are "client-side," in that no information about them is stored in DynamoDB, so DynamoDBContext can make no inferences about them. These are DynamoDBIgnore, DynamoDBVersion, DynamoDBProperty, as well as any other attributes that were used to specify an attribute name or a converter. Removing these attributes alters the behavior of your application unless you've added the corresponding configuration to your app.config/web.config file.

App.config

The new release of the SDK adds a way to configure how DynamoDBContext maps and stores your data through app.config/web.config files.

To better illustrate this new functionality, here is a modified definition of the Movie class with all DynamoDB attributes removed, and a corresponding app.config that provides functionality identical to the attributed version we started with.

public class Movie
{
    public string Title { get; set; }
    public DateTime ReleaseDate { get; set; }
    public List<string> Genres { get; set; }
    public Rating Rating { get; set; }
    public string Comment { get; set; }
    public int Version { get; set; }
}
<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  
  <aws>
    <dynamoDB>
      <dynamoDBContext>
        <mappings>
          <map type="SampleApp.Models.Movie, SampleDLL" targetTable="Movies">
            <property name="ReleaseDate" attribute="Released" />
            <property name="Rating" converter="SampleApp.Models.RatingConverter, SampleDLL" />
            <property name="Comment" ignore="true" />
            <property name="Version" version="true" />
          </map>
        </mappings>
      </dynamoDBContext>
    </dynamoDB>
  </aws>

</configuration>
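
With the mapping moved to app.config, code that uses DynamoDBContext is unchanged. For example (a hypothetical usage sketch; the key values are illustrative):

// The mapping now comes from app.config rather than attributes
var client = new AmazonDynamoDBClient();
var context = new DynamoDBContext(client);

// Load by hash key (Title) and range key (ReleaseDate), then save
var movie = context.Load<Movie>("Casablanca", new DateTime(1942, 11, 26));
context.Save(movie);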

Table aliases and prefixes

With this release, we have also added the ability to specify table aliases. You can now reconfigure the target table for a class without updating its DynamoDBTable attribute, or even for a class that is missing this attribute. This new feature is in addition to the already-existing prefix support, which allows simple separation of tables based on a common prefix.

Below is a simple .NET class named "Studio" that has no attributes. In the config, an alias maps this class to the "Studios" table, and a table name prefix of "Test-" is also configured, so the class Studio will actually be stored in the "Test-Studios" table.

// No DynamoDBTable attribute, so DynamoDBContext assumes the
// target table is "Studio"
public class Studio
{
    public string StudioName { get; set; }
    public string Address { get; set; }
    // other properties
}
<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK"/>
  </configSections>
  
  <aws>
    <dynamoDB>
      <dynamoDBContext tableNamePrefix="Test-">
        <tableAliases>
          <alias fromTable="Studio" toTable="Studios" />
        </tableAliases>
      </dynamoDBContext>
    </dynamoDB>
  </aws>

</configuration>

You can use aliases for both attributed and non-attributed classes. Note that the SDK first applies the configured aliases, then applies the prefix.

For more information on the updated configuration section, see the .NET developer guide.

AWSConfigs

All of the preceding configuration settings are also accessible through code, so you can modify the mappings, aliases, and prefixes during application run time. This is done using the Amazon.AWSConfigs.DynamoDBConfig.Context property. In the following code sample, we show how to modify the current prefix, configure a new alias, update an existing alias, and update a converter for the Movie.Rating property.

var contextConfig = Amazon.AWSConfigs.DynamoDBConfig.Context;

// set the prefix to "Prod-"
contextConfig.TableNamePrefix = "Prod-";

// add and update aliases
contextConfig.AddAlias(new TableAlias("Actor", "Actors"));
contextConfig.TableAliases["Studio"] = "NewStudiosTable";

// replace converter on "Rating" property
var typeMapping = contextConfig.TypeMappings[typeof(Movie)];
var propertyConfig = typeMapping.PropertyConfigs["Rating"];
propertyConfig.Converter = typeof(RatingConverter2);

Note: changes to these settings will take effect only for new instances of DynamoDBContext.

For more information on setting these configurations, see the .NET developer guide.

Monitoring Your Estimated Costs with Windows PowerShell

by Steve Roberts | in .NET

The documentation for Amazon CloudWatch contains a sample scenario for setting up alarms to monitor your estimated charges. Apart from a one-time operation to enable billing alerts for your account, the same capability can be set up and maintained using the AWS Tools for Windows PowerShell.

Enabling Alerts

The first step is to enable billing alerts for your account. To do this one-time operation, you need to use the AWS Billing console.

Important Note: This is a one-way step! Once you enable alerts for an account, you cannot turn them off.

  1. Once you are logged into the console, click Preferences and then select the Receive Billing Alerts check box.
  2. Click the Save preferences button and then log out of the console.

It can take around 15 minutes after enabling this option before you can view billing data and set alarms—plenty of time to read the rest of this post!

The remainder of this post assumes you are working in a PowerShell console prompt (or environment like the PowerShell ISE), have the AWSPowerShell module loaded, and your environment is configured to default to the account that you just enabled billing alerts for. If you’re not sure how to do this, check out this post on configuring accounts for PowerShell. In addition to setting the account, we’ll also need to use the US East (Virginia) region for the cmdlets we need to run, since this is where all metric data related to billing is held. We could add a -Region us-east-1 parameter to each cmdlet, but it’s simpler in this case to set a default for the current shell or script:

PS C:\> Set-DefaultAWSRegion us-east-1

Now all cmdlets that we run in the current shell or script will operate by default against this region.

Setting Up the Billing Alarm and Notification

Once we’ve enabled billing alerts, we can start to construct alarm notifications. Just as in the Amazon CloudWatch sample, we’ll create an alarm that will trigger an Amazon SNS topic to send an email notification when our total estimated charges for the period exceed $200.

We’ll first set up the email notification topic, and then use the topic as the alarm action later when we create the alarm.

Creating the Notification Topic

To create a new topic and subscribe an email endpoint to it, we can run this pipeline (indentation used for clarity):

PS C:\> ($topicARN = New-SNSTopic -Name BillingAlarmNotifications) | 
                 Connect-SNSNotification -Protocol email `
                                         -Endpoint email@address.com
pending confirmation

The output from the pipeline, pending confirmation, signals that we need to go to our email and confirm the subscription. Once we do this, our topic is all set up to send notifications to the specified email. Notice that we capture the Amazon Resource Name (ARN) of the new topic into the variable $topicARN. We’ll need this when creating the subsequent alarm.

Creating the Alarm

Now that we have the notification topic in place, we can perform the final step to create the alarm.

To do this, we’ll use the Write-CWMetricAlarm cmdlet. For readers who know the underlying Amazon CloudWatch API, this cmdlet maps to the PutMetricAlarm operation and is used to both create and update alarms. Before creating an alarm, we need to know the namespace and the name of the metric it should be associated with. We can get a list of available metrics by using the Get-CWMetrics cmdlet:

PS C:\> Get-CWMetrics

Namespace           MetricName                  Dimensions
---------           ----------                  ----------
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {ServiceName, Currency}
AWS/Billing         EstimatedCharges            {Currency}

At first glance, this looks like a set of duplicated metrics, but by examining the Dimensions for each object we see the following:

PS C:\> (Get-CWMetrics).Dimensions

Name                    Value
----                    -----
ServiceName             AmazonEC2
Currency                USD
ServiceName             AmazonSimpleDB
Currency                USD
ServiceName             AWSQueueService
Currency                USD
ServiceName             AWSDataTransfer
Currency                USD
ServiceName             AmazonSNS
Currency                USD
ServiceName             AmazonS3
Currency                USD
Currency                USD

Now we can see that what initially looked like duplicate metrics are in fact separate metrics for 6 services (in this example) plus one extra that only has a Dimension of Currency—this is the Total Estimated Charge metric we’re interested in for this post. If you wanted to set up billing alerts for, say, Amazon EC2 usage only, then you would simply use that specific dimension when creating the alarm.

Alarms need to have a name that is unique to your account. This, plus the namespace, metric name, and dimension is all we need to create the alarm for the metric, which will be measured periodically. In this example, our alarm threshold (-Threshold parameter) is $200. We want to check every six hours, which we specify using the -Period parameter (the value is in seconds, where 21600 seconds is 6 hours). We want the alarm to fire the first time that the metric breaches, so the value for our -EvaluationPeriods parameter will be 1.

Write-CWMetricAlarm -AlarmName "My Estimated Charges" `
                    -AlarmDescription "Estimated Monthly Charges" `
                    -Namespace "AWS/Billing" `
                    -MetricName EstimatedCharges `
                    -Dimensions @{ Name="Currency"; Value="USD" } `
                    -AlarmActions $topicARN `
                    -ComparisonOperator GreaterThanOrEqualToThreshold `
                    -EvaluationPeriods 1 `
                    -Period 21600 `
                    -Statistic Maximum `
                    -Threshold 200
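
To alarm on estimated Amazon EC2 charges only, rather than the total, you could use the service-specific dimensions we saw earlier (a hypothetical variant of the command above; only the name, description, and dimensions differ):

Write-CWMetricAlarm -AlarmName "My Estimated EC2 Charges" `
                    -AlarmDescription "Estimated Monthly EC2 Charges" `
                    -Namespace "AWS/Billing" `
                    -MetricName EstimatedCharges `
                    -Dimensions @{ Name="ServiceName"; Value="AmazonEC2" }, @{ Name="Currency"; Value="USD" } `
                    -AlarmActions $topicARN `
                    -ComparisonOperator GreaterThanOrEqualToThreshold `
                    -EvaluationPeriods 1 `
                    -Period 21600 `
                    -Statistic Maximum `
                    -Threshold 200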

Note that Amazon CloudWatch returns no response output from the call. If we want to look at the alarm we just created, we can use the Get-CWAlarm cmdlet:

PS C:\> Get-CWAlarm "My Estimated Charges"

AlarmName                          : My Estimated Charges
AlarmArn                           : arn:aws:cloudwatch:us-east-1:123412341234:alarm:My Estimated Charges
AlarmDescription                   : Estimated Monthly Charges
AlarmConfigurationUpdatedTimestamp : 3/27/2014 9:41:57 AM
ActionsEnabled                     : True
OKActions                          : {}
AlarmActions                       : {arn:aws:sns:us-east-1:123412341234:BillingAlarmNotifications}
InsufficientDataActions            : {}
StateValue                         : OK
StateReason                        : Threshold Crossed: 1 datapoint (1.38) was not greater than or equal to the threshold (200.0).
StateReasonData                    : {"version":"1.0","queryDate":"2014-03-27T16:41:58.550+0000","startDate":"2014-03-27T10:41:00.000+0000","statistic":"Maximum","period":21600,"recentDatapoints":[1.38],"threshold":200.0}
StateUpdatedTimestamp              : 3/27/2014 9:41:58 AM
MetricName                         : EstimatedCharges
Namespace                          : AWS/Billing
Statistic                          : Maximum
Dimensions                         : {Currency}
Period                             : 21600
Unit                               :
EvaluationPeriods                  : 1
Threshold                          : 200
ComparisonOperator                 : GreaterThanOrEqualToThreshold

All that remains is to wait for the alarm to fire (or, depending on your reasons for wanting to set up the alarm, to not fire!).