
Internal Elastic Load Balancers in the Virtual Private Cloud

Today’s guest post comes to you courtesy of Spencer Dillard, Product Manager for AWS Elastic Load Balancing.

— Jeff;

One of the challenges we’ve heard about many times from customers is load balancing between the tiers of an application. While Elastic Load Balancing addresses many of the complexities of building a highly available application, it doesn’t help when you need to balance the load between multiple back-end instances. Until now. As of today, you can create an internal load balancer in your VPC and place your non-internet-facing instances behind it. Here’s a simple overview:

The internet-facing load balancer has public IP addresses and the usual Elastic Load Balancer DNS name. Your web servers can use private IP addresses and restrict traffic to the requests coming from the internet-facing load balancer. The web servers in turn will make requests to the internal load balancer, using private IP addresses that are resolved from the internal load balancer’s DNS name, which begins with internal-. The internal load balancer will route requests to the application servers, which also use private IP addresses and only accept requests from the internal load balancer.

With this change, all of your infrastructure can use private IP addresses and security groups, so the only part of your architecture with public IP addresses is the internet-facing load balancer. Because the DNS record is publicly resolvable, you could also use a VPN connection and address the internal load balancer from your on-premises environment through the VPN tunnel.

Getting started is easy. Using the AWS Console, simply select the checkbox to make your new load balancer an internal load balancer. Everything else stays the same.

As part of this change, we’ve also relaxed the constraints on the size of the subnet that you attach the load balancer to. You can now attach a load balancer to any of your subnets that is a /27 or larger.
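For a quick sanity check on subnet sizing, here is a short sketch using Python’s standard ipaddress module. The figure of five reserved addresses per subnet is my assumption based on the VPC documentation, not something stated above:

```python
import ipaddress

def usable_vpc_addresses(cidr, reserved=5):
    """Rough count of assignable addresses in a VPC subnet.

    AWS reserves a handful of addresses in every subnet (the network
    address, a few for internal services, and the broadcast address);
    five reserved addresses is an assumption, not a guarantee.
    """
    return ipaddress.ip_network(cidr).num_addresses - reserved

print(usable_vpc_addresses("10.0.0.0/27"))  # a /27 has 32 addresses in total
```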

Im looking forward to hearing about the new scenarios this enables for you. Let us know what you think!

— Spencer

EC2 Spot Instance Updates – Auto Scaling and CloudFormation Integration, New Sample App

I have three exciting news items for Amazon EC2 users:

  1. You can now use Auto Scaling to place bids on the EC2 Spot Market on your behalf.
  2. Similarly, you can now place Spot Market bids from within an AWS CloudFormation template.
  3. We have a new article to show you how to track Spot instance activity with notifications to an Amazon Simple Notification Service (SNS) topic.

Unless you are intimately familiar with the entire AWS product lineup, you may have found the preceding list of items just a bit mysterious. Before I get to the heart of today’s announcement, let’s review the fundamentals of each product:

You probably know all about Amazon EC2. You can launch servers on an as-needed basis and pay for only the resources that you consume. You can pay the on-demand prices, or you can bid for unused EC2 capacity on the Spot Market, taking advantage of prices that vary in response to changes in supply and demand.

AWS CloudFormation allows you to create and manage a collection of AWS resources, all specified by a single declarative template expressed in JSON format.

The Simple Notification Service lets you create any number of message topics and publish messages to them.

Together, these features should make it much easier to use Spot Instances, which in turn can help you run EC2 instances more cost-effectively.

With that out of the way, let’s dig in!

Spot + Auto Scaling
You can now set up Auto Scaling to make Spot bids on your behalf. As you may know, you must create an Auto Scaling Group and associate a launch configuration with it in order to make use of Auto Scaling. The Auto Scaling group lists the desired Availability Zones, the minimum and maximum size of the group, health checks, and other properties. The launch configuration includes a number of important parameters including the EC2 AMI to launch, the instance type to use, user data to pass to the newly launched instances, and so forth.

You can now include a bid price in your launch configuration if you want to use Spot Instances. Auto Scaling will use that price to continually place bids in an effort to keep the Auto Scaling group at the desired size. You can use this to soak up background capacity at a price point that is economically viable for your application. For example, let’s say that you can make good use of up to 10 m1.large instances. You consult the Spot Instance Pricing History in the AWS Management Console, and decide that a bid of $0.12 (twelve cents) per hour will work well for you:

Your Auto Scaling Group would have a minimum and a maximum size of 10, and the launch configuration would set the bid price at $0.12. When sufficient capacity is available at or below your bid price, your group will expand up to the maximum size, and you’ll pay the market price (which could be lower than your bid). The group will contract if there are other demands on the capacity that cause the market price to exceed your bid price. You can alter the bid price at any time by creating a new launch configuration and attaching it to the Auto Scaling Group. Of course, if you want to use On-Demand instances instead, you can simply omit the bid price from your launch configuration.
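The expand-and-contract behavior described above can be modeled in a few lines of Python. This is a toy model; the real service reacts to actual Spot market capacity and per-Availability-Zone prices, not a single comparison:

```python
def desired_group_size(market_price, bid_price, max_size):
    """Toy model: while Spot capacity is available at or below the bid,
    the group grows to its maximum; when the market price exceeds the
    bid, the Spot instances are reclaimed and the group contracts."""
    return max_size if market_price <= bid_price else 0

def hourly_cost(market_price, bid_price):
    """While your instances run, you pay the market price, which may be
    lower than your bid."""
    return min(market_price, bid_price)

assert desired_group_size(0.08, 0.12, 10) == 10   # under the bid: full size
assert desired_group_size(0.15, 0.12, 10) == 0    # outbid: group contracts
assert hourly_cost(0.08, 0.12) == 0.08            # billed at the market price
```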

For even more flexibility, you can use Auto Scaling’s scaling policies feature to change the minimum and maximum group sizes at a predetermined future time or dynamically based on your application’s requirements. You could increase your group size at times when your workload is highest or when spot prices are historically low (this is subject to change, of course).

Spot + CloudFormation
You can now create CloudFormation templates that include a bid for Spot capacity as part of an Auto Scaling group (as described above).

The template can describe the construction of an entire application stack. AWS resources in the stack will be created in dependency-based order. The spot bid will be activated after the Auto Scaling group has been created. Here’s an example taken directly from a template’s definition of an Auto Scaling group:

You can also specify the bid price as a parameter to the template:

In this case, the AWS Management Console will prompt for the price (and the other parameters specified in the template) when you use the template to create a stack.

The parameter value can be used directly in the template, or it can be used in other ways. For example, our StarCluster template has been updated to include the spot bid price as a parameter and to pass it in to the starcluster command:

"/usr/bin/starcluster -c /home/ec2-user/.starcluster/config start -b ", { "Ref" : "SpotPrice" }, " ec2-cluster\n"
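That line is a fragment of a CloudFormation Fn::Join list: literal strings interleaved with a Ref that CloudFormation substitutes at stack creation time. A tiny Python stand-in shows how the pieces resolve (this resolver is illustrative, not CloudFormation’s actual implementation):

```python
def resolve(pieces, parameters):
    # Strings pass through; {"Ref": name} is replaced by a parameter value.
    out = []
    for piece in pieces:
        if isinstance(piece, dict) and "Ref" in piece:
            out.append(str(parameters[piece["Ref"]]))
        else:
            out.append(piece)
    return "".join(out)

fragment = [
    "/usr/bin/starcluster -c /home/ec2-user/.starcluster/config start -b ",
    {"Ref": "SpotPrice"},
    " ec2-cluster\n",
]
print(resolve(fragment, {"SpotPrice": "0.12"}))
```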

In addition to the StarCluster template that I mentioned above, we are also releasing two other templates today:

  • The Bees With Machine Guns template gives you the power to create a swarm of bees (EC2 micro instances) to load test your web site.
  • The Asynchronous Processing template adjusts the number of workers (EC2 instances) that are pulling data from an SQS queue, increasing the number of workers when the queue depth rises above a certain level and reducing it when the number of empty polls on the queue starts to grow. Even though it is of modest size, this template illustrates a number of clever techniques. It installs some packages, configures a crontab entry, loads some Perl code, and uses CloudWatch alarms for scaling.
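The queue-driven scaling rule from the Asynchronous Processing template can be sketched as follows. The thresholds are invented for illustration, and the real template drives this logic through CloudWatch alarms rather than inline code:

```python
def adjust_workers(current, queue_depth, empty_polls,
                   depth_threshold=100, empty_threshold=10,
                   min_workers=1, max_workers=20):
    """Add a worker when the SQS backlog is deep; drop one when polls
    keep coming back empty. All thresholds here are made up."""
    if queue_depth > depth_threshold and current < max_workers:
        return current + 1
    if empty_polls > empty_threshold and current > min_workers:
        return current - 1
    return current

assert adjust_workers(5, queue_depth=250, empty_polls=0) == 6   # backlog: scale up
assert adjust_workers(5, queue_depth=0, empty_polls=50) == 4    # idle: scale down
assert adjust_workers(1, queue_depth=0, empty_polls=50) == 1    # respect the floor
```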

My advice? Spend some time digging in to these templates to get a better understanding of how you can use the very potent combination of Spot Instances, Auto Scaling groups, and CloudFormation to design complex, parameterized application stacks that can be instantiated transactionally (all or nothing) with just a few clicks. Print them out, draw some diagrams, and gain a better appreciation for how they work — I guarantee you that it will be time well spent, and that you will walk away with some really good ideas!

Notifications Application / Tutorial
We’ve written a new article to show you how to track Spot instance activity programmatically. Along with this article, we’re distributing a sample application in source code form. As described in the article, the application uses the EC2 APIs to track three types of items, all within a designated region:

  • The list of EC2 instances currently running in your account (Spot and On-Demand).
  • Your current Spot Instance requests.
  • Current prices for Spot Instances.

When the application detects a change in any of the items that it monitors, it uses the Simple Notification Service to send a notification. You can use this notification-based model to decouple your application’s bid generation mechanism from the actual processing logic, and you can also do a better job of dealing with processing interruptions if you are outbid and some of your Spot Instances are terminated.

The notification is sent as a simple XML document; here’s a sample:

<PostNotification xmlns="">
  <accountId>455364113843</accountId>
  <resourceId>sir-aca7a011</resourceId>
  <type>Amazon.EC2.Request.StateTransition</type>
  <code>FROM: open TO: cancelled</code>
  <message>Your Amazon EC2 Spot Request has had a state transition.</message>
</PostNotification>
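If you subscribe an application (rather than an email address) to the topic, picking the notification apart is straightforward. Here is a minimal Python sketch using the standard library, with the namespace omitted for brevity:

```python
import xml.etree.ElementTree as ET

notification = """\
<PostNotification>
  <accountId>455364113843</accountId>
  <resourceId>sir-aca7a011</resourceId>
  <type>Amazon.EC2.Request.StateTransition</type>
  <code>FROM: open TO: cancelled</code>
  <message>Your Amazon EC2 Spot Request has had a state transition.</message>
</PostNotification>"""

root = ET.fromstring(notification)
event = {child.tag: child.text for child in root}
print(event["resourceId"], "->", event["code"])
```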

The application was written in Java using the AWS SDK for Java. Because the application stores all of its configuration information and persistent data in Amazon SimpleDB, you can make configuration changes (e.g. updating notification thresholds) by storing new values in the appropriate items in the application’s SimpleDB domain.

Here’s Dave
I interviewed Dave Ward of the EC2 Spot Instance team for The AWS Report. Here’s what Dave had to say:

One small correction — It turns out that the Chocolate + Peanut Butter analogy that I used is out of date. All of the cool kids on the EC2 team now use the term Crazy Delicious to refer to the unholy combination of Mr. Pibb and Red Vines.

Talk to Us
We would love to see what kinds of CloudFormation templates you come up with for use with Spot Instances. Please feel free to post them in the CloudFormation forum or leave a note on this post. Also, if you have thoughts on the features you want next on Spot, please let us know at or via a note below.

— Jeff;


Behind the Scenes of the Amazon Appstore Test Drive

The Amazon Appstore for Android has a really cool Test Drive feature. It allows you to instantly try select applications through your computer browser or Android phone before you elect to install them.

There’s some interesting technology behind the Test Drive, and I’d like to tell you a little bit more about it. Let’s start with this diagram:

The app product page in the Amazon Appstore hosts a Player application. The Player application connects to an Amazon EC2 instance which runs multiple independent copies of an Android emulator. The emulator hosts and runs the Android application in a protected environment. The Player application has two primary responsibilities. First, it forwards player input to the Android application within the emulator. Second, it plays the audio and video streams produced by the Android application.

The emulator is supported in multiple AWS Regions. In order to deliver the best possible experience, the Player application is routed to the optimal AWS Region. Each EC2 instance runs multiple emulators, which allows multiple users to test-drive different applications simultaneously. My diagram shows four, but this is just for illustrative purposes.

Since the Test Drives are hosted on Amazon EC2, the Amazon Appstore team can easily add additional capacity as needed, and they can add it where it makes the most sense with respect to the incoming traffic. You can easily imagine a traffic surge that moves westward from Region to Region as the daily news cycle makes people aware of the new Test Drive feature at the beginning of their day. This ability to add, shift, and remove capacity as needed is an essential part of every AWS service, and one that many developers take advantage of on a daily basis.

I spent some time talking to the developers behind the Test Drive feature earlier this week. They told me that there are currently a couple of limitations which prevent it from being enabled for every application in the Amazon Appstore. For example, Android applications that attempt to access non-existent hardware such as a phone’s camera would currently fail if they were enabled. The team is working to enable additional applications; check in at the Amazon Appstore Developer Blog to learn more.

If you are planning to build an Android application, check out the AWS SDK for Android. If you’ve already built an Android application and you’d like to submit your application to our store, check out the Amazon Appstore Developer Program.

— Jeff;

Lots of SAP News to Start the Week

Last week one of my colleagues stopped me in the hall and said “Jeff, you have to tell our customers about all of the great work that we are doing with SAP.” I asked him for some details and he was happy to oblige.

SAP Business All-in-One is an important piece of enterprise software, a package that is mission critical for many companies. It includes Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Supplier Relationship Management (SRM), and Business Intelligence (BI) functions.

The first big piece of news is that we are expanding the range of SAP certified solutions on AWS. This includes:

  • SAP Business All-in-One on Linux.
  • SAP Business All-in-One on Microsoft Windows.
  • Expanded certification for SAP Rapid Deployment solutions on Windows Server 2008 R2.
  • Expanded certification for SAP Business Objects on Windows Server 2008 R2.

Second, AWS partner VMS (a German management consultancy staffed by a number of ex-SAP executives) has published a new SAP TCO analysis. Their research shows that running SAP on AWS can result in infrastructure cost savings of up to 69% when compared to on-premises or colo-based hosting. You can read the executive summary to learn more.

VMS computed a CWI (Cloud Worthiness Index) value of 59 for SAP running on AWS. The CWI was designed to quantify the economic value of the cloud, and is based on VMS’s measurements of over 2,600 SAP systems. It accounts for TCO and best practices, both with and without the cloud. You can read more about the CWI here.


Third, we have announced a number of other SAP offerings on AWS:

  • You can now process Big Data on AWS using SAP’s new HANA in-memory database. SAP has published a comprehensive getting started guide and they are also offering a 30-day free trial for testing and evaluation.
  • SAP Afaria makes it easy to build and deploy mobile applications that connect mobile workers to business data. Afaria handles a number of important aspects of this including password and certificate management, an application portal, and a management console. You can launch Afaria from the AWS Marketplace (register for the 14-day free trial if you don’t have a license).

You can find case studies and technical papers on our SAP microsite.

— Jeff;

VM Export Service For Amazon EC2

The AWS VM Import service gives you the ability to import virtual machines in a variety of formats into Amazon EC2, allowing you to easily migrate from your on-premises virtualization infrastructure to the AWS Cloud. Today we are adding the next element to this service. You now have the ability to export previously imported EC2 instances back to your on-premises environment.

You can initiate and manage the export with the latest version of the EC2 command line (API) tools. Download and install the tools, and then export the instance of your choice like this:

ec2-create-instance-export-task -e vmware -b NAME-OF-S3-BUCKET INSTANCE-ID

Note that you need to specify the Instance ID and the name of an S3 bucket to store the exported virtual machine image.

You can monitor the export process using ec2-describe-export-tasks and you can cancel unfinished tasks using ec2-cancel-export-task.

Once the export task has completed you need only download the exported image to your local environment.

The service can export Windows Server 2003 (R2) and Windows Server 2008 EC2 instances to VMware ESX-compatible VMDK, Microsoft Hyper-V VHD or Citrix Xen VHD images. We plan to support additional operating systems, image formats and virtualization platforms in the future.

Let us know what you think, and what other features, platforms and operating systems you would like us to support.

— Jeff;

Elastic Load Balancer – Console Updates and IPv6 Support for 2 Additional Regions

You can now manage the listeners, SSL certificates, and SSL ciphers for an existing Elastic Load Balancer from within the AWS Management Console. This enhancement makes it even easier to get started with Elastic Load Balancing and simpler to maintain a highly available application using Elastic Load Balancing. While this functionality has been available via the API and command line tools, many customers told us that it was critical to be able to use the AWS Console to manage these settings on an existing load balancer.

With this update, you can add a new listener with a front-end protocol/port and back-end protocol/port:

If the listener uses encryption (HTTPS or SSL listeners), then you can create or select the SSL certificate:

In addition to selecting or creating the certificate, you can now update the SSL protocols and ciphers presented to clients:

We have also expanded IPv6 support for Elastic Load Balancing to include the US West (Northern California) and US West (Oregon) regions.

— Jeff;


AWS Elastic Beanstalk Now Available in Europe

Today we are expanding the availability of AWS Elastic Beanstalk to the EU (Ireland) region. This comes hot on the heels of our recent announcement of .NET support and our launch in Japan, and gives you the ability to choose any one of three regions for deployment (see the AWS Global Infrastructure map for more information).

With Elastic Beanstalk, you retain full control over the resources running your application and you can easily manage and adjust settings to meet your needs. Because Elastic Beanstalk leverages services like Amazon EC2 and Amazon S3, you can run your application on the same highly durable and highly available infrastructure.

Elastic Beanstalk automatically scales your application up and down based on default Auto Scaling settings. You can easily adjust Auto Scaling settings based on your specific application’s needs.

You have your choice of three separate development languages and tools when you use Elastic Beanstalk:

To get started with Elastic Beanstalk, visit the AWS Elastic Beanstalk Developer Guide.

I should mention that we are looking for highly motivated software developers and product managers who want to work on leading edge, highly scaled services such as AWS CloudFormation and AWS Elastic Beanstalk. If you are interested, send your resume to aws-cloudformation-jobs at or aws-elasticbeanstalk-jobs at . Come and help us make history!

— Jeff;

PS – I recently interviewed Saad Ladki of the Elastic Beanstalk team on The AWS Report. You can watch the video to learn more about Elastic Beanstalk and how our customers are putting it to use.

Amazon CloudFront – Support for Dynamic Content

Amazon CloudFront’s network of edge locations (currently 30, with more in the works) gives you the ability to distribute static and streaming content to your users at high speed with low latency.

Today we are introducing a set of features that, taken together, allow you to use CloudFront to serve dynamic, personalized content more quickly.

What is Dynamic Personalized Content?
As you know, content on the web is identified by a URL, or Uniform Resource Locator. A basic URL of this kind always identifies a unique piece of content.

A URL can also contain a query string. This takes the form of a question mark ("?") followed by additional information that the server can use to personalize the request. Suppose that we have a server that can return information about a particular user by invoking a PHP script that accepts a user name as a query string argument.

Up until now, CloudFront did not use the query string as part of the key that it uses to identify the data that it stores in its edge locations.

We’re changing that today, and you can now use CloudFront to speed access to your dynamic data at our current low rates, making your applications faster and more responsive, regardless of where your users are located.

With this change (and the others that I’ll tell you about in a minute), Amazon CloudFront will become an even better component of your global applications. We’ve put together a long list of optimizations that will each increase the performance of your application on their own, but will work even better when you use them in conjunction with other AWS services such as Route 53, Amazon S3, and Amazon EC2.

Tell Me More
Ok, so here’s what we’ve done:

Persistent TCP Connections – Establishing a TCP connection takes some time because each new connection requires a three-way handshake between the server and the client. Amazon CloudFront makes use of persistent connections to each origin for dynamic content. This obviates the connection setup time that would otherwise slow down each request. Reusing these “long-haul” connections back to the server can eliminate hundreds of milliseconds of connection setup time. The connection from the client to the CloudFront edge location is also kept open whenever possible.

Support for Multiple Origins – You can now reference multiple origins (sources of content) from a single CloudFront distribution. This means that you could, for example, serve images from Amazon S3, dynamic content from EC2, and other content from third-party sites, all from a single domain name. Being able to serve your entire site from a single domain will simplify implementation, allow the use of more relative URLs within the application, and can even get you past some cross-site scripting limitations.

Support for Query Strings – CloudFront now uses the query string as part of its cache key. This optional feature gives you the ability to cache content at the edge that is specific to a particular user, city (e.g. weather or traffic), and so forth. You can enable query string support for your entire website or for selected portions, as needed.
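Conceptually, enabling this feature means the query string becomes part of the cache key, so personalized variants of the same path are cached separately at the edge. A toy sketch (example.com is a stand-in domain, and whether CloudFront normalizes parameter order is not something I am asserting here):

```python
from urllib.parse import urlsplit, parse_qsl

def cache_key(url, forward_query_strings=True):
    """Simplified edge cache key: host and path, plus the (sorted)
    query parameters when query string forwarding is enabled."""
    parts = urlsplit(url)
    key = (parts.netloc, parts.path)
    if forward_query_strings:
        key += (tuple(sorted(parse_qsl(parts.query))),)
    return key

jeff = cache_key("http://example.com/user.php?name=jeff")
spencer = cache_key("http://example.com/user.php?name=spencer")
assert jeff != spencer   # distinct cached objects with the new behavior
assert cache_key("http://example.com/user.php?name=jeff", False) == \
       cache_key("http://example.com/user.php?name=spencer", False)  # old behavior
```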

Variable Time-To-Live (TTL) – In many cases, dynamic content is either not cacheable or cacheable for a very short period of time, perhaps just a few seconds. In the past, CloudFront’s minimum TTL was 60 minutes since all content was considered static. The new minimum TTL value is 0 seconds. If you set the TTL for a particular origin to 0, CloudFront will still cache the content from that origin. It will then make a GET request with an If-Modified-Since header, thereby giving the origin a chance to signal that CloudFront can continue to use the cached content if it hasn’t changed at the origin.
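Here is a small Python model of the TTL-0 behavior: the object stays in the cache, but every request revalidates against the origin with If-Modified-Since. This is a sketch of the idea, not CloudFront internals:

```python
class Edge:
    """Toy edge cache with a TTL of 0: the object stays cached, but
    every request triggers a conditional GET back to the origin."""
    def __init__(self, origin):
        self.origin, self.body, self.modified = origin, None, None

    def get(self):
        status, body, modified = self.origin(if_modified_since=self.modified)
        if status == 304:                 # unchanged: serve the cached copy
            return self.body
        self.body, self.modified = body, modified
        return body

# A stand-in origin server whose content we can change mid-run.
state = {"body": "v1", "modified": 1}
def origin(if_modified_since=None):
    if if_modified_since == state["modified"]:
        return 304, None, state["modified"]
    return 200, state["body"], state["modified"]

edge = Edge(origin)
served = [edge.get(), edge.get()]     # full fetch, then a 304 revalidation
state.update(body="v2", modified=2)   # the origin's content changes
served.append(edge.get())             # revalidation now returns the new copy
print(served)  # -> ['v1', 'v1', 'v2']
```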

Large TCP Window – We increased the initial size of CloudFront’s TCP window to 10 back in February, but we didn’t say anything at the time. This enhancement allows more data to be “in flight” across the wire at a given time, without the usual waiting time as the window grows from the older value of 2.
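The effect is easy to quantify. With a typical 1,460-byte segment (an assumed value), the first round trip can now carry five times as much data:

```python
MSS = 1460  # bytes per TCP segment; a typical value, assumed here

def first_rtt_bytes(initcwnd, mss=MSS):
    # Data the sender may have in flight before the first ACK arrives.
    return initcwnd * mss

print(first_rtt_bytes(2))   # old initial window
print(first_rtt_bytes(10))  # new initial window
```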

API and Management Console Support – All of the features listed above are accessible from the CloudFront APIs and the CloudFront tab of the AWS Management Console. You can now use URL patterns to exercise fine-grained control over the caching and delivery rules for different parts of your site.

Of course, all of CloudFront’s existing static content delivery features will continue to work as expected: GET and HEAD requests, the default root object, invalidation, private content, access logs, IAM integration, and delivery of objects compressed by the origin.

Working Together
Let’s take a look at the ways that various AWS services work together to make delivery of static and dynamic content as fast, reliable, and efficient as possible (click on the diagram at right for an even better illustration):

  • From Application / Client to CloudFront – CloudFront’s request routing technology ensures that each client is connected to the nearest edge location, as determined by latency measurements that CloudFront continuously takes from internet users around the world. Route 53 can optionally be used as a DNS service to create a CNAME from your custom domain name to your CloudFront distribution. Persistent connections expedite data transfer.
  • Within the CloudFront Edge Locations – Multiple levels of caching at each edge location speed access to the most frequently viewed content and reduce the need to go to your origin servers for cacheable content.
  • From Edge Location to Origin – The nature of dynamic content requires repeated back and forth calls to the origin server. CloudFront edge locations collapse multiple concurrent requests for the same object into a single request. They also maintain persistent connections to the origins (with the large window size). Connections to other parts of AWS are made over high-quality networks that are monitored by Amazon for both availability and performance. This monitoring has the beneficial side effect of keeping error rates low and window sizes high.

Cache Behaviors
In order to give you full control over query string support, TTL values, and origins you can now associate a set of Cache Behaviors with each of your CloudFront distributions. Each behavior includes the following elements:

  • Path Pattern – A pattern (e.g. “*.jpg”) that identifies the content subject to this behavior.
  • Origin Identifier – The identifier for the origin where CloudFront should forward user requests that match this path pattern.
  • Query String – A flag to enable support for query string processing for URLs that match the path pattern.
  • Trusted Signers – Information to enable other AWS accounts to create signed URLs for this URL path pattern.
  • Protocol Policy – Either allow-all or https-only, also applied only to this path pattern.
  • MinTTL – The minimum time-to-live for content subject to this behavior.
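To make the path-pattern idea concrete, here is a sketch of behavior selection using shell-style matching. The behaviors and their settings are hypothetical, and CloudFront’s actual matching rules may differ in details such as ordering and defaults:

```python
from fnmatch import fnmatch

# Hypothetical cache behaviors, checked in order; the element names
# mirror the list above, but the values are invented for illustration.
behaviors = [
    {"path_pattern": "*.jpg",  "origin": "s3-images",  "query_string": False, "min_ttl": 86400},
    {"path_pattern": "/app/*", "origin": "ec2-app",    "query_string": True,  "min_ttl": 0},
    {"path_pattern": "*",      "origin": "s3-default", "query_string": False, "min_ttl": 3600},
]

def pick_behavior(path):
    for b in behaviors:
        if fnmatch(path, b["path_pattern"]):
            return b

assert pick_behavior("/photos/cat.jpg")["origin"] == "s3-images"
assert pick_behavior("/app/profile")["origin"] == "ec2-app"
assert pick_behavior("/index.html")["origin"] == "s3-default"
```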

Tool Support
Andy from CloudBerry Lab sent me a note to let me know that they have added dynamic content support to the newest free version of the CloudBerry Explorer for Amazon S3. In Andy’s words:

I’d like to let you know that CloudBerry Explorer is ready to support new CloudFront features by the time of release.  We have added the ability to manage multiple origins for a distribution, configure cache behavior for each origin based on URL path patterns and configure CloudFront to include query string parameters.

You can read more about this in their new blog post, How to configure CloudFront Dynamic Content with CloudBerry S3 Explorer.

Andy also sent some screen shots to show us how it works. The first step is to specify the Origins and CNAMEs associated with the distribution:

The next step is to specify the Path Patterns:

With the Origins and Path Patterns established, the final step is to configure the Path Patterns:

Update: Tej from Bucket Explorer wrote in to tell me that they are now supporting this feature:

Hi, I am one of the developers of Bucket Explorer. I am excited to announce that the new version of Bucket Explorer supports the CloudFront Dynamic Content feature. Try its full-featured 30-day trial version. Dynamic Content (Steps and Images).

And Here You Go
Together with CloudFront’s cost-effectiveness (no minimum commits or long-term contracts), these features add up to a content distribution system that is fast, powerful, and easy to use.

So, what do you think? What kinds of applications can you build with these powerful new features?

— Jeff;

PS – Read more about this new feature in Werner’s new post: Dynamic Content Support in Amazon CloudFront.

Monitor Estimated Charges Using Billing Alerts

Because the AWS Cloud operates on a pay-as-you-go model, your monthly bill will reflect your actual usage. In situations where your overall consumption can vary from hour to hour, it is always a good idea to log in to the AWS portal and check your account activity on a regular basis. We want to make this process easier and simpler because we know that you have more important things to do.

To this end, you can now monitor your estimated AWS charges with our new billing alerts, which use Amazon CloudWatch metrics and alarms.

What’s Up?
We regularly estimate the total monthly charge for each AWS service that you use. When you enable monitoring for your account, we begin storing the estimates as CloudWatch metrics, where they’ll remain available for the usual 14-day period. The following variants on the billing metrics are stored in CloudWatch:

  • Estimated Charges: Total
  • Estimated Charges: By Service
  • Estimated Charges: By Linked Account (if you are using Consolidated Billing)
  • Estimated Charges: By Linked Account and Service (if you are using Consolidated Billing)

You can use this data to receive billing alerts (which are simply Amazon SNS notifications triggered by CloudWatch alarms) at the email address of your choice. Since the notifications use SNS, you can also route them to your own applications for further processing.

It is important to note that these are estimates, not predictions. The estimate approximates the cost of your AWS usage to date within the current billing cycle and will increase as you continue to consume resources. It includes usage charges for things like Amazon EC2 instance-hours and recurring fees for things like AWS Premium Support. It does not take trends or potential changes in your AWS usage pattern into account.

So, what can you do with this? You can start by using the billing alerts to let you know when your AWS bill will be higher than expected. For example, you can set up an alert to make sure that your AWS usage remains within the Free Usage Tier or to find out when you are approaching a budget limit. This is a very obvious and straightforward use case, and I’m sure it will be the most common way to use this feature at first. However, I’m confident that our community will come up with some more creative and more dynamic applications.
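At its simplest, a billing alert is a threshold on a monotonically growing estimate. Here is a sketch in Python; the daily figures and the $100 threshold are made up:

```python
def should_alert(estimate, threshold, already_fired):
    # CloudWatch-alarm style: fire once when the estimate crosses the
    # threshold, then stay quiet for the rest of the billing cycle.
    return estimate >= threshold and not already_fired

daily_estimates = [0.0, 12.50, 47.80, 101.30, 140.00]  # invented values
fired, alerts = False, []
for estimate in daily_estimates:
    if should_alert(estimate, 100.0, fired):
        fired = True
        alerts.append(estimate)
print(alerts)  # -> [101.3]
```

Note that the alert fires once at the crossing point rather than on every subsequent day, which mirrors how CloudWatch alarms transition between states instead of re-firing continuously.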

Here are some ideas to get you started:

  • Relate the billing metrics to business metrics such as customer count, customer acquisition cost, or advertising spending (all of which you could also store in CloudWatch, as custom metrics) and use them to track the relationship between customer activity and resource consumption. You could (and probably should) know exactly how much you are spending on cloud resources per customer per month.
  • Update your alerts dynamically when you change configurations to add or remove cloud resources. You can use the alerts to make sure that a regression or a new feature hasn’t adversely affected your operational costs.
  • Establish and monitor ratios between service costs. You can establish a baseline set of costs, and set alarms on the total charges and on the individual services. Perhaps you know that your processing (EC2) cost is generally 1.5x your database (RDS) cost, which in turn is roughly equal to your storage (S3) cost. Once you have established the baselines, you can easily detect changes that could indicate a change in the way that your system is being used (perhaps your newer users are storing, on average, more data than the original ones).
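The ratio idea in the last bullet might look like this in practice; the baseline ratios, tolerance, and dollar figures are all invented:

```python
def ratio_drift(costs, baseline, tolerance=0.25):
    """Compare current cost ratios against a baseline and return the
    pairs that have drifted by more than the tolerance."""
    drifted = []
    for (a, b), expected in baseline.items():
        actual = costs[a] / costs[b]
        if abs(actual - expected) / expected > tolerance:
            drifted.append((a, b, round(actual, 2)))
    return drifted

baseline = {("ec2", "rds"): 1.5, ("rds", "s3"): 1.0}
# Storage cost has crept up relative to the database bill:
costs = {"ec2": 300.0, "rds": 200.0, "s3": 400.0}
print(ratio_drift(costs, baseline))  # -> [('rds', 's3', 0.5)]
```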

Enabling and Setting a Billing Alert
To get started, visit your AWS Account Activity page and enable monitoring of your AWS charges. Once you’ve done that, you can set your first billing alert on your total AWS charges. Minutes later (as soon as the data starts to flow in to CloudWatch) you’ll be able to set alerts for charges related to any of the AWS products that you use.

We’ve streamlined the process to make setting up billing alerts as easy and quick as possible. You don’t need to be familiar with CloudWatch alarms; just fill out this simple form, which you can access from the Account Activity page:


You’ll receive a subscription notification email from Amazon SNS; be sure to confirm it by clicking the included link to make sure you receive your alerts. You can then access your alarms from the Account Activity page or the CloudWatch dashboard in the AWS Management Console.

Going Further
If you have used CloudWatch before, you are probably already thinking about some even more advanced ways to use this new information. Here are a few ideas to get you started:

  • Publish the alerts to an SNS topic, and use them to recalculate your business metrics, possibly altering your Auto Scaling parameters as a result. You’d probably use the CloudWatch APIs to retrieve the billing estimates and to set new alarms.
  • Use two separate AWS accounts to run two separate versions of your application, with dynamic A/B testing based on cost and ROI.

I’m sure that your ideas are even better than mine. Feel free to post them, or (better yet), implement them!

— Jeff;