Category: Amazon EC2

New IDC White Paper – Business Value of Amazon Web Services Accelerates Over Time

Update (September 23, 2015): The 2012-era report mentioned in the blog post has been superseded by a newer one published in 2015. The post has been updated accordingly.

Analysts Randy Perry and Stephen Hendrick of IDC have just published a commissioned white paper titled The Business Value of Amazon Web Services Accelerates Over Time. You can download the paper now by clicking on the image at right.

Earlier this year, IDC interviewed 11 organizations that use AWS in an effort to understand the long-term economic implications of moving their workloads to the cloud. As part of the study they also looked for changes in developer productivity, business agility, and the ability to deliver new applications that could be attributed to AWS. The AWS customers that they talked to included Samsung, BankInter, Fox, Netflix, Tomlinson Real Estate Group, United States Tennis Association, and Cycle Computing.

The paper contains a complete recitation of their findings. To summarize:

  • The five-year TCO of developing, deploying, and managing critical applications on AWS represents a 70% savings compared to deploying the same resources on-premises or in hosted environments.
  • The average five-year ROI from using AWS is 626%. Interestingly enough, the return grows (when measured in dollars of benefit for each dollar invested) over time. After 36 months, the organizations interviewed were realizing $3.50 in benefits for each $1 invested in AWS. After 60 months, the benefit grew to $8.40 for every $1 invested.
  • Over a five year period, the companies saw cumulative savings that averaged over $2.5 million per application. This included savings in development and deployment costs (reduced by 80%), application management costs (reduced by 52%), and infrastructure support costs (reduced by 56%). Again on average, these organizations were able to replace $1.6 million in infrastructure with $302,000 in AWS costs.

Our customers ran (and measured) both steady-state and variable-state workloads. They ranked these workloads as very critical (4.5 out of 5). In addition to cost savings, they were able to increase their business agility and bring their applications to market far more quickly.

Enjoy the paper, and leave a comment if you like it!


AWS Management Console Improvements (EC2 Tab)

We recently made some improvements to the EC2 tab of the AWS Management Console. It is now easier to access the AWS Marketplace and to configure attached storage (EBS volumes and ephemeral storage) for EC2 instances.

Marketplace Access
This one is really simple, but definitely worth covering. You can now access the AWS Marketplace from the Launch Instances Wizard:

After you enter your search terms and click the Go button, the Marketplace results page will open in a new tab. Here’s what happens when I search for wordpress:

Storage Configuration
You can now control the storage configuration of each of your EC2 instances at launch time. This new feature is available in the Console’s Classic Wizard:

There’s a whole lot of power hiding behind that seemingly innocuous Edit button! You can edit the size of the root EBS volume for the instance:

You can create EBS volumes (empty and of any desired size, or from a snapshot) and you can attach them to the device of your choice:

You can also attach the instance storage volumes to the device of your choice:

These new features are available now and you can use them today!

– Jeff;

Amazon CloudWatch Monitoring Scripts for Microsoft Windows

Update (January 6, 2016) – The Windows scripts described in this blog post have been deprecated and are no longer available.

For updated information on how to perform the same tasks in a more modern fashion, please take a look at Sending Performance Counters to CloudWatch and Logs to CloudWatch Logs, Configuring a Windows Instance Using the EC2Config Service, and Monitoring Memory and Disk Statistics for Amazon EC2 Linux Instances.

A number of AWS services collect and then report various metrics to Amazon CloudWatch. The metrics are stored for two weeks and can be viewed in the AWS Management Console. They can also be used to drive alarms and notifications.

Applications can use CloudWatch’s custom metrics facility to store any desired metrics. These metrics are also stored for two weeks and can be used as described above.

Each Amazon EC2 instance reports a number of metrics to CloudWatch. These metrics are collected and reported by the hypervisor, and as such reflect only the data that the hypervisor can see — CPU load, network traffic, and so forth. In order to report on items that are measured by the guest operating system (Linux or Windows) you need to run a monitoring script on the actual system.

Today we are introducing a set of monitoring scripts for EC2 instances running any supported version of Microsoft Windows Server. The scripts are implemented in Windows PowerShell and are provided in sample form so that you can examine and customize them as needed.

Four scripts are available (download them or read more about the CloudWatch Monitoring Scripts for Windows):

  • mon-put-metrics-mem.ps1 collects metrics related to system memory usage and sends them to CloudWatch.
  • mon-put-metrics-disk.ps1 collects metrics related to disk usage and sends them to CloudWatch.
  • mon-put-metrics-perfmon.ps1 collects metrics from PerfMon counters and sends them to CloudWatch.
  • mon-get-instance-stats.ps1 queries CloudWatch and displays the most recent utilization statistics for the instance it was run on.

You will need to install and configure the AWS SDK for .NET in order to make use of the scripts.
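The scripts themselves have since been deprecated, but the idea they implement — pushing guest-visible measurements to CloudWatch as custom metrics — is easy to sketch. Here is a minimal, hypothetical Python equivalent using the modern boto3 SDK (not part of the original scripts; the namespace and metric name are my own placeholders):

```python
def build_memory_datum(instance_id, used_percent):
    """Build a CloudWatch custom-metric datum for guest memory usage.

    The metric name and dimensions are illustrative, not the ones
    used by the original PowerShell scripts.
    """
    return {
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Unit": "Percent",
        "Value": used_percent,
    }

def put_memory_metric(instance_id, used_percent):
    # Requires AWS credentials; sketch only.
    import boto3
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="System/Detail",  # assumed namespace
        MetricData=[build_memory_datum(instance_id, used_percent)],
    )

if __name__ == "__main__":
    put_memory_metric("i-1234567890abcdef0", 62.5)
```

Once published, the metric behaves like any other CloudWatch metric: you can graph it in the console or attach alarms to it.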

— Jeff;

PS – We also have similar monitoring scripts for Linux.

New High I/O EC2 Instance Type – hi1.4xlarge – 2 TB of SSD-Backed Storage

The Plot So Far
As the applications that you build with AWS grow in scale, scope, and complexity, you haven’t been shy about asking us for more locations, more features, more storage, or more speed.

Modern web and mobile applications are often highly I/O dependent. They need to store and retrieve lots of data in order to deliver a rich, personalized experience, and they need to do it as fast as possible in order to respond to clicks and gestures in real time.

In order to meet this need, we are introducing a new family of EC2 instances that are designed to run low-latency, I/O-intensive applications, and are an exceptionally good host for NoSQL databases such as Cassandra and MongoDB.

High I/O EC2 Instances
The first member of this new family is the High I/O Quadruple Extra Large (hi1.4xlarge in the EC2 API) instance. Here are the specs:

  • 16 virtual cores, clocking in at a total of 35 ECU (EC2 Compute Units).
  • HVM and PV virtualization.
  • 60.5 GB of RAM.
  • 10 Gigabit Ethernet connectivity with support for cluster placement groups.
  • 2 TB of local SSD-backed storage, visible to you as a pair of 1 TB volumes.

The SSD storage is local to the instance. Using PV virtualization, you can expect 120,000 random read IOPS (Input/Output Operations Per Second) and between 10,000 and 85,000 random write IOPS, both with 4K blocks. For HVM and Windows AMIs, you can expect 90,000 random read IOPS and 9,000 to 75,000 random write IOPS. By way of comparison, a high-performance disk drive spinning at 15,000 RPM will deliver 175 to 210 IOPS.
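To put those IOPS figures in perspective, sequential throughput at a fixed block size is just IOPS times block size. A quick back-of-the-envelope helper (my arithmetic, not from the post):

```python
def iops_to_mb_per_sec(iops, block_size_kb=4):
    """Convert an IOPS figure at a given block size to MB/s (1 MB = 1000 KB)."""
    return iops * block_size_kb / 1000.0

# 120,000 random read IOPS at 4K blocks works out to roughly 480 MB/s,
# while a 15,000 RPM disk at ~200 IOPS manages under 1 MB/s at the same block size.
```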

Why the range? Write IOPS performance to an SSD is dependent on something called the LBA (Logical Block Addressing) span. As the number of writes to diverse locations grows, more time must be spent updating the associated metadata. This is (very roughly speaking) the SSD equivalent of seek time for a rotating device, and represents per-operation overhead.

This is instance storage, and it will be lost if you stop and then later start the instance. Just like the instance storage on the other EC2 instance types, this storage is failure resilient, and will survive a reboot, but you should back it up to Amazon S3 on a regular basis.

You can launch these instances alone, or you can create a Placement Group to ensure that two or more of them are connected with non-blocking bandwidth. However, you cannot currently mix instance types (e.g. High I/O and Cluster Compute) within a single Placement Group.

If you want to run Microsoft Windows on this new instance type, be sure to use one of the Microsoft Windows AMIs that are designed for use with Cluster Instances:

You can launch High I/O Quadruple Extra Large instances in US East (Northern Virginia) and EU West (Ireland) today, at an On-Demand cost of $3.10 and $3.41 per hour, respectively. You can also purchase Reserved Instances, but you cannot acquire them via the Spot Market. We plan to make this new instance type available in several other AWS Regions before the end of the year.

Watch and Learn
I interviewed Deepak Singh, Product Manager for EC2, to learn more about this new instance type. Here’s what he had to say:

And More
Here are some other resources that you might enjoy:


EC2 Instance Status Metrics

We introduced a set of EC2 instance status checks at the beginning of the year. These status checks are the results of automated tests performed by EC2 on every running instance that detect hardware and software issues.  As described in my original post, there are two types of tests: system status checks and instance status checks.  The test results are available in the AWS Management Console and can also be accessed through the command line tools and the EC2 APIs.

New Metrics
In order to make it even easier for you to monitor and respond to the status checks, we are now making them available as Amazon CloudWatch metrics at no charge. There are three metrics for each instance, each updated at 5 minute intervals:

  • StatusCheckFailed_Instance is “0” if the instance status check is passing and “1” otherwise.
  • StatusCheckFailed_System is “0” if the system status check is passing and “1” otherwise.
  • StatusCheckFailed is “0” if both of the above values are “0”, and “1” otherwise.

For more information about the tests performed by each check, read about Monitoring Instances with Status Checks in the EC2 documentation.
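Before we get to the console, note that you can also set an alarm on these metrics programmatically. Here is a hedged sketch using the modern boto3 SDK (not from the original post; the alarm name, thresholds, and topic are my own placeholders):

```python
def build_status_alarm(instance_id, topic_arn):
    """Parameters for a CloudWatch alarm that fires when StatusCheckFailed
    is nonzero for two consecutive 5-minute periods.

    The evaluation settings here are illustrative choices, not AWS defaults.
    """
    return {
        "AlarmName": f"status-check-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
    }

def create_status_alarm(instance_id, topic_arn):
    # Requires AWS credentials; sketch only.
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(
        **build_status_alarm(instance_id, topic_arn)
    )
```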

Setting Alarms
You can  create alarms on these new metrics when you launch a new EC2 instance from the AWS Management Console. You can also create alarms for them on any of your existing instances. To create an alarm on a new EC2 instance, simply click the “Create Status Check Alarm” button on the final page of the launch wizard. This will let you configure the alarm and the notification:

To create an alarm on one of your existing instances, select the instance and then click on the Status Check tab. Then click on the “Create Status Check Alarm” button:

Viewing Metrics
You can view the metrics in the AWS Management Console, as usual:

And there you have it! The metrics are available now, and you can use them to keep closer tabs on the status of your EC2 instances.


PS – If you would like to work on cool features like this, the EC2 Instance Status team is hiring. Check out their complete list of jobs.

Multiple IP Addresses for EC2 Instances (in a Virtual Private Cloud)

Amazon EC2 instances within a Virtual Private Cloud (VPC) can now have multiple IP addresses. This oft-requested feature builds upon several other parts of AWS including Elastic IP Addresses and Elastic Network Interfaces.

Use Cases
Here are some of the things that you can do with multiple IP addresses:

  • Host multiple SSL websites on a single instance. You can install multiple SSL certificates on a single instance, each associated with a distinct IP address.
  • Build network appliances. Network appliances such as firewalls and load balancers generally work best when they have access to multiple IP addresses on a network interface.
  • Move private IP addresses between interfaces or instances. Applications that are bound to specific IP addresses can be moved between instances.

The Details
When we launched the Elastic Network Interface (ENI) feature last December, you were limited to a maximum of two ENIs per EC2 instance, each with a single IP address. With today’s release we are raising these limits, allowing you to have up to 30 IP addresses per interface and 8 interfaces per instance on the m2.4xlarge and cc2.8xlarge instances, with proportionally smaller limits for the less powerful instance types. Inspect the limits with care if you plan to use lots of interfaces or IP addresses and expect to switch between different instance sizes from time to time.

When you launch an instance or create an interface, a private IP address is created at the same time. We now refer to this as the “primary private IP address.” Amazingly enough, the other addresses are called “secondary private IP addresses.” Because the IP addresses are assigned to an interface (which is, in turn, attached to an EC2 instance), attaching the interface to a new instance will also bring all of the IP addresses (primary and secondary) along for the ride.

You can also allocate Elastic IP addresses and associate them with the primary or secondary IP addresses of an interface. Logically enough, the Elastic IP’s also come along for the ride when the interface is attached to a new instance. You will, of course, need to create an Internet Gateway in order to allow Internet traffic into your VPC.

In addition to moving interfaces to other instances, you can also move secondary private IP addresses between interfaces or instances. The Elastic IP associated with the secondary private IP will move with it to its new home.

As I mentioned when we launched the ENI feature, each ENI has its own MAC Address, Security Groups, and a number of other attributes. With today’s release, these attributes apply to all of the IP addresses associated with the ENI.

In order to make use of multiple interfaces and IP addresses, you will need to configure your operating system accordingly. We are planning to publish additional documentation and some scripts to show you how to do this. Code and scripts running on your instance can consult the EC2 instance metadata to ascertain the current ENI and IP address configuration.
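As a taste of what that metadata query looks like, here is a small Python sketch against the instance metadata service (the paths are the standard IMDSv1 network-interface paths; the fetch itself only works from inside an EC2 instance):

```python
from urllib.request import urlopen

METADATA_BASE = "http://169.254.169.254/latest/meta-data/network/interfaces/macs/"

def metadata_url(mac=None, attribute=None):
    """Build an instance-metadata URL for ENI information.

    With no arguments, the URL lists the MAC addresses of all attached
    interfaces; given a MAC and an attribute (e.g. "local-ipv4s"),
    it addresses that attribute of that interface.
    """
    url = METADATA_BASE
    if mac:
        url += mac.rstrip("/") + "/"
        if attribute:
            url += attribute
    return url

def list_private_ips(mac):
    """Fetch the private IPv4 addresses of one interface (EC2-only)."""
    with urlopen(metadata_url(mac, "local-ipv4s"), timeout=2) as resp:
        return resp.read().decode().split("\n")
```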

Console Support
The VPC tab of the AWS Management Console includes full support for this feature. You can manage the IP addresses associated with each interface of a running instance:

You can associate IP addresses with network interfaces:

You can set up interfaces and IP addresses when you launch a new instance:

The number of supported IP addresses varies by instance type. Consult the Elastic Network Interfaces documentation to see how many addresses are supported by the instance types that you use. Depending on your use case, you may also need to configure your EC2 network interfaces to avoid asymmetric routing.

You can use one Elastic IP Address per instance at no charge (as long as it is mapped to an EC2 instance), as has always been the case. We are reducing the price for Elastic IP Addresses not mapped to running instances to $0.005 (half of a penny) per hour in both EC2 and VPC.

Each additional Elastic IP Address on an instance will also cost you $0.005 per hour. We have also changed the billing model for Elastic IP Addresses to prorate usage, so that you’ll be charged for partial hours as appropriate.

There is no charge for private IP addresses.
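Since billing is now prorated, the cost of an extra address is easy to work out; a quick helper (my arithmetic, using the rate above):

```python
def extra_eip_cost(hours, rate_per_hour=0.005):
    """Cost of one additional (or unmapped) Elastic IP at the prorated rate."""
    return hours * rate_per_hour

# A 30-day month is 720 hours, so one extra address runs about $3.60.
```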

I hope that you have some ideas for this important new feature, and that you are able to make good use of it.

— Jeff;



New From Netflix – Asgard for Cloud Management and Deployment

Our friends at Netflix have embraced AWS whole-heartedly. They have shared much of what they have learned about how they use AWS to build, deploy, and host their applications. You can read the Netflix Tech Blog to benefit from what they have learned.

Earlier this week they released Asgard, a web-based cloud management and deployment tool, in open source form on GitHub. According to Norse mythology, Asgard is the home of the god of thunder and lightning, and therefore controls the clouds! This is the same tool that the engineers at Netflix use to control their applications and their deployments.

Asgard layers two additional abstractions on top of AWS — Applications and Clusters.

An Application contains one or more Clusters, some Auto Scaling Groups, an Elastic Load Balancer, a Launch Configuration, possibly some Security Groups, an AMI, and some EC2 Instances. Each Application also has an owner and an email address, connecting the objects with the person responsible for creating and managing them.

Each Cluster of an Application contains one or more Auto Scaling Groups. Asgard assigns incrementing version numbers to newly created Auto Scaling Groups. 

Asgard tracks the components of each application by using object naming conventions (including some limits on the characters allowed in names to allow for simple parsing), tracking the comings and goings in a SimpleDB domain.
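The version-number convention can be illustrated with a short sketch. This is my own simplified rendering of the idea (a trailing `-vNNN` push number on each Auto Scaling Group name), not Asgard's actual code:

```python
import re

# Matches names like "helloworld-example-v004".
_VERSION = re.compile(r"^(?P<base>.+)-v(?P<num>\d{3})$")

def next_group_name(name):
    """Return the name of the next push for an Asgard-style group name.

    A name without a version suffix starts the sequence at -v000;
    otherwise the trailing push number is incremented.
    """
    m = _VERSION.match(name)
    if m is None:
        return f"{name}-v000"
    return f"{m.group('base')}-v{int(m.group('num')) + 1:03d}"
```

Because the convention is purely name-based, any tool that can list Auto Scaling Groups can reconstruct an application's deployment history.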

There are two distinct ways to deploy new code through Asgard:

  • The cluster-based deployment model creates a new cluster and starts to route traffic to it via an Elastic Load Balancer. The old cluster is disabled but remains available in case a quick rollback becomes necessary.
  • The rolling push deployment model launches instances with new code, gracefully deleting and replacing old instances one or two at a time.

Both of these models count on the fact that the AWS components that they use are dynamic and can be created programmatically. This, to me, is one of the fundamental aspects of the cloud. If you can’t call APIs to create and manipulate infrastructure components such as servers, networks, and load balancers, then you don’t have a cloud!

Asgard also provides a simplified graphical user interface for setting up and managing Auto Scaling Groups.

I would like to thank Netflix for opening up this important part of their technology stack to the rest of the world. Nicely done!

Read more (and see some screen shots) in the Netflix blog post, Asgard Web-Based Cloud Management and Deployment.


High Performance Computing Heads East – EC2 CC2.8XL Instances in EU West (Ireland)

The Cluster Compute Eight Extra Large (cc2.8xl) instance type is now available in our EU West (Ireland) Region.

These instances are perfect for your compute-intensive and memory-intensive HPC jobs. Each instance includes a pair of Intel Xeon processors, 60.5 GB of RAM, and 3.37 TB of instance storage. Each processor has 8 cores and Hyper-Threading is enabled, so you can execute up to 32 threads in parallel. Because these instances are members of our Cluster Compute family they are connected to a 10 Gigabit network, and can be members of a Placement Group for low latency connectivity to other CC2 instances, with full bisection bandwidth between them. You can launch these instances on demand, or you can bid for Spot Instances. You can also purchase Reserved Instances.

A cluster of 1,064 cc2.8xl instances clocked in at 240 teraflops, earning position 72 on the June 2012 Top500 list. This cluster contained 17,024 cores and 65.968 TB of RAM.

How can you take advantage of all of this compute power? MIT StarCluster makes it easy to launch and manage a cluster of EC2 instances. CloudFlu lets you attack CFD (Computational Fluid Dynamics) problems using the popular OpenFOAM package.

Watch the following video to see just how easy it is to launch a compute cluster on EC2:

You can find more information about the ways that our customers are putting EC2 and Cluster Compute instances to use on the High Performance Computing on AWS page.



EC2 Spot Instance Updates – Auto Scaling and CloudFormation Integration, New Sample App

I have three exciting news items for Amazon EC2 users:

  1. You can now use Auto Scaling to place bids on the EC2 Spot Market on your behalf.
  2. Similarly, you can now place Spot Market bids from within an AWS CloudFormation template.
  3. We have a new article to show you how to track Spot instance activity with notifications to an Amazon Simple Notification Service (SNS) topic.

Unless you are intimately familiar with the entire AWS product lineup, you may have found the preceding list of items just a bit mysterious. Before I get to the heart of today’s announcement, let’s review the fundamentals of each product:

You probably know all about Amazon EC2. You can launch servers on an as-needed basis and pay for only the resources that you consume. You can pay the on-demand prices, or you can bid for unused EC2 capacity on the Spot Market, taking advantage of prices that vary in response to changes in supply and demand.

AWS CloudFormation allows you to create and manage a collection of AWS resources, all specified by a single declarative template expressed in JSON format.

The Simple Notification Service lets you create any number of message topics and to publish messages to the topics.

Together, these features should make it much easier to use Spot Instances, which in turn can help you run EC2 instances more cost-effectively.

With that out of the way, let’s dig in!

Spot + Auto Scaling
You can now set up Auto Scaling to make Spot bids on your behalf. As you may know, you must create an Auto Scaling Group and associate a launch configuration with it in order to make use of Auto Scaling. The Auto Scaling group lists the desired Availability Zones, the minimum and maximum size of the group, health checks, and other properties. The launch configuration includes a number of important parameters including the EC2 AMI to launch, the instance type to use, user data to pass to the newly launched instances, and so forth.

You can now include a bid price in your launch configuration if you want to use Spot Instances. Auto Scaling will use that price to continually place bids in an effort to keep the Auto Scaling group at the desired size. You can use this to soak up background capacity at a price point that is economically viable for your application. For example, let’s say that you can make good use of up to 10 m1.large instances. You consult the Spot Instance Pricing History in the AWS Management Console, and decide that a bid of $0.12 (twelve cents) per hour will work well for you:

Your Auto Scaling Group would have a minimum and a maximum size of 10, and the launch configuration would set the bid price at $0.12. When sufficient capacity is available at or below your bid price, your group will expand up to the maximum size, and you’ll pay the market price (which could be lower than your bid). The group will contract if there are other demands on the capacity that cause the market price to exceed your bid price. You can alter the bid price at any time by creating a new launch configuration and attaching it to the Auto Scaling Group. Of course, if you want to use On-Demand instances instead, you can simply omit the bid price from your launch configuration.
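For readers who prefer an API-level view, here is a hedged sketch of the same setup using the modern boto3 SDK (the AMI ID, names, group sizes, and Availability Zone are placeholders, not values from the post):

```python
def build_spot_launch_config(name, ami_id, bid="0.12"):
    """Launch-configuration parameters including a Spot bid price.

    Omit the SpotPrice key to launch On-Demand instances instead.
    """
    return {
        "LaunchConfigurationName": name,
        "ImageId": ami_id,
        "InstanceType": "m1.large",
        "SpotPrice": bid,
    }

def create_spot_group(config_name, ami_id):
    # Requires AWS credentials; sketch only.
    import boto3
    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        **build_spot_launch_config(config_name, ami_id)
    )
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="spot-soaker",     # placeholder name
        LaunchConfigurationName=config_name,
        MinSize=0,                              # sizes are placeholders
        MaxSize=10,
        AvailabilityZones=["us-east-1a"],
    )
```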

For even more flexibility, you can use Auto Scaling’s scaling policies feature to change the minimum and maximum group sizes at a predetermined future time or dynamically based on your application’s requirements. You could increase your group size at times when your workload is highest or when Spot prices are historically low (this is subject to change, of course).

Spot + CloudFormation
You can now create CloudFormation templates that include a bid for Spot capacity as part of an Auto Scaling group (as described above).

The template can describe the construction of an entire application stack. AWS resources in the stack will be created in dependency-based order. The spot bid will be activated after the Auto Scaling group has been created. Here’s an example taken directly from a template’s definition of an Auto Scaling group:

You can also specify the bid price as a parameter to the template:

In this case, the AWS Management Console will prompt for the price (and the other parameters specified in the template) when you use the template to create a stack.
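The template screenshots from the original post are no longer available, but the shape of such a template is easy to sketch. This hypothetical fragment (resource names and values are mine) shows a SpotPrice parameter referenced from a launch configuration inside an Auto Scaling group:

```json
{
  "Parameters": {
    "SpotPrice": {
      "Type": "String",
      "Description": "Spot bid price per instance-hour (e.g. 0.12)"
    }
  },
  "Resources": {
    "WorkerLaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "m1.large",
        "SpotPrice": { "Ref": "SpotPrice" }
      }
    },
    "WorkerGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "LaunchConfigurationName": { "Ref": "WorkerLaunchConfig" },
        "MinSize": "0",
        "MaxSize": "10"
      }
    }
  }
}
```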

The parameter value can be used directly in the template, or it can be used in other ways. For example, our StarCluster template has been updated to include the spot bid price as a parameter and to pass it in to the starcluster command:

"/usr/bin/starcluster -c /home/ec2-user/.starcluster/config start -b ", { "Ref" : "SpotPrice" }, " ec2-cluster\n"

In addition to the Starcluster template that I mentioned above, we are also releasing two other templates today:

  • The Bees With Machine Guns template gives you the power to create a swarm of bees (EC2 micro instances) to load test your web site.
  • The Asynchronous Processing template adjusts the number of workers (EC2 instances) that are pulling data from an SQS queue, increasing the number of workers when the queue depth rises above a certain level and reducing it when the number of empty polls on the queue starts to grow. Even though it is of modest size, this template illustrates a number of clever techniques. It installs some packages, configures a crontab entry, loads some Perl code, and uses CloudWatch alarms for scaling.
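The scaling rule in that last template boils down to a simple decision function: add workers when the queue backs up, remove them when polls keep coming back empty. Here is my own simplified sketch of that logic (the thresholds are illustrative, not the template's actual values):

```python
def scaling_decision(queue_depth, empty_polls, high_water=100, idle_polls=10):
    """Decide whether to add or remove workers for an SQS-driven pool.

    Scale out when the visible queue depth exceeds the high-water mark;
    scale in when consecutive empty polls suggest the workers are idle.
    """
    if queue_depth > high_water:
        return "scale-out"
    if empty_polls > idle_polls:
        return "scale-in"
    return "hold"
```

In the real template these two conditions are expressed as CloudWatch alarms attached to Auto Scaling policies, rather than as inline code.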

My advice? Spend some time digging into these templates to get a better understanding of how you can use the very potent combination of Spot Instances, Auto Scaling groups, and CloudFormation to design complex, parameterized application stacks that can be instantiated transactionally (all or nothing) with just a few clicks. Print them out, draw some diagrams, and gain a better appreciation for how they work. I guarantee you that it will be time well spent, and that you will walk away with some really good ideas!

Notifications Application / Tutorial
We’ve written a new article to show you how to track Spot instance activity programmatically. Along with this article, we’re distributing a sample application in source code form. As described in the article, the application uses the EC2 APIs to track three types of items, all within a designated region:

  • The list of EC2 instances currently running in your account (Spot and On-Demand).
  • Your current Spot Instance requests.
  • Current prices for Spot Instances.

When the application detects a change in any of the items that it monitors, it uses the Simple Notification Service to send a notification. You can use this notification-based model to decouple your application’s bid generation mechanism from the actual processing logic, and you can also do a better job of dealing with processing interruptions if you are outbid and some of your Spot Instances are terminated.
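The core of that polling loop is just diffing successive snapshots and publishing when something changed. Here is a hedged Python sketch of the idea (the sample application itself is Java; the state format and topic are my own placeholders):

```python
def detect_changes(previous, current):
    """Compare two {resource_id: state} snapshots and describe what changed."""
    changes = []
    for rid, state in current.items():
        old = previous.get(rid)
        if old is None:
            changes.append(f"{rid}: new ({state})")
        elif old != state:
            changes.append(f"{rid}: FROM: {old} TO: {state}")
    for rid in previous:
        if rid not in current:
            changes.append(f"{rid}: gone")
    return changes

def publish_changes(topic_arn, changes):
    # Requires AWS credentials; sketch only.
    import boto3
    if changes:
        boto3.client("sns").publish(TopicArn=topic_arn, Message="\n".join(changes))
```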

The notification is sent as a simple XML document; here’s a sample:

<PostNotification xmlns="">
  <accountId>455364113843</accountId>
  <resourceId>sir-aca7a011</resourceId>
  <type>Amazon.EC2.Request.StateTransition</type>
  <code>FROM: open TO: cancelled</code>
  <message>Your Amazon EC2 Spot Request has had a state transition.</message>
</PostNotification>

The application was written in Java using the AWS SDK for Java. Because the application stores all of its configuration information and persistent data in Amazon SimpleDB, you can make configuration changes (e.g. updating notification thresholds) by storing new values in the appropriate items in the application’s SimpleDB domain.

Here’s Dave
I interviewed Dave Ward of the EC2 Spot Instance team for The AWS Report. Here’s what Dave had to say:

One small correction: it turns out that the Chocolate + Peanut Butter analogy that I used is out of date. All of the cool kids on the EC2 team now use the term Crazy Delicious to refer to the unholy combination of Mr. Pibb and Red Vines.

Talk to Us
We would love to see what kinds of CloudFormation templates you come up with for use with Spot Instances. Please feel free to post them in the CloudFormation forum or leave a note on this post. Also, if you have thoughts on the features you want next on Spot please let us know at or via a note below.

— Jeff;


Behind the Scenes of the Amazon Appstore Test Drive

The Amazon Appstore for Android has a really cool Test Drive feature. It allows you to instantly try select applications through your computer browser or Android phone before you elect to install them.

There’s some interesting technology behind the Test Drive, and I’d like to tell you a little bit more about it. Let’s start with this diagram:

The app product page in the Amazon Appstore hosts a Player application. The Player application connects to an Amazon EC2 instance which runs multiple independent copies of an Android emulator. The emulator hosts and runs the Android application in a protected environment. The Player application has two primary responsibilities. First, it forwards player input to the Android application within the emulator. Second, it plays the audio and video streams produced by the Android application.

The emulator is supported in multiple AWS Regions. In order to deliver the best possible experience, the Player application is routed to the optimal AWS Region. Each EC2 instance runs multiple emulators, allowing multiple users to simultaneously test-drive different applications. My diagram shows four, but this is just for illustrative purposes.

Since the Test Drives are hosted on Amazon EC2, the Amazon Appstore team can easily add additional capacity as needed, and they can add it where it makes the most sense with respect to the incoming traffic. You can easily imagine a traffic surge that moves westward from Region to Region as the daily news cycle makes people aware of the new Test Drive feature at the beginning of their day. This ability to add, shift, and remove capacity as needed is an essential part of every AWS service, and one that many developers take advantage of on a daily basis.

I spent some time talking to the developers behind the Test Drive feature earlier this week. They told me that there are currently a couple of limitations which prevent it from being enabled for every application in the Amazon Appstore. For example, Android applications that attempt to access non-existent hardware such as a phone’s camera would currently fail if they were enabled. The team is working to enable additional applications; check in at the Amazon Appstore Developer Blog to learn more.

If you are planning to build an Android application, check out the AWS SDK for Android. If you’ve already built an Android application and you’d like to submit your application to our store, check out the Amazon Appstore Developer Program.

— Jeff;