Category: Compute


New IDC White Paper – Business Value of Amazon Web Services Accelerates Over Time

Update (September 23, 2015): The 2012-era report mentioned in the blog post has been superseded by a newer one published in 2015. The post has been updated accordingly.


Analysts Randy Perry and Stephen Hendrick of IDC have just published a commissioned white paper titled The Business Value of Amazon Web Services Accelerates Over Time. You can download the paper now by clicking on the image at right.

Earlier this year, IDC interviewed 11 organizations that use AWS in an effort to understand the long-term economic implications of moving their workloads to the cloud. As part of the study they also looked for changes in developer productivity, business agility, and the ability to deliver new applications that could be attributed to AWS. The AWS customers that they talked to included Samsung, BankInter, Fox, Netflix, Tomlinson Real Estate Group, United States Tennis Association, and Cycle Computing.

The paper contains a complete recitation of their findings. To summarize:

  • The five-year TCO of developing, deploying, and managing critical applications on AWS represents a 70% savings compared to deploying the same resources on-premises or in hosted environments.
  • The average five-year ROI from using AWS is 626%. Interestingly enough, the return grows (when measured in dollars of benefit for each dollar invested) over time. After 36 months, the organizations interviewed were realizing $3.50 in benefits for each $1 invested in AWS. After 60 months, the benefit grew to $8.40 for every $1 invested.
  • Over a five year period, the companies saw cumulative savings that averaged over $2.5 million per application. This included savings in development and deployment costs (reduced by 80%), application management costs (reduced by 52%), and infrastructure support costs (reduced by 56%). Again on average, these organizations were able to replace $1.6 million in infrastructure with $302,000 in AWS costs.

Our customers ran (and measured) both steady-state and variable-state workloads. They ranked these workloads as very critical (4.5 out of 5). In addition to cost savings, they were able to increase their business agility and bring their applications to market far more quickly.

Enjoy the paper, and leave a comment if you like it!

Jeff;

AWS Management Console Improvements (EC2 Tab)

We recently made some improvements to the EC2 tab of the AWS Management Console. It is now easier to access the AWS Marketplace and to configure attached storage (EBS volumes and ephemeral storage) for EC2 instances.

Marketplace Access
This one is really simple, but definitely worth covering. You can now access the AWS Marketplace from the Launch Instances Wizard:

After you enter your search terms and click the Go button, the Marketplace results page will open in a new tab. Here’s what happens when I search for wordpress:

Storage Configuration
You can now control the storage configuration of each of your EC2 instances at launch time. This new feature is available in the Console’s Classic Wizard:

There’s a whole lot of power hiding behind that seemingly innocuous Edit button! You can edit the size of the root EBS volume for the instance:

You can create EBS volumes (empty and of any desired size, or from a snapshot) and you can attach them to the device of your choice:

You can also attach the instance storage volumes to the device of your choice:

These new features are available now and you can use them today!

– Jeff;

Amazon CloudWatch Monitoring Scripts for Microsoft Windows

Update (January 6, 2016) – The Windows scripts described in this blog post have been deprecated and are no longer available.

For updated information on how to perform the same tasks in a more modern fashion, please take a look at Sending Performance Counters to CloudWatch and Logs to CloudWatch Logs, Configuring a Windows Instance Using the EC2Config Service, and Monitoring Memory and Disk Statistics for Amazon EC2 Linux Instances.


A number of AWS services collect and then report various metrics to Amazon CloudWatch. The metrics are stored for two weeks and can be viewed in the AWS Management Console. They can also be used to drive alarms and notifications.

Applications can use CloudWatch’s custom metrics facility to store any desired metrics. These metrics are also stored for two weeks and can be used as described above.
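
For example, here is a minimal sketch of publishing a custom metric from application code. It uses the Python SDK (boto3), a more modern tool than those mentioned in this post, and the namespace, metric name, and instance ID are placeholders:

import boto3

# Publish one data point to a custom namespace (all names here are illustrative).
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')
cloudwatch.put_metric_data(
    Namespace='MyApp/Production',          # custom namespaces must not start with "AWS/"
    MetricData=[{
        'MetricName': 'ActiveSessions',
        'Dimensions': [{'Name': 'InstanceId', 'Value': 'i-1234567890abcdef0'}],
        'Value': 42.0,
        'Unit': 'Count',
    }],
)

Once published, the custom metric can be graphed, and used for alarms, just like the metrics that AWS services report on your behalf.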

Each Amazon EC2 instance reports a number of metrics to CloudWatch. These metrics are collected and reported by the hypervisor, and as such reflect only the data that the hypervisor can see — CPU load, network traffic, and so forth. In order to report on items that are measured by the guest operating system (Linux or Windows) you need to run a monitoring script on the actual system.

Today we are introducing a set of monitoring scripts for EC2 instances running any supported version of Microsoft Windows Server. The scripts are implemented in Windows PowerShell and are provided in sample form so that you can examine and customize them as needed.

Four scripts are available (download the scripts or read more about the CloudWatch Monitoring Scripts for Windows):

  • mon-put-metrics-mem.ps1 collects metrics related to system memory usage and sends them to CloudWatch.
  • mon-put-metrics-disk.ps1 collects metrics related to disk usage and sends them to CloudWatch.
  • mon-put-metrics-perfmon.ps1 collects metrics from PerfMon counters and sends them to CloudWatch.
  • mon-get-instance-stats.ps1 queries CloudWatch and displays the most recent utilization statistics for the instance it was run on.
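
Since the Windows scripts are no longer available, here is a hedged sketch of what the last one does conceptually (pulling recent statistics back out of CloudWatch), written with the Python SDK (boto3) rather than PowerShell; the metric and instance ID are placeholders:

import boto3
from datetime import datetime, timedelta

# Fetch the average CPU utilization for one instance over the last hour, in 5-minute buckets.
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-1234567890abcdef0'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], round(point['Average'], 2))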

You will need to install and configure the AWS SDK for .NET in order to make use of the scripts.

— Jeff;

PS – We also have similar monitoring scripts for Linux.

New High I/O EC2 Instance Type – hi1.4xlarge – 2 TB of SSD-Backed Storage

The Plot So Far
As the applications that you build with AWS grow in scale, scope, and complexity, you haven’t been shy about asking us for more locations, more features, more storage, or more speed.

Modern web and mobile applications are often highly I/O dependent. They need to store and retrieve lots of data in order to deliver a rich, personalized experience, and they need to do it as fast as possible in order to respond to clicks and gestures in real time.

In order to meet this need, we are introducing a new family of EC2 instances that are designed to run low-latency, I/O-intensive applications, and are an exceptionally good host for NoSQL databases such as Cassandra and MongoDB.

High I/O EC2 Instances
The first member of this new family is the High I/O Quadruple Extra Large (hi1.4xlarge in the EC2 API) instance. Here are the specs:

  • 16 virtual cores, clocking in at a total of 35 ECU (EC2 Compute Units).
  • HVM and PV virtualization.
  • 60.5 GB of RAM.
  • 10 Gigabit Ethernet connectivity with support for cluster placement groups.
  • 2 TB of local SSD-backed storage, visible to you as a pair of 1 TB volumes.

The SSD storage is local to the instance. Using PV virtualization, you can expect 120,000 random read IOPS (Input/Output Operations Per Second) and between 10,000 and 85,000 random write IOPS, both with 4K blocks. For HVM and Windows AMIs, you can expect 90,000 random read IOPS and 9,000 to 75,000 random write IOPS. By way of comparison, a high-performance disk drive spinning at 15,000 RPM will deliver 175 to 210 IOPS.

Why the range? Write IOPS performance to an SSD is dependent on something called the LBA (Logical Block Addressing) span. As the number of writes to diverse locations grows, more time must be spent updating the associated metadata. This is (very roughly speaking) the SSD equivalent of seek time for a rotating device, and represents per-operation overhead.

This is instance storage, and it will be lost if you stop and then later start the instance. Just like the instance storage on the other EC2 instance types, this storage will survive a reboot, but it is not protected against hardware failure, so you should back it up to Amazon S3 on a regular basis.

You can launch these instances alone, or you can create a Placement Group to ensure that two or more of them are connected with non-blocking bandwidth. However, you cannot currently mix instance types (e.g. High I/O and Cluster Compute) within a single Placement Group.
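
If you would rather script this than click through the console, here is a rough sketch using the Python SDK (boto3), which postdates this post; the AMI ID and group name are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create a cluster placement group, then launch a pair of High I/O instances into it.
ec2.create_placement_group(GroupName='hi1-cluster', Strategy='cluster')
ec2.run_instances(
    ImageId='ami-xxxxxxxx',                # placeholder for a suitable AMI
    InstanceType='hi1.4xlarge',
    MinCount=2,
    MaxCount=2,
    Placement={'GroupName': 'hi1-cluster'},
)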

If you want to run Microsoft Windows on this new instance type, be sure to use one of the Microsoft Windows AMIs that are designed for use with Cluster Instances:

You can launch High I/O Quadruple Extra Large instances in US East (Northern Virginia) and EU West (Ireland) today, at an On-Demand cost of $3.10 and $3.41 per hour, respectively. You can also purchase Reserved Instances, but you cannot acquire them via the Spot Market. We plan to make this new instance type available in several other AWS Regions before the end of the year.

Watch and Learn
I interviewed Deepak Singh, Product Manager for EC2, to learn more about this new instance type. Here’s what he had to say:

And More
Here are some other resources that you might enjoy:

Jeff;

EC2 Instance Status Metrics

We introduced a set of EC2 instance status checks at the beginning of the year. These status checks are the results of automated tests, performed by EC2 on every running instance, that detect hardware and software issues. As described in my original post, there are two types of tests: system status checks and instance status checks. The test results are available in the AWS Management Console and can also be accessed through the command line tools and the EC2 APIs.

New Metrics
In order to make it even easier for you to monitor and respond to the status checks, we are now making them available as Amazon CloudWatch metrics at no charge. There are three metrics for each instance, each updated at 5 minute intervals:

  • StatusCheckFailed_Instance is “0” if the instance status check is passing and “1” otherwise.
  • StatusCheckFailed_System is “0” if the system status check is passing and “1” otherwise.
  • StatusCheckFailed is “0” if both of the above values are “0”, and “1” otherwise.

For more information about the tests performed by each check, read about Monitoring Instances with Status Checks in the EC2 documentation.

Setting Alarms
You can create alarms on these new metrics when you launch a new EC2 instance from the AWS Management Console. You can also create alarms for them on any of your existing instances. To create an alarm on a new EC2 instance, simply click the “Create Status Check Alarm” button on the final page of the launch wizard. This will let you configure the alarm and the notification:

To create an alarm on one of your existing instances, select the instance and then click on the Status Check tab. Then click on the “Create Status Check Alarm” button:
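
If you prefer to script the alarm instead of clicking through the console, here is a minimal sketch using the Python SDK (boto3), which postdates this post; the instance ID and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Alarm if the system status check fails for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='i-1234567890abcdef0-system-check-failed',
    Namespace='AWS/EC2',
    MetricName='StatusCheckFailed_System',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-1234567890abcdef0'}],
    Statistic='Maximum',
    Period=300,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
)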


Viewing Metrics
You can view the metrics in the AWS Management Console, as usual:

And there you have it! The metrics are available now and you can use them to keep closer tabs on the status of your EC2 instances.

Jeff;

PS – If you would like to work on cool features like this, the EC2 Instance Status team is hiring. Check out their complete list of jobs.

AWS Elastic Beanstalk – Two Additional Regions Supported

We’ve brought AWS Elastic Beanstalk to both of the US West regions, bringing the total to five:

  • US East (Northern Virginia)
  • Asia Pacific (Tokyo)
  • EU (Ireland)
  • US West (Oregon)
  • US West (Northern California)

I have recently spent some time creating and uploading some PHP applications to Elastic Beanstalk using Git and the new ‘eb’ command. The process is very efficient and straightforward. I edit and test my code locally (which, for me, means an EC2 instance), commit it to my Git repository, and then push it (using the command git aws.push) to my Elastic Beanstalk environment. I can focus on my code while Elastic Beanstalk handles all of the deployment and management tasks including capacity provisioning, load balancing, auto-scaling, and health monitoring. I wrote an entire blog post on Git-based deployment to Elastic Beanstalk.

 

In addition to running PHP applications on Linux using the Apache HTTP server, Elastic Beanstalk also supports Java applications running on the Apache Tomcat stack on Linux and .NET applications running on IIS 7.5. Each environment is supported by the appropriate AWS SDK (PHP, Java, or .NET).

You can get started with Elastic Beanstalk at no charge by taking advantage of the AWS Free Usage Tier.

— Jeff;

 

Multiple IP Addresses for EC2 Instances (in a Virtual Private Cloud)

Amazon EC2 instances within a Virtual Private Cloud (VPC) can now have multiple IP addresses. This oft-requested feature builds upon several other parts of AWS including Elastic IP Addresses and Elastic Network Interfaces.

Use Cases
Here are some of the things that you can do with multiple IP addresses:

  • Host multiple SSL websites on a single instance. You can install multiple SSL certificates on a single instance, each associated with a distinct IP address.
  • Build network appliances. Network appliances such as firewalls and load balancers generally work best when they have access to multiple IP addresses on a network interface.
  • Move private IP addresses between interfaces or instances. Applications that are bound to specific IP addresses can be moved between instances.

The Details
When we launched the Elastic Network Interface (ENI) feature last December, you were limited to a maximum of two ENIs per EC2 instance, each with a single IP address. With today’s release we are raising these limits, allowing you to have up to 30 IP addresses per interface and 8 interfaces per instance on the m2.4xlarge and cc2.8xlarge instances, with proportionally smaller limits for the less powerful instance types. Inspect the limits with care if you plan to use lots of interfaces or IP addresses and expect to switch between different instance sizes from time to time.

When you launch an instance or create an interface, a private IP address is created at the same time. We now refer to this as the “primary private IP address.” Amazingly enough, the other addresses are called “secondary private IP addresses.” Because the IP addresses are assigned to an interface (which is, in turn, attached to an EC2 instance), attaching the interface to a new instance will also bring all of the IP addresses (primary and secondary) along for the ride.

You can also allocate Elastic IP addresses and associate them with the primary or secondary IP addresses of an interface. Logically enough, the Elastic IPs also come along for the ride when the interface is attached to a new instance. You will, of course, need to create an Internet Gateway in order to allow Internet traffic into your VPC.

In addition to moving interfaces to other instances, you can also move secondary private IP addresses between interfaces or instances. The Elastic IP associated with a secondary private IP will move with the private IP to its new home.
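
Here is a rough sketch of what those two operations look like through the API, shown with the Python SDK (boto3) rather than the tools of the day; the interface ID, allocation ID, and addresses are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Add a specific secondary private IP address to an existing network interface.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId='eni-0abc123',
    PrivateIpAddresses=['10.0.0.82'],
)

# Associate an Elastic IP with that secondary private address; the Elastic IP
# follows the private IP if it is later moved to another interface or instance.
ec2.associate_address(
    AllocationId='eipalloc-0def456',
    NetworkInterfaceId='eni-0abc123',
    PrivateIpAddress='10.0.0.82',
)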

As I mentioned when we launched the ENI feature, each ENI has its own MAC Address, Security Groups, and a number of other attributes. With today’s release, these attributes apply to all of the IP addresses associated with the ENI.

In order to make use of multiple interfaces and IP addresses, you will need to configure your operating system accordingly. We are planning to publish additional documentation and some scripts to show you how to do this. Code and scripts running on your instance can consult the EC2 instance metadata to ascertain the current ENI and IP address configuration.
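
As a simple illustration of the metadata approach, a script running on the instance can walk the metadata service to list each attached interface and its private addresses (a minimal sketch; it assumes the classic metadata endpoint is reachable without a token):

from urllib.request import urlopen

BASE = 'http://169.254.169.254/latest/meta-data/network/interfaces/macs/'

def fetch(path):
    return urlopen(BASE + path, timeout=2).read().decode().strip()

# Each attached ENI is listed under its MAC address; each MAC exposes its private IPs.
for mac in fetch('').splitlines():
    mac = mac.rstrip('/')
    ips = fetch(mac + '/local-ipv4s').split()
    print(mac, '->', ', '.join(ips))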

Console Support
The VPC tab of the AWS Management Console includes full support for this feature. You can manage the IP addresses associated with each interface of a running instance:

You can associate IP addresses with network interfaces:

You can set up interfaces and IP addresses when you launch a new instance:

The number of supported IP addresses varies by instance type. Consult the Elastic Network Interfaces documentation to see how many addresses are supported by the instance types that you use. Depending on your use case, you may also need to configure your EC2 network interfaces to avoid asymmetric routing.

Pricing
You can use one Elastic IP Address per instance at no charge (as long as it is mapped to an EC2 instance), as has always been the case. We are reducing the price for Elastic IP Addresses not mapped to running instances to $0.005 (half of a penny) per hour in both EC2 and VPC.

Each additional Elastic IP Address on an instance will also cost you $0.005 per hour. We have also changed the billing model for Elastic IP Addresses to prorate usage, so that you’ll be charged for partial hours as appropriate.

There is no charge for private IP addresses.

I hope that you have some ideas for this important new feature, and that you are able to make good use of it.

— Jeff;

 

 

AWS Elastic Beanstalk – Simplified Command Line Access with EB

I would like to welcome eb (pronounced ee-bee) to the family of Elastic Beanstalk command line tools! Eb simplifies the development and deployment tasks from the terminal on Linux, Mac OS, and Microsoft Windows. Getting started using Elastic Beanstalk from the command line is now as simple as:

  • eb init to set up credentials, choose the AWS Region, and the Elastic Beanstalk solution stack (operating system + application server + language environment).
  • eb start to create the Elastic Beanstalk application and launch an environment within it.
  • git aws.push to deploy code.

You can give eb a try by downloading the latest Elastic Beanstalk Command Line Tools. To learn more about eb, visit the AWS Elastic Beanstalk Developer Guide.

Here is how I use it to manage my Elastic Beanstalk applications…

Get Started
First, download the updated Elastic Beanstalk Command Line Tools and unzip the package to a directory on disk. For quick access to the eb command, I recommend that you add this directory to your PATH. You are all set up and ready to go!

Create Your Application
In your application’s directory, initialize your Git repository and run the following commands to create an Elastic Beanstalk application:

<devserver>: git init
Initialized empty Git repository in /Users/jeff/blog/.git

<devserver>: eb init
To get your AWS Access Key ID and Secret Access Key, visit “https://aws-portal.amazon.com/gp/aws/securityCredentials”.
Enter your AWS Access Key ID: AB…78
Enter your AWS Secret Access Key: abcd…1234
Select an AWS Elastic Beanstalk service region.
Available service regions are:
1) “US East (Virginia)”
2) “EU West (Ireland)”
3) “Asia Pacific (Tokyo)”
Select (1 to 3): 1
Enter an AWS Elastic Beanstalk application name (auto-generated value is “jeffblog”):
Enter an AWS Elastic Beanstalk environment name (auto-generated value is “jeffblog-env”):
Select a solution stack.
Available solution stacks are:
1) “32bit Amazon Linux running Tomcat 7”
2) “64bit Amazon Linux running Tomcat 7”
3) “32bit Amazon Linux running Tomcat 6”
4) “64bit Amazon Linux running Tomcat 6”
5) “32bit Amazon Linux running PHP 5.3”
6) “64bit Amazon Linux running PHP 5.3”
7) “64bit Windows Server 2008 R2 running IIS 7.5”
Select (1 to 7): 6
Successfully updated AWS Credential file at “C:\Users\jeff\.elasticbeanstalk\aws_credential_file”.

<devserver>: eb start
Now creating application “jeffblog”.
Waiting for environment “jeffblog-env” to launch.
2012-06-26 21:23:05     INFO    createEnvironment is starting.
2012-06-26 21:23:14     INFO    Using elasticbeanstalk-eu-west-1123456789012 as Amazon S3 storage bucket for environment data.
2012-06-26 21:23:15     INFO    Created Auto Scaling launch configuration named: awseb-jeffblog-env-65rxDnIDWV.
2012-06-26 21:23:16     INFO    Created load balancer named: awseb-jeffblog-env.
2012-06-26 21:23:16     INFO    Created Auto Scaling group named: awseb-jeffblog-env-Bub8kxJmPP.
2012-06-26 21:23:17     INFO    Created Auto Scaling trigger named: awseb-jeffblog-env-Bub8kxJmPP.
2012-06-26 21:23:19     INFO    Waiting for an EC2 instance to launch.
2012-06-26 21:24:05     INFO    Adding EC2 instances to the load balancer. This may take a few minutes.
2012-06-26 21:27:00     INFO    Application available at jeffblog-env-3tqewduwvb.elasticbeanstalk.com.
2012-06-26 21:27:04     INFO    Successfully launched environment: jeffblog-env
Application is available at “jeffblog-env-3tqewduwvb.elasticbeanstalk.com”.

In the example above, eb init walks you through a few questions and configures your settings so you can easily create and manage your application. It also configures your Git repository so you can directly push to Elastic Beanstalk. The command eb start creates the resources and launches a sample application on Elastic Beanstalk. The application is accessible at the URL shown above.

Deploy Your Code
To deploy your code to Elastic Beanstalk, you simply use git aws.push like this:

<devserver>: echo “<html><body><h1><?=’Hello eb!’?></h1></body></html>” > index.php

<devserver>: git add index.php

<devserver>: git commit -m v1
[master (root-commit) 23159c7] v1
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 index.php

<devserver>:  git aws.push
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 249 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
remote:
To https://abcd:1234@git.elasticbeanstalk.us-east-1.amazonaws.com/repos/6a656666626c6f67/jeffblog-env
 * [new branch]      HEAD -> master

To test your uploaded application, browse to the application’s URL:

Update your Configuration Settings
Eb stores configuration settings in a file called .optionsettings inside the .elasticbeanstalk directory. To update your configuration settings, simply open the .optionsettings file, make a change, and then run eb update.

For example, to update my instance type from t1.micro to m1.small, I simply change the value of instancetype to m1.small and then run the following command:

 

<devserver>: eb update
Updating environment “jeffblog-env”. This may take a few minutes.
2012-06-26 21:31:41     INFO    Switched configuration attached to environment.
Environment “jeffblog-env” has been successfully updated.

Get Information about your Application
To get information about your Elastic Beanstalk application, you can use the eb status command:

<devserver>: eb status
URL     : jeffblog-env-3tqewduwvb.elasticbeanstalk.com
Status  : Ready
Health  : Green

Cleaning Up
Eb provides two mechanisms for cleaning up: eb stop and eb delete.

The eb stop command deletes the AWS resources that are running your application (such as the ELB and the EC2 instances). However, it leaves behind all of the application versions and configuration settings that you had deployed, so you can quickly get started again. Eb stop is ideal when you are developing and testing your application and don’t need the AWS resources running overnight. You can get going again by simply running eb start.

The eb delete command deletes the AWS resources as well as all application versions and configuration settings associated with your application. Eb delete is ideal when you’re cleaning up a test application and want to start working on a new application from scratch.

<devserver>: eb stop
Are you sure? [y/n]: y
Stopping environment “jeffblog-env”. This may take a few minutes.
2012-06-26 21:36:48     INFO    terminateEnvironment is starting.
2012-06-26 21:36:53     INFO    Deleted Auto Scaling trigger named: awseb-jeffblog-env-Bub8kxJmPP.
2012-06-26 21:36:55     INFO    Set Auto Scaling group named: awseb-jeffblog-env-Bub8kxJmPP to zero ec2 instances.
2012-06-26 21:36:58     INFO    Deleted load balancer named: awseb-jeffblog-env.
2012-06-26 21:37:02     INFO    Terminating Environment jeffblog-env.
2012-06-26 21:37:05     INFO    Waiting for Auto Scaling groups to terminate ec2 instances.
2012-06-26 21:38:12     INFO    Waiting for Auto Scaling groups to terminate ec2 instances.
2012-06-26 21:39:04     INFO    Waiting for Auto Scaling groups to terminate ec2 instances.
2012-06-26 21:39:04     INFO    Auto Scaling groups terminated all ec2 instances.
Environment “jeffblog-env” has been successfully stopped.

As you can see, eb gives you the power to set up, manage, and update your Elastic Beanstalk applications from the command line. Give it a try and let me know what you think.

Jeff;

New From Netflix – Asgard for Cloud Management and Deployment

Our friends at Netflix have embraced AWS whole-heartedly. They have shared much of what they have learned about how they use AWS to build, deploy, and host their applications. You can read the Netflix Tech Blog and benefit from what they have learned.

Earlier this week they released Asgard, a web-based cloud management and deployment tool, in open source form on GitHub. According to Norse mythology, Asgard is the home of the god of thunder and lightning, and therefore controls the clouds! This is the same tool that the engineers at Netflix use to control their applications and their deployments.

Asgard layers two additional abstractions on top of AWS — Applications and Clusters.

An Application contains one or more Clusters, some Auto Scaling Groups, an Elastic Load Balancer, a Launch Configuration, possibly some Security Groups, an AMI, and some EC2 Instances. Each application also has an owner and an email address to connect the objects and the person responsible for creating and managing them.

Each Cluster of an Application contains one or more Auto Scaling Groups. Asgard assigns incrementing version numbers to newly created Auto Scaling Groups. 

Asgard tracks the components of each application by using object naming conventions (including some limits on the characters allowed in names to allow for simple parsing), and it records the comings and goings in a SimpleDB domain.

There are two distinct ways to deploy new code through Asgard:

  • The cluster-based deployment model creates a new cluster and starts to route traffic to it via an Elastic Load Balancer. The old cluster is disabled but remains available in case a quick rollback becomes necessary.
  • The rolling push deployment model launches instances with new code, gracefully deleting and replacing old instances one or two at a time.

Both of these models count on the fact that the AWS components that they use are dynamic and can be created programmatically. This, to me, is one of the fundamental aspects of the cloud. If you can’t call APIs to create and manipulate infrastructure components such as servers, networks, and load balancers, then you don’t have a cloud!
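
To make the cluster-based model concrete, here is a rough sketch of the underlying API calls (not Asgard’s actual code) using the Python SDK (boto3); the group names, AMI, zones, and load balancer are placeholders:

import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

# Launch the new cluster (a fresh Auto Scaling group) behind the existing load balancer.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='myapp-v002',
    ImageId='ami-xxxxxxxx',                # placeholder AMI baked with the new code
    InstanceType='m1.large',
)
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='myapp-v002',
    LaunchConfigurationName='myapp-v002',
    MinSize=2, MaxSize=4, DesiredCapacity=2,
    AvailabilityZones=['us-east-1a', 'us-east-1b'],
    LoadBalancerNames=['myapp'],
)

# Once the new group is healthy, scale the old one down to zero but keep it
# around so that a quick rollback is possible.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='myapp-v001',
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)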

Asgard also provides a simplified graphical user interface for setting up and managing Auto Scaling Groups.

I would like to thank Netflix for opening up this important part of their technology stack to the rest of the world. Nicely done!

Read more (and see some screen shots) in the Netflix blog post, Asgard Web-Based Cloud Management and Deployment.

Jeff;

High Performance Computing Heads East – EC2 CC2.8XL Instances in EU West (Ireland)

The Cluster Compute Eight Extra Large (cc2.8xl) instance type is now available in our EU West (Ireland) Region.

These instances are perfect for your compute and memory intensive HPC jobs. Each instance includes a pair of Intel Xeon processors, 60.5 GB of RAM, and 3.37 TB of instance storage. Each processor has 8 cores and Hyper-Threading is enabled, so you can execute up to 32 threads in parallel. Because these instances are members of our Cluster Compute family they are connected to a 10 Gigabit network, and can be members of a Placement Group for low latency connectivity to other CC2 instances, with full bisection bandwidth between them. You can launch these instances on demand, or you can bid for Spot Instances. You can also purchase Reserved Instances.

A cluster of 1,064 cc2.8xl instances clocked in at 240 Teraflops, earning position 72 on the Top500 list for June 2012. This cluster contained 17,024 cores and 65.968 TB of RAM.

How can you take advantage of all of this compute power? MIT StarCluster makes it easy to launch and manage a cluster of EC2 instances. CloudFlu lets you attack CFD (Computational Fluid Dynamics) problems using the popular OpenFOAM package.

Watch the following video to see just how easy it is to launch a compute cluster on EC2:

You can find more information about the ways that our customers are putting EC2 and Cluster Compute instances to use on the High Performance Computing on AWS page.

Jeff;