Category: Amazon EC2


New EC2 Second Generation Standard Instances and Price Reductions

We launched Amazon EC2 with a single instance type (the venerable m1.small) in 2006. Over the years we have added many new instance types in order to allow our customers to run a very wide variety of applications and workloads.

The Second Generation Standard Instances
Today we are continuing that practice, with the addition of a second generation to the Standard family of instances. These instances have the same CPU to memory ratio as the existing Standard instances. With up to 50% higher absolute CPU performance, these instances are optimized for applications such as media encoding, batch processing, caching, and web serving.

There are two second generation Standard instance types, both of which are 64-bit platforms:

  • The Extra Large Instance (m3.xlarge) has 15 GB of memory and 13 ECU (EC2 Compute Units) spread across 4 virtual cores, with moderate I/O performance.
  • The Double Extra Large Instance (m3.2xlarge) has 30 GB of memory and 26 ECU spread across 8 virtual cores, with high I/O performance.

The instances are now available in the US East (Northern Virginia) region; we plan to support them in the other regions in early 2013.

On Demand pricing in the region for an instance running Linux starts at $0.58 per hour (Extra Large) and $1.16 per hour (Double Extra Large). Reserved Instances are available, and the instances can also be found on the EC2 Spot Market.
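If you prefer to script your launches, here's a minimal sketch using boto, the Python interface to the EC2 APIs (just one of several ways to do this). The AMI ID and key pair name are placeholders that you would replace with your own:

    import boto.ec2

    # Connect to US East (Northern Virginia), where the m3 instances are available first.
    conn = boto.ec2.connect_to_region('us-east-1')

    # Launch one second generation Extra Large instance.
    # 'ami-xxxxxxxx' and 'my-key-pair' are placeholders, not real values.
    reservation = conn.run_instances(
        'ami-xxxxxxxx',
        instance_type='m3.xlarge',
        key_name='my-key-pair',
        min_count=1,
        max_count=1)

    print(reservation.instances[0].id)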

Price Reductions
As part of this launch, we are reducing prices for the first generation Standard (m1) instances running Linux in the US East (Northern Virginia) and US West (Oregon) regions by over 18% as follows:

Instance Type    New On Demand Price    Old On Demand Price
Small            $0.065/hour            $0.08/hour
Medium           $0.13/hour             $0.16/hour
Large            $0.26/hour             $0.32/hour
Extra Large      $0.52/hour             $0.64/hour

There are no changes to the Reserved Instance or Windows pricing.

Meet the Family
With the launch of the m3 Standard instances, you can now choose from seventeen instance types across seven families. Let’s recap just so that you are aware of all of your options (details here):

  • The first (m1) and second (m3) generation Standard (1.7 GB to 30 GB of memory) instances are well suited to most applications. The m3 instances are for applications that can benefit from higher CPU performance than offered by the m1 instances.
  • The Micro instance (613 MB of memory) is great for lower throughput applications and web sites.
  • The High Memory instances (17.1 to 68.4 GB of memory) are designed for memory-bound applications, including databases and memory caches.
  • The High-CPU instances (1.7 to 7 GB of memory) are designed for scaled-out compute-intensive applications, with a higher ratio of CPU relative to memory.
  • The Cluster Compute instances (23 to 60.5 GB of memory) are designed for compute-intensive applications that require high-performance networking.
  • The Cluster GPU instances (22 GB of memory) are designed for compute and network-intensive workloads that can also make use of a GPGPU (general purpose graphics processing unit) for highly parallelized processing.
  • The High I/O instance (60.5 GB of memory) provides very high, low-latency random I/O performance.

With this wide variety of instance types at your fingertips, you might want to think about benchmarking each component of your application on every applicable instance type in order to find the one that gives you the best performance and the best value.

— Jeff;

The AWS Report – Matt Lull of Citrix

In this episode of The AWS Report, I spoke with Matt Lull, Managing Director, Global Strategic Alliances, for Citrix to learn more about their cloud strategy. We talked about their line of virtualization products including Xen, XenServer, CloudBridge, and the Citrix NetScaler.

After that we talked about the concept of desktop virtualization, and Matt told me “Work isn’t a place you go anymore, it is a thing that you do.” From there we wrapped up with a discussion about AWS re:Invent.

— Jeff;

Launch EC2 Micro Instances in a Virtual Private Cloud

Judging from the number of requests that I have had for this particular combination of EC2 features, I’m expecting this to be a very popular post.

You can now launch EC2 micro (t1.micro) instances within a Virtual Private Cloud (VPC). The AWS Free Usage Tier now extends to t1.micro instances running inside of a VPC.

The micro instances provide a small amount of consistent CPU power, along with the ability to increase it in short bursts when additional cycles are available. They are a good match for lower throughput applications and web sites that require additional compute cycles from time to time.

With this release, you now have everything that you need to create and experiment with your very own Virtual Private Cloud at no cost. This is pretty cool and I’m sure you’ll make good use of it.
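If you would rather do this from code than from the console, here's a minimal sketch using boto, the Python interface to the EC2 APIs; the AMI and subnet IDs are placeholders for resources in your own VPC:

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Launch a Free Usage Tier eligible micro instance into an existing VPC subnet.
    # Both IDs below are placeholders.
    reservation = conn.run_instances(
        'ami-xxxxxxxx',
        instance_type='t1.micro',
        subnet_id='subnet-xxxxxxxx',
        min_count=1,
        max_count=1)

    print(reservation.instances[0].id)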

— Jeff;

 

SAP HANA One – Now Available for Production Use on AWS

Earlier this year I briefly mentioned SAP HANA and the fact that it was available for developer use on AWS.

Today, SAP announced HANA One, a deployment option for HANA that is certified for production use on AWS and is available now in the AWS Marketplace. You can run this powerful, in-memory database on EC2 for just $0.99 per hour.

Because you can now launch HANA in the cloud, you don’t need to spend time negotiating an enterprise agreement, and you don’t have to buy a big server. If you are running your startup from a cafe or commanding your enterprise from a glass tower, you get the same deal. No long-term commitment and easy access to HANA, on an hourly, pay-as-you-go basis, charged through your AWS account.

What’s HANA?
SAP HANA is an in-memory data platform well suited for performing real-time analytics, and developing and deploying real-time applications.

I spent some time watching the videos on the Experience HANA site as I was getting ready to write this post. SAP founder Hasso Plattner described the process that led to the creation of HANA, starting with a decision to build a new enterprise database in December of 2006. He explained that he wanted to capitalize on two industry trends — the availability of multi-core CPUs and the growth in the amount of RAM per system. Along with this, he wanted to exploit parallelism within the confines of a single application. Here’s what they came up with:

Putting it all together, SAP HANA runs entirely in memory, using spinning disk only for backup. Traditional disk-based data management solutions are optimized for transactional or analytic processing, but not both. Transactional processing is oriented around and optimized for row-based operations: inserts, updates, and deletes. In contrast, analytic processing is tuned for complex queries, often involving subsets of the columns in a particular table (hence the rise of column-oriented databases). All of this specialization and optimization is needed due to the fact that accessing data stored on a disk is 10,000 to 1,000,000 times slower than accessing data stored in memory. In addition to this bottleneck, disk-based systems are unable to take full advantage of multi-core CPUs.

At the base, SAP HANA is a complete, ACID-compliant relational database with support for most of SQL-92. At the top,  you’ll find an analytical interface using Multi-Dimensional Expressions (MDX) and support for SAP BusinessObjects. Between the two is a parallel data flow computing engine designed to scale across cores. HANA also includes a Business Function Library, a Predictive Analysis Library, and the “L” imperative language.

So, what is HANA good for? Great question! Here are some applications:

  • Real-time analytics such as data warehousing, predictive analysis on Big Data, and operational (sales, finance, or shipping) reporting.
  • Real-time applications such as core process (e.g. ERP) acceleration, planning and optimization, and sense and response (smart meters, point of sale, and the like).

As an example of what can be done, SAP Expense Insight is built on HANA and is also available in the AWS Marketplace. It gives department managers real-time visibility into their budgets, across any time horizon.

The folks at Taulia are building a dynamic discounting platform around HANA One. They’re already using AWS to streamline their deployment and operations; HANA One will allow them to make their platform even more responsive.

This is an enterprise-class product (but one that’s accessible to everyone) and I’ve barely scratched the surface. You can read this white paper to learn more (you may have to give the downloaded file a “.pdf” extension in order to open it).

Deploy HANA Now
As I mentioned earlier, SAP has certified HANA for production use on AWS. You can launch it today and get started right away.

You don’t have to spend a lot of money. You don’t need to buy and install high-end hardware in your data center, and you don’t need to license HANA. Instead, you can launch HANA from the AWS Marketplace and pay for the hardware and the software on an hourly, pay-as-you-go basis.

You’ll pay $0.99 per hour to run HANA One on AWS, plus another $2.50 per hour for an EC2 Cluster Compute Eight Extra Large instance with 60.5 GB of RAM and dual Intel Xeon E5 processors, bringing the total software and hardware cost to just $3.49 per hour, plus standard AWS fees for EBS and data transfer.
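To put the hourly rate in perspective, here's a quick back-of-the-envelope calculation in Python, using only the prices quoted above (EBS and data transfer charges are not included):

    # Hourly software + hardware cost for HANA One on AWS.
    hana_one_software = 0.99   # per hour, from the AWS Marketplace listing
    cc2_8xlarge_hw    = 2.50   # per hour, Cluster Compute Eight Extra Large
    hourly_total = hana_one_software + cc2_8xlarge_hw

    print("Hourly total: $%.2f" % hourly_total)                        # $3.49
    print("30-day month, running 24x7: $%.2f" % (hourly_total * 720))  # about $2,512.80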

To get started, visit the SAP HANA page in the AWS Marketplace.

— Jeff;

Amazon EC2 Spot Instance Bid Status

We want to make EC2 Spot Instances even easier to use. One way we are doing this is by making the bidding process more open and more transparent.

You probably know that you can use Spot Instances to bid for unused capacity, allowing you to obtain compute capacity at a price that is based on supply and demand.

When you submit a bid for Spot capacity, your request includes a number of parameters and constraints. The constraints provide EC2 with the information that it needs to satisfy your bid (and the other bids that it is competing with) as quickly as possible. EC2 stores and then repeatedly evaluates the constraints until it is able to satisfy your bid. The following constraints (some mandatory and some optional) affect the evaluation process:

  • Max Price – The maximum bid price you are willing to pay per instance hour.
  • Instance Type – The desired EC2 instance type.
  • Persistent – Whether your request is one-time or persistent.
  • Request Validity Period – The length of time that your request will remain valid.
  • Launch Group – A label that groups a set of requests together so that they are started or terminated as a group.
  • Availability Zone Group – A label that groups a set of requests together so that the instances they start will launch in the same Availability Zone.
  • Availability Zone – An Availability Zone target for the request.
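For reference, here's a hedged sketch of how those constraints map onto a Spot request made with boto, the Python interface to the EC2 APIs. The AMI ID, prices, and group names are placeholders, and the parameter names reflect boto 2 as I recall them:

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Submit a Spot request; every value below is illustrative.
    requests = conn.request_spot_instances(
        price='0.05',                            # Max Price, per instance hour
        image_id='ami-xxxxxxxx',                 # placeholder AMI
        count=2,
        type='one-time',                         # Persistent: 'one-time' or 'persistent'
        valid_until='2012-12-31T23:59:59Z',      # Request Validity Period (ISO 8601)
        launch_group='my-launch-group',          # Launch Group
        availability_zone_group='my-az-group',   # Availability Zone Group
        instance_type='m1.large',                # Instance Type
        placement='us-east-1a')                  # Availability Zone

    for req in requests:
        print("%s  %s" % (req.id, req.state))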

Spot Life Cycle
Each bid has a life cycle with multiple states. Transitions between the states occur when constraints are fulfilled. Here’s the big picture:

We want to give you additional information so that you can do an even better job of making Spot Bids and managing the running instances. You might find yourself wondering:

  • Why hasn’t my Spot Bid been fulfilled yet?
  • Can I change something in my Spot Bid to get it fulfilled faster?
  • Why did my Spot Instance launch fail?
  • Is my Spot Instance about to be interrupted?
  • Why was my Spot Instance terminated?

Spot Instance Bid Status
In order to give you additional insight into the evaluation process, we are making the Spot Instance Bid Status visible through the AWS Management Console and the EC2 APIs. The existing DescribeSpotInstanceRequests function will now return two additional pieces of information – bidStatusCode and bidStatusMessage. This information is updated every time the Spot Bid’s provisioning status changes or is re-evaluated (typically every few seconds, but sometimes up to 3 minutes).

  • bidStatusCode is designed to be both machine-readable and human-readable.
  • bidStatusMessage is human-readable. Each bidStatusCode has an associated message.

You can find the complete set of codes and messages in the Spot Instance documentation. Here are some of the more interesting codes:

  • pending-evaluation – Your Spot request has been submitted for review and is pending evaluation.
  • fulfilled – Your Spot request is fulfilled and the requested instances are running.
  • marked-for-termination – Your Spot Instance is marked for termination because the request price is lower than the fulfillment price for the given instance type in the specified Availability Zone.
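Here's a hedged sketch of reading the new fields with boto (Python); in boto 2 the code and message surface on each request's status attribute, though the exact attribute names may differ slightly between SDK versions:

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # DescribeSpotInstanceRequests now returns the bid status code and message.
    for req in conn.get_all_spot_instance_requests():
        # req.status.code / req.status.message correspond to bidStatusCode /
        # bidStatusMessage (attribute names as I recall them in boto 2).
        print("%s  %s  %s" % (req.id, req.status.code, req.status.message))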

You can click on the Bid Status message in the AWS Management Console to see a more verbose message in the tooltip:

What is $100 Worth of Spot Good For?
If you are wondering about the value of Spot Instances, the new post, Data Mining the Web: $100 Worth of Priceless, should be helpful. The developers at Lucky Oyster used the Common Crawl public data set, EC2 Spot Instances, and a few hundred lines of Ruby to data mine 3.4 billion Web pages and extract close to a Terabyte of structured data. All in 14 hours for about $100.

Learn About Spot
I recently interviewed Stephen Elliott, Senior Product Manager on the EC2 team, to learn more about the Spot Instances concept. Here’s our video:

Stephen and his team are interested in your feedback on this and other Spot Instance features. You can email them at spot-instance-feedback@amazon.com.

If you are new to Spot Instances, get started now by signing up for EC2 and watching our HOWTO video. To learn even more, visit our EC2 Spot Instance Curriculum page.

— Jeff;

 

Amazon Linux AMI 2012.09 Now Available

Max Spevack of the Amazon EC2 team brings news of the latest Amazon Linux AMI.

— Jeff;


The Amazon Linux AMI 2012.09 is now available.

Since we removed the Public Beta tag from the Amazon Linux AMI last September, we’ve been on a six-month release cycle focused on making sure that EC2 customers have a stable, secure, and simple Linux-based AMI that integrates well with other AWS offerings.

There are several new features worth discussing, as well as a host of general updates to packages in the Amazon Linux AMI repositories and to the AWS command line tools. Here’s what’s new:

  • Kernel 3.2.30: We have upgraded the kernel to version 3.2.30, which follows the 3.2.x kernel series that we introduced in the 2012.03 AMI.
  • Apache 2.4 & PHP 5.4: This release supports multiple versions of both Apache and PHP, engineered to work together in specific combinations. The default combination is Apache 2.2 with PHP 5.3, installed by running yum install httpd php. Based on customer requests, we also support Apache 2.4 with PHP 5.4 in the package repositories; these packages are installed by running yum install httpd24 php54.
  • OpenJDK 7: While OpenJDK 1.6 is still installed by default on the AMI, OpenJDK 1.7 is included in the package repositories, and available for installation.  You can install it by running yum install java-1.7.0-openjdk.
  • R 2.15: Also in response to your requests, we have added the R language to the Amazon Linux AMI. We are here to serve your statistical analysis needs! Simply yum install R and off you go.
  • Multiple Interfaces & IP Addresses: Additional network interfaces attached while the instance is running are configured automatically. Secondary IP addresses are refreshed during DHCP lease renewal, and the related routing rules are updated.
  • Multiple Versions of GCC: The default version of GCC that is available in the package repositories is GCC 4.6, which is a change from the 2012.03 AMI in which the default was GCC 4.4 and GCC 4.6 was shipped as an optional package.  Furthermore, GCC 4.7 is available in the repositories.  If you yum install gcc you will get GCC 4.6.  For the other versions, either run yum install gcc44 or yum install gcc47.

The Amazon Linux AMI 2012.09 is available for launch in all regions. Users of 2012.03, 2011.09, and 2011.02 versions of the Amazon Linux AMI can easily upgrade using yum.

The Amazon Linux AMI is a rolling release, configured to deliver a continuous flow of updates that allow you to roll from one version of the Amazon Linux AMI to the next.  In other words, Amazon Linux AMIs are treated as snapshots in time, with a repository and update structure that gives you the latest packages that we have built and pushed into the repository.  If you prefer to lock your Amazon Linux AMI instances to a particular version, please see the Amazon Linux AMI FAQ for instructions.

As always, if you need any help with the Amazon Linux AMI, don’t hesitate to post on the EC2 forum, and someone from the team will be happy to assist you.

— Max

PS – Help us to build the Amazon Linux AMI! We are actively hiring for Linux Systems Engineer, Linux Software Development Engineer, and Linux Kernel Engineer positions.

Scaling Science: 1 Million Compute Hours in 1 Week

For many scientists, the computer has become as important as the test tube, the centrifuge, or the grad student in delivering groundbreaking research. Whether screening for active cancer treatments or colliding atoms, the availability of compute cycles can significantly affect the time it takes for scientists to crunch their numbers. Indeed, compute resources are often so constrained that researchers have to scale back the scope of their work to fit the capacity available.

Not so with Amazon EC2, where the general purpose, utility computing model is a perfect fit for scientific workloads of any scale. Researchers (and their grad students) can access the computational resources they need to deliver on their scientific vision, while staying focused on their analysis and results.

Scaling Up at the Morgridge Institute
Victor Ruotti faced this exact problem. His team at the Morgridge Institute at the University of Wisconsin-Madison is looking at the genes that are expressed as template cells (stem cells) start to take on the various specialized functions our tissues need, such as absorbing nutrients or conducting nervous impulses. Impressive and important work, with large computational requirements: millions of RNA sequence reads and a data footprint of 78 TB.

Victor’s research was selected as the winner of Cycle Computing’s inaugural Big Science Challenge, and using Cycle’s software his team ran through the 15,376 alignment runs on Amazon EC2, clocking up over a million compute hours in a week for just $116 an hour.

A Century of Compute
Over 1,000,000 compute hours, 115 years of work for a single processor, were used to build the genetic map the team needed to quickly identify which regions of the genome are important for establishing cell types of clinical importance. The entire analysis started running on Spot Instances, on high memory instance types (the M2 class), within just 20 minutes, which meant that the team could use Cycle Server to stretch their budget further and build an extremely high resolution genetic map. The Spot price was typically about one twelfth of the equivalent On-Demand price, and their cluster ran across an average of 5,000 instances (8,000 at peak), for a total cost of $19,555. That’s less than the price of 20 lab pipettes.
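Here's a quick sanity check on those numbers in Python, using the rounded figures quoted above:

    # Rough arithmetic behind the run; all inputs are the figures quoted in this post.
    compute_hours  = 1000000    # a bit over a million instance hours were actually used
    wall_clock_hrs = 7 * 24     # the run finished within a week
    total_cost     = 19555.0    # US dollars

    print("Single-processor equivalent: %.0f years" % (compute_hours / (24.0 * 365)))  # ~114-115 years
    print("Cost per wall-clock hour: $%.0f" % (total_cost / wall_clock_hrs))           # ~$116
    print("Cost per compute hour: $%.3f" % (total_cost / compute_hours))               # ~$0.020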

Cycle Computing on the AWS Report
Our very own Jeff Barr was lucky enough to spend a few minutes chatting with Cycle Computing CEO, Jason Stowe for the AWS Report. Here is the episode they recorded:

Cycle also has a blog post with some more information on this work and on the 2012 Big Science Challenge.

We’re very happy to see the utility computing platform of AWS used for such groundbreaking work. If you’re working with data and would like to discuss how to get up and running at this scale, or any other, I do hope you’ll get in touch.

Upcoming Webinar
If you would like to know more, I’ll be hosting a webinar on big data and HPC on the 16th of October. We’ll discuss some customer success stories and common best practices for using tools such as Elastic MapReduce, DynamoDB, and the broad range of services in the AWS Marketplace to accelerate your own applications and analytics.

Registration is free. See you there.

~ Matt

AWS Growth – Adding a Third Availability Zone in Tokyo

We announced an AWS Region in Tokyo about 18 months ago. In the time since the launch, our customers have launched all sorts of interesting applications and businesses there. Here are a few examples:

  • Cookpad.com is the top recipe site in Japan. They are hosted entirely on AWS, and handle more than 15 million users per month.
  • KAO is one of Japan’s largest manufacturers of cosmetics and toiletries. They recently migrated their corporate site to the AWS cloud.
  • Fukuoka City launched the Kawaii Ward project to promote tourism to the virtual city. After a member of the popular Japanese idol group AKB48 raised awareness of the site, virtual residents flocked to it to sign up for an email newsletter. The city expected 10,000 registrations in the first week and was pleasantly surprised to receive over 20,000.

Demand for AWS resources in Japan has been strong and steady, and we’ve been expanding the region accordingly. You might find it interesting to know that an AWS region can be expanded in two different ways. First, we can add additional capacity to an existing Availability Zone, spanning multiple datacenters if necessary. Second, we can create an entirely new Availability Zone. Over time, as we combine both of these approaches, a single AWS region can grow to encompass many datacenters. For example, the US East (Northern Virginia) region currently occupies more than ten datacenters structured as multiple Availability Zones.

Today, we are expanding the Tokyo region with the addition of a third Availability Zone. This will add capacity and will also provide you with additional flexibility. As is always the case with AWS, untargeted launches of EC2 instances will now make use of this zone with no changes to existing applications or configurations. If you are currently targeting specific Availability Zones, please make sure that your code can handle this new option.
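If your code does target specific zones, a quick way to see the new zone is to ask EC2 for the zone list. Here's a minimal boto (Python) sketch:

    import boto.ec2

    # List the Availability Zones in the Asia Pacific (Tokyo) region.
    conn = boto.ec2.connect_to_region('ap-northeast-1')
    for zone in conn.get_all_zones():
        print("%s  %s" % (zone.name, zone.state))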

— Jeff;

 

Amazon EC2 Reserved Instance Marketplace

EC2 Options
I often tell people that cloud computing is equal parts technology and business model. Amazon EC2 is a good example of this; you have three options to choose from:

  • You can use On-Demand Instances, where you pay for compute capacity by the hour, with no upfront fees or long-term commitments. On-Demand instances are recommended for situations where you don’t know how much (if any) compute capacity you will need at a given time.
  • If you know that you will need a certain amount of capacity, you can buy an EC2 Reserved Instance: you make a low, one-time upfront payment to reserve capacity for a one or three year term, and you pay a significantly lower hourly rate. You can choose between Light Utilization, Medium Utilization, and Heavy Utilization Reserved Instances to further align your costs with your usage (see the break-even sketch after this list).
  • You can also bid for unused EC2 capacity on the Spot Market with a maximum hourly price you are willing to pay for a particular instance type in the Region and Availability Zone of your choice. When the current Spot Price for the desired instance type is at or below the price you set, your application will run.
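To make the Reserved Instance trade-off concrete, here is a tiny break-even sketch in Python. The three prices are made-up placeholders, not actual AWS rates; plug in the current On-Demand and Reserved Instance pricing for your instance type and region:

    # Break-even point between On-Demand and a Reserved Instance.
    # All three numbers are illustrative placeholders, not real AWS prices.
    on_demand_rate = 0.10    # $ per hour, On-Demand
    reserved_rate  = 0.04    # $ per hour, with a Reserved Instance
    upfront_fee    = 200.00  # one-time payment for the Reserved Instance

    breakeven_hours = upfront_fee / (on_demand_rate - reserved_rate)
    print("Break-even after %.0f instance hours (about %.1f months of 24x7 use)"
          % (breakeven_hours, breakeven_hours / (24 * 30.0)))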

Reserved Instance Marketplace
Today we are increasing the flexibility of the EC2 Reserved Instance model even more with the introduction of the Reserved Instance Marketplace. If you have excess capacity, you can list it on the marketplace and sell it to someone who needs additional capacity. If you need additional capacity, you can compare the upfront prices and durations of Reserved Instances on the marketplace to the upfront prices of one and three year Reserved Instances available directly from AWS. The Reserved Instances in the Marketplace are functionally identical to other Reserved Instances and carry the then-current hourly rates; they simply have less than a full term remaining and a different upfront price. Transactions in the Marketplace are always between a buyer and a seller; the Reserved Instance Marketplace hosts the listings and allows buyers and sellers to locate and transact with each other.

You can use this newfound flexibility in a variety of ways. Here are a few ideas:

  1. Switch Instance Types. If you find that your application has put on a little weight (it happens to the best of us), and you need a larger instance type, sell the old RIs and buy new ones from the Marketplace or from AWS. This also applies to situations where we introduce a new instance type that is a better match for your requirements.
  2. Buy Reserved Instances on the Marketplace for your medium-term needs. Perhaps you are running a cost-sensitive marketing promotion that will last for 60-90 days. Purchase the Reserved Instances (which we sometimes call RIs), use them until the promotion is over, and then sell them. You’ll benefit from RI pricing without the need to own them for the full one or three year term. Keep the RIs as long as they continue to save you money.
  3. Relocate. Perhaps you started to run your application in one AWS Region, only to find out later that another one would be a better fit for the majority of your customers. Again, sell the old ones and buy new ones.

In short, you get the pricing benefit of Reserved Instances and the flexibility to make changes as your application and your business evolves, grows, or (perish the thought) shrinks.

Dave Tells All
I interviewed Dave Ward of the EC2 Spot Instances team to learn more about this feature and how it will benefit our users. Watch and learn:

The Details
Now that I’ve whetted your appetite, let’s take a look at the details. All of the functions described below are supported by the AWS Management Console, the EC2 API (command line) tools, and the EC2 APIs.

After registration, any AWS customer (US or non-US legal entity) can buy and sell Reserved Instances. Sellers will need to have a US bank account, and will need to complete an online tax interview before they reach 200 transactions or $20,000 in sales. You will need to verify your bank account as part of the registration process; this may take up to two weeks depending on your bank. You will not be able to receive funds until the verification process has succeeded.

Reserved Instances can be listed for sale after you have owned them for at least 30 days, and after we have received and processed your payment for them. The RI’s state must be displayed as Active in the Reserved Instance section of the AWS Management Console:

You can list the remainder of your Reserved Instance term, rounded down to the nearest month. If you have 11 months and 13 days remaining on an RI, you can list the 11 months. You can set the upfront payment that you are willing to accept for your RI, and you can also customize the month-over-month price adjustment for the listing. You will continue to own (and to benefit from) the Reserved Instance until it is sold.

As a seller, you will receive a disbursement report if you have activity on a particular day. This report is a digest of all Reserved Instance Marketplace activity associated with your account and will include new Reserved Instance listings, listings that are fully or partially fulfilled, and all sales proceeds, along with details of each transaction.

When your Reserved Instance is sold, funds will be disbursed to your bank account after the payment clears, less a 12% seller fee. You will be informed of the purchaser’s city, state, country, and zip code for tax purposes. As a seller, you are responsible for calculating and remitting any applicable transaction taxes such as sales tax or VAT.

As a buyer, you can search and browse the Marketplace for Reserved Instances that best suit your needs with respect to location, instance type, price, and remaining time. Once acquired, you will automatically gain the pricing and capacity assurance benefits of the instance. You can later turn around and resell the instance on the Marketplace if your needs change.

When you purchase a Reserved Instance through the Marketplace, you will be charged for Premium Support on the upfront fee. The upfront fees will also count toward the volume discount tiers used for future Reserved Instance purchases, but the discounts themselves do not apply to Marketplace purchases.
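For buyers who prefer the API to the console, here's a rough sketch using boto (Python). Treat the parameter and attribute names as assumptions from memory of boto 2, in particular include_marketplace and the pricing fields; check your SDK's documentation before relying on them:

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Search Reserved Instance offerings, including third-party Marketplace listings.
    offerings = conn.get_all_reserved_instances_offerings(
        instance_type='m1.large',
        availability_zone='us-east-1a',
        product_description='Linux/UNIX',
        include_marketplace=True)       # parameter name assumed from boto 2

    for o in offerings:
        months = int(o.duration) // (86400 * 30)    # duration is reported in seconds
        print("%s  ~%d months  $%s upfront" % (o.id, months, o.fixed_price))

    # To buy, pass the chosen offering ID to purchase_reserved_instance_offering
    # (commented out here; review the price and remaining term first):
    # conn.purchase_reserved_instance_offering(offerings[0].id, instance_count=1)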

Visual Tour for Sellers
Here is a visual tour of the Reserved Instance Marketplace from the seller’s viewpoint, starting with the process of registering as a seller and listing an instance for sale. The Sell Reserved Instance button initiates the process:


The console outlines the entire selling process for you:

Here’s how you set the price for your Reserved Instances. As you can see, you have the ability to set the price on a month-by-month basis to reflect the declining value of the instance over time:


You will have the opportunity to finalize the listing, and it will become active within a few minutes. This is the perfect time to acquire new Reserved Instances to replace those that you have put up for sale:

Your listings are visible within the Reserved Instances section of the Console:

Here’s a video tutorial on the selling process:

Visual Tour for Buyers
Here is a similar tour for buyers. You can purchase Reserved Instances in the Console. You start by searching for instances with the characteristics that you need and adding the most attractive ones to your cart:

You can then review the contents of your cart and complete your purchase:

Here’s a video tutorial on the buying process:

I hope that you enjoy (and make good use of) the additional business flexibility of the Reserved Instance Marketplace.

— Jeff;