Cost Savings in the Cloud – foursquare and Global Blue

We've received great feedback from customers on the recent AWS price reductions. I've personally heard a number of great stories from customers who are using AWS to reduce the cost of running their business.

We've written up two of these recent cost savings stories as AWS case studies. foursquare Labs, Inc. and Global Blue may be two dramatically different companies in terms of longevity (3 years versus 3 decades) and industry focus, but they have one thing in common: they are both saving time and money by using Amazon Web Services as an alternative to on-premises infrastructure. Here’s the scoop:

foursquare (new AWS case study) is a location-based social network. Over 10 million foursquare users check in via a smartphone app or SMS to exchange travel tips and to share their location with friends. By checking in frequently, users earn points and virtual badges. To perform analytics across more than 5 million daily check-ins, foursquare uses Amazon Elastic MapReduce, Amazon EC2 Spot Instances, Amazon S3, and the open-source technologies MongoDB and Apache Flume. By taking advantage of the new Amazon EC2 price reductions and using a combination of On-Demand and Reserved Instances, foursquare saves 53% in costs over self-hosting while still meeting its scalability needs.

Global Blue (new AWS case study) is a multinational firm that has been instrumental in delivering tax-free shopping and refund points to international travelers for nearly 30 years. The company's network has helped 270,000 retailers, shopping brands, and hotels in 40 countries. In 2010, Global Blue handled over 20 million transactions worldwide, and an estimated 55,000 travelers use its services every day. To help track the transactions occurring between merchants, banks, and international travelers, the company needed to create more capacity for its business intelligence (BI) needs. As a result of moving to AWS, Global Blue has increased speed, capacity, and scalability, all while avoiding $800,000 in CapEx and $78,000 in OpEx costs that would have been spent self-hosting. That's nearly $1M in cost savings by moving to the cloud!

Companies like foursquare and Global Blue illustrate not only the cost savings that can be achieved with cloud computing, but also the diverse approaches that can be taken to maximize performance and scalability.

Jeff;


New AMI Catalog and New AMI Launch Button Are Now Available

A few days ago we launched a simple, yet very important new feature on our AWS website: the self-service AWS AMI catalog.

As you know, AMIs are disk images that contain a pre-configured operating system and virtual application software, and they serve as the required base to launch an EC2 instance.

Before this change, customers had to spend considerable time navigating the site, with no way to filter results by multiple categories or to easily find details about a specific AMI. In fact, some of them sent us feedback about this, and since we always listen to customer feedback, we decided it was time to improve the catalog.

With the new catalog, available today, customers can easily search for AMIs by specifying desired categories, such as Provider, Region, Architecture, Root device type, and Operating System (Platform), and they can sort the results by date or title (A-Z).

This is an example of the results you get when you search for a 64-bit, EBS-boot, South America AMI:

Figure 1

You can launch the AMI directly with the new Launch button shown in the figure above, or you can click the link and look at the details before deciding in which region you want to launch it:

Figure 2

The Launch button is particularly useful, since you no longer need to copy and paste AMI IDs from the old catalog to your Management Console.

We also made it easier for customers to publish their own AMIs into the catalog. Simply find the relevant AMI ID on your Management Console, and then submit it as shown below.

Figure 3

The system will verify the ID and then submit it. You can check the status of your submission on the appropriate page, and review your community contributions, where you can see what you own and edit it.

As a final note, always remember to check out our security guidelines on how to use Shared AMIs (also called Community AMIs).

Let us know how you like the new catalog.

– Simone (@simon)

AWS Elastic Beanstalk – Build PHP Apps Using Git-Based Deployment

I’m pleased to be able to tell you that AWS Elastic Beanstalk now supports PHP and Git deployment.

Elastic Beanstalk and PHP
AWS Elastic Beanstalk makes it easy for you to quickly deploy and manage applications in the AWS cloud. You simply upload your application, and Elastic Beanstalk automatically handles all of the details associated with deployment, including provisioning of EC2 instances, load balancing, auto scaling, and application health monitoring. Even though it does all of this for you, you retain full control over the AWS resources powering your application and you can access them at any time if necessary. There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to run your applications.

AWS Elastic Beanstalk supports PHP applications that run on the familiar Apache HTTP Server and PHP 5.3. It also supports Java web applications running on Apache Tomcat 6 and 7.

Under the hood, Elastic Beanstalk leverages AWS services such as Amazon EC2, Elastic Load Balancing, and Auto Scaling to provide a highly reliable, scalable, and cost-effective infrastructure for PHP applications. To get started, you can use the AWS Management Console or the Elastic Beanstalk command line tools to create applications and environments.

Git-Based Deployment
The newly released Git interface provides faster deployments based on the popular Git version control system. You can now set up your Git repositories to directly deploy changes to your AWS Elastic Beanstalk environments. Git speeds up deployments by only pushing your modified files to AWS Elastic Beanstalk. In seconds, PHP applications get updated on a set of Amazon EC2 instances. To learn more about how to leverage Git deployment, go to Deploying PHP Applications Using Git in the AWS Elastic Beanstalk Developer Guide.

<jeff wordpress>: git aws.push
Counting objects: 1035, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (1015/1015), done.
Writing objects: 100% (1035/1035), 3.84 MiB | 229 KiB/s, done.
Total 1035 (delta 72), reused 0 (delta 0)
remote:
To https://<url>.us-east-1.amazonaws.com/repos/phptestapp/phptestenv
   683a95c..f8caebc  master -> master

If you make a change to a configuration file in your application, Git pushes the incremental changes to Elastic Beanstalk and your deployment completes in seconds:

<jeff wordpress>: git aws.push
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 287 bytes, done.
Total 3 (delta 2), reused 0 (delta 0)
remote:
To https://<url>.us-east-1.amazonaws.com/repos/phptestapp/phptestenv
   c24a736..4df4dad  master -> master

And More
AWS Elastic Beanstalk allows you to directly modify both the infrastructure and software to match the requirements of your applications. You can connect your PHP applications to any database of your choice. If your application needs a MySQL database, Amazon RDS provides a highly available and scalable MySQL database and frees you from time-consuming database administration tasks. For a limited time, Amazon RDS is offering a 60-day free trial to new Amazon RDS customers. To learn more about your eligibility for the 60-day free trial and to sign up, visit aws.amazon.com/rds/free-trial. If you're looking for a database that offers fast and predictable performance with seamless scalability, use the AWS SDK for PHP to access the fully managed Amazon DynamoDB NoSQL database service.

As a PHP developer myself, I can't wait to start using the new PHP runtime and super fast Git deployment to manage my PHP applications on AWS. To learn more about AWS Elastic Beanstalk, go to the AWS Elastic Beanstalk Developer Guide.

— Jeff;

P.S: We are hiring software development engineers and product managers. If you are passionate about building the best developer experience, get in touch with us at aws-elasticbeanstalk-jobs@amazon.com.


Two New AWS Getting Started Guides

We’ve put together a pair of new Getting Started Guides for Linux and Microsoft Windows. Both guides will show you how to use EC2, Elastic Load Balancing, Auto Scaling, and CloudWatch to host a web application.

The Linux version of the guide (HTML, PDF) is built around the popular Drupal content management system. The Windows version (HTML, PDF) is built around the equally popular DotNetNuke CMS.

These guides are comprehensive. You will learn how to:

  • Sign up for the services
  • Install the command line tools
  • Find an AMI
  • Launch an Instance
  • Deploy your application
  • Connect to the Instance using the MindTerm SSH Client or PuTTY
  • Configure the Instance
  • Create a custom AMI
  • Create an Elastic Load Balancer
  • Update a Security Group
  • Configure and use Auto Scaling
  • Create a CloudWatch Alarm
  • Clean up

Other sections cover pricing, costs, and potential cost savings.

We also have Getting Started Guides for Web Application Hosting, Big Data, and Static Website Hosting.

— Jeff;


The Next Type of EC2 Status Check: EBS Volume Status

We've gotten great feedback on the EC2 instance status checks that we introduced back in January. As I said at the time, we expect to add more of these checks throughout the year. Our goal is to give you the information that you need in order to understand when your EC2 resources are impaired.

Status checks help identify problems that may impair an instance's ability to run your applications. They show the results of automated tests that EC2 performs on every running instance to detect hardware and software issues. Today we are happy to introduce the first volume status check for EBS volumes. In rare cases, bad things happen to good volumes. The new status check is updated when the automated tests detect a potential inconsistency in a volume's data. In addition, we've added API and console support so you can control how a potentially inconsistent volume will be processed.

Here’s what’s new:

  • Status Checks and Events – The new DescribeVolumeStatus API reflects the status of the volume and lists an event when a potential inconsistency is detected. The event tells you why a volume's status is impaired and when the impairment started. By default, when we detect a problem, we disable I/O on the volume to prevent application exposure to potential data inconsistency.
  • Re-Enabling I/O – The IO Enabled status check fails when I/O is blocked. You can re-enable I/O by calling the new EnableVolumeIO API.
  • Automatically Enable I/O – Using the ModifyVolumeAttribute/DescribeVolumeAttribute APIs, you can configure a volume to automatically re-enable I/O. We provide this for cases when you might favor immediate volume availability over consistency. For example, in the case of an instance's boot volume where you're only writing logging information, you might choose to accept possible inconsistency of the latest log entries in order to get the instance back online as quickly as possible.
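
The APIs above can be stitched into a simple remediation loop. Here is a hedged sketch using boto3, today's AWS SDK for Python (which postdates this post); the helper parses the response shape of DescribeVolumeStatus, and the comments show where EnableVolumeIO and ModifyVolumeAttribute would fit. Volume IDs are placeholders.

```python
# Sketch: find volumes whose "io-enabled" status check is failing, i.e.
# volumes on which EC2 has blocked I/O after detecting a potential
# data inconsistency. Input matches the DescribeVolumeStatus response.
def find_io_disabled(status_response):
    impaired = []
    for vol in status_response.get("VolumeStatuses", []):
        for check in vol["VolumeStatus"].get("Details", []):
            if check["Name"] == "io-enabled" and check["Status"] != "passed":
                impaired.append(vol["VolumeId"])
    return impaired

# With boto3 (an assumption, not part of the original post), remediation
# would look roughly like:
#
#   import boto3
#   ec2 = boto3.client("ec2")
#   for vol_id in find_io_disabled(ec2.describe_volume_status()):
#       ec2.enable_volume_io(VolumeId=vol_id)  # accept possible inconsistency
#
#   # Or opt a boot volume in to automatic re-enabling:
#   ec2.modify_volume_attribute(VolumeId="vol-12345678",
#                               AutoEnableIO={"Value": True})
```

The live API calls are left as comments so the parsing logic stands on its own; the default (I/O stays disabled until you act) reflects the behavior described above.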

Console Support
The status of each of your volumes is displayed in the volume list (you may have to add the Status Checks column to the table using the selections accessed via the Show/Hide button):

(I don’t have that many volumes; this screen shot came from a colleague’s test environment).

The console displays detailed information about the status check when a volume is selected:

And you can set the volume attribute to auto-enable I/O by accessing this option in the volume actions drop-down list:

To learn more, go to the Monitoring Volume Status section of the Amazon EC2 User Guide.

We're happy to be delivering another EC2 resource status check, giving you information on impaired resources and the tools to take rapid action on them. As I noted before, we look forward to providing more of these status checks over time.

Help Wanted
If you are interested in helping us build systems like EBS, we'd love to hear from you! EBS is hiring software engineers, product managers, and experienced engineering managers. For more information about open positions, please contact us at ebs-jobs at amazon.com.

— Jeff;

EC2 Updates: New Medium Instance, 64-bit Ubiquity, SSH Client

Big News
I have three important announcements for EC2 users:

  1. We have introduced a new instance type, the Medium (m1.medium).
  2. You can now launch 64-bit operating systems on the m1.small and c1.medium instances.
  3. You can now log in to an EC2 instance from the AWS Management Console using an integrated SSH client.

New Instance Type
The new Medium instance type fills a gap in the m1 family of instance types, splitting the difference, price- and performance-wise, between the existing Small and Large types and bringing our instance count to thirteen (see the other EC2 instance types). Here are the specs:

  • 3.75 GB of RAM
  • 1 virtual core with 2 ECU (EC2 Compute Units)
  • 410 GB of instance storage
  • 32-bit and 64-bit platforms
  • Moderate I/O performance

The Medium instance type is available now in all 8 of the AWS Regions. See the EC2 Pricing page for more information on On-Demand and Reserved Instance pricing (you can also acquire Medium instances in Spot form).

64-bit Ubiquity
You can now launch 64-bit operating systems on the Small and Medium instance types. This means that you can now create a single Amazon Machine Image (AMI) and run it on an extremely wide range of instance types, from the Micro all the way up to the High-CPU Extra Large and the High-Memory Quadruple Extra Large, as you can see from the console menu:

This will make it easier for you to scale vertically (to larger and smaller instances) without having to maintain parallel (32 and 64-bit) AMIs.

SSH Client
We’ve integrated the MindTerm SSH client into the AWS Management Console to simplify the process of connecting to an EC2 instance. There’s now a second option on the Connect window:

And there you have it! What do you think?

— Jeff;


New Amazon CloudWatch Monitoring Scripts

Update (January 6, 2016) – The scripts described in this blog post have been deprecated and are no longer available.

For updated information on how to perform the same tasks in a more modern fashion, please take a look at Sending Performance Counters to CloudWatch and Logs to CloudWatch Logs, Configuring a Windows Instance Using the EC2Config Service, and Monitoring Memory and Disk Statistics for Amazon EC2 Linux Instances.


The Amazon CloudWatch team has just released new sample scripts for monitoring memory and disk space usage on your Amazon EC2 instances running Linux.

You can run these scripts on your instances and configure them to report memory and disk space usage metrics to Amazon CloudWatch. Once the metrics are submitted to CloudWatch, you can view graphs, calculate statistics and set alarms on them in the CloudWatch console or via the CloudWatch API.

Available metrics include:

  • Memory Utilization (%)
  • Memory Used (MB)
  • Memory Available (MB)
  • Swap Utilization (%)
  • Swap Used (MB)
  • Disk Space Utilization (%)
  • Disk Space Used (GB)
  • Disk Space Available (GB)
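
To make the arithmetic behind these metrics concrete, here is a small sketch of how the utilization percentages are derived from raw megabyte counts before being published. This is illustrative Python, not the released scripts themselves; the boto3 publishing call and the "System/Linux" namespace in the comment are assumptions.

```python
# Illustrative only: derive the metric values listed above from raw
# memory and swap figures (all in megabytes).
def memory_metrics(mem_total, mem_avail, swap_total, swap_used):
    mem_used = mem_total - mem_avail
    metrics = [
        ("MemoryUtilization", 100.0 * mem_used / mem_total, "Percent"),
        ("MemoryUsed", mem_used, "Megabytes"),
        ("MemoryAvailable", mem_avail, "Megabytes"),
        ("SwapUsed", swap_used, "Megabytes"),
    ]
    if swap_total > 0:  # avoid dividing by zero on swapless instances
        metrics.append(("SwapUtilization",
                        100.0 * swap_used / swap_total, "Percent"))
    return metrics

# Publishing goes through the CloudWatch PutMetricData API; with boto3
# it would look roughly like:
#
#   cw = boto3.client("cloudwatch")
#   cw.put_metric_data(
#       Namespace="System/Linux",
#       MetricData=[{"MetricName": n, "Value": v, "Unit": u}
#                   for n, v, u in memory_metrics(3750, 1250, 2048, 512)])
```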

The instance memory and disk space usage metrics are reported as Amazon CloudWatch Custom Metrics. Standard Amazon CloudWatch free tier quantities and pricing apply. This is an unsupported sample, but we appreciate all feedback, comments, and questions you post to the AWS forums.

To learn more about how to use the scripts, including installation, setup and configuration, please visit “Amazon CloudWatch Monitoring Scripts for Linux” in the Amazon CloudWatch Developer Guide.

— Henry Hahn, Product Manager, Amazon CloudWatch.

Resource Level IAM for Elastic Beanstalk

Today’s guest post comes to you from Saad Ladki, Product Manager for AWS Elastic Beanstalk.

— Jeff;


We are excited to announce that AWS Elastic Beanstalk now supports resource permissions through AWS Identity and Access Management (IAM). AWS Elastic Beanstalk provides a quick way to deploy and manage applications in the AWS cloud. IAM enables you to manage permissions for multiple users within your AWS account.

In this blog post, we will walk you through an example of how you can leverage resource permissions for Elastic Beanstalk. Let's assume that a consulting firm is developing multiple applications for different customers, and that Jack is one of the developers working on an Elastic Beanstalk application, App1. Jill tests the changes for App1 and for a second application, App2. John is the manager overseeing the two applications and owns the AWS resources. John helps out with development and testing, and only he can update the production environments for App1 and App2. The following matrix describes the different levels of access needed for Jack, Jill, and John:

App1
  • View application, application versions, environments, and configuration: Jack, Jill, John
  • Create application versions and deploy them to the staging environment: Jack, John
  • Update the production environment: John
  • Create and terminate environments: John

App2
  • View application, application versions, environments, and configuration: Jill, John
  • Create application versions and deploy them to the staging environment: John
  • Update the production environment: John
  • Create and terminate environments: John

To create the IAM users and assign policies, do the following:

  1. Log into the IAM tab of the AWS Management Console.
  2. In the Navigation pane, click Users.
  3. Click Create new Users. You can also create groups of users if you have multiple users with the same permissions.
  4. In the Create User page, type the name of the user(s) and click Create.
  5. In the Create User confirmation page, click Download Credentials to store the credentials for each user.
  6. In the user list, select the user John, and then click the Permissions tab.
  7. In the Permissions tab, click Attach User Policy.
  8. Click Custom Policy, and then click Select.
  9. In the Manage User Permissions page, type a name for each policy and then copy/paste the policies below. Notice that Jack has permissions to perform all Describe operations on application App1, and he can update only the app1-staging environment. This prevents him from being able to deploy to the production environment, app1-prod.

    Jack has three associated policies. Replace with your AWS account ID.
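
As an illustrative sketch (not the post's actual policy text), a resource-scoped policy for Jack might look like the following. The first statement lets him describe everything in App1; the second limits his deployments to the app1-staging environment. The Region, account ID, and exact action list are assumptions following IAM and Elastic Beanstalk ARN conventions.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "elasticbeanstalk:Describe*",
      "Resource": "arn:aws:elasticbeanstalk:us-east-1:123456789012:application/App1"
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:CreateApplicationVersion",
        "elasticbeanstalk:UpdateEnvironment"
      ],
      "Resource": "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/App1/app1-staging"
    }
  ]
}
```

Because app1-prod is not listed as a resource, an UpdateEnvironment call against it is implicitly denied, which is what produces the error shown for Jack below.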

  10. Repeat steps 6 through 9 to assign policies to Jill and John, using policies modeled after the following:

If Jack attempts to deploy an application version to the app1-prod environment, he will receive the following error:

To learn more about AWS Elastic Beanstalk or to get started setting up resource permissions for AWS Elastic Beanstalk, go to Creating Policies to Control Access to Specific AWS Elastic Beanstalk Resources in the AWS Elastic Beanstalk Developer Guide.

 — Saad


Surprise! The EC2 CC2 Instance Type uses a Sandy Bridge Processor…

We like to distinguish our Cluster Compute instances from the other instance types by providing details about the actual processor (CPU) inside. When we launched the CC2 (Cluster Compute Eight Extra Large) instance type last year we were less specific than usual, stating only that each instance contained a pair of 8-core Xeon processors, each Hyper-Threaded, for a total of 32 parallel execution units.

If you are a student of the Intel architectures, you probably realized pretty quickly that Intel didn’t actually have a processor on the market with these particular specs and wondered what was going on.

Well, therein lies a tale. We work very closely with Intel and were able to obtain enough pre-release Xeon E5 (“Sandy Bridge“) chips last fall to build, test, and deploy the CC2 instance type. We didn’t publicize or expose this information and simply announced the capabilities of the instance.

Earlier today, Intel announced that the Xeon E5 is now in production and that you can now buy the same chips that all EC2 users have had access to since last November. You can now write and run code that takes advantage of Intel’s Advanced Vector Extensions (AVX), including vector and scalar operations on 256-bit integer and floating-point values. These capabilities were key in a cluster of 1064 cc2.8xlarge instances reaching the 42nd position on last November's Top500 supercomputer list, clocking in at over 240 teraflops.

I am happy to say that we now have plenty of chips, and there is no longer any special limit on the number of CC2 instances that you can use (just ask if you need more). Customers like InhibOx are using cc2.8xlarge instances to build extremely large customized virtual libraries for their customers, supporting computational chemistry in drug discovery. In addition to computational chemistry, customers have been using this instance type for a variety of applications ranging from image processing to in-memory databases.

On a personal note, my son Stephen is working on a large-scale dynamic problem solver as part of his PhD research. He recently ported his code from Python to C++ to take advantage of the Intel Math Kernel Library (MKL) and some other parallel programming tools. I was helping him to debug an issue that prevented him from fully exploiting all of the threads. Once we had fixed it, it was pretty cool to see his CC2 instance making use of all 32 threads (image via htop):

And what are you doing with your CC2? Leave us a comment and share….

— Jeff;

Dropping Prices Again – EC2, RDS, EMR and ElastiCache

AWS works hard to lower our costs so that we can pass those savings back to our customers. We look to reduce hardware costs, improve operational efficiencies, lower power consumption, and innovate in many other areas of our business so we can be more efficient. The history of AWS bears this out: in the past six years, we've lowered pricing 18 times, and today we're doing it again. We're lowering pricing for the 19th time with a significant price decrease for Amazon EC2, Amazon RDS, Amazon ElastiCache, and Amazon Elastic MapReduce.

Amazon EC2 Price Drop
First, a quick refresher.  You can buy EC2 instances by the hour. You have no commitment beyond an hour and can come or go as you please. That is our On-Demand model.

If you have predictable, steady-state workloads, you can save a significant amount of money by buying EC2 instances for a term (one year or three years). In this model, you purchase your instance for a set period of time and get a lower price. These are called Reserved Instances, and this model is the equivalent of buying or leasing servers, as folks have done for years, except that EC2 passes the benefit of its substantial scale to its customers in the form of low prices. When people try to compare EC2 costs to doing it themselves, the apples-to-apples comparison is with Reserved Instances (although with EC2, you don't have to staff all the people to build, grow, and manage the infrastructure, and instead get to focus your scarce resources on what really differentiates your business or mission).

Today's Amazon EC2 price reduction varies by instance type and by Region, with Reserved Instance prices dropping by as much as 37% and On-Demand instance prices dropping by up to 10%. In 2006, the cost of running a small website with Amazon EC2 on an m1.small instance was $876 per year. Today, with a High Utilization Reserved Instance, you can run that same website for less than one third of that cost, just $250 per year, an effective price of less than 3 cents per hour. As you can see below, we are lowering both On-Demand and Reserved Instance prices for our Standard, High-Memory, and High-CPU instance families. The chart below highlights the price decreases for Linux instances in our US East Region, but we are lowering prices in nearly every Region for both Linux and Windows instances.
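
The m1.small arithmetic is easy to check for yourself; a quick sketch:

```python
# Verify the quoted m1.small figures: $876/year in 2006 versus $250/year
# on a High Utilization Reserved Instance after the price drop.
HOURS_PER_YEAR = 365 * 24  # 8760

old_hourly = 876 / HOURS_PER_YEAR  # exactly $0.10/hour
new_hourly = 250 / HOURS_PER_YEAR  # about $0.029/hour, under 3 cents

print(f"2006: ${old_hourly:.3f}/hr  today: ${new_hourly:.3f}/hr")
assert new_hourly < 0.03                # "less than 3 cents per hour"
assert new_hourly / old_hourly < 1 / 3  # "less than one third of that cost"
```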

For a full list of our new prices, go to the Amazon EC2 pricing page.

We have a few flavors of Reserved Instances that allow you to optimize your cost for the usage profile of your application. If you run your instances steady state, Heavy Utilization Reserved Instances are the least expensive on a per-hour basis. Other variants cost a little more per hour in exchange for the flexibility of being able to turn them off and save on usage costs when you are not using them. This can save you money if you don't need to run your instance all of the time. For more details on which type of Reserved Instance is best for you, see the EC2 Reserved Instances page.
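
To illustrate the tradeoff with made-up numbers (these are not AWS's published rates): a Reserved Instance adds an upfront fee but lowers the hourly rate, so it wins once the instance runs enough hours per year.

```python
# Hypothetical rates for illustration only -- not actual AWS prices.
ON_DEMAND_HOURLY = 0.08  # $/hour, no upfront commitment
RI_HOURLY = 0.03         # $/hour after the upfront payment
RI_UPFRONT = 200.0       # upfront fee, amortized over one year

def yearly_cost(hours_used, hourly_rate, upfront=0.0):
    return upfront + hours_used * hourly_rate

for utilization in (0.25, 0.50, 1.00):
    hours = round(8760 * utilization)
    on_demand = yearly_cost(hours, ON_DEMAND_HOURLY)
    reserved = yearly_cost(hours, RI_HOURLY, RI_UPFRONT)
    cheaper = "Reserved" if reserved < on_demand else "On-Demand"
    print(f"{utilization:4.0%}: On-Demand ${on_demand:7.2f} "
          f"vs Reserved ${reserved:7.2f} -> {cheaper}")
```

With these sample rates, On-Demand wins at 25% utilization and the Reserved Instance wins at 50% and above, which matches the guidance in the paragraph.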

Save Even More on EC2 as You Get Bigger
One misperception we sometimes hear is that while EC2 is a phenomenal deal for smaller businesses, the cost benefit may diminish for large customers who achieve scale.  We have lots of customers of all sizes, and those who take the time to rigorously run the numbers see significant cost advantages in using EC2 regardless of the size of their operations.

Today, we're enabling customers to save even more as they scale by introducing Reserved Instance volume tiers. To determine which tier you qualify for, add up the upfront payments for all of the Reserved Instances that you own. If you own more than $250,000 of Reserved Instances, you qualify for a 10% discount on any additional Reserved Instances you buy (the discount applies to both the upfront and the usage prices). If you own more than $2 million of Reserved Instances, you qualify for a 20% discount on any new Reserved Instances you buy. Once you cross $5 million in Reserved Instance purchases, give us a call and we will see what we can do to reduce prices for you even further; we look forward to speaking with you!
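
The tier rules reduce to a small function; this sketch (ours, not AWS code) applies the published thresholds to your cumulative upfront payments:

```python
# Discount on additional Reserved Instance purchases, keyed off the
# total upfront payments for the RIs you already own. Purchases beyond
# $5M are negotiated with AWS directly.
def volume_tier_discount(total_upfront_usd):
    if total_upfront_usd > 2_000_000:
        return 0.20
    if total_upfront_usd > 250_000:
        return 0.10
    return 0.0

print(volume_tier_discount(100_000))    # 0.0 -- below the first tier
print(volume_tier_discount(300_000))    # 0.1
print(volume_tier_discount(3_000_000))  # 0.2
```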

Price Reductions for Amazon RDS, Amazon Elastic MapReduce and Amazon ElastiCache
These price reductions don't just apply to EC2: Amazon Elastic MapReduce customers will also benefit from lower prices on the EC2 instances they use. In addition, we are lowering prices for Amazon Relational Database Service (Amazon RDS). Prices for new RDS Reserved Instances will decrease by up to 42%, with On-Demand Instances for RDS and ElastiCache decreasing by up to 10%.

Here's a quick example of how these price reductions will help customers save money. If you are a game developer using a Quadruple Extra Large RDS MySQL 1-year Heavy Utilization Reserved Instance to power a new game, the new pricing will save you over $550 per month (or 39%) for each new database instance you run. If you run an e-commerce application on AWS using an Extra Large Multi-AZ RDS MySQL instance for your always-on database, you will save more than $445 per month (or 37%) by using a 3-year Heavy Utilization Reserved Database Instance. If you add a two-node Extra Large ElastiCache cluster for better performance, you will save an additional $80 per month (or 10%). For a full list of the new prices, go to the Amazon RDS pricing page, Amazon ElastiCache pricing page, and the Amazon EMR pricing page.

Real Customer Savings
Let's put these cost savings into context. One of our fast-growing customers was primarily running Amazon EC2 On-Demand instances, using 360,000 instance hours last month across a mix of M1.XL, M1.large, M2.2XL, and M2.4XL instances. Without this customer changing a thing, the new EC2 pricing will drop their bill by over $25,000 next month, or $300,000 per year, an 8.6% savings in their On-Demand spend. This customer was in the process of switching to 3-year Heavy Utilization Reserved Instances (seeing as most of their instances run steady state) for a whopping savings of 55%. Now, with the new EC2 price drop we're announcing today, this customer will save another 37% on these Reserved Instances. Additionally, with the introduction of our new volume tiers, this customer will add another 10% discount on top of all that. In all, this price reduction, the new volume discount tiers, and the move to Reserved Instances will save the customer over $215,000 per month, or $2.6 million per year, over what they are paying today, reducing their bill by 76%!
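
Note that the three discounts in this example compound multiplicatively rather than adding up; a quick sketch shows how the round numbers land near the quoted total:

```python
# Successive discounts multiply what's left of the bill; they don't sum.
def combined_reduction(*discounts):
    remaining = 1.0
    for d in discounts:
        remaining *= 1.0 - d
    return 1.0 - remaining

# ~55% from Reserved Instances, ~37% price drop, ~10% volume tier.
total = combined_reduction(0.55, 0.37, 0.10)
print(f"combined reduction: {total:.0%}")  # 74% with these round inputs;
# the customer's exact instance mix accounts for the 76% quoted above
```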

Many of our customers were already saving significant amounts of money before this price drop, simply by running on AWS. Samsung uses AWS to run its Smart Hub application, which powers the apps you can use through their TVs, and they recently shared with us that by using AWS they are saving $34 million in capital expenses over 2 years and reducing their operating expenses by 85%. According to their team, with AWS they met reliability and performance objectives at a fraction of the cost they would have otherwise incurred.

Another customer example is foursquare Labs, Inc. They use AWS to perform analytics across more than 5 million daily check-ins. foursquare runs Amazon Elastic MapReduce clusters for their data analytics platform, using a mix of High-Memory and High-CPU instances. Previously, this EMR analytics cluster was running On-Demand EC2 instances, but just recently foursquare decided to buy over $1 million of 1-year Heavy Utilization Reserved Instances, reducing their costs by 35% while still using some On-Demand instances for the flexibility to scale up or shed instances as needed. However, the new EC2 price drop lowers their costs even further. This price reduction will help foursquare save another 22%, and their overall EC2 Reserved Instance usage for their EMR cluster qualifies them for the additional 10% volume tier discount on top of that. This price drop, combined with the move to Reserved Instances, will help foursquare reduce their EC2 instance costs by over 53% from last month without sacrificing any of the scaling provided by EC2 and Elastic MapReduce.

As we continue to find ways to lower our own cost structure, we will continue to pass these savings back to our customers in the form of lower prices. Some companies work hard to lower their costs so they can pocket more margin. That's a strategy that a lot of the traditional technology companies have employed for years, and it's a reasonable business model. It's just not ours. We want customers of all sizes, from start-ups to enterprises to government agencies, to be able to use AWS to lower their technology infrastructure costs and focus their scarce engineering resources on work that actually differentiates their businesses and moves their missions forward. We hope this is another helpful step in that direction.

You can use the AWS Simple Monthly Calculator to see the savings first-hand.

— Jeff;