Category: Amazon EC2

AWS Management Console Support for Reserved Instances

The AWS Management Console now has support for our new Reserved Instances feature, previously announced in this very blog. You can now purchase new Reserved Instances and see your existing holdings with point-and-click ease.

The EC2 tab of the console has a new button:

You can see your existing set of Reserved Instances:

And you can purchase additional Reserved Instances:

This new feature should make it easier than ever for you to enjoy the cost benefit that comes with the use of one or more Reserved Instances.

— Jeff;

EC2 and Wowza Media Support Belgium’s Largest Live Streaming Event

Imagine that you need to prepare the internet infrastructure to support a live event that:

  • Will feature live streaming video,
  • Will start at a time that you can’t control,
  • Will be of an unknown duration,
  • May attract a worldwide audience, and
  • Happens once in a blue moon.

You can’t buy the infrastructure, since you’ll need it just once. Even then, you wouldn’t know how much to get. Traditional hosting would require you to make a long-term commitment and you still wouldn’t know how much to reserve. Cloud computing, once again, turns out to solve these problems and to enable hundreds of thousands of people to witness a relatively rare event — the birth of an elephant in captivity!

On May 16th and 17th, over 350,000 unique visitors were able to watch the birth of Kai-Mook, the first elephant ever born at the Antwerp Zoo in Belgium. This amazing event was streamed live from a number of Amazon EC2 servers running the Wowza Media Server Pro product.

The statistics for this event were themselves elephantine! In advance of the event, about 50,000 people registered to receive an SMS alert when the birth was imminent. When the alert went out, the system scaled up quickly and was soon streaming live video to over 30,000 concurrent users, helped by some good media coverage including a BBC article. The users watched for an average of 1 hour and 35 minutes and the live event lasted for a total of 42 hours. Behind the scenes, 170 EC2 Large instances handled the streaming.
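A quick back-of-the-envelope check on those numbers (the per-server figure below is my own estimate, not an official one):

```shell
# ~30,000 concurrent viewers spread across 170 EC2 Large instances:
PEAK_VIEWERS=30000
INSTANCES=170
STREAMS_PER_INSTANCE=$((PEAK_VIEWERS / INSTANCES))
echo "$STREAMS_PER_INSTANCE concurrent streams per instance"   # 176
```

That works out to roughly 176 streams per server, a plausible load for a single media server instance.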

Note: The original version of the preceding paragraph claimed that the event pumped out 34 Gbps (gigabits per second) of data. It turns out that this was an optimistic and somewhat fuzzy estimate.

Videos, photos, and more are available at the Baby-Oliphant site, developed by interactive agency Boondoggle. CDN provider Rambla used a combination of AWS and their own infrastructure for this project. The original video is here. There’s also a Flickr Photostream. Naturally enough, there’s also a Kai-Mook blog and a complete genealogy. Weighing in at 80 kilograms, Kai-Mook has 12 siblings on his father’s side and 3 more on his mother’s.

As you can see, EC2 and Wowza Media Server Pro were able to support this event in fine style. Billing for the Wowza product is handled through Amazon DevPay so they didn’t have to pay an arm and a leg (or a trunk?) for an excessive number of software licenses to support this unique event.

— Jeff;

New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch

We are working to make it even easier for you to build sophisticated, scalable, and robust web applications using AWS. As soon as you launch some EC2 instances, you want visibility into resource utilization and overall performance. You want your application to be able to scale on demand based on traffic and system load. You want to spread the incoming traffic across multiple web servers for high availability and better performance. You want to focus on building an application that takes advantage of the powerful infrastructure available in the cloud, while avoiding system administration and operational burdens (“The Muck,” as Jeff Bezos once called it).

Today, we are bringing you a lot closer to that world! The load balancing, auto scaling, and cloud monitoring features that I talked about earlier are now available. The features work together to help you to build highly scalable and highly available applications. Amazon CloudWatch monitors your Amazon EC2 capacity, Auto Scaling dynamically scales it based on demand, and Elastic Load Balancing distributes load across multiple instances in one or more Availability Zones. The measurements collected by Amazon CloudWatch provide Auto Scaling with the information needed to run enough Amazon EC2 instances to deal with the traffic load. Auto Scaling updates the Elastic Load Balancing service when new instances are launched or terminated to automatically scale the load-balanced capacity. You can instantiate, configure, and deploy these important system architecture components in seconds.

Amazon CloudWatch tracks and stores a number of per-instance performance metrics including CPU load, Disk I/O rates, and Network I/O rates. The metrics are rolled up at one-minute intervals and are retained for two weeks. Once stored, you can retrieve metrics across a number of dimensions including Availability Zone, Instance Type, AMI ID, or Auto Scaling Group. Because the metrics are measured inside Amazon EC2, you do not have to install or maintain monitoring agents on every instance that you want to monitor. You get real-time visibility into the performance of each of your Amazon EC2 instances and can quickly detect underperforming or underutilized instances.

Auto Scaling lets you define scaling policies driven by metrics collected by Amazon CloudWatch. Your Amazon EC2 instances will scale automatically based on actual system load and performance but you won’t be spending money to keep idle instances running. The service maintains a detailed audit trail of all scaling operations. Auto Scaling uses a concept called an Auto Scaling Group to define what to scale, how to scale, and when to scale. Each group tracks the status of an application running across one or more EC2 instances. A set of rules or Scaling Triggers associated with each group define the system conditions under which additional EC2 instances will be launched or unneeded EC2 instances terminated. Each group includes an EC2 launch configuration to allow for specification of an AMI ID, instance type, and so forth.

Finally, the Elastic Load Balancing feature makes it easy for you to distribute web traffic across Amazon EC2 instances residing in one or more Availability Zones. You can create a new Elastic Load Balancer in minutes. Each one contains a list of EC2 instance IDs, a public-facing DNS name, and a port number. You will need to use a CNAME record in your site’s DNS entry to associate this DNS name with your application. You can use Health Checks to ascertain the health of each instance via pings and URL fetches, and stop sending traffic to unhealthy instances until they recover.
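For example, if the service hands you a DNS name such as my-lb-1234567890.us-east-1.elb.amazonaws.com (a made-up value for illustration), the record in your zone file would look something like this:

```
www.example.com.   CNAME   my-lb-1234567890.us-east-1.elb.amazonaws.com.
```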

Here’s how the services fit together:

All of this functionality is provided in web service and command-line form:

  • You can call ListMetrics to get a list of statistics collected by Amazon CloudWatch, and then call GetMetricStatistics to retrieve them. Your call to GetMetricStatistics can include a number of parameters to specify the date range, desired metrics and statistics, metric granularity, and more. You can also use mon-list-metrics and mon-get-stats from the command line. There’s a lot more info in the Developer Guide (HTML or PDF) and the Quick Reference Card.
  • On the load balancing side, you start out by calling CreateLoadBalancer to create an Elastic Load Balancer, and will receive a DNS name in return. You can include a list of Availability Zones in the call or you can add them later using EnableAvailabilityZonesForLoadBalancer. From there you can add any number of health checks using ConfigureHealthCheck. A call to RegisterInstancesWithLoadBalancer will add your Amazon EC2 instances to the Elastic Load Balancer, and load balancing will commence. You can use elb-create-lb, elb-enable-zones-for-lb, elb-configure-healthcheck, and elb-register-instances-with-lb from the command line. Again, there’s a lot more info in the Developer Guide (HTML or PDF) and the Quick Reference Card.
  • For Auto Scaling you begin by calling CreateAutoScalingGroup, naming the group and providing the information needed to launch suitably configured Amazon EC2 instances. You then establish the scaling parameters using the CreateOrUpdateScalingTrigger function. The service will then launch Amazon EC2 instances as indicated by the scaling parameters. You can call DescribeScalingActivities at any point to fetch a list of recent scaling activities (instance launches and terminations). Command line equivalents are as-create-auto-scaling-group, as-create-or-update-trigger, and as-describe-scaling-activities. Again, there’s a lot more info in the Developer Guide (HTML or PDF) and the Quick Reference Card.
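Putting those calls together, a minimal end-to-end setup might look something like the following sketch. The AMI ID, names, and threshold values are placeholders, and the flags are reproduced from memory, so treat this as an outline and check the Developer Guides for the authoritative syntax:

```shell
# Create a load balancer in one Availability Zone (prints a DNS name)
elb-create-lb my-lb --availability-zones us-east-1a \
    --listener "lb-port=80,instance-port=80,protocol=HTTP"

# Define how new instances should be launched, then create an
# Auto Scaling group of 2 to 10 instances behind the load balancer
as-create-launch-config my-config --image-id ami-12345678 --instance-type m1.small
as-create-auto-scaling-group my-group --launch-configuration my-config \
    --availability-zones us-east-1a --min-size 2 --max-size 10 --load-balancers my-lb

# Scale on the CloudWatch CPUUtilization metric: add an instance above
# 70% average CPU, remove one below 30% (the quoting around the lower
# increment guards the negative value from the option parser)
as-create-or-update-trigger my-trigger --auto-scaling-group my-group \
    --namespace "AWS/EC2" --measure CPUUtilization --statistic Average \
    --dimensions "AutoScalingGroupName=my-group" --period 60 \
    --lower-threshold 30 --upper-threshold 70 \
    "--lower-breach-increment=-1" --upper-breach-increment 1 --breach-duration 300
```

Once the trigger is in place, mon-get-stats on the same metric is a handy way to watch the values that drive the scaling decisions.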

If you’re signed up for the Amazon EC2 service, you’re already registered to use all of these new features and can begin using them via the web service APIs or Command Line tools. These new features are currently available in the U.S. region with EU region availability coming in the next few months.

You can use these services to make your AWS applications perform better without sacrificing application control, freedom of development, choice of tools, speed of deployment, or any other kind of flexibility. You can be up and running with these new services in a matter of minutes. All of these new features are supported through our public forums and also through AWS Premium Support.

Morning Update: As always, a few interesting things came up after I put this post out last night:

  1. Amazon CTO Werner Vogels wrote about these new features in his blog post, Automating the management of Amazon EC2 using Amazon CloudWatch, Auto Scaling and Elastic Load Balancing.
  2. RightScale founder Thorsten von Eicken also wrote about them in his post, Amazon adds Load balancing, Monitoring, and Auto-Scaling.
  3. There’s a good discussion taking place on a Hacker News thread.

— Jeff;

Quetzall CloudCache

Marc from Quetzall sent me a note about their new CloudCache product. CloudCache is a fast, lightweight key-value caching system designed for use within the cloud. Each key can optionally have an associated TTL (time to live). Once the TTL is reached the key and the associated value are removed from the cache.

Running on Amazon EC2 via DevPay, CloudCache is fast, with latency measured at just 1.5 ms. It can be run in multiple EC2 regions to minimize latency concerns. Customers can start small (1 cache) and grow large (1000 caches). CloudCache returns data in Ajax (JSON) or XML format. Bindings are available for Ruby, Java, PHP, and Python. Read the API documentation to learn more.

Since CloudCache is accessible via DevPay, you can sign up here and start using it right away. All pricing and usage charges are available on that page.

— Jeff;

New AWS Public Data Sets – Anthrokids, Twilio/Street Vector, Sparse Matrices, USA Spending, Tiger

We’ve added some important new community features to our Public Data Sets and we’ve also added some new and intriguing data to our collection. I’m writing this post to bring you up to date on this unique AWS feature and thought I would also show you how to instantiate and use an actual public data set.

If the concept is new to you, allow me to give you a brief introduction. We have set up a centralized repository for large (tens or hundreds of gigabytes) public data sets, which we host at no charge. We currently have public data sets in a number of categories including Biology, Chemistry, Economics, Encyclopedic, Geographic, and Mathematics. The data sets are stored in the form of EBS (Elastic Block Storage) snapshots. These snapshots can be used to create an EBS volume from scratch in a matter of seconds. Most data sets are available in formats suitable for use with both Linux and Windows. Once created, the volume is then mounted on an EC2 instance for processing. Once the processing is complete, the volume can be kept alive for further work, archived to S3, or simply deleted.


To make sure that you can get a lot of value from our Public Data Sets, we’ve added some new community features. Each set now has its own page within the AWS Resource Center. The page contains all of the information needed to start making use of the data, including submission information, creation date, update date, data source, and more. There’s a dedicated discussion forum for each data set, and even (in classic Amazon style) room to enter a review and a rating.


We’ve also added a number of rich and intriguing data sets to our collection. Here’s what’s new:

  • The Anthrokids data set includes the results of a pair of 1975 and 1977 studies which collected anthropometric data on children. This data can be used to help safety-conscious product designers build better products for children.
  • The Twilio / Street Vector data set provides a complete database of US street names and address ranges mapped to Zip Codes and latitude/longitude ranges, with DTMF key mappings for all street names. This data can be used to validate and normalize street addresses, find a list of street addresses in a zip code, locate the latitude and longitude of an address, and so forth. This data is made available as a set of MySQL data files.
  • The University of Florida Sparse Matrix Collection contains a large and ever-growing set of sparse matrices which arise in real-world problems in structural engineering, computational fluid dynamics, electromagnetics, acoustics, robotics, chemistry, and much more. The largest matrix in the collection has a dimension of almost 29 million, with over 760 million nonzero entries. Graphic representations of some of this data are shown at right, in images produced by Yifan Hu of AT&T Labs. The data is available in MATLAB, Rutherford-Boeing, and Matrix Market formats.
  • The USA Spending data set contains a dump of all federal contracts from the Federal Procurement Data Center. This data summarizes who bought what, from whom, and where. The data is available in Apache CouchDB format.
  • The 2008 Tiger/Line Shapefiles data set is a complete set of shapefiles for American states, counties, districts, places, and areas, along with associated metadata. This data is a product of the US Census Bureau.

We’ll continue to add additional public data sets to our collection over the coming months. Please feel free to submit your own data sets for consideration, or to propose inclusion of data sets owned by others.

It is really easy to create your own copy of a public data set. I wanted to process the 2003-2006 US Economic Data. Here’s what I needed to do:

  1. Launch a fresh EC2 instance and note its Availability Zone.
  2. Visit the home page for the data set and note the Snapshot ID (snap-0bdf3f62 for Linux in the US) and the Size (220 GB).
  3. Create a new EBS volume using the parameters from the first two steps. I’ll use the AWS Management Console:

    I hit the “Create” button, waited two seconds, and then hit “Refresh.” The volume status changed from “creating” to “available” so I knew that my data was ready.

  4. Attach the volume to my EC2 instance, again using the console:
  5. Create a mount point and then mount the volume on my instance. This has to be done from the Linux command line:
  6. Now I have access to the data, and can do anything I want with it. Here’s a snippet of a directory listing:

Once I am done I can simply unmount the volume, shut down the instance, and delete the volume. No fuss, no muss, and a total cost of 11 cents (10 cents for an hour of EC2 time and a penny or so for the actual EBS volume).
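Steps 3 through 5 above can also be done entirely from the command line. Here’s a sketch using the EC2 API tools (the volume and instance IDs are illustrative, and the mount commands run on the instance itself):

```shell
# Step 3: create a volume from the data set's snapshot,
# in the same Availability Zone as the instance
ec2-create-volume --snapshot snap-0bdf3f62 -z us-east-1a

# Step 4: attach the new volume to the instance as /dev/sdf
ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/sdf

# Step 5: create a mount point and mount the volume (on the instance)
mkdir /mnt/data-set
mount /dev/sdf /mnt/data-set
```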


Amazon EC2 Running IBM

Earlier this year I talked about our partnership with IBM and their commitment to the creation of licensing models that are a good match for dynamic cloud-computing environments. At that time we released a set of development AMIs (Amazon Machine Images), giving you the ability to create applications using IBM products such as DB2, WebSphere sMash, WebSphere Portal, Lotus Web Content Management, and Informix.

The response to our announcement has been good; developers, integrators, and IT shops have all been asking us for information on pricing and for access to the actual AMIs. We’ve been working with IBM to iron out all of the details and I’m happy to be able to share them with you now!

Starting today, you have development and production access to a number of IBM environments including:

  • Amazon EC2 running IBM DB2 Express – starting at $0.38 per hour.
  • Amazon EC2 running IBM DB2 Workgroup – starting at $1.31 per hour.
  • Amazon EC2 running IBM Informix Dynamic Server Express – starting at $0.38 per hour.
  • Amazon EC2 running IBM Informix Dynamic Server Workgroup – starting at $1.31 per hour.
  • Amazon EC2 running IBM WebSphere sMash – starting at $0.50 per hour.
  • Amazon EC2 running IBM Lotus Web Content Management – starting at $2.48 per hour.
  • Amazon EC2 running IBM WebSphere Portal Server and IBM Lotus Web Content Management Server – starting at $6.39 per hour.

These prices include on-demand licenses for each product. The AMIs are available in the US and EU regions, but you currently cannot use Amazon EC2 running IBM with Reserved Instances. However, if you already have licenses from IBM you can install and run the software yourself and pay the usual EC2 rate for On-Demand or Reserved Instances. You can, of course, use other EC2 features such as Elastic IP Addresses and Elastic Block Storage.

You can find the IBM AMIs in the AWS Management Console’s Community AMI List (search for “paid-ibm”), or you can search for “paid-ibm” in ElasticFox.

Because products like the WebSphere Portal Server and IBM Lotus Web Content Management Server can now be accessed on an hourly basis, you can now think about deploying them in new ways. If you are running a big conference or other event, you can spin up an instance for the duration of the event and only pay a couple of hundred dollars. If you need to do more than one event at the same time, just spin up a second instance. This is all old-hat to true devotees of cloud computing, but I never tire of pointing it out!

Each AMI includes a detailed Getting Started guide. For example, the guide for the WebSphere Portal Server and IBM Lotus Web Content Management Server is 30 pages long. The guide provides recommendations on instance sizes (Small and Large are fine for development; a 64-bit Large or Extra Large is required for production), security groups, and access via SSH and remote desktop (VNC). There’s information about entering license credentials (needed if you bring your own), EBS configuration, and application configuration. The guide also details the entire process of bundling a customized version of the product for eventual reuse.

Additional information on products and pricing is available on the IBM partner page.

And there you have it. With this release, all of the major database products — Oracle, MySQL, DB2, Informix, and SQL Server — are available in production form on EC2.

— Jeff;

How To Purchase an EC2 Reserved Instance

Update: You can now make this purchase using the AWS Management Console. Click here to learn more.

I thought that it would be worthwhile to outline the steps needed to purchase an EC2 Reserved Instance. Here’s what you need to do:

  1. Choose a Region.
  2. Choose an Availability Zone.
  3. Locate the Reserved Instance offering.
  4. Make the purchase.
  5. Enjoy.

This blog post assumes that you have the latest version of the EC2 Command Line tools installed and that you have set the proper environment variables (JAVA_HOME, EC2_HOME, EC2_PRIVATE_KEY, and EC2_CERT). All commands are to be typed into a Windows Command (cmd.exe) window.

Choose a Region

Per the announcement, you can now purchase Reserved Instances in either the US or Europe. If you already have an EC2 instance running in a particular region and you want to convert it to a Reserved Instance, then choose that region. Otherwise, choose the region that is best suited to your needs over the term (1 year or 3 years) of the Reserved Instance.

Based on your chosen region, set your EC2 endpoint appropriately:
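For reference, here is how that looks on a Linux or Mac desktop; Windows users substitute set for export. (The endpoint names are reproduced here from memory, so double-check them against the current EC2 documentation.)

```shell
# US region endpoint:
export EC2_URL=https://us-east-1.ec2.amazonaws.com
# EU region endpoint (use this line instead for Europe):
# export EC2_URL=https://eu-west-1.ec2.amazonaws.com
```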



Choose an Availability Zone

If you already have an On-Demand instance running and you want to convert it to a Reserved Instance, or if you have an EBS volume in a particular Availability Zone, then your choice is clear. You can use the ec2-describe-instances command to figure out the availability zone and instance type if necessary. In the screen shot below, I have highlighted the instance type in yellow and the availability zone in purple to make it clear where to find them:

Locate The Reserved Instance Offering

Now that you know the instance type and Availability Zone, you need to decide if you want to purchase a Reserved Instance for 1 year or for 3 years. You can consult the EC2 Pricing Chart and make a decision based on your needs. Considerations might include the expected lifetime of your site or application, plans for growth, degree of variability expected in your usage patterns, and so forth.

The next step is to run ec2-describe-reserved-instances-offerings and select the appropriate offering. Each offering is identified by an alphanumeric id such as e5a2ff3b-f6eb-4b4e-83f8-b879d7060257 (highlighted in yellow below):

You can also get fancy and run a search pipeline. Here’s how I found an m1.small instance in us-east-1a with a 1 year term:
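Such a pipeline can be as simple as a couple of greps over the offerings list (on Windows, findstr plays the same role; the exact output columns are documented in the command-line reference):

```shell
# Each OFFERING line includes the offering id, Availability Zone,
# instance type, and term, so greps narrow things down quickly:
ec2-describe-reserved-instances-offerings | grep us-east-1a | grep m1.small
```

Add one more grep on the term column to separate the 1-year offerings from the 3-year ones.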

Make the Purchase

The next step is to actually make the purchase using ec2-purchase-reserved-instances-offering. This command requires an offering id from the previous step and an instance count, allowing purchase of more than one reserved instance at a time. Needless to say, you should use this command with caution since you are spending hundreds or thousands of dollars! Here’s what happened when I made the purchase:


Since I already had an instance running, all further instance hours that it consumes will be billed at the lower rate. As of this fall, three of my five offspring will be in college (Washington, Maryland, and Rochester), so the extra pennies per hour will definitely come in handy!

— Jeff;

EC2 Reserved Instances – Now In Europe, Too!

I wrote about the exciting and economical EC2 Reserved Instances just a few weeks ago. The response to that announcement has been really good, with positive feedback from our customers who are enjoying this new option.

Today I am happy to let you know that you can now reserve EC2 instances in our European (EU) region. The one-time fee is the same as in the US and, as is the case for the On-Demand instances in Europe, the per-hour cost is slightly higher than it is in the US. You can use the same API and command-line tools; just remember to use the proper endpoint and you’ll be all set.

I use an EC2 instance to host my personal blog and a number of other projects. I converted it to a Reserved Instance just last week and am already enjoying the savings. After setting up the EC2 API tools on my desktop Windows machine, I ran ec2-describe-reserved-instances-offerings to find an offering in the right availability zone, and ec2-purchase-reserved-instances-offering to make the purchase. Here’s what my account looks like now:


Upcoming Webinars

There are a number of webinars scheduled over the next several weeks, and the purpose of this post is to make certain everyone is aware of the various options and opportunities:

  • Thursday, April 16 at 9:00am PST
    Cloud for Large Enterprise — Where to Start
    Hear Capgemini put AWS and cloud computing in context for large enterprises during this live webinar on April 16. You’ll learn key steps for creating a cloud strategy that works with your business and discover ways that cloud computing can lower costs while accelerating development. Speakers will include Amazon Web Services’ Terry Wise, Director of Business Development; Simon Plant, Chief Enterprise Architect at Capgemini; and Andrew Gough, also with Capgemini. Register here. (Business level discussion)
  • Tuesday, April 21 at 9:00am PST
    ERP in the Cloud
    The AWS community includes a number of innovative ISVs. Compiere is one great example of this innovation, and has released ERP software running on Amazon Web Services. This one-hour non-technical session will include Compiere’s CEO Don Klaiss, a Compiere customer, and a few comments by me. Register here. (Business level discussion with a bit of light technical content)
  • Wednesday, April 22 at 9:00am PST
    Amazon SimpleDB Brown Bag
    As previously blogged, the development team will host this webinar. Several topics will be covered, and I am excited that I will be able to present a proxy class that allows Visual Studio developers to integrate Amazon SimpleDB into Visual Studio 2008. Registration details are in the blog post, or here. (Developer-focused technical content)
  • Thursday, April 23 at 9:00am PST
    Introduction to Amazon Web Services for IT Professionals
    Attend this April 23 webinar for a live demonstration of how to get started using Amazon Web Services. AWS technical evangelist Mike Culver will present an IT-oriented overview of all AWS products, including an in-depth discussion on using Amazon S3 for cloud storage and Amazon EC2 for cloud computing. Register here. (Both business and technical content, erring on the side of “technical”)