Category: Amazon EC2


New AWS Public Data Sets – Anthrokids, Twilio/Wigle.net, Sparse Matrices, USA Spending, Tiger

by Jeff Barr | on | in Amazon EC2 |

We’ve added some important new community features to our Public Data Sets and we’ve also added some new and intriguing data to our collection. I’m writing this post to bring you up to date on this unique AWS feature and thought I would also show you how to instantiate and use an actual public data set.

If the concept is new to you, allow me to give you a brief introduction. We have set up a centralized repository for large (tens or hundreds of gigabytes) public data sets, which we host at no charge. We currently have public data sets in a number of categories including Biology, Chemistry, Economics, Encyclopedic, Geographic, and Mathematics. The data sets are stored in the form of EBS (Elastic Block Store) snapshots, and a snapshot can be used to create a fresh EBS volume in a matter of seconds. Most data sets are available in formats suitable for use with both Linux and Windows. Once created, the volume is mounted on an EC2 instance for processing. Once the processing is complete, the volume can be kept alive for further work, archived to S3, or simply deleted.


To make sure that you can get a lot of value from our Public Data Sets, we’ve added some new community features. Each set now has its own page within the AWS Resource Center. The page contains all of the information needed to start making use of the data, including submission information, creation date, update date, data source, and more. There’s a dedicated discussion forum for each data set, and even (in classic Amazon style) room to enter a review and a rating.


We’ve also added a number of rich and intriguing data sets to our collection. Here’s what’s new:

  • The Anthrokids data set includes the results of a pair of studies, conducted in 1975 and 1977, which collected anthropometric data on children. This data can be used to help safety-conscious product designers build better products for children.
  • The Twilio / Wigle.net Street Vector data set provides a complete database of US street names and address ranges mapped to Zip Codes and latitude/longitude ranges, with DTMF key mappings for all street names. This data can be used to validate and normalize street addresses, find a list of street addresses in a zip code, locate the latitude and longitude of an address, and so forth. This data is made available as a set of MySQL data files.
  • The University of Florida Sparse Matrix Collection contains a large and ever-growing set of sparse matrices which arise in real-world problems in structural engineering, computational fluid dynamics, electromagnetics, acoustics, robotics, chemistry, and much more. The largest matrix in the collection has a dimension of almost 29 million, with over 760 million nonzero entries. Graphic representations of some of this data are shown at right, in images produced by Yifan Hu of AT&T Labs. The data is available in MATLAB, Rutherford-Boeing, and Matrix Market formats.
  • The USASpending.gov data set contains a dump of all federal contracts from the Federal Procurement Data Center. This data summarizes who bought what, from whom, and where. The data was extracted by full360.com and is available in Apache CouchDB format.
  • The 2008 Tiger/Line Shapefiles data set is a complete set of shapefiles for American states, counties, districts, places, and areas, along with associated metadata. This data is a product of the US Census Bureau.

We’ll continue to add additional public data sets to our collection over the coming months. Please feel free to submit your own data sets for consideration, or to propose inclusion of data sets owned by others.

It is really easy to create your own working copy of a public data set. I wanted to process the 2003-2006 US Economic Data. Here’s what I need to do:

  1. Launch a fresh EC2 instance and note its Availability Zone.
  2. Visit the home page for the data set and note the Snapshot ID (snap-0bdf3f62 for Linux in the US) and the Size (220 GB).
  3. Create a new EBS volume using the parameters from the first two steps. I’ll use the AWS Management Console (an equivalent command-line sketch appears after this list):

    I hit the “Create” button, waited two seconds, and then hit “Refresh.” The volume status changed from “creating” to “available,” so I knew that my data was ready.

  4. Attach the volume to my EC2 instance, again using the console:
  5. Create a mount point and then mount the volume on my instance. This has to be done from the Linux command line:
  6. Now I have access to the data, and can do anything I want with it. Here’s a snippet of a directory listing:
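
If you prefer to work from the command line instead of the console, the whole sequence looks something like this (a minimal sketch; the volume ID, instance ID, device, and mount point are placeholders, and the snapshot ID is the one from step 2):

    $ ec2-create-volume --snapshot snap-0bdf3f62 -z us-east-1a
    $ ec2-attach-volume vol-1a2b3c4d -i i-12345678 -d /dev/sdf
    $ sudo mkdir /mnt/data-set
    $ sudo mount /dev/sdf /mnt/data-set
    $ ls /mnt/data-set

Note that the availability zone passed to ec2-create-volume must match the zone of the instance from step 1, since an EBS volume can only be attached to an instance in the same zone.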

Once I am done I can simply unmount the volume, shut down the instance, and delete the volume. No fuss, no muss, and a total cost of 11 cents (10 cents for an hour of EC2 time and a penny or so for the actual EBS volume).

–Jeff;

Amazon EC2 Running IBM

by Jeff Barr | on | in Amazon EC2 |

Earlier this year I talked about our partnership with IBM and their commitment to the creation of licensing models that are a good match for dynamic cloud-computing environments. At that time we released a set of development AMIs (Amazon Machine Images), giving you the ability to create applications using IBM products such as DB2, WebSphere sMash, WebSphere Portal, Lotus Web Content Management, and Informix.

The response to our announcement has been good; developers, integrators, and IT shops have all been asking us for information on pricing and for access to the actual AMIs. We’ve been working with IBM to iron out all of the details and I’m happy to be able to share them with you now!

Starting today, you have development and production access to a number of IBM environments including:

  • Amazon EC2 running IBM DB2 Express – starting at $0.38 per hour.
  • Amazon EC2 running IBM DB2 Workgroup – starting at $1.31 per hour.
  • Amazon EC2 running IBM Informix Dynamic Server Express – starting at $0.38 per hour.
  • Amazon EC2 running IBM Informix Dynamic Server Workgroup – starting at $1.31 per hour.
  • Amazon EC2 running IBM WebSphere sMash – starting at $0.50 per hour.
  • Amazon EC2 running IBM Lotus Web Content Management – starting at $2.48 per hour.
  • Amazon EC2 running IBM WebSphere Portal Server and IBM Lotus Web Content Management Server – starting at $6.39 per hour.

These prices include on-demand licenses for each product. The AMIs are available in the US and EU regions, but you currently cannot use Amazon EC2 running IBM with Reserved Instances. However, if you already have licenses from IBM you can install and run the software yourself and pay the usual EC2 rate for On-Demand or Reserved Instances. You can, of course, use other EC2 features such as Elastic IP Addresses and Elastic Block Storage.

You can find the IBM AMIs in the AWS Management Console’s Community AMI list (search for “paid-ibm”), or you can run the same search in ElasticFox.
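
If you’d rather search from the command line, the EC2 API tools can do the same thing (a sketch; -a asks for all public images, and findstr narrows the list in a Windows command window):

    C:\> ec2-describe-images -a | findstr "paid-ibm"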

Because products like the WebSphere Portal Server and IBM Lotus Web Content Management Server can now be accessed on an hourly basis, you can now think about deploying them in new ways. If you are running a big conference or other event, you can spin up an instance for the duration of the event and only pay a couple of hundred dollars. If you need to do more than one event at the same time, just spin up a second instance. This is all old hat to true devotees of cloud computing, but I never tire of pointing it out!

Each AMI includes a detailed Getting Started guide. For example, the guide for the WebSphere Portal Server and IBM Lotus Web Content Management Server is 30 pages long. The guide provides recommendations on instance sizes (Small and Large are fine for development; a 64-bit Large or Extra Large is required for production), security groups, and access via SSH and remote desktop (VNC). There’s information about entering license credentials (needed if you bring your own), EBS configuration, and application configuration. The guide also details the entire process of bundling a customized version of the product for eventual reuse.

Additional information on products and pricing is available on the IBM partner page.

And there you have it. With this release, all of the major database products — Oracle, MySQL, DB2, Informix, and SQL Server — are available in production form on EC2.

— Jeff;

How To Purchase an EC2 Reserved Instance

by Jeff Barr | on | in Amazon EC2 |

Update: You can now make this purchase using the AWS Management Console. Click here to learn more.

I thought that it would be worthwhile to outline the steps needed to purchase an EC2 Reserved Instance. Here’s what you need to do:

  1. Choose a Region.
  2. Choose an Availability Zone.
  3. Locate the Reserved Instance offering.
  4. Make the purchase.
  5. Enjoy.

This blog post assumes that you have the latest version of the EC2 Command Line tools installed and that you have set the proper environment variables (JAVA_HOME, EC2_HOME, EC2_PRIVATE_KEY, and EC2_CERT). All commands are to be typed into a Windows Command (cmd.exe) window.
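
If you haven’t done this already, the setup in the command window looks something like this (a sketch; all of the paths are placeholders for your actual Java, tools, and key file locations):

    C:\> set JAVA_HOME=C:\Program Files\Java\jre6
    C:\> set EC2_HOME=C:\ec2-api-tools
    C:\> set EC2_PRIVATE_KEY=C:\aws\pk-mykey.pem
    C:\> set EC2_CERT=C:\aws\cert-mykey.pem
    C:\> set PATH=%PATH%;%EC2_HOME%\bin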

Choose a Region

Per the announcement, you can now purchase Reserved Instances in either the US or in Europe. If you already have an EC2 instance running in a particular region and you want to convert it to a reserved instance, then choose that region. Otherwise, choose the region that is best suited to your needs over the term (1 or 3 years) of the Reserved Instance.

Based on your chosen region, set your EC2 endpoint appropriately:

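Both endpoints follow the standard regional pattern:

US:

    C:\> set EC2_URL=https://us-east-1.ec2.amazonaws.com

Europe:

    C:\> set EC2_URL=https://eu-west-1.ec2.amazonaws.com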

Choose an Availability Zone

If you already have an On-Demand instance running and you want to convert it to a Reserved Instance, or if you have an EBS volume in a particular Availability Zone, then your choice is clear. You can use the ec2-describe-instances command to figure out the availability zone and instance type if necessary; both values appear on the INSTANCE line of the output:
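
For example (a sketch; findstr trims the output down to just the INSTANCE lines):

    C:\> ec2-describe-instances | findstr "INSTANCE"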

Locate The Reserved Instance Offering

Now that you know the instance type and Availability Zone, you need to decide if you want to purchase a Reserved Instance for 1 year or for 3 years. You can consult the EC2 Pricing Chart and make a decision based on your needs. Considerations might include the expected lifetime of your site or application, plans for growth, the degree of variability expected in your usage patterns, and so forth.

The next step is to run ec2-describe-reserved-instances-offerings and select the appropriate offering. Each offering is identified by an alphanumeric id such as e5a2ff3b-f6eb-4b4e-83f8-b879d7060257.

You can also get fancy and run a search pipeline. Here’s how I found an m1.small instance in us-east-1a with a 1 year term:
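
Something along these lines does the trick (a sketch; findstr filters the OFFERING lines, and the exact column layout may vary from version to version of the tools):

    C:\> ec2-describe-reserved-instances-offerings | findstr "m1.small" | findstr "us-east-1a"

From the matching lines, pick the offering with the 1 year term and note its id.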

Make the Purchase

The next step is to actually make the purchase using ec2-purchase-reserved-instances-offering. This command requires an offering id from the previous step and an instance count, allowing purchase of more than one reserved instance at a time. Needless to say, you should use this command with caution since you are spending hundreds or thousands of dollars! Here’s what happened when I made the purchase:
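
The call itself looks something like this (a sketch; substitute the offering id that you found in the previous step, and consult the tool’s built-in help for the exact option names in your version):

    C:\> ec2-purchase-reserved-instances-offering e5a2ff3b-f6eb-4b4e-83f8-b879d7060257 --instance-count 1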

Enjoy

Since I already had an instance running, all further instance hours that it consumes will be billed at the lower rate. As of this fall, three of my five offspring will be in college (Washington, Maryland, and Rochester), so the extra pennies per hour will definitely come in handy!

— Jeff;

EC2 Reserved Instances – Now In Europe, Too!

by Jeff Barr | on | in Amazon EC2 |

I wrote about the exciting and economical EC2 Reserved Instances just a few weeks ago. The response to that announcement has been really good, with positive feedback from our customers who are enjoying this new option.

Today I am happy to let you know that you can now reserve EC2 instances in our European (EU) region. The one-time fee is the same as in the US and, as is the case for the On-Demand instances in Europe, the per-hour cost is slightly higher than it is in the US. You can use the same API and command-line tools; just remember to use the proper endpoint and you’ll be all set.

I use an EC2 instance to host my personal blog and a number of other projects. I converted it to a Reserved Instance just last week and am already enjoying the savings. After setting up the EC2 API tools on my desktop Windows machine, I ran ec2-describe-reserved-instances-offerings to find an offering in the right availability zone, and ec2-purchase-reserved-instances-offering to make the purchase. Here’s what my account looks like now:

–Jeff;

Upcoming Webinars

by Jeff Barr | on | in Amazon EC2, Amazon SDB |

There are a number of webinars scheduled over the next several weeks, and the purpose of this post is to make certain everyone is aware of the various options and opportunities:

  • Thursday, April 16 at 9:00am PST
    Cloud for Large Enterprise — Where to Start
    Hear Capgemini put AWS and cloud computing in context for large enterprises during this live webinar on April 16. You’ll learn key steps for creating a cloud strategy that works with your business and discover ways that cloud computing can lower costs while accelerating development. Speakers will include Terry Wise, Director of Business Development at Amazon Web Services; Simon Plant, Chief Enterprise Architect at Capgemini; and Andrew Gough, also of Capgemini. Register here. (Business level discussion)
     
  • Tuesday, April 21 at 9:00am PST
    ERP in the Cloud
    The AWS community includes a number of innovative ISVs. Compiere is one great example of this innovation, having released ERP software that runs on Amazon Web Services. This one-hour non-technical session will include Compiere’s CEO Don Klaiss, a Compiere customer, and a few comments by me. Register here. (Business level discussion with a bit of light technical content)
     
  • Wednesday, April 22 at 9:00am PST
    Amazon SimpleDB Brown Bag
    As previously blogged, the development team will host this webinar. Several topics will be covered, and I am excited that I will be able to present a proxy class that lets Visual Studio developers work with Amazon SimpleDB from within Visual Studio 2008. Registration details are in the blog post, or here. (Developer-focused technical content)
     
  • Thursday, April 23 at 9:00am PST
    Introduction to Amazon Web Services for IT Professionals
    Attend this April 23 webinar for a live demonstration of how to get started using Amazon Web Services. AWS technical evangelist Mike Culver will present an IT-oriented overview of all AWS products, including an in-depth discussion on using Amazon S3 for cloud storage and Amazon EC2 for cloud computing. Register here. (Both business and technical content, erring on the side of “technical”)
     

Mike

What Do You Run?

by Jeff Barr | on | in Amazon EC2, Enterprise |

As more and more businesses run applications in the cloud, we’re starting to hear about mainstream software from the likes of IBM and Oracle running on Amazon EC2.

There are strong privacy and security controls in place around each AWS customer account, so we have no way to tell who is doing this or how many organizations are involved. If your organization fits this profile, especially if you run either IBM or Oracle, we’d love to hear from you. Please drop us a note at awseditor at amazon dot com, or simply leave a private comment here.

Mike

Announcing Amazon Elastic MapReduce

by Jeff Barr | on | in Amazon EC2 |

Today we are introducing Amazon Elastic MapReduce, our new Hadoop-based processing service. I’ll spend a few minutes talking about the generic MapReduce concept and then I’ll dive into the details of this exciting new service.

Over the past 3 or 4 years, scientists, researchers, and commercial developers have recognized and embraced the MapReduce programming model. Originally described in a landmark paper, the MapReduce model is ideal for processing large data sets on a cluster of processors. It is easy to scale up a MapReduce application to jobs of arbitrary size by simply adding more compute power. Here’s a very simple overview of the data flow in a typical MapReduce job:

Given that you have enough computing hardware, MapReduce takes care of splitting up the input data into chunks of more or less equal size, spinning up a number of processing instances for the map phase (which must, by definition, be something that can be broken down into independent, parallelizable work units), apportioning the data to each of the mappers, tracking the status of each mapper, routing the map results to the reduce phase, and finally shutting down the mappers and the reducers when the work has been done. To handle a bigger job or to produce results in less time, you simply run the job on a larger cluster.
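
You can see the same shape in miniature in an ordinary shell pipeline, assuming a local text file as input. Here, tr plays the role of the mapper (emitting one word per line), sort stands in for the shuffle, and uniq -c acts as the reducer:

    $ cat input.txt | tr -s ' ' '\n' | sort | uniq -c

The point of MapReduce is that each stage of this little pipeline can run on many machines at once, with the framework handling all of the plumbing in between.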

Hadoop is an open source implementation of the MapReduce programming model. If you’ve got the hardware, you can follow the directions in the Hadoop Cluster Setup documentation and, with some luck, be up and running before too long.

Developers the world over seem to find the MapReduce model easy to understand and easy to work into their thought process. After a while they tend to report that they begin to think in terms of the new style, and then see more and more applications for it. Once they show that the model delivers genuine business value (e.g. better results, faster), demand for hardware resources increases rapidly. Like any true viral success, one team shows great results and before too long everyone in the organization wants to do something similar. For example, Yahoo! uses Hadoop on a very large scale. A little over a year ago they reported that they were able to use the power of over 10,000 processor cores to generate a web map to power Yahoo! Search.

This is Rufus, the “first dog” of our AWS Developer Relations team. As you can see, he’s scaled up quite well since his debut on this very blog three years ago. Your problems may start out like the puppy-sized version of Rufus but will quickly grow into the full-scale 95-pound version.

Over the past year or two a number of our customers have told us that they are running large Hadoop jobs on Amazon EC2. There’s some good info on how to do this here and also here. AWS Evangelist Jinesh Varia covered the concept in a blog post last year, and also went into considerable detail in his Cloud Architectures white paper.

Given our belief in the power of the MapReduce programming style and the knowledge that many developers are already running Hadoop jobs of impressive size in our cloud, we wanted to find a way to make this important technology accessible to even more people.

Today we are rolling out Amazon Elastic MapReduce. Using Elastic MapReduce, you can create, run, monitor, and control Hadoop jobs with point-and-click ease. You don’t have to go out and buy scads of hardware. You don’t have to rack it, network it, or administer it. You don’t have to worry about running out of resources or sharing them with other members of your organization. You don’t have to monitor it, tune it, or spend time upgrading the system or application software on it. You can run world-scale jobs anytime you would like, while remaining focused on your results. Note that I said jobs (plural), not job. Subject to the number of EC2 instances you are allowed to run, you can start up any number of MapReduce jobs in parallel. You can always request an additional allocation of EC2 instances here.

Processing in Elastic MapReduce is centered around the concept of a Job Flow. Each Job Flow can contain one or more Steps. Each step inhales a bunch of data from Amazon S3, distributes it to a specified number of EC2 instances running Hadoop (spinning up the instances if necessary), does all of the work, and then writes the results back to S3. Each step must reference application-specific “mapper” and/or “reducer” code (Java JARs or scripting code for use via the Streaming model). We’ve also included the Aggregate Package with built-in support for a number of common operations such as Sum, Min, Max, Histogram, and Count. You can get a lot done before you even start to write code!

We’re providing three distinct access routes to Elastic MapReduce. You have complete control via the Elastic MapReduce API, you can use the Elastic MapReduce command-line tools, or you can go all point-and-click with the Elastic MapReduce tab within the AWS Management Console! Let’s take a look at each one.

The Elastic MapReduce API represents the fundamental, low-level entry point into the system. Action begins with the RunJobFlow function. This call is used to create a Job Flow with one or more steps inside. It accepts an EC2 instance type, an EC2 instance count, a description of each step (input bucket, output bucket, mapper, reducer, and so forth) and returns a Job Flow Id. This one call is equivalent to buying, configuring, and booting up a whole rack of hardware. The call itself returns in a second or two and the job is up and running in a matter of minutes. Once you have a Job Flow Id, you can add additional processing steps (while the job is running!) using AddJobFlowSteps. You can see what’s running with DescribeJobFlows, and you can shut down one or more jobs using TerminateJobFlows.

The Elastic MapReduce client is a command-line tool written in Ruby. The client can invoke each of the functions I’ve already described. You can create, augment, describe, and terminate Job Flows from the command line.
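
For example, a simple streaming word count Job Flow can be created with a single command, along these lines (a sketch; the bucket names are placeholders, and the mapper and reducer are the same ordinary Unix commands from the pipeline shown earlier):

    $ elastic-mapreduce --create --stream \
        --input s3n://my-bucket/wordcount/input \
        --output s3n://my-bucket/wordcount/output \
        --mapper "tr -s ' ' '\n'" \
        --reducer "uniq -c" \
        --num-instances 4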

Finally, you can use the new Elastic MapReduce tab of the AWS Management Console to create, augment, describe, and terminate job flows from the comfort of your web browser! Here are a few screen shots to whet your appetite:



I’m pretty psyched about the fact that we are giving our users access to such a powerful programming model in a form that’s really easy to use. Whether you use the console, the API, or the command-line tools, you’ll be able to focus on the job at hand instead of spending your time wandering through dark alleys in the middle of the night searching for more hardware.

What do you think? Is this cool, or what?

— Jeff;

Up, Up, and Away – Cloud Computing Reaches for the Sky

by Jeff Barr | on | in Amazon EC2 |

Early this morning we launched a brand new cloud computing service. This revolutionary new technology will change the way you think about the cloud.

For a while the cloud was simply a metaphor meaning “a bunch of computers somewhere else.” Until now, somewhere else meant good old terra firma, the Earth itself. After extensive customer research we found that this rigid, antiquated way of thinking just won’t cut it in today’s post-capitalist world. Our customers need locational flexibility, the ability to literally instantiate a cloud where they need it, when they need it.

To solve this problem, we have designed and are now introducing the Floating Amazon Cloud Environment, or FACE for short. Using the latest in airship technology, we’ve created a cloud that can come to you.

The FACE uses durable, unmanned helium-filled blimps with a capacity of 65,536 small EC2 instances, or a proportionate number of larger instances. The top of each blimp is coated in polycrystalline solar cells which supply approximately 40% of the power needed by the servers and the on-board navigation, communication, and defense systems.  The remainder of the power is produced by clean, efficient solid oxide fuel cells. There’s enough fuel onboard to last about a month under normal operating conditions. Waste heat from the fuel cells and from the servers is used to generate additional lift.

There are two options for ground communication, WiMAX and laser. The WiMAX option provides low latency and respectable bandwidth. If you have the ground facility and the line of sight access needed to support it, lasers are the way to go. The on-board laser doubles as a defense facility, keeping each FACE safe from harm. Using automated target detectors with human confirmation via the Mechanical Turk, competitors won’t have a chance.

Update: Based on popular demand, we will also implement RFC 1149.

FACE can operate in shared or dedicated mode. In dedicated mode, the FACE does its best to remain at a fixed position. In shared mode, each FACE constantly optimizes its position to provide the best possible service to everyone. As always, this amazing functionality is available via the EC2 API (you’ll need the new 2009-04-01 WSDL), the command line tools, and the AWS Console.

Derivative funds and large government-subsidized entities will be especially interested in FACE’s transmodal operation. They can allocate a dedicated FACE, load it up with data, and then send it out to sea to perform advanced processing in safety. The government will have absolutely no chance of acting against them, because they will be too busy trying to decide which Federal Air Regulation (FAR) was violated, not to mention scheduling news conferences.

We believe that the FACE will be the perfect solution for LAN parties, tech conferences, and large-scale sporting events.

Availability is limited and this may be a one-time, perhaps even a one-day offer. Get your FACE now.

— Jeff;

New AWS Toolkit for Eclipse

by Jeff Barr | on | in Amazon EC2 |

We want to make the process of building, testing, and deploying applications on Amazon EC2 as simple and efficient as possible. Modern web applications typically run in clustered environments composed of one or more servers. Unfortunately, setting up a cluster can involve locating, connecting, configuring and maintaining a significant amount of hardware. Once this has been done, keeping the operating system, middleware, and application code current and consistent across each server can add inefficiency and tedium to the development process. In recent years, Amazon Web Services has helped to ease much of this burden, trivializing the process of acquiring, customizing, and running server instances on demand.

Also, in the last couple of years, the Eclipse IDE (Integrated Development Environment) has become very popular among developers. The modular nature of the Eclipse architecture opens the door to customization, extension, and continuous refinement via plug-ins (full directory here).

Today, we are introducing the AWS Toolkit for Eclipse. This free, open source plugin for the Eclipse IDE makes it easier and more efficient for you to develop, deploy, and debug Java applications on top of AWS. In fact, you can design an entire AWS-hosted Tomcat-based cluster from within Eclipse. You can design your cluster, specifying the number of EC2 instances and the instance type to run. You can select and even create security groups and keypairs and can associate an Elastic IP address with each instance.

The plugin will manage your cluster, starting up instances as needed and then keeping them alive as you develop, deploy, and debug. If you start your application in Debug mode, you can set remote breakpoints, inspect variables or stack frames, and even single-step through the remote code. You can see all of this great functionality in action here.

This is a first step for us, and we anticipate supporting additional languages and application servers (e.g. Glassfish, JBoss, WebSphere, and WebLogic) over time. As is the case with all of our services, customer input and feedback will help to shape the direction of the plugin.

As I noted before, the new AWS Toolkit for Eclipse is free and you can download it now. You can contribute your own enhancements to the toolkit by joining the SourceForge project.

— Jeff;

Announcing Amazon EC2 Reserved Instances

by Jeff Barr | on | in Amazon EC2 |

Earlier in my career, I thought that innovation was solely about technology. If you wanted to address a new market or to increase sales, writing more code was always a good option. Having gained some wisdom and experience over the years, I’ve finally figured out the obvious — that innovation can also take the form of a business model!

Since I first blogged about Amazon EC2 in the summer of 2006, developers and IT professionals have found all sorts of ways to put it to use. Many of those have been covered in this blog; we’ve written case studies about quite a few, and I’ve bookmarked many more on the AWS Buzz feed. As our customers’ use cases have grown, we’ve done our best to listen to their feedback, adding such features as additional instance types, multiple availability zones, multiple geographic regions, persistent disk storage, support for Microsoft Windows, and control over IP addresses.

The well-known pay-as-you-go EC2 pricing model is very similar to what an economist would call an on-demand or spot market. There’s no need to make any up-front commitment; you simply pay for your processing an hour at a time. This model has served us well so far and it will continue to be a fundamental aspect of our strategy.

We’ve learned that some of our customers have needs which aren’t addressed by the spot pricing model. For example, some of them were looking for even lower prices, and were willing to make a commitment ahead of time in order to achieve this. Also, quite a few customers actually told us something even more interesting: they were interested in using EC2 but needed to make sure that we would have a substantial number of instances available to them at any time in order for them to use EC2 in a DR (Disaster Recovery) scenario. In a scenario like this, you can’t simply hope that your facility has sufficient capacity to accommodate your spot needs; you need to secure a firm resource commitment ahead of time.

Taking these requirements into account, we’ve created a new EC2 pricing model, which we call Reserved Instances. After you purchase such an instance for a one-time fee, you have the option to launch an EC2 instance of a certain instance type, in a particular availability zone, for a term of either 1 or 3 years. Your launch is guaranteed to succeed; there’s no chance of encountering any transient limitations in EC2 capacity. You have no obligation to run the instances full time, so you’ll pay even less if you choose to turn them off when you are not using them.

Steady-state usage costs, when computed on an hourly basis over the term of the reservation, are significantly lower than those for the on-demand model. For example, an on-demand EC2 Small instance costs 10 cents per hour. Here’s the cost breakdown for a reserved instance (also check out the complete EC2 pricing info):

Term      One-time Fee    Hourly Usage    Effective 24/7 Cost
1 Year    $325            $0.030          $0.067
3 Year    $500            $0.030          $0.049
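
A quick bit of arithmetic shows where the break-even point sits for the 1 year term: the $325 one-time fee divided by the 7 cents per hour that you save ($0.10 on-demand minus $0.03 reserved) works out to roughly 4,650 hours, a little more than half of the 8,760 hours in a year. Run the instance more than about 53% of the time and the reservation comes out ahead; run it 24/7 and you get the effective $0.067 rate shown above ($325 / 8,760 + $0.030).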

Every one of the EC2 instance types is available at a similar savings. We’ve preserved the flexibility of the on-demand model and have given you a new and more cost-effective way to use EC2. Think of the one-time fee as somewhat akin to acquiring hardware, and the hourly usage as similar to operating costs.

All of the launching, metering, and billing is fully integrated. Once you’ve purchased one or more reserved instances, the EC2 RunInstances call will draw upon your reserve before allocating on-demand capacity. This new feature is available for Linux and OpenSolaris instances in the US now, with the same support to follow in Europe in the near future.
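
In other words, you launch instances exactly as you always have, with nothing extra to specify (a sketch; the AMI ID is a placeholder, and the zone and instance type simply need to match your reservation):

    C:\> ec2-run-instances ami-12345678 -z us-east-1a -t m1.small -k my-keypair

Instance hours for a matching instance are automatically billed at the lower reserved rate.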

We’ve added a number of new command-line (API) tools to support the Reserved Instances. Here’s what they do:

  • The ec2-describe-reserved-instances-offerings command lists the set of instance offerings that are available for purchase.
  • The ec2-purchase-reserved-instances-offering command makes the actual purchase of one or more reserved instances.
  • The ec2-describe-reserved-instances command displays a list of the instances that have been purchased.

Of course, all of this new functionality is fully programmable. We’ve added a number of new EC2 APIs:

  • DescribeReservedInstancesOfferings returns a list of Reserved Instance offerings that are available for purchase. This call enumerates the inventory within a particular availability zone.
  • PurchaseReservedInstancesOffering makes the actual purchase of a Reserved Instance within an availability zone. Up to 20 instances can be purchased with a single call, subject to availability and account limitations. It’s a bit like buying a vowel on Wheel of Fortune, except that you get a server (much more useful) instead.
  • DescribeReservedInstances returns a list of the instances that have been purchased for the account.

We’re planning to give the AWS Console full control over the Reserved Instances. I expect to see other tool vendors add support as well.

If you have any questions about the new Reserved Instances, check out the entries in the newly revised EC2 FAQ.

I’m looking forward to receiving your feedback on this new and innovative business model for EC2. Please feel free to leave me a comment.

— Jeff;