Category: Amazon EC2

New: Amazon EC2 Running SUSE Linux Enterprise Server

by Jeff Barr | on | in Amazon EC2 |

A critical aspect of the Amazon Web Services value proposition is choice. Choice takes many forms, each of which gives you the freedom to pick the best fit for your particular situation:

  • A wide variety of services that you can choose to use, or not, as determined by your needs.
  • Ten EC2 instance types, with instances spanning a very wide range of CPU power, RAM, instance storage, and network performance.
  • Five RDS DB instance classes, also spanning a wide range.
  • Four EC2 regions (US East, US West, EU, and Asia Pacific).
  • Multiple EC2 pricing models (On-Demand, Spot, and Reserved).
  • Multiple Operating Systems including a number of Linux Distributions, two versions of Microsoft Windows, and OpenSolaris.

Today we are giving you an additional operating system choice – you can now run SUSE Linux Enterprise Server (version 10 or 11) on Amazon EC2 in any of our regions and on any of our instance types. You’ll also have access to a maintenance subscription that automatically installs the most current security patches, bug fixes, and new features from a repository hosted within AWS.

With more than 6,000 certified applications from over 1,500 independent software vendors, SUSE Linux Enterprise is a proven, commercially supported Linux platform that is ideal for development, test, and production workloads.

All of this is available on a pay as you go basis, with no long-term commitments and no minimum fees. Reserved Instances and Spot Instances are also available; you can run SUSE in the cloud using Reserved Instances very economically.

Pricing and other details can be found on our SUSE Linux Enterprise Server page. You can launch SLES from the Quick Start tab of the AWS Management Console; On-Demand, Reserved, and Spot Instances are available.

— Jeff;

Now Available: Host Your Web Site in the Cloud

by Jeff Barr | on | in Amazon CloudFront, Amazon CloudWatch, Amazon EC2, Amazon Elastic Load Balancer, Amazon S3, Amazon SDB, Amazon Simple Notification Service, Amazon SQS, Announcements |

I am very happy to announce that my first book, Host Your Web Site in the Cloud is now available! Weighing in at over 355 pages, this book is designed to show developers how to build sophisticated AWS applications using PHP and the CloudFusion toolkit.

Here is the table of contents:

  1. Welcome to Cloud Computing.
  2. Amazon Web Services Overview.
  3. Tooling Up.
  4. Storing Data with Amazon S3.
  5. Web Hosting with Amazon EC2.
  6. Building a Scalable Architecture with Amazon SQS.
  7. EC2 Monitoring, Auto Scaling, and Elastic Load Balancing.
  8. Amazon SimpleDB: A Cloud Database.
  9. Amazon Relational Database Service.
  10. Advanced AWS.
  11. Putting It All Together: CloudList.

After an introduction to the concept of cloud computing and a review of each of the Amazon Web Services in the first two chapters, you will set up your development environment in chapter three. Each of the next six chapters focuses on a single service. In addition to a more detailed look at each service, each of these chapters includes lots of fully functional code. The final chapter shows you how to use AWS to implement a simple classified advertising system.

Although I am really happy with all of the chapters, I have to say that Chapter 6 is my favorite. In that chapter I show you how to use the Amazon Simple Queue Service to build a scalable multistage image crawling, processing, and rendering pipeline. I build the code step by step, creating a queue, writing the code for a single step, running it, and then turning my attention to the next step. Once I had it all up and running, I opened up five PuTTY windows, ran a stage in each, and watched the work flow through the pipeline with great rapidity. Here’s what the finished pipeline looks like:

I had a really good time writing this book and I hope that you will have an equally good time as you read it and put what you learn to good use in your own AWS applications.

Today (September 21) at 4 PM PT I will be participating in a webinar with the good folks from SitePoint. Sign up now if you would like to attend.

— Jeff;

PS – If you are interested in the writing process and how I stayed focused, disciplined, and organized while I wrote the book, check out this post on my personal blog.


Run Oracle Applications on Amazon EC2

by Jeff Barr | on | in Amazon EC2, Announcements |

A wide variety of Oracle applications have been certified for use on Amazon EC2 with virtualization provided by the Oracle VM (OVM). The following products are now fully certified and supported and you’ll be able to run them in the cloud on production workloads before too long:

These applications will be available in the form of Amazon Machine Images (AMIs) that you can launch from the AWS Management Console and from other EC2 management tools.

You can use your existing Oracle licenses at no additional license cost or you can acquire new licenses from Oracle. We implemented OVM support on Amazon EC2 with hard partitioning so Oracle’s standard partitioned processor licensing models apply.

Working together with Oracle, we will publish a set of pre-configured AMIs based on the Oracle VM Templates so that you can be up and running in a matter of minutes instead of weeks or even months.

We’ll start with Oracle Linux, Oracle Database 11gR2, Oracle E-Business Suite, and a number of Oracle Fusion Middleware technologies including Oracle Weblogic Server and Oracle Business Process Management. After that, we’ll add AMIs for PeopleSoft, Siebel, and JD Edwards.

You’ll be able to take advantage of notable EC2 features such as Elastic Load Balancing, Auto Scaling, Security Groups, Amazon CloudWatch and Reserved Instance pricing.

To learn more about running Oracle applications on EC2 and to register to be notified when application templates become available, visit the Oracle and Amazon Web Services page.

If you are at Oracle OpenWorld this week (September 19-23, 2010), please stop by the AWS booth and say hello to our team. 

— Jeff;


New Amazon EC2 Features: Resource Tagging, Idempotency, Filtering, Bring Your Own Keys

by Jeff Barr | on | in Amazon EC2 |

We’ve just introduced four cool new features for Amazon EC2. Instead of trying to squeeze all of the information into one ridiculously long post, I’ve written four separate posts. Here’s what we introduced:

  • Resource Tagging – Tag the following types of resources: EC2 instances, Amazon Machine Images (AMIs), EBS volumes, EBS snapshots, and Amazon VPC resources such as VPCs, subnets, connections, and gateways.
  • Idempotent Instance Creation – Ensure that multiple EC2 instances are not accidentally created when you needed just one.
  • Filtering – Filter the information returned by an EC2 Describe call using one or more key/value pairs as filters.
  • Bring Your Own Keypair – Import your own RSA keypair for use with EC2.

The posts are linked to each other, so you can start at Resource Tagging and read each of them in turn.

— Jeff;

New Amazon EC2 Feature: Filtering

by Jeff Barr | on | in Amazon EC2 |

Many of our customers create large numbers of EC2 resources. Some of them run hundreds or thousands of EC2 instances, create thousands of EBS volumes, and retain tens of thousands of EBS volume snapshots.

This growth has meant that the corresponding Describe APIs (DescribeInstances, DescribeVolumes, and DescribeSnapshots, to name a few) can return results that are very long and somewhat tedious to process.

In order to make client applications simpler and more efficient, you can now specify filters when you call the EC2 “Describe” functions (except those having to do with attributes or datafeed subscriptions for Spot instances).

You can provide one or more filters as part of your call to a Describe function. Each filter consists of a case-sensitive name and a value. Values are text strings or XML text string representations of non-textual (e.g. Boolean) values. Filter values can use the “*” to match zero or more characters, or the “?” to match a single character.

You can also combine multiple filters. Multiple filters with the same name are OR-ed together, and the result is then AND-ed with the other filters. You could, for example, call DescribeInstances and ask for all of your m1.large instances that are running an Ubuntu AMI in the us-east-1a or us-east-1b Availability Zones.
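The combination rules above can be sketched locally with Python’s `fnmatch` module; the resource dictionaries, filter names, and helper below are hypothetical stand-ins for what DescribeInstances actually returns, not part of the EC2 API:

```python
from fnmatch import fnmatchcase  # case-sensitive "*" / "?" matching

def matches(resource, filters):
    """Patterns under the same filter name are OR-ed together;
    distinct filter names are AND-ed, as described above."""
    return all(
        any(fnmatchcase(str(resource.get(name, "")), pattern)
            for pattern in patterns)
        for name, patterns in filters.items()
    )

# hypothetical instances, with filter names modeled on the EC2 API
instances = [
    {"instance-type": "m1.large", "availability-zone": "us-east-1a"},
    {"instance-type": "m1.large", "availability-zone": "us-east-1c"},
    {"instance-type": "m1.small", "availability-zone": "us-east-1b"},
]

# m1.large instances in us-east-1a OR us-east-1b
wanted = {"instance-type": ["m1.large"],
          "availability-zone": ["us-east-1a", "us-east-1b"]}
print([i for i in instances if matches(i, wanted)])  # only the us-east-1a instance
```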

The filters are also supported by the EC2 command-line tools via the “--filter name=value” option.

Tool vendors will be able to make use of this new flexibility to create faster and more powerful EC2 management tools.

Read more about filtering in the newest version of the EC2 User Guide.

Next feature: Bring Your Own Keypair.

— Jeff;

New Amazon EC2 Feature: Resource Tagging

by Jeff Barr | on | in Amazon CloudFront, Amazon EC2 |

It is really easy to start up that first Amazon EC2 instance, and then another, and another as you find more and more ways to put it to use. It is really easy to create some EBS volumes, attach them to your instances, and store lots and lots of data on them. The same goes for other EC2 resources such as security groups and EBS snapshots.

As your usage starts to grow from one instance and one application to many instances spanning multiple applications, it can be difficult to track which instances are assigned to which application, which EBS volumes store what data, and so forth.

We’ve just released a very powerful tagging feature to allow you to tag your EC2 resources (and also certain shared resources) using up to ten key-value pairs per resource. Each tag consists of a key (up to 128 characters) and a value (up to 256 characters). The tags are stored in the AWS cloud as part of your AWS account, and are private to the account.

You can tag the following types of resources: EC2 instances, Amazon Machine Images (AMIs), EBS volumes, EBS snapshots, and Amazon VPC resources such as VPCs, subnets, connections, and gateways. You can tag existing resources and you can tag new resources right after you create them.

You can manipulate your tags using three new API calls:

CreateTags allows you to tag one or more EC2 resources with one or more tags.

DescribeTags gives you the tags associated with one or more resources. The returned tags can be filtered by resource identifier, resource type, key, or value. You can, for example, retrieve all of the tags for a given resource, or you can retrieve all of the resources (regardless of type) with a given tag.

DeleteTags allows you to delete a set of tags from a set of resources.
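As a sketch of how these three calls fit together, here is a toy in-memory model that also enforces the limits described above (ten tags per resource, 128-character keys, 256-character values). The class name, storage layout, and error handling are mine, not part of the EC2 API:

```python
class TagStore:
    """Toy in-memory model of CreateTags / DescribeTags / DeleteTags."""

    MAX_TAGS, MAX_KEY, MAX_VALUE = 10, 128, 256

    def __init__(self):
        self._tags = {}  # resource id -> {key: value}

    def create_tags(self, resources, tags):
        # enforce the per-tag size limits before touching anything
        for key, value in tags.items():
            if len(key) > self.MAX_KEY or len(value) > self.MAX_VALUE:
                raise ValueError("keys <= 128 chars, values <= 256 chars")
        for rid in resources:
            current = self._tags.setdefault(rid, {})
            if len({**current, **tags}) > self.MAX_TAGS:
                raise ValueError("at most 10 tags per resource")
            current.update(tags)

    def describe_tags(self, key=None, value=None):
        # returns (resource, key, value) triples, optionally filtered
        return [(rid, k, v)
                for rid, kv in self._tags.items() for k, v in kv.items()
                if key in (None, k) and value in (None, v)]

    def delete_tags(self, resources, keys):
        for rid in resources:
            for key in keys:
                self._tags.get(rid, {}).pop(key, None)

store = TagStore()
store.create_tags(["i-1234abcd", "vol-5678efgh"], {"Project": "Phoenix"})
store.create_tags(["i-1234abcd"], {"Use Case": "Production"})
print(store.describe_tags(key="Project"))
```

The real calls operate on resource identifiers in your AWS account, of course; the model just shows the shape of the three operations.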

The existing EC2 “Describe” APIs (DescribeInstances, DescribeVolumes, and so forth) return the tags associated with each of the resources in the response.

You can also manipulate them using the EC2 command-line tools (ec2-create-tags, ec2-describe-tags, and ec2-delete-tags) and from the AWS Management Console.

You can create and delete tags using the AWS Management Console. You can view the tags associated with any resource, you can use a tag as a column in any resource list, and you can filter any view by tag.

Here’s a tour of the new console features. I’ve tagged EC2 instances, but you can also tag many other types of resources.

You can tag new instances as part of the Request Instances Wizard:

You can also tag existing instances (I’m sure that you’ve been a part of at least one “Project Phoenix” in your career):

You can select an instance to see its tags:

The tag names can be used as columns, allowing you to hide and show them, and to see the values:

In the following illustration I have created tags named Use Case and Project on some EC2 instances:

As you can see, a column that represents a tag also includes a filter control:

Clicking the control gives you the ability to filter rows (instances in this case) by the value of the tag:

You can also use the filtering menu to find the items that don’t have a particular tag, or that have a tag with an empty value. You can use this to locate resources that are allocated but not assigned to a particular role, product, or user, for example.

Here’s a list of instances that I have tagged with a Use Case of Production:

Read more about tagging in the newest version of the EC2 User Guide.

Next feature: Idempotent Instance Creation.

— Jeff;

New Amazon EC2 Feature: Idempotent Instance Creation

by Jeff Barr | on | in Amazon EC2 |

The Amazon EC2 API includes functions which create resources such as instances, disk volumes and snapshots, IP addresses, and key pairs.

Some of these functions create the resources in a synchronous fashion and you can determine the success or failure of the request by examining the value returned by the call.

Other functions work in an asynchronous fashion. Making the call initiates an action that may take a fairly long time (seconds or minutes) to complete.  When the call returns you cannot know if the request has succeeded or not. Timeouts and connection errors can further muddy the water; you don’t want to unnecessarily retry a request if there’s an associated cost for the resource. You don’t want to create two EC2 instances when you only needed one.

To provide you with better control in this situation, we’ve just released a somewhat esoteric (yet very useful) feature called idempotent instance creation.

Performing an idempotent operation more than once yields the same result as applying it just once. Washing your dog is idempotent (you always end up with a clean dog); feeding your dog is not (your dog will get fat).

The EC2 RunInstances function now supports idempotency. If you are launching EC2 instances as part of a higher level process, this feature should help you to build management and control applications that are more robust.

To call RunInstances in an idempotent fashion, you need to create a client token. A client token is a case-sensitive string of up to 64 ASCII characters. You should use a unique client token for each new instance.

Once you have a properly formed client token, you simply pass it along as an additional parameter to RunInstances. The function will ignore the second and subsequent requests that have the same token. You must use the same set of parameters each time you call the function; if you don’t, you will get an IdempotentParameterMismatch error.
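Here’s a toy model of the behavior just described. The class and method names are mine; only the client-token semantics and the IdempotentParameterMismatch error come from the API:

```python
import uuid

class IdempotentParameterMismatch(Exception):
    """Raised when a token is reused with different parameters."""

class Launcher:
    """Toy model of idempotent RunInstances: each client token maps to
    one launch; a retry with the same token and parameters returns the
    original result instead of creating a second instance."""

    def __init__(self):
        self._seen = {}   # client token -> (params, instance id)
        self._count = 0   # how many instances were actually created

    def run_instances(self, params, client_token):
        if client_token in self._seen:
            prior_params, instance_id = self._seen[client_token]
            if prior_params != params:
                raise IdempotentParameterMismatch(client_token)
            return instance_id            # no second instance is created
        self._count += 1
        instance_id = "i-%08x" % self._count
        self._seen[client_token] = (params, instance_id)
        return instance_id

launcher = Launcher()
token = str(uuid.uuid4())   # one unique token per intended instance
first = launcher.run_instances({"type": "m1.large"}, token)
retry = launcher.run_instances({"type": "m1.large"}, token)  # timeout retry
print(first == retry)  # the retry did not launch a second instance
```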

Read more about idempotency in the newest version of the EC2 Developer Guide.

Next feature: Filtering.

— Jeff;

New Amazon EC2 Feature: Bring Your Own Keypair

by Jeff Barr | on | in Amazon EC2 |

You can now import your own RSA keypair (or the public half, to be precise) for use with your Amazon EC2 instances.

Why would you want to do this? Here are a few reasons:

  1. Trust – By importing your own keypair you can ensure that you have complete control over your keys.
  2. Security – You can be confident that your private key has never been transmitted over the wire.
  3. Management of Multiple Regions – You can use the same public key across multiple AWS Regions.

You can upload RSA keys (which can be 1024, 2048, or 4096 bits long) in a variety of formats including OpenSSH public key format, Base64 encoded DER format, or the SSH public key file format specified in RFC 4716. The ssh-keygen tool (part of the standard OpenSSH installation) is a handy way to create keys.
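As a sketch of the formats involved, here’s how the Base64 material from a one-line OpenSSH public key can be re-wrapped into the RFC 4716 container format. The helper name and sample key material below are hypothetical; only the public half is ever touched:

```python
import textwrap

def openssh_to_rfc4716(openssh_line, comment="imported-key"):
    """Re-wrap a one-line OpenSSH public key ("ssh-rsa AAAA... user@host")
    into the RFC 4716 container format, one of the accepted upload formats.
    This only reformats the existing Base64 material."""
    _, b64 = openssh_line.split()[:2]
    body = "\n".join(textwrap.wrap(b64, 70))  # RFC 4716 caps lines at 72 chars
    return ("---- BEGIN SSH2 PUBLIC KEY ----\n"
            f'Comment: "{comment}"\n'
            f"{body}\n"
            "---- END SSH2 PUBLIC KEY ----")

# hypothetical key material, shortened for the example
print(openssh_to_rfc4716("ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB alice@laptop"))
```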

Read more about the import key feature in the newest version of the EC2 User Guide.

Update: Paul Maunder documented the process of uploading the same keypair to multiple EC2 regions. Thanks, Paul!

— Jeff;

AWS For High Performance Cloud Computing – NASA, MATLAB

by Jeff Barr | on | in Amazon EC2 |

It is great to see our customers putting EC2’s new Cluster Compute instance type to use in High Performance Computing (HPC) scenarios. Here are two example applications:

MathWorks / MATLAB

The MATLAB team at MathWorks tested performance scaling of the backslash (“\”) matrix division operator to solve for x in the equation A*x = b. In their testing, matrix A occupies far more memory (290 GB) than is available in a single high-end desktop machine (typically a quad-core processor with 4-8 GB of RAM, supplying approximately 20 Gigaflops).

Therefore, they spread the calculation across machines. In order to solve linear systems of equations they need to be able to access all of the elements of the array even when the array is spread across multiple machines. This problem requires significant amounts of network communication, memory access, and CPU power. They scaled up to a cluster in EC2, giving them the ability to work with larger arrays and to perform calculations at up to 1.3 Teraflops, a 60X improvement. They were able to do this without making any changes to the application code.

Here’s a graph showing the near-linear scalability of an EC2 cluster across a range of matrix sizes with corresponding increases in cluster size for MATLAB’s parallel backslash operator:

Each Cluster Compute instance runs 8 workers (one per processor core, with 8 cores per instance). Each doubling of the worker count corresponds to a doubling of the number of Cluster Compute instances used (scaling from 1 up to 32 instances). They saw near-linear overall throughput (measured in Gigaflops on the y axis) while increasing the matrix size (the x axis) as they successively doubled the number of instances.


NASA JPL

A team at NASA’s Jet Propulsion Laboratory developed the ATHLETE robot. Each year they put the robot through autonomous field tests as part of the Desert Research and Technology Studies (D-RATS), along with autonomous robots from other NASA centers. The operators rely on high-resolution satellite imagery for situational awareness while driving the robots.

JPL engineers recently developed and deployed an application designed to streamline the processing of large (giga-pixel) images by leveraging the massively parallel nature of the workflow. The application is built on Polyphony, a versatile and modular workflow framework based on Amazon SQS and Eclipse Equinox. In the past, JPL has used Polyphony to validate the utility of cloud computing for processing hundreds of thousands of small images in an EC2-based compute environment. JPLers have now adopted the cluster compute environment for processing very large monolithic images. Recently, JPLers processed a 3.2 giga-pixel image of the field site (provided courtesy of USGS) in less than two hours on a cluster of 30 Cluster Compute Instances, an order-of-magnitude improvement over previous implementations on non-HPC environments.

We’re happy to see MathWorks and JPL deploying Cluster Compute Instances with great results. It’s also exciting to see other customers scaling up to 128-node (1024 core) clusters with full bisection bandwidth. I’ll be writing up more of these stories in the near future, so stay tuned. If you have a story of your own, drop me an email or leave a comment.

— Jeff;

Introducing The Amazon Linux AMI

by Jeff Barr | on | in Amazon EC2 |

Yes, you read that right. We now have a Linux AMI tuned for AWS!

Many of our customers have asked us for a simple starting point for launching their Linux applications inside of Amazon EC2 that is easy to use, regularly maintained, and optimized for the Amazon EC2 environment. Starting today, customers can use the Amazon Linux AMI to meet these needs. This adds to the great selection of AMI options in Amazon EC2, which range from free to paid, giving you access to the operating systems and environments you need.

Available in 32 and 64 bit form in all of the AWS Regions, Amazon Linux starts out as lean and mean as possible; no unnecessary applications or services are running. You can add more packages as needed, and you can do so very quickly and easily from a package repository that we host in Amazon S3.

The AWS command-line tools and libraries are pre-installed and ready to use. We’ve also integrated Ubuntu’s CloudInit to simplify the process of customizing each instance after it boots up. You can use CloudInit to set a default locale, set the hostname, generate and set up SSH private keys, and to set up mount points. You can also run custom commands and scripts on initial startup or on each reboot, as desired.
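For instance, a minimal user-data file of the sort CloudInit consumes might look like the fragment below. The exact set of supported directives depends on the CloudInit version, so treat this as an illustrative sketch rather than a definitive template:

```yaml
#cloud-config
# Illustrative only; supported keys vary by CloudInit version.
hostname: web-01
runcmd:
 - echo "first boot complete" >> /var/log/boot-note.log
```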

The Amazon Linux AMI can be booted from the AWS Management Console’s Request Instances page at the usual charge for Linux/UNIX instances. This is a supported AMI; customers who use AWS Premium Support will be able to ask for help with installation and usage problems. Of course, everyone can use the forums to ask for help or to report bugs.

We will provide patches and security updates as needed. We also update the Amazon Linux AMI on a regular basis, and we’ll create a new set of AMIs each time we do so.

If you’ve used other Linux AMIs in the past, this one should hold few surprises. Nevertheless, here are a few things to keep in mind:

  1. Log in as ec2-user rather than as root.
  2. For S3-backed instances the first ephemeral volume is mounted at /media/ephemeral0.
  3. Complete release notes are available in file /etc/image-release-notes.
  4. The system is running kernel 2.6.34.

Read the Amazon Linux AMI User Guide [PDF] to learn more.

Update 1:

You can find the current AMI IDs for each region on the Amazon Linux page. Once you have the ID you can search for it using the Community AMIs tab in the AWS Management Console’s EC2 Request Instances Wizard:

You can do the same search using the Images tab in ElasticFox:

Update 2:

The source code is available for reference purposes. Open up the user guide and search for the section labeled Accessing Source Packages for Reference for full information on how to download it to your instance.

— Jeff;