Category: Amazon EC2

Jollat – Cross-Platform AWS Manager Client

Andras wrote to tell me about Jollat, a new graphical, cross-platform (Windows, Mac, and Linux) management client for Amazon EC2 and S3. Available as a free download (with a purchase option), the client includes a number of interesting features.

On the S3 side, Jollat handles bucket creation in both the US and EU zones, upload and download of multiple files, log file configuration and management, and an access control list (ACL) editor.

On the EC2 side, Jollat’s image manager makes it easy to find and launch any AMI (Amazon Machine Image). Once launched, instances can be accessed using an embedded SSH client. The tool also manages availability zones, IP addresses, and key pairs.

You can see Jollat in action by watching the video.

— Jeff;

JBoss Releases on Amazon EC2

By now many of you are aware that Red Hat Enterprise Linux is fully supported by Red Hat on Amazon EC2; Jeff Barr blogged about the offering in November 2007.

I'm posting this from Boston, where I am attending the Red Hat Global Summit; more specifically, I am helping with a hands-on lab that teaches developers and IT staff how to deploy Red Hat Enterprise Linux (RHEL) on Amazon EC2. (It's really easy.) It's been fun to meet enterprise developers from all over the world, and surprising to find that no matter what country a developer is from, awareness of Cloud Computing is high.

Perhaps you have already seen the posts on other blogs: Red Hat has announced that their JBoss Enterprise Application Platform is available, in beta form, as a service within the Amazon Elastic Compute Cloud (Amazon EC2).

Traditionally we think of Java application servers as building blocks that live in a hallowed enterprise data center; with this announcement, however, yet another of those essential technologies is running in the Cloud, fully supported by the vendor. In mission-critical applications support is essential, and for Red Hat products that means 24×7 operational support plus developer support, with a menu of offerings to choose from.

This is all quite amazing. Just over two years ago the Amazon Simple Storage Service launched, followed in August of 2006 by the Amazon Elastic Compute Cloud. In the short span of time since 2006 we've seen Cloud Computing grow from an idea into an "of course we use it" for many organizations. With the advent of powerhouse enterprise infrastructure and applications, it seems inevitable that line-of-business applications in the cloud will become commonplace.

Getting started is easy, with just three steps:

  1. Sign up for Amazon EC2
  2. Purchase a subscription to Red Hat Enterprise Linux (RHEL) on Amazon EC2 or purchase a subscription to JBoss on Amazon EC2
  3. Deploy your applications on the newly-minted application server; then optionally make a custom AMI from this image and save it as your own private version in Amazon S3.

You can learn more on the Red Hat site.


More EC2 Power

Amazon EC2 users now have access to a pair of new “High-CPU” instance types. The new instance types have proportionally more CPU power than memory, and are suitable for CPU-intensive applications. Here’s what’s now available:

The High-CPU Medium Instance is billed at $0.20 (20 cents) per hour. It features 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), and 350 GB of instance storage, all on a 32-bit platform.

The High-CPU Extra Large Instance is billed at $0.80 (80 cents) per hour. It features 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), and 1,690 GB of instance storage, all on a 64-bit platform.
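A quick sanity check of the rates above shows that both High-CPU types cost the same per unit of compute. This sketch labels the types with their API names (an assumption on my part; the post itself uses only the descriptive names):

```python
# Price per EC2 Compute Unit-hour for the two High-CPU types,
# using the specs and rates listed above.
instance_types = {
    "c1.medium": {"price_per_hour": 0.20, "ecu": 5},   # High-CPU Medium
    "c1.xlarge": {"price_per_hour": 0.80, "ecu": 20},  # High-CPU Extra Large
}

per_ecu = {
    name: spec["price_per_hour"] / spec["ecu"]
    for name, spec in instance_types.items()
}

for name, cost in per_ecu.items():
    print(f"{name}: ${cost:.2f} per ECU-hour")  # $0.04 for both
```

In other words, the Extra Large type is simply four Mediums' worth of compute at four times the price; the choice between them comes down to how much memory and storage a single instance needs.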

The AWS Simple Monthly Calculator now supports these new instance types.

We’ve been working with a number of tool vendors to line up early support for this important new feature. I plan to update the blog post several times in the coming days as this support becomes available.

— Jeff;

Cloud Studio

Alexsey and Tatyana from Cloud Services dropped me an email to tell me about the beta release of their new Cloud Studio product.

Cloud Studio is a Java application for the management of Amazon EC2 instances. It features a multi-pane interface with a list of available AMIs, a list of running instances, and access to keypairs, security groups, and  IP addresses. Menu options are provided for image registration and deletion, keypair manipulation, security group editing, and IP address assignment.

The application can be run standalone or it can be run from within Eclipse.

You can see a Flash demo on the home page, or you can simply download it.

— Jeff;

Redundant Disk Storage Across Multiple EC2 Instances

XML Hacker M. David Peterson has put together a really interesting article.

As part of his work at 3rd and Urban, he has implemented redundant, fault-tolerant, read-write disk storage on Amazon EC2 using a number of open source tools and applications including LVM, DRBD, NFS, Heartbeat, and VTUN.

M. David notes that “the primary focus of this paper is to present both a detailed overview as well as a working code base that will enable you to begin designing, building, testing, and deploying your EC2-based applications using a generalized persistent storage foundation, doing so today in both lieu of and in preparation for release of Amazon Web Services offering in this same space.”

The article provides complete implementation details and links to source code for the scripts that M. David developed.

You can read the article, and you can also follow progress via the discussion group.

— Jeff;

On Condor and Grids

There is lots of buzz about Hadoop and Amazon EC2, and of course there should be, given all the great projects such as the one at the New York Times, where they converted old articles into PDF files in short order at a very reasonable cost.

There's a second environment you should know about, although the buzz level is a bit lower. (That might change.) Condor is a scheduling application that is commonly used in HPC and grid applications. It can also be used to manage Hadoop grids, and it manages jobs in much the same manner as a mainframe: you submit a job to Condor, along with metadata that describes the job's characteristics, and Condor finds suitable resources to allocate to the job. Note that Condor and Hadoop are trying to solve things in independent ways, with the result that they overlap in some areas while doing unrelated things in others.
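The submit-with-metadata model described above can be sketched in a few lines. This is a toy simplification, not Condor's actual ClassAd matchmaking; the field names and machine list are made up for illustration:

```python
# Toy Condor-style matchmaking: a job carries metadata describing its
# requirements, and the scheduler finds a machine that satisfies them.
def find_match(job, machines):
    """Return the name of the first machine satisfying the job's requirements."""
    for machine in machines:
        if (machine["memory_mb"] >= job["min_memory_mb"]
                and machine["cpus"] >= job["min_cpus"]
                and machine["arch"] == job["arch"]):
            return machine["name"]
    return None  # no match yet; the job stays queued until resources appear

machines = [
    {"name": "node-1", "memory_mb": 1024, "cpus": 1, "arch": "x86"},
    {"name": "node-2", "memory_mb": 4096, "cpus": 4, "arch": "x86_64"},
]
job = {"min_memory_mb": 2048, "min_cpus": 2, "arch": "x86_64"}
print(find_match(job, machines))  # node-2
```

The real system evaluates far richer expressions on both sides (jobs rank machines and machines rank jobs), but the core idea is exactly this: match job metadata against advertised resources, and queue the job until a match exists.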

This week I attended Condor Week at the University of Wisconsin in Madison. Condor Week is an annual event that gives Condor collaborators and users the chance to exchange ideas and experiences, to learn about the latest research, to see live demos, and to influence the project's short- and long-term research and development directions.

If you are interested in large-scale grid computing, this approach is worth a serious look. There are two active projects that implement Condor on Amazon EC2, and of course that's why this blog entry is being posted.

Cycle Computing offers Amazon EC2 plus Condor as an integrated platform, in addition to supporting other underlying computing resources. Their software automates Condor grid management, including monitoring, configuration, version control, usage tracking, and more. At the conference Jason Stowe from Cycle Computing made a very strong case for using Amazon EC2 instead of a traditional grid environment. Jason's presentation is available for download.

Red Hat's approach integrates EC2 directly into the Condor code base. The result is that an Amazon EC2 instance becomes the Condor job, and in that manner they are able to manage the entire life cycle of an EC2 instance. In some cases the entire Condor pool runs on EC2, and in other cases EC2 augments an existing pool. All of this work was done in collaboration between the University of Wisconsin (Jaeyoung Yoon, Fang Cao, and Jaime Frey) and Matt Farrellee from Red Hat. They plan to integrate Amazon S3 as a storage medium in the near future.

One thing seems certain: on-demand virtualization brightens the lights in Grid Computing City, because organizations that could not afford a grid suddenly find themselves with both affordable infrastructure and powerful tools to manage it.


Animoto – Scaling Through Viral Growth

Animoto is a very neat Amazon-powered application. Built on top of Amazon EC2, S3, and SQS, the site allows you to upload a series of images. It then generates a unique, attractive, and entertaining music video using your own music or something selected from the royalty-free library on the site. Last week I spoke to a group of Computer Science and IT students at Utah Valley State College. Before leaving Seattle I spent some time downloading images from their athletics site. I then combined this with some Southern Surf Syndicate music from The Penetrators and ended up with this really nice video:

There’s a lot going on in the background. After the images and the music have been uploaded, proprietary algorithms analyze them and then render the final video. This can take an appreciable amount of time and requires a considerable amount of computing power.

Animoto co-founder and CEO Brad Jefferson stopped by Amazon HQ for a quick visit on Thursday. Earlier in the week we had seen their EC2 usage grow substantially and I was interested in learning more. Brad explained that they had introduced the Animoto Videos Facebook application about a month earlier and that it had done pretty well, with about 25,000 users signing up over the course of the month, with steady, linear growth.

The reaction from the Facebook community was positive, so the folks at Animoto decided to step it up a notch. They noticed that a significant portion of users who installed the app never made their first Animoto video, yet the application (as they themselves admit) relies heavily on the ‘wow’ factor of seeing your first Animoto video and wanting to share it with your friends. On Monday the team made a subtle but important change to their application: they auto-created a user’s first Animoto video.

That did the trick!

They had 25,000 members on Monday, 50,000 on Tuesday, and 250,000 on Thursday. Their EC2 usage grew as well. For the last month or so they had been using between 50 and 100 instances. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3400 instances as of Friday morning. Here’s a chart:


We are really happy to see Animoto succeed and to be able to help them to scale up their user base and their application so quickly. I’m fairly certain that it would be difficult for them to get their hands on nearly 3500 compute nodes so quickly in any other way.

— Jeff;

Scalr – Scalable Web Sites with EC2

Dave Naffis of Intridea wrote to tell me that they have released Scalr in open source form. Scalr is a fully redundant, self-curing, self-hosting EC2 environment.

Using Scalr you can create a server farm using prebuilt AMIs for load balancing (either Pound or nginx), web servers, and databases. There’s also a generic AMI that you can customize and use to host your actual application.

Scalr monitors the health of the entire server farm, ensuring that instances stay running and that load averages stay below a configurable threshold. If an instance crashes, another one of the proper type will be launched and added to the load balancer.
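The monitoring logic just described can be sketched as a simple reconciliation loop. To be clear, this is my own minimal model, not Scalr's actual code; the role names, threshold, and stubbed launch/balancer calls are all hypothetical:

```python
LOAD_THRESHOLD = 4.0  # hypothetical value; in Scalr the threshold is configurable

def reconcile(farm, launch, add_to_balancer):
    """Replace crashed instances and scale out overloaded ones."""
    for instance in farm:
        if not instance["running"]:
            # crashed: launch a replacement of the same role/AMI type
            add_to_balancer(launch(instance["role"]))
        elif instance["load_avg"] > LOAD_THRESHOLD:
            # overloaded: add another instance of the same role
            add_to_balancer(launch(instance["role"]))

# Demo with stubbed-out launch and load-balancer calls:
added = []
launch = lambda role: {"role": role, "running": True, "load_avg": 0.0}
farm = [
    {"role": "web", "running": False, "load_avg": 0.0},  # crashed
    {"role": "db",  "running": True,  "load_avg": 6.5},  # overloaded
    {"role": "web", "running": True,  "load_avg": 1.0},  # healthy
]
reconcile(farm, launch, added.append)
print([i["role"] for i in added])  # ['web', 'db']
```

In the real system the "launch" step is an EC2 RunInstances call against the role's AMI, and the balancer registration updates Pound or nginx, but the decision logic reduces to this shape.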

Download the code, take a look at the diagrams, or (always the last resort) read the installation instructions.

— Jeff;

New EC2 Features: Static IP Addresses, Availability Zones, and User Selectable Kernels

We just added three important new features to Amazon EC2: Elastic IP Addresses, Availability Zones, and User Selectable Kernels. The documentation, the WSDL, the AMI tools, and the command line tools have been revised to match and there’s a release note as well.

Read on to learn all about them…

The Elastic IP Addresses feature gives you more control of the IP addresses associated with your EC2 instances. Using this new feature, you call the AllocateAddress function to associate an IP address with your AWS account. Once allocated, the address remains attached to your account until released via the ReleaseAddress function. Separately, you can then point the address at any of your running EC2 instances using the AssociateAddress function. The association remains in place as long as the instance is running, or until you remove it with the DisassociateAddress function. Finally, the DescribeAddresses function will provide you with information about the IP addresses attached to your account and how they are mapped to your instances. Accounts can allocate up to 5 IP addresses to start; you can ask for more if you really need them. Addresses which you have allocated but not associated with an instance will cost you $0.01 per hour.
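To make the lifecycle concrete, here is a toy in-memory model of the five functions named above. The real calls are EC2 API requests; the class, the address pool, and the instance ID below are made up purely for illustration:

```python
# Toy model of the Elastic IP lifecycle: allocate -> associate ->
# disassociate -> release, mirroring the function names in the EC2 API.
class ElasticIPs:
    def __init__(self):
        self.addresses = {}  # ip -> instance_id, or None if unassociated
        self._pool = iter(["198.51.100.1", "198.51.100.2"])  # fake address pool

    def allocate_address(self):
        ip = next(self._pool)      # attach a new IP to the account
        self.addresses[ip] = None  # allocated but not yet associated
        return ip

    def associate_address(self, ip, instance_id):
        self.addresses[ip] = instance_id  # point the IP at a running instance

    def disassociate_address(self, ip):
        self.addresses[ip] = None  # IP stays on the account, unmapped

    def release_address(self, ip):
        del self.addresses[ip]     # detach the IP from the account entirely

    def describe_addresses(self):
        return dict(self.addresses)

eips = ElasticIPs()
ip = eips.allocate_address()
eips.associate_address(ip, "i-12345")
print(eips.describe_addresses())  # {'198.51.100.1': 'i-12345'}
```

Note the key property the model captures: disassociating an address does not give it up. It remains yours (at $0.01 per hour while unassociated) until you explicitly release it, which is what lets you remap a stable public IP to a replacement instance.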


Availability Zones give you additional control over where your EC2 instances run. We use a two-level model which consists of geographic regions broken down into logical zones. Each zone is designed to be insulated from failures that might affect other zones within the region. By running your application across multiple zones within a region, you can protect yourself from zone-level failures.

The new DescribeAvailabilityZones function returns a list of availability zones along with the status of each zone. The existing RunInstances function has been enhanced to accept an optional placement parameter. Passing the name of an availability zone will force EC2 to run the new instances in the named zone. If no parameter is supplied, EC2 will assign the instances to any available zone.
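One simple way to use the placement parameter for fault tolerance is to spread a fleet across zones in round-robin order. This sketch is an assumption about how you might drive RunInstances, not part of the API itself; the zone names are illustrative (DescribeAvailabilityZones returns the real list for your account):

```python
from itertools import cycle

def place_instances(count, zones):
    """Assign `count` instances to zones in round-robin order,
    so no zone carries more than one instance above any other."""
    assignment = {zone: 0 for zone in zones}
    for _, zone in zip(range(count), cycle(zones)):
        assignment[zone] += 1
    return assignment

zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
print(place_instances(8, zones))
# {'us-east-1a': 3, 'us-east-1b': 3, 'us-east-1c': 2}
```

Each entry in the resulting assignment would become one or more RunInstances calls with the corresponding zone passed as the placement parameter; losing any single zone then costs you at most a third of the fleet.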


Finally, the User Selectable Kernels feature allows users to run a kernel other than the default EC2 kernel. Anyone can run a non-default kernel, but the ability to create new kernels is currently restricted to Amazon and select vendors. This feature introduces a new term, the AKI or Amazon Kernel Image. The AKI can be specified at instance launch time using another new parameter to RunInstances, or it can be attached to an AMI (Amazon Machine Image) as part of the image bundling process.

We are also rolling out 32 and 64 bit versions of Linux kernel version 2.6.18, all packaged up as AKIs and ready to run. And there’s a new 32 bit Fedora Core 6 AMI and both 32 and 64 bit versions of Fedora Core 8.


The developers at RightScale are already supporting these new features in the  free version of their RightScale platform. They’ve also assembled three very informative blog posts.

The first post covers DNS and Elastic IPs and how they come into play when upgrading a server. One sentence from this post really captures the essence of cloud computing as applied to the upgrade process:

The power of the cloud is that we don't need to touch our existing web server and risk causing damage during the upgrade process. Instead, we launch a second web server and install the new release on it.

The second post reviews the process of setting up a fault-tolerant site using Availability Zones. It describes two different ways to create a redundant architecture, with the ability to load balance traffic across zones or to fail over to a second zone when the first one fails. When that happens, redundancy can be re-established by bringing another set of instances to life in yet another zone. As they note:

If you have never tried to set something like this up yourself, starting from renting colo space and purchasing bandwidth to buying and installing servers, you really can't appreciate the amount of capital expense, time, headache, and ongoing expense saved by EC2's features! And best of all, using RightScale it's just a couple of clicks away :-).

Finally, the third post announces their support for the new Elastic IP and Availability Zone features. You'll need to read the entire post, but they are pretty excited by the opportunities that this new set of features opens up:

What's really exciting is that the combination of Elastic IPs and Availability Zones brings cloud computing to a different level. In the above example, when the app servers get relaunched in a new zone, EC2 allows the elastic IPs that were associated with the app servers to be reassigned from the old servers in the failed zone to the new ones. So now traffic doesn't just get routed to new instances, it actually gets routed to a different datacenter. From the outside this may seem straightforward, but in reality the degree of engineering necessary to support this type of technical feature is quite staggering.

We’re looking forward to hearing from more developers and system architects as they engineer these new features into their systems. As always, drop me a note if you have done something that you’d like us to cover in this blog.

— Jeff;

Increasing Your Amazon EC2 Instance Limit

We have simplified the process of requesting additional EC2 instances. You no longer need to call me at home or send a box of dog biscuits to Rufus.

You can now make a request by simply filling out the Request to Increase the Amazon EC2 Instance Limit form. We’ll need to know a little bit about you and about your application and the number of instances that you need, and we’ll take care of the rest.

As always, if you are doing something cool with EC2, we really want to hear about it! Write a blog post that we can link to, or simply send us an email.

— Jeff;