Category: Amazon EC2


Now Available: Amazon EC2 Running Red Hat Enterprise Linux

We continue to add options to AWS in order to give our customers the freedom and flexibility that they need to build and run applications of all different shapes and sizes.

I’m pleased to be able to tell you that you can now run Red Hat Enterprise Linux on EC2 with support from Amazon and Red Hat. You can now launch 32- and 64-bit instances in every AWS Region and on every EC2 instance type. You can choose between versions 5.5, 5.6, 6.0, and 6.1 of RHEL. You can also launch AMIs right from the AWS Console’s Quick Start Wizard. Consult the full list of AMIs to get started.
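If you prefer to script your launches, here’s a rough sketch using the Python boto library (not shown in the original post); the region, AMI ID, and key pair name are placeholders, so substitute values from the RHEL AMI list:

```python
import boto.ec2

# Connect to the Region of your choice (placeholder shown).
conn = boto.ec2.connect_to_region('us-east-1')

# Launch a single RHEL instance; 'ami-xxxxxxxx' stands in for an ID
# taken from the published list of RHEL AMIs.
reservation = conn.run_instances(
    'ami-xxxxxxxx',
    instance_type='m1.large',
    key_name='my-key-pair')

print(reservation.instances[0].id)
```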

If you are a member of Red Hat’s Cloud Access program you can use your existing licenses. Otherwise, you can run RHEL on On-Demand instances now, with Spot and Reserved Instances planned for the future. Pricing for On-Demand instances is available here.

All customers running RHEL on EC2 have access to an update repository operated by Red Hat. AWS Premium Support customers can contact AWS to obtain support from Amazon and Red Hat.

— Jeff;

 

Live Streaming With Amazon CloudFront and Adobe Flash Media Server

You can now stream live audio or video through AWS with the Adobe Flash Media Server, using a cost-effective pay-as-you-go model that makes use of Amazon EC2, Amazon CloudFront, and Amazon Route 53, all configured and launched via a single CloudFormation template.

We’ve used AWS CloudFormation to make the signup and setup process as simple and straightforward as possible. The first step is to actually sign up for AWS CloudFormation. This will give you access to all of the AWS services supported by AWS CloudFormation, but you’ll pay only for what you use.

I’ve outlined the major steps needed to get up and running below. For more information, you’ll want to consult our new tutorial, Live Streaming Using Adobe Flash Media Server and Amazon Web Services.

Once you’ve signed up, you need to order Flash Media Server for your AWS Account by clicking here. After logging in, you can review the subscription fee and other charges before finalizing your order:

Then you need to create a Route 53 hosted zone and an EC2 key pair. The tutorial includes links to a number of Route 53 tools and you can create the key pair using the AWS Management Console.
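If you’d rather script these two steps, here’s a minimal sketch using the Python boto library; the domain name, key pair name, and region are placeholders:

```python
import boto.ec2
from boto.route53.connection import Route53Connection

# Create the Route 53 hosted zone that will hold the CNAME for the stream.
route53 = Route53Connection()
route53.create_hosted_zone('streaming.example.com.')

# Create an EC2 key pair and save the private key locally.
ec2 = boto.ec2.connect_to_region('us-east-1')
key_pair = ec2.create_key_pair('live-streaming')
key_pair.save('.')
```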

The next step is to use CloudFormation to create a Live Streaming stack. As you’ll see in the documentation, this step makes use of a new feature of the AWS Management Console. It is now possible to construct a URL that will open up the console with a specified CloudFormation template selected and ready to use. Please feel free to take a peek inside the Live Streaming Template to see how it sets up all of the needed AWS resources.

When you initiate the stack creation process you’ll need to specify a couple of parameters:

Note that you’ll need to specify the name of the Route 53 hosted domain that you set up earlier in the process so that it can be populated with a DNS entry (a CNAME) for the live stream.
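As a rough illustration, the same stack creation can also be done programmatically with the Python boto library; the stack name, template URL, and parameter keys below are placeholders rather than the actual values expected by the Live Streaming template:

```python
import boto.cloudformation

cfn = boto.cloudformation.connect_to_region('us-east-1')

# The parameter keys here are illustrative; check the Live Streaming
# template itself for the exact parameters it expects.
cfn.create_stack(
    'LiveStreamingDemo',
    template_url='https://s3.amazonaws.com/my-bucket/live-streaming.template',
    parameters=[
        ('InstanceType', 'm1.large'),
        ('KeyName', 'live-streaming'),
        ('HostedZone', 'streaming.example.com'),
    ])
```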

The CloudFormation template will create and connect up all of the following:

  • An EC2 instance of the specified instance type running the appropriate Flash Media Server AMI and accessible through the given Key Pair. You can, if desired, log in to the instance using the SSH client of your choice.
  • An EC2 security group with ports 22, 80, and 1935 open.
  • A CloudFront distribution.
  • An A record and a CNAME in the hosted domain.

The template will produce the URL of the live stream as output:

The resulting architecture looks like this:

The clients connect to the EC2 instance every 4 seconds to retrieve the manifest.xml file. This interval is specified in the template and can be modified as needed. You have complete access to the Flash Media Server and you can configure it as desired.

Once you’ve launched the Flash Media Server, you can install and run the Flash Media Live Encoder on your desktop, connect it up to your video source, and stream live video to your heart’s content. After you are done, you can simply delete the entire CloudFormation stack to release all of the AWS resources. In fact, you must do this in order to avoid on-going charges for the AWS resources.

The CloudFormation template specifies the final customizations to be applied to the AMI at launch time. You can easily copy and then edit the script if you need to make low-level changes to the running EC2 instance.

As you can see, it should be easy for you to set up and run your own live streams using the Adobe Flash Media Server and AWS if you start out with our tutorial. What do you think?

Update: The newest version of CloudBerry Explorer includes support for this new feature. Read their blog post to learn more.

— Jeff;

AWS Management Console Bookmarking

We’ve added a new bookmarking feature to the AWS Management Console. You can now construct a URL that will open the console with a specific AMI (Amazon Machine Image) or CloudFormation Template selected and ready to launch.

EC2 AMI Launch
The URL to open up the console with a particular AMI selected looks like this:

  • https://console.aws.amazon.com – References the console
  • /ec2/home – Specifies the EC2 tab
  • ?region=us-west-1 – Specifies the region
  • #launchAmi=ami-3bc9997e – Specifies the AMI
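Here’s a tiny illustrative Python helper that assembles such a bookmark from its parts:

```python
def ami_launch_url(region, ami_id):
    """Build a console bookmark that opens the EC2 launch wizard
    with the given AMI selected."""
    return ('https://console.aws.amazon.com/ec2/home'
            '?region={0}#launchAmi={1}'.format(region, ami_id))

print(ami_launch_url('us-west-1', 'ami-3bc9997e'))
```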

If you create AMIs and share them with others, this is an easy way to pass references around so that they can be launched with ease. When the link is activated the console will start as follows (prompting for email address and password if necessary):

The developers at BitNami have already made use of this feature to link directly to their AMIs. For example, here’s their page of Magento AMIs:

Ubuntu AMIs are also available with a click:

The Cloud Market also supports this new feature.

CloudFormation Stack Create
The URL to open up the console to the CloudFormation tab with a particular template selected looks like this:

  • https://console.aws.amazon.com – References the console
  • /cloudformation/home – Specifies the CloudFormation tab
  • ?region=us-east-1 – Specifies the region
  • #cstack= – Specifies that stack information follows
  • sn~PHPSample – Sets the name of the stack
  • turl~https://s3.amazonaws.com/cloudformation-templates-us-east-1/PHPHelloWorld-1.0.0.template – Specifies the link to the template
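Here’s a similar illustrative Python helper for stack-creation bookmarks. Note that the '|' used to separate the sn~ and turl~ fields is an assumption on my part; compare the output against a bookmark produced by the console before relying on it:

```python
def cfn_stack_url(region, stack_name, template_url):
    """Build a console bookmark that opens the CloudFormation tab with
    a stack name and template preselected."""
    # The '|' separator between the sn~ and turl~ fields is assumed.
    return ('https://console.aws.amazon.com/cloudformation/home'
            '?region={0}#cstack=sn~{1}|turl~{2}'.format(
                region, stack_name, template_url))

print(cfn_stack_url(
    'us-east-1', 'PHPSample',
    'https://s3.amazonaws.com/cloudformation-templates-us-east-1/'
    'PHPHelloWorld-1.0.0.template'))
```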

In this case the console will appear as follows:

We have used this new bookmarking feature to set up a directory of CloudFormation Sample Templates. You can browse the directory, find the desired template, and then initiate the stack creation process with a single click.

The Console team is interested in your suggestions for additional types of bookmarks. Please feel free to leave comments to this post and I’ll pass them along.

— Jeff;

My EC2 Instance – The First 1000 Days

I launched my first “production” EC2 instance almost three years ago, on July 15, 2008. For my purposes, production includes hosting my personal blog, writing code for my AWS book, and serving as a host for the random development projects that I putter around with from time to time.

I am happy to report that my instance reached 1000 days of uptime over the weekend:

One of these days I’ll upgrade to a more modern instance (this one predates EBS) but I’m still quite happy with this one and I’ll keep it running as long as possible.

EC2 has certainly come a long way in just 1000 days. Here are some of the highlights:

— Jeff;

 

Amazon EC2 Cluster Instances Available on Spot Market

Today we are coupling two popular aspects of Amazon EC2: Cluster computing and Spot Instances!

More and more of our customers are finding innovative ways to use EC2 Spot Instances to save up to two-thirds off the On-Demand price. Batch processing, media rendering and transcoding, grid computing, testing, web crawling, and Hadoop-based processing are just a handful of the use cases that are running on Spot today.

For example, researchers at the University of Melbourne and the University of Barcelona are doing vast amounts of data processing for their Belle particle physics experiments on EC2 Spot Instances and realizing a cost savings (when compared to the price of On-Demand Instances) of 56% in the process. Each job starts out small (15-20 EC2 instances) and then scales up to between 20 and 250 instances in the space of four hours. Read more in our new case study.

Scribd has also made very good use of EC2 Spot Instances. As described in the case study, they were able to save 63% (or $10,500) on a large-scale data conversion (from Flash to HTML5) running on over 2,000 EC2 instances at a time. They converted every one of the millions of documents that have been uploaded to the site to HTML5 using a scalable grid made up of a single master node and multiple slave nodes.

At the same time, our customers have been making really good use of our Cluster Compute and Cluster GPU instances. We’ve seen interesting use cases in a number of fields including molecular dynamics, fluid dynamics, bioinformatics, batch data processing, MapReduce, machine learning, and media rendering. The applications use a variety of coordination strategies and coupling models, ranging from fairly loose to very tight.

The folks at Cycle Computing documented their cluster-building experience in a very informative blog post. They used Cluster GPU instances to create a 32-node, 64-GPU cluster that also includes 8 TB of shared storage. The entire cluster costs less than $82 per hour to operate. They have found that the GPU accelerates overall application performance by a factor of 50 to 60 and note that their success rate in moving internal applications to the GPU is 100%.

Bioproximity provides proteomic analytical services (in plain English, they study protein at the structural and functional level) on a contract basis. In order to do this they need lots of compute power and storage space. Lacking the funds to set up their own compute cluster, they found the AWS pay-as-you-go model to be a perfect fit for their business. They run a large-scale MPI cluster on EC2 with a web-based front end for job submission. Read more in the Bioproximity case study.

On the rendering side, our friends at Animoto have used the Cluster GPU instances to accelerate their video rendering process. The increased throughput allows them to deliver videos more quickly (seconds instead of minutes) and also gives them the ability to support full-on HD video. This article has more information about Animoto and their use of EC2 to generate professional-quality video.

At the same time, our customers are finding innovative ways to use the EC2 Spot Instances to get work done in an economical way.

Effective immediately, you can now use these two features together — you can now submit spot requests for Cluster Compute and Cluster GPU Instances. These instances are currently available in a pair of Availability Zones in the US East (Northern Virginia) Region. You can choose between SUSE Linux Enterprise Server and Amazon Linux AMIs, both of which are now available in HVM form.

You can request the instances using the EC2 Command Line tools, the EC2 APIs, or the AWS Management Console:
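As an illustration, a Spot request for a Cluster Compute instance might look like this with the Python boto library; the bid price, AMI ID, and key pair name are placeholders, not recommendations:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Bid for a single cc1.4xlarge Spot Instance using one of the HVM AMIs.
requests = conn.request_spot_instances(
    price='0.80',                 # placeholder bid, in dollars per hour
    image_id='ami-xxxxxxxx',      # placeholder HVM AMI ID
    count=1,
    instance_type='cc1.4xlarge',
    key_name='my-key-pair')

print(requests[0].id)
```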

We’re looking forward to seeing the new and interesting ways that our customers will use Spot pricing and Cluster Compute instances, alone or (preferably!) together. Here are some of the application areas that should be a good fit:

  • Batch and background processing.
  • Web and data crawling.
  • Financial modeling and analytics.
  • MapReduce and Grid computing.
  • Video processing, especially transcoding.

What can you do with this new combination of features?

— Jeff;

 

Adding a Second AWS Availability Zone in Tokyo

Our hearts go out to those who have suffered through the recent events in Japan. I was relieved to hear from my friends and colleagues there in the days following the earthquake. I’m very impressed by the work that the Japan AWS User Group (JAWS) has done to help some of the companies, schools, and government organizations affected by the disaster to rebuild their IT infrastructure.

We launched our Tokyo Region with a single Availability Zone (“AZ”) about a month ago. At that time we said we would be launching a second Tokyo AZ soon. After a very thorough review of our primary and backup power supplies, we have decided to open up that second Availability Zone, effective today.

As you may know, AWS is currently supported in five separate Regions around the world: US East (Northern Virginia), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Each Region is home to one or more Availability Zones. Each Availability Zone in a Region is engineered to be operationally independent of the other Zones, with independent power, cooling, physical security, and network connectivity. As a developer or system architect, you have full control over the Regions and Availability Zones that your application uses.

A number of our customers are already up and running in Tokyo and have encouraged us to open up the second Availability Zone so that they can add fault tolerance by running in more than one AZ. For example, with the opening of the second AZ, developers can use the Amazon Relational Database Service (RDS) in Multi-AZ mode (see my blog post for more information about this), or load balance between web servers running on Amazon EC2 in both AZs.

— Jeff;

PS – We continue to monitor the power situation closely. The AWS Service Health Dashboard is the best place to go for information on any possible service issues.

Amazon EC2 Dedicated Instances

We continue to listen to our customers, and we work hard to deliver the services, features, and business models based on what they tell us is most important to them. With hundreds of thousands of customers using Amazon EC2 in various ways, we are able to see trends and patterns in the requests, and to respond accordingly. Some of our customers have told us that they want more network isolation than is provided by “classic EC2.”  We met their needs with Virtual Private Cloud (VPC). Some of those customers wanted to go even further. They have asked for hardware isolation so that they can be sure that no other company is running on the same physical host.

We’re happy to oblige!

Today we are introducing a new EC2 concept: the Dedicated Instance. You can now launch Dedicated Instances within a Virtual Private Cloud on single-tenant hardware. Let’s take a look at the reasons why this might be desirable, and then dive into the specifics, including pricing.

Background
Amazon EC2 uses a technology commonly known as virtualization to run multiple operating systems on a single physical machine. A host operating system, commonly known as a hypervisor, ensures that each guest operating system receives its fair share of CPU time, memory, and I/O bandwidth to the local disk and to the network. The hypervisor also isolates the guest operating systems from each other so that one guest cannot modify or otherwise interfere with another one on the same machine. We currently use a highly customized version of the Xen hypervisor. As noted in the AWS Security White Paper, we are active participants in the Xen community and track all of the latest developments.

While this logical isolation works really well for the vast majority of EC2 use cases, some of our customers have regulatory or other restrictions that require physical isolation. Dedicated Instances have been introduced to address these requests.

The Specifics

Each Virtual Private Cloud (VPC) and each EC2 instance running in a VPC now has an associated tenancy attribute. Leaving the attribute set to the value “default” specifies the existing behavior: a single physical machine may run instances launched by several different AWS customers.

Setting the tenancy of a VPC to “dedicated” when the VPC is created will ensure that all instances launched in the VPC will run on single-tenant hardware. The tenancy of a VPC cannot be changed after it has been created.

You can also launch Dedicated Instances in a non-dedicated VPC by setting the instance tenancy to “dedicated” when you call RunInstances. This gives you a lot of flexibility; you can continue to use the default tenancy for most of your instances, reserving dedicated tenancy for the subset of instances that have special needs.
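Here’s a rough sketch of both approaches using the Python boto library; the CIDR blocks, AMI ID, and subnet ID are placeholders, and the tenancy arguments assume a boto version that has picked up Dedicated Instance support:

```python
import boto
import boto.ec2

# Approach 1: create a VPC whose instances all run on dedicated hardware.
vpc_conn = boto.connect_vpc()
vpc = vpc_conn.create_vpc('10.0.0.0/16', instance_tenancy='dedicated')
vpc_conn.create_subnet(vpc.id, '10.0.0.0/24')

# Approach 2: launch a single Dedicated Instance into an existing
# default-tenancy VPC subnet by overriding the tenancy at launch time.
ec2 = boto.ec2.connect_to_region('us-east-1')
ec2.run_instances(
    'ami-xxxxxxxx',               # placeholder AMI ID
    instance_type='m1.large',
    subnet_id='subnet-xxxxxxxx',  # placeholder subnet in an existing VPC
    tenancy='dedicated')
```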

This is supported for all EC2 instance types with the exception of Micro, Cluster Compute, and Cluster GPU.

It is important to note that launching a set of instances with dedicated tenancy does not in any way guarantee that they’ll share the same hardware (they might, but you have no control over it). We actually go to some trouble to spread them out across several machines in order to minimize the effects of a hardware failure.

Pricing
When you launch a Dedicated Instance, we can’t use the remaining “slots” on the hardware to run instances for other AWS users. Therefore, we incur an opportunity cost when you launch a single Dedicated Instance. Put another way, if you run one Dedicated Instance on a machine that can support 10 instances, 9/10ths of the potential revenue from that machine is lost to us.

In order to keep things simple (and to keep you from wasting your time trying to figure out how many instances can run on a single piece of hardware), we add a $10/hour charge whenever you have at least one Dedicated Instance running in a Region. Figured on a per-instance basis, this charge asymptotically approaches $0 for customers that run hundreds or thousands of Dedicated Instances in a Region; with 100 Dedicated Instances running, for example, it works out to $0.10 per instance per hour.

We also add a modest premium to the On-Demand pricing for the instance to represent the added value of being able to run it in a dedicated fashion. You can use EC2 Reserved Instances to lower your overall costs in situations where at least part of your demand for EC2 instances is predictable.

— Jeff;

 

Build a Cluster Computing Environment in Under 10 minutes

We’ve created a new video tutorial which describes how to set up a cluster of high performance compute nodes in under 10 minutes. Follow along with the tutorial to get a feel for how to provision high performance systems with Amazon EC2 – we’ll even cover the cost of the resources you use through a $20 free service credit.

Why HPC?

Data is at the heart of many modern businesses. The tools and products that we create in turn generate complex datasets which are increasing in size, scope, and importance. Whether we are looking for meaning within the bases of our genomes, performing risk assessments on the markets, or reporting on click-through traffic from our websites, these data hold valuable information which can drive the state of the art forward.

Constraints are everywhere when dealing with data and its associated analysis, but few are as restrictive as the time and effort it takes to procure, provision and maintain the high performance compute servers which drive that analysis.

The cluster compute instance sizes available on Amazon EC2 can greatly reduce this constraint, and give you the freedom to run high-specification analyses on demand, as and when you need them. Amazon EC2 takes care of provisioning and monitoring your compute cluster and storage, leaving you more time to dive into your data.

A guided tour

To demonstrate the agility this approach provides, I made a short video tutorial which guides you through how to provision, configure and run a tightly coupled molecular dynamics simulation using cluster compute instances. The whole cluster is up and running in under 10 minutes.

Start the tutorial!

To help get a feel for this environment, we’re also providing $20 of service credits (enough to cover the cost of the demo), so you can follow along with this tutorial for free. To register for your free credits, just follow the link on the tutorial page.

In addition to getting up and running quickly, the cluster compute instances are no slouches either. They use hardware virtualisation to allow your code to get closer to the dual quad-core Nehalem processors, and full-bisection 10 Gbps networking provides high speed communication between instances. Multi-core GPUs are also available – a perfect fit for large scale computational simulation or rendering.
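If you’d rather script the provisioning shown in the video, a minimal sketch with the Python boto library looks like this; the AMI ID and key pair name are placeholders, and the AMI must be an HVM image that supports the cc1.4xlarge instance type:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Cluster instances are launched into a placement group so that they
# get the full-bisection 10 Gbps networking between nodes.
conn.create_placement_group('md-cluster', strategy='cluster')

conn.run_instances(
    'ami-xxxxxxxx',              # placeholder: an HVM cluster AMI
    min_count=8, max_count=8,    # an eight-node cluster
    instance_type='cc1.4xlarge',
    placement_group='md-cluster',
    key_name='my-key-pair')
```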

Just as in other fields, cloud infrastructure can help reduce the ‘muck’ and greatly lower the barrier to entry associated with high performance computing. We hope this short video will give you a flavour for things.

Get in touch

Feel free to drop me a line if you have any questions, or you can follow along on Twitter. I also made a longer form video, which includes a wider discussion on high performance computing with Amazon EC2.

~ Matt

Updated Amazon Linux AMI (2011.02) Released

We released an updated version of the Amazon Linux AMI earlier this week. It is available in all AWS Regions for all instance types.

Here’s what’s new:

  • Default compiler upgraded from GCC 4.1 to GCC 4.4.
  • The AMI kernel is now based on the 2.6.35.11 release.
  • An HVM AMI was released to support the Cluster Compute (cc1.4xlarge) and Cluster GPU (cg1.4xlarge) instance types.
  • Default filesystem type for the AMI root filesystem has been changed from ext3 to ext4.
  • The Amazon Linux AMI now uses upstart instead of sysvinit when booting.
  • The default Yum configuration on Amazon Linux AMI enables fail-over access to neighboring regions in case the repository in the local region is not accessible.

There’s more information and a complete list of new and updated packages in the Amazon Linux Release Notes.

 — Jeff;

 

Now Available: Windows Server 2008 R2 on Amazon EC2

Today we are adding new options for our customers running Windows and SQL Server environments on Amazon EC2. In addition to running Windows Server 2003 and 2008, you can now run Windows Server 2008 R2. Sharing its kernel with Windows 7, this release of Windows includes additional Active Directory features, support for version 7.5 of IIS, new management tools, reduced boot time, and enhanced I/O performance. We are also adding support for SQL Server 2008 R2, and we are introducing Reserved Instances for SQL Server.

You can now launch instances of Windows Server 2008 R2 in four different flavors:

  • Core – A scaled-down version of Windows Server, with the minimum set of server roles.
  • Base – A basic installation of Windows Server 2008 R2.
  • Base with IIS and SQL Server Express – A starting point for Windows developers.
  • SQL Server Standard 2008 R2 – Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition installed.

Here are the details:

  • All of these AMIs are available for immediate use in every Region and on most 64-bit instance types, excluding the t1.micro and Cluster Compute families.
  • We plan to add support for running Windows Server 2008 R2 in the Amazon Virtual Private Cloud (VPC).
  • The AMIs support English, Italian, French, Spanish, German, Traditional Chinese, Korean, and Japanese. The languages are supported only within the applicable regions — European languages in the EU and Asian languages in Singapore and Tokyo.
  • Windows Server 2008 R2 is available at the same price as previous versions of Windows on EC2. Reserved Instances and Spot Instances are also available.

Update: You can use the AWS VM Import feature to bring existing virtual machines to EC2. VM Import has been updated and now supports the 64-bit Standard, Datacenter, and Enterprise editions of Windows Server 2008 R2.

To get started, you can visit the Windows section of the AMI catalog or select “Windows 2008 R2” in the Quick Start menu when you launch a new instance. Microsoft has also posted additional Amazon Machine Images with Windows 2008 R2 in the Windows section of the AMI Catalog.
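If you’d like to locate the AMIs programmatically, here’s a hedged sketch using the Python boto library; the name pattern is a guess at how the Amazon-published images are labeled, so verify it against the catalog before depending on it:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# The '*2008-R2*' name filter is an assumption about the image naming
# convention; adjust it after inspecting the AMI catalog.
images = conn.get_all_images(
    owners=['amazon'],
    filters={'platform': 'windows', 'name': '*2008-R2*'})

for image in images:
    print('{0}  {1}'.format(image.id, image.name))
```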

 

I look forward to hearing from you as you put Windows 2008 R2 to use. Leave a comment or send email to awseditor@amazon.com.

— Jeff;