Category: Amazon EC2

AWS Direct Connect

by Jeff Barr | on | in Amazon EC2, Amazon S3 |

The new AWS Direct Connect service allows enterprises to create a connection to an AWS Region via a dedicated network circuit. In addition to enhancing privacy, dedicated circuits will generally result in more predictable data transfer performance and will also increase bandwidth between your data center and AWS. Additionally, users of dedicated circuits will frequently see a net reduction in bandwidth costs.

AWS Direct Connect is available today at a single location, Equinix's Ashburn, Virginia colocation facility. From this location, you can connect to services in the AWS US-East (Virginia) region. Additional AWS Direct Connect locations are planned for San Jose, Los Angeles, London, Tokyo, and Singapore in the next several months.

There are two ways to get started:

  • If you already have your own hardware in an Equinix data center in Ashburn, Virginia, you can simply ask Equinix to create a cross-connect from your network to ours. They can generally set this up in 72 hours or less.
  • If you don’t have hardware in this data center, you can work with one of the AWS Direct Connect solution providers (our initial list includes AboveNet, Equinix, and Level 3) to procure a circuit to the same datacenter or obtain colocation space. If you procure a circuit, the AWS Direct Connect solution provider will take care of the cross-connect for you.

You can select 1 Gbit or 10 Gbit networking for each connection, and you can create multiple connections for redundancy if you’d like. Each connection can be used to access all AWS services. It can also be used to connect to one or more Virtual Private Clouds.

Billing will be based on the number of ports and the speed of each one. Data transfer out of AWS across the circuit will be billed at $0.02 / GB (2 cents per GB). There is no charge for data transfer in to AWS.
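As a rough illustration of how the data transfer portion of the bill adds up, here is a small Python sketch. The $0.02/GB outbound rate and free inbound transfer come from this post; the traffic volumes are made-up examples, and the port fees are excluded since per-port pricing isn't quoted here.

```python
# Sketch: estimating the data transfer portion of a monthly
# AWS Direct Connect bill, using the rates quoted in this post.

DATA_OUT_PER_GB = 0.02   # USD per GB transferred out of AWS
DATA_IN_PER_GB = 0.00    # transfer in to AWS is free

def monthly_transfer_cost(gb_out, gb_in):
    """Return the data transfer charges (USD) for one month."""
    return gb_out * DATA_OUT_PER_GB + gb_in * DATA_IN_PER_GB

# Example: 10 TB out and 25 TB in per month
print(monthly_transfer_cost(10_000, 25_000))  # 200.0
```

Note that inbound volume never contributes to the total, which is why Direct Connect is attractive for bulk data migration into AWS.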

I expect to see AWS Direct Connect used in a number of different scenarios. Here are a few of them:

  • Data Center Replacement – Migrate an existing data center to AWS and then use Direct Connect to link AWS to the corporate headquarters using a known private connection.
  • Custom Hosting – Place some custom network or storage devices in a facility adjacent to an AWS Region, and enjoy high bandwidth low latency access to the devices from AWS.
  • High Volume Data Transfer – Move extremely large amounts of data in and out of HPC-style applications.

In order to make the most of a dedicated high speed connection, you will want to look at a category of software often known as WAN optimization (e.g. Riverbed's Cloud Steelhead) or high speed file transfer (e.g. Aspera's On-Demand Direct for AWS). Late last month I saw a demonstration from Aspera. They showed me that they were able to achieve 700 Mbps of data transfer across a 1 Gbps line. At that rate, they can transfer 5 terabytes of data to AWS in about 17 hours.
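A quick back-of-the-envelope check of the Aspera figures (treating the 5 terabytes as binary terabytes, which lines up with the quoted 17 hours):

```python
# Sanity check: how long does 5 TB take at a sustained 700 Mbps?

def transfer_hours(terabytes, mbps):
    bits = terabytes * 8 * 1024**4        # TB interpreted as tebibytes
    seconds = bits / (mbps * 1_000_000)   # Mbps is decimal megabits/sec
    return seconds / 3600

print(round(transfer_hours(5, 700), 1))   # 17.5
```

With decimal terabytes the same math gives roughly 15.9 hours, so the quoted figure assumes the binary interpretation (or some protocol overhead).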

— Jeff;


AWS Summer Startups: Mediology

by Jeff Barr | on | in Amazon CloudFront, Amazon EC2, Amazon RDS, Amazon S3, Amazon SDB, Amazon SES, Amazon Simple Notification Service, APAC, Asia Pacific, AWS Identity and Access Management, Summer Startups |

Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today, we’re profiling Mediology Software from India!

The Story
2010 was the first year we allowed start-ups from the Asia Pacific region to enter the start-up challenge. We were very impressed with the quality of the entries, and one in particular, Mediology Software, caught our eye and made it to the final round in Palo Alto.

Mediology Software is a one-year-old start-up based in India and employing 35 people. Mediology DigitalEdition, their main product, is a SaaS platform that enables print publishers to digitize their content, add interactivity, create workflows, and then distribute the content via web, mobile, and e-reading platforms. The system achieves its massive scale for content digitization and delivery using event-centric cloud computing services from AWS.

As an example of the type of work Mediology does, I encourage you to take a look at the case study we recently published, describing how AWS and Mediology teamed up to help CozyCot, a website geared to East Asian and South Asian women on a wide range of topics including family, health, and beauty. In addition to offering CozyCot a better website hosting and scaling solution through the AWS infrastructure, Mediology has helped them distribute and promote their content through a wide variety of platforms, increasing CozyCot’s bottom line.

From the Founders
I caught up with Manish Dhingra and Gaurav Bhatnagar, co-founders of Mediology Software, a few days ago, as I was checking on how they’re doing almost a year after being named finalists in the AWS Start-up Challenge.

Since January 2011, we have had some high-profile launches on our DigitalEdition platform. Naturally, the usage of AWS, not just in terms of instance volume but across the set of AWS services, has enabled us to create a very scalable, yet cost-effective architecture. We’re 100% built on and reliant on AWS. For instance, we use EC2, CloudFront, S3, SES, SimpleDB, RDS, SNS, CloudWatch and IAM, all orchestrated together to enable our SaaS platform, Mediology DigitalEdition.

How Has the AWS Start-up Challenge Helped Mediology?
I asked Manish to tell me how the AWS Start-up Challenge has helped their business. Here’s what he told me:

Consumer and customer confidence in our solution has definitely taken a giant leap since we returned from Palo Alto in December 2010. Although that has also led to higher expectations, our grasp of AWS has enabled us to meet those expectations quite easily.

Sharing the Wisdom with other Asia-Pacific Start-ups:

AWS gives you the ability to enable application or solution heavy-lifting. We believe Asia is a growth market and many new age concepts around value-based computing, value-added services (specifically around mobile, which works on the core tenets of SaaS and SOA) will find great traction here. 

The key is to not get fazed during the stealth and growth stages of your start-up. Think of AWS as something that gives the wings to your creativity and enables very effective working-capital utilization. In fact, if the pricing benefits are passed on to the consumers, then there is a great chance of leveling the playing field and being the best at what you do, without compromising on the bottom line.

The AWS Startup Challenge
We’re getting ready to launch this year’s edition of our own annual contest, the AWS Startup Challenge. You can sign up to get notified when we launch it, or you can follow @AWSStartups on Twitter.

— Simone;

Summer Startups: Sportaneous

by Jeff Barr | on | in Amazon EC2, Amazon Elastic Load Balancer, Amazon RDS, Customer Success, Summer Startups |

Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today, we’re profiling Sportaneous from New York City!

The Story
I first learned about Sportaneous after reading about NYC BigApps, an application contest launched by the city of New York and organized under Mayor Bloomberg. Its goal is to reward apps that improve NYC by using public data sets released by the local government.

Sportaneous jumped at the opportunity to enter the contest because their application already offered users a database of public sports facilities to choose from, many of which are obtained from Parks & Recreation data. Sportaneous makes it easy for busy people to play sports or engage in group fitness activities. Through the Sportaneous mobile app and website, a person can quickly view all sports games and fitness activities that have been proposed in her surrounding neighborhoods. The user can choose to join whichever game best fits her schedule, location, and skill level. Alternatively, a Sportaneous user may spontaneously propose a game herself (for example, a beginner soccer game in Central Park three hours from now), which allows all Sportaneous users in the Manhattan area to join the game until the maximum number of players has been reached.

Of the more than 50 applications entered in the NYC BigApps competition, Sportaneous won two of the main awards: the Popular Vote Grand Prize, based on over 9,500 people voting for their favorite app in the contest, and the Second Overall Grand Prize, voted on by a panel of distinguished judges including Jack Dorsey (co-founder, Twitter), Naveen Selvadurai (co-founder, Foursquare), and prominent tech investors in NYC.

Here is a video of Sportaneous in action:

From the CEO
I spoke to Omar Haroun, CEO and Co-Founder at Sportaneous about how they got started and ended up using AWS. He shared a bit about their humble beginnings and how their growth plans continued to include AWS:

We initially bootstrapped the service using a single EC2 instance. We used an off-the-shelf AMI with a backing EBS volume, so we could fine-tune the machine’s configuration as we started seeing higher traffic numbers. We wanted a low cost, reliable hosting option which we knew had the ability to scale gracefully (and very quickly) when needed. EC2 allowed us to get up and running in a matter of hours, without forcing any design compromises which we’d later regret.

As traffic has grown and we’ve begun preparing for a public launch, we’re planning to move our MySQL databases to RDS and to take advantage of some additional ELB features (including SSL termination). RDS was also a no-brainer.  We realize that any data-loss event would be devastating to our momentum, but we don’t have the resources for a full time DBA (or even a database expert).  RDS and its cross-AZ replication takes a huge amount of pressure off of our shoulders.

Behind the Scenes
I asked Omar to tell me a bit about the technology behind Sportaneous. Here’s what he told me:

Our web app is written in Scala, using the very awesome Lift Framework. Our iPhone App is written in Objective-C.  Both web app and iPhone app are thin clients on top of a backend implemented in Java, using the Hibernate persistence framework. Our EC2 boxes (which serve both our web app and our backend) run Jetty behind nginx.

He wrapped up on a very positive note:

Using EC2 with off the shelf AMIs, we went from zero to scalable, performant web app in under two hours.

The AWS Startup Challenge
We’re getting ready to launch this year’s edition of our own annual contest, the AWS Startup Challenge. You can sign up to get notified when we launch it, or you can follow @AWSStartups on Twitter.

— Jeff;

EC2 Spot Pricing – Now Specific to Each Availability Zone

by Jeff Barr | on | in Amazon EC2 |

We have made an important change to the way pricing works for EC2 Spot Instances. We are replacing the original Region-based pricing information with more detailed information that is specific to each Availability Zone. This change ensures that both the supply (the amount of unused EC2 capacity) and the demand (the amount and number of bids for the capacity) reflect the status of a particular Availability Zone, enabling you to submit bids that are more likely to be fulfilled quickly and to use Spot Instances in more types of applications.

As you may know, Spot Instances allow you to bid for unused EC2 capacity, often allowing you to significantly reduce your Amazon EC2 bill. After you place a bid, your instances continue to run as long as the bid exceeds the current Spot Market price. You can also create persistent requests that will automatically be considered again for fulfillment if you are ever outbid.
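To make the bidding semantics concrete, here is a minimal sketch (plain Python, not AWS code) of how a Spot Instance behaves: it keeps running while your bid covers the market price, and each hour is charged at the market price, not at your bid.

```python
# Sketch of Spot Instance semantics: run while bid >= market price,
# pay the market price (not the bid) for each hour actually run.

def run_spot(bid, hourly_spot_prices):
    """Return (hours_run, total_cost) up to the first interruption."""
    hours, cost = 0, 0.0
    for price in hourly_spot_prices:
        if price > bid:          # outbid: the instance is interrupted
            break
        hours += 1
        cost += price            # charged the market price, not the bid
    return hours, cost

# The price spikes to $0.045 in hour three, exceeding the $0.035 bid,
# so the instance runs for two hours and pays the market rate for each.
prices = [0.030, 0.031, 0.045, 0.029]
print(run_spot(0.035, prices))
```

This is also why the high-bid strategy described later in this post (bidding well above the expected price to avoid interruption) doesn't inflate your costs: the bid only sets the interruption threshold, never the hourly charge.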

Over the last year and a half, our customers have successfully leveraged Spot Instances to obtain compute cycles at substantial discounts for use cases like batch processing, scientific research, image processing and encoding, data and web crawling, testing, and financial analysis. Here are some examples:

Social analytics platform BackType (full case study) uses Spot Instances to handle large-scale Hadoop-based batch data processing (tens of terabytes of data representing over 100 billion records). They have been able to reduce their costs by up to 66% when compared to On-Demand instances. Their Spot strategy includes placing high bids to reduce the chance of being interrupted (they pay the current price regardless of their bid, so this does not increase their operating costs).

Monitoring and load testing company BrowserMob (full case study) uses a combination of Spot and On-Demand instances to meet their capacity needs. Their provisioning system forecasts capacity needs 5 minutes ahead of time and submits suitably priced bids for Spot Instances, resorting to On-Demand instances as needed based on pricing and availability.

Biotechnology drug design platform Numerate (full case study) has incorporated Amazon EC2 as a production computational cluster and Amazon S3 for cache storage. Numerate enjoys around 50% cost savings by using Amazon EC2 Spot Instances after spending just 5 days of engineering effort.

Image rendering tool Litmus (full case study) takes snapshots of an email in various email clients and consolidates the images for their customers. Litmus enjoys a 57% cost savings by using Amazon EC2 Spot Instances for their compute needs.

When Spot Instances were first launched, there was a Spot Price for each EC2 instance type and platform (Linux/Unix or Windows) in each Region:

This model worked well but we think we can do even better based on your feedback. We have made some improvements to the Spot Instance model to make it easier for you to implement a cost-effective bidding strategy.

  1. The market price for each type of Spot Instance is now specific to an Availability Zone, not a Region.
  2. We will now publish historical pricing for each Availability Zone. This change will provide fine-grained data that you can use to determine a suitable bid for capacity in a particular Availability Zone. Because this will make a great deal of additional data available to you, we have made an API change to allow you to paginate the results of the DescribeSpotPriceHistory function.
  3. Spot requests that target a particular Availability Zone now have a greater chance of being fulfilled.
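The pagination added to DescribeSpotPriceHistory can be consumed with a simple cursor loop. The function below is a stub standing in for the real API call; its data and token format are invented for illustration, and the loop shape is the point:

```python
# Sketch: draining a paginated price-history API with a cursor loop.
# `describe_spot_price_history` is a stub, not a real AWS client call.

def describe_spot_price_history(next_token=None, page_size=2):
    """Stub returning (page, next_token); token is None on the last page."""
    data = [("us-east-1a", "m1.small", 0.030),
            ("us-east-1b", "m1.small", 0.028),
            ("us-east-1a", "m1.small", 0.032)]
    start = int(next_token or 0)
    page = data[start:start + page_size]
    token = str(start + page_size) if start + page_size < len(data) else None
    return page, token

def all_price_history():
    """Collect every page into a single list."""
    results, token = [], None
    while True:
        page, token = describe_spot_price_history(next_token=token)
        results.extend(page)
        if token is None:
            return results

print(len(all_price_history()))   # 3
```

With per-Availability-Zone histories, the result set is much larger than before, which is exactly why the API now hands it back a page at a time.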

With these changes you now have the information needed to do additional fine tuning of your Spot requests. In particular, we believe that these changes will allow you to more easily use Spot Instances for applications that are sensitive to latency or that need to run in a particular Availability Zone in order to be co-located with other services or data. For example, it will now be easier to run a set of Hadoop-based processes in the same Availability Zone, paving the way for you to use Spot Instances with Elastic MapReduce.

It’s easy to get started. Simply go to the AWS Management Console and launch an instance as you normally would. In the Request Instance Wizard, click the Request Spot Instance radio button and set your bid to the maximum that you are willing to pay for an instance hour of the desired instance type. Here’s a screenshot of a persistent request for four Micro instances at a maximum price of $0.01 (one penny) per hour per instance:

I will look forward to hearing about the ways that you find to put Spot Instances to use in your application.

Read More:

— Jeff;

Summer Startups: GoSquared

by Jeff Barr | on | in Amazon EC2, Amazon Elastic Load Balancer, Architecture, Case Studies |

AWS is a pay-as-you-use mix of tools and services that helps businesses of all sizes build innovative products. Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today, we’re profiling GoSquared.



GoSquared is a real-time web analytics platform, built entirely on AWS, enabling businesses to improve and adapt their online presence quickly. The real-time metrics allow rapid website optimisation through buyer conversion, signups, engagement or other measurements important to a site.

The company was founded in 2006 by three 15-year-old school friends, James Gill, Geoff Wagstaff, and James Taylor, and has been rolling on AWS since 2009. It is now funded and run out of the legendary White Bear Yard offices in Clerkenwell, London.

I talked to Geoff, GoSquared CTO, about their use of AWS. In his own words:

“Initially running on a low budget with experimental technology, we needed flexibility not only for our compute resources but for billing, so that we could develop our system without worrying about over or under provisioning resources and expenses. It was clear we needed the cloud.”

The analytics platform runs on a wide range of AWS services. EC2 is the workhorse for compute resources, including web, processing, development, application, database and cache servers. The GoSquared architecture is configured for high availability with fault tolerance and cost-effective vertical and horizontal scaling. The site uses Elastic Load Balancing and the AWS Auto Scaling service with CloudWatch to distribute workloads and drive down costs. The team also use CloudFront to deliver low latency assets, including tracking code for customer websites. A few other details that make GoSquared interesting:

Price-Aware Architecture

A really nice architectural feature of the GoSquared platform is the integration of Spot Instances. For their data analysis and tracking platform, GoSquared balance incoming data across a collection of EC2 instances. Some of those instances are under Auto Scaling control, which means they automatically scale up and down based on demand, but the remainder are provisioned as Spot Instances. A low bid price ensures that costs stay down, and a collection of CloudWatch metrics and alarms gracefully replace terminated Spot Instances to ensure availability should the EC2 spot price exceed the bid price.

Looking Forward

The small team (which has just hired employee #4!) have done a great job in taking advantage of the AWS services to build a robust, available, scalable, joyful product and we couldn’t be happier to help GoSquared as it continues to grow by leaps and bounds.

“The overall flexibility and diversity of the AWS platform has been an intrinsic ingredient in the agility of our technology and business, and has lowered barriers to entry in our market. Before AWS, the kind of infrastructure required to run a real-time web analytics operation was largely only available to highly skilled datacenter technicians managing their own physical hardware, accounting for all the overheads associated with that. By attacking this problem, AWS has brought infrastructure right to the fingertips of everyone,” says Geoff, GoSquared’s CTO.

If you’re interested in learning more, visit GoSquared’s site, or read our case study.

~ Matt


Related topics:

Join us in London!

 We’re hosting an evening meetup in Shoreditch on 12th July for startups and entrepreneurs. The AWS team will be joined by GoSquared and the fine folks of Mendeley to discuss how they’re using the cloud to build their businesses. Join us! The event is free, but you’ll need to register to attend.

AWS Start-up Challenge

We’re getting close to launching our yearly contest. Sign up to get notified the second we open it up for submissions.

Now Available: Amazon EC2 Running Red Hat Enterprise Linux

by Jeff Barr | on | in Amazon EC2 |

We continue to add options to AWS in order to give our customers the freedom and flexibility that they need to build and run applications of all different shapes and sizes.

I’m pleased to be able to tell you that you can now run Red Hat Enterprise Linux on EC2 with support from Amazon and Red Hat. You can now launch 32- and 64-bit instances in every AWS Region and on every EC2 instance type. You can choose between versions 5.5, 5.6, 6.0, and 6.1 of RHEL. You can also launch AMIs right from the AWS Console‘s Quick Start Wizard. Consult the full list of AMIs to get started.

If you are a member of Red Hat’s Cloud Access program you can use your existing licenses. Otherwise, you can run RHEL on On-Demand instances now, with Spot and Reserved Instances planned for the future. Pricing for On-Demand instances is available here.

All customers running RHEL on EC2 have access to an update repository operated by Red Hat. AWS Premium Support customers can contact AWS to obtain support from Amazon and Red Hat.

— Jeff;


Live Streaming With Amazon CloudFront and Adobe Flash Media Server

by Jeff Barr | on | in Amazon CloudFront, Amazon EC2, AWS CloudFormation |

You can now stream live audio or video through AWS using the Adobe Flash Media Server, with a cost-effective pay-as-you-go model that makes use of Amazon EC2, Amazon CloudFront, and Amazon Route 53, all configured and launched via a single CloudFormation template.

We’ve used AWS CloudFormation to make the signup and setup process as simple and straightforward as possible. The first step is to actually sign up for AWS CloudFormation. This will give you access to all of the AWS services supported by AWS CloudFormation, but you’ll pay only for what you use.

I’ve outlined the major steps needed to get up and running below. For more information, you’ll want to consult our new tutorial, Live Streaming Using Adobe Flash Media Server and Amazon Web Services.

Once you’ve signed up, you need to order Flash Media Server for your AWS Account by clicking here. After logging in, you can review the subscription fee and other charges before finalizing your order:

Then you need to create a Route 53 hosted zone and an EC2 key pair. The tutorial includes links to a number of Route 53 tools and you can create the key pair using the AWS Management Console.

The next step is to use CloudFormation to create a Live Streaming stack. As you’ll see in the documentation, this step makes use of a new feature of the AWS Management Console. It is now possible to construct a URL that will open up the console with a specified CloudFormation template selected and ready to use. Please feel free to take a peek inside the Live Streaming Template to see how it sets up all of the needed AWS resources.

When you initiate the stack creation process you’ll need to specify a couple of parameters:

Note that you’ll need to specify the name of the Route 53 hosted domain that you set up earlier in the process so that it can be populated with a DNS entry (a CNAME) for the live stream.

The CloudFormation template will create and connect up all of the following:

  • An EC2 instance of the specified instance type running the appropriate Flash Media Server AMI and accessible through the given Key Pair. You can, if desired, log in to the instance using the SSH client of your choice.
  • An EC2 security group with ports 22, 80, and 1935 open.
  • A CloudFront distribution.
  • An A record and a CNAME in the hosted domain.
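To give a feel for what such a template contains, here is a hypothetical CloudFormation fragment covering the security group and instance from the list above. The resource and parameter names (FMSAMIId, InstanceType, KeyPair) are invented for illustration; the actual Live Streaming template is more elaborate and also wires up the CloudFront distribution and Route 53 records.

```json
{
  "Resources": {
    "StreamingSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Flash Media Server access",
        "SecurityGroupIngress": [
          {"IpProtocol": "tcp", "FromPort": "22", "ToPort": "22", "CidrIp": "0.0.0.0/0"},
          {"IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0"},
          {"IpProtocol": "tcp", "FromPort": "1935", "ToPort": "1935", "CidrIp": "0.0.0.0/0"}
        ]
      }
    },
    "StreamingServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": {"Ref": "FMSAMIId"},
        "InstanceType": {"Ref": "InstanceType"},
        "KeyName": {"Ref": "KeyPair"},
        "SecurityGroups": [{"Ref": "StreamingSecurityGroup"}]
      }
    }
  }
}
```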

The template will produce the URL of the live stream as output:

The resulting architecture looks like this:

The clients connect to the EC2 instance every 4 seconds to retrieve the manifest.xml file. This interval is specified in the template and can be modified as needed. You have complete access to the Flash Media Server and you can configure it as desired.

Once you’ve launched the Flash Media Server, you can install and run the Flash Media Live Encoder on your desktop, connect it up to your video source, and stream live video to your heart’s content. After you are done, you can simply delete the entire CloudFormation stack to release all of the AWS resources. In fact, you must do this in order to avoid on-going charges for the AWS resources.

The CloudFormation template specifies the final customizations to be applied to the AMI at launch time. You can easily copy and then edit the script if you need to make low-level changes to the running EC2 instance.

As you can see, it should be easy for you to set up and run your own live streams using the Adobe Flash Media Server and AWS if you start out with our tutorial. What do you think?

Update: The newest version of CloudBerry Explorer includes support for this new feature. Read their blog post to learn more.

— Jeff;

AWS Management Console Bookmarking

by Jeff Barr | on | in Amazon EC2, AWS CloudFormation |

We’ve added a new bookmarking feature to the AWS Management Console. You can now construct a URL that will open the console with a specific AMI (Amazon Machine Image) or CloudFormation Template selected and ready to launch.

EC2 AMI Launch
The URL to open up the console with a particular AMI selected looks like this:

  • https://console.aws.amazon.com (references the console)
  • /ec2/home (specifies the EC2 tab)
  • ?region=us-west-1 (specifies the region)
  • #launchAmi=ami-3bc9997e (specifies the AMI)

If you create AMIs and share them with others, this is an easy way to pass references around so that they can be launched with ease. When the link is activated the console will start as follows (prompting for email address and password if necessary):

The developers at BitNami have already made use of this feature to link directly to their AMIs. For example, here’s their page of Magento AMIs:

Ubuntu AMIs are also available with a click:

The Cloud Market also supports this new feature.

CloudFormation Stack Create
The URL to open up the console to the CloudFormation tab with a particular template selected looks like this:

  • https://console.aws.amazon.com (references the console)
  • /cloudformation/home (specifies the CloudFormation tab)
  • ?region=us-east-1 (specifies the region)
  • #cstack= (specifies that stack information follows)
  • sn~PHPSample (sets the name of the stack)
  • a final component (specifies the link to the template)

In this case the console will appear as follows:

We have used this new bookmarking feature to set up a directory of CloudFormation Sample Templates. You can browse the directory, find the desired template, and then initiate the stack creation process with a single click.
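Both bookmark styles can be assembled programmatically. Here is a sketch; the console base URL and example values come from this post, while the real CloudFormation fragment also carries a template-URL component, which is omitted here since its exact form isn't shown above:

```python
# Sketch: building AWS Management Console bookmark URLs.

BASE = "https://console.aws.amazon.com"

def launch_ami_url(region, ami_id):
    """Bookmark that opens the EC2 tab with an AMI ready to launch."""
    return f"{BASE}/ec2/home?region={region}#launchAmi={ami_id}"

def create_stack_url(region, stack_name):
    """Bookmark that opens the CloudFormation tab with a named stack.
    (The template-link component of the fragment is not included.)"""
    return f"{BASE}/cloudformation/home?region={region}#cstack=sn~{stack_name}"

print(launch_ami_url("us-west-1", "ami-3bc9997e"))
```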

The Console team is interested in your suggestions for additional types of bookmarks. Please feel free to leave comments to this post and I’ll pass them along.

— Jeff;

My EC2 Instance – The First 1000 Days

by Jeff Barr | on | in Amazon EC2 |

I launched my first “production” EC2 instance almost three years ago, on July 15, 2008. For my purposes, production includes hosting my personal blog and writing code for my AWS book, as well as a host for random development projects that I putter around with from time to time.

I am happy to report that my instance reached 1000 days of uptime over the weekend:

One of these days I’ll upgrade to a more modern instance (this one predates EBS) but I’m still quite happy with this one and I’ll keep it running as long as possible.

EC2 has certainly come a long way in just 1000 days. Here are some of the highlights:



Amazon EC2 Cluster Instances Available on Spot Market

by Jeff Barr | on | in Amazon EC2 |

Today we are coupling two popular aspects of Amazon EC2: Cluster computing and Spot Instances!

More and more of our customers are finding innovative ways to use EC2 Spot Instances to save up to two-thirds off the On-Demand price. Batch processing, media rendering and transcoding, grid computing, testing, web crawling, and Hadoop-based processing are just a handful of the use cases that are running on Spot today.

For example, researchers at the University of Melbourne and the University of Barcelona are doing vast amounts of data processing for their Belle particle physics experiments on EC2 Spot Instances and realizing a cost savings (when compared to the price of On-Demand Instances) of 56% in the process. Each job starts out small (15-20 EC2 instances) and then scales up to between 20 and 250 instances in the space of four hours. Read more in our new case study.

Scribd has also made very good use of EC2 Spot Instances. As described in the case study, they were able to save 63% (or $10,500) on a large-scale data conversion (from Flash to HTML5) running on over 2,000 EC2 instances at a time. They converted every one of the millions of documents that have been uploaded to the site to HTML5 using a scalable grid comprised of a single master node and multiple slave nodes.

At the same time, our customers have been making really good use of our Cluster Compute and Cluster GPU instances. We’ve seen interesting use cases in a number of fields including molecular dynamics, fluid dynamics, bioinformatics, batch data processing, MapReduce, machine learning, and media rendering. The applications use a variety of coordination strategies and coupling models, ranging from fairly loose to very tight.

The folks at Cycle Computing documented their cluster-building experience in a very informative blog post. They used Cluster GPU instances to create a 32-node, 64-GPU cluster that also includes 8 TB of shared storage. The entire cluster costs less than $82 per hour to operate. They have found that the GPU accelerates overall application performance by a factor of 50 to 60 and note that their success rate in moving internal applications to the GPU is 100%.
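As a sanity check on that "less than $82 per hour" figure, here is a quick calculation. The $2.10 per-instance hourly rate below is an assumption for illustration, not a price quoted in this post:

```python
# Rough check of the Cycle Computing cluster cost: 32 Cluster GPU
# instances at an assumed hourly rate, before shared storage.

NODES = 32
HOURLY_RATE = 2.10   # assumed per-instance hourly rate (USD)

compute_cost = NODES * HOURLY_RATE
print(compute_cost)   # 67.2
```

Under that assumption the compute portion comes to about $67 per hour, leaving ample headroom for the 8 TB of shared storage within the $82 total.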

Bioproximity provides proteomic analytical services (in plain English, they study protein at the structural and functional level) on a contract basis. In order to do this they need lots of compute power and storage space. Lacking the funds to set up their own compute cluster, they found the AWS pay-as-you-go model to be a perfect fit for their business. They run a large-scale MPI cluster on EC2 with a web-based front end for job submission. Read more in the Bioproximity case study.

On the rendering side, our friends at Animoto have used the Cluster GPU instances to accelerate their video rendering process. The increased throughput allows them to deliver videos more quickly (seconds instead of minutes) and also gives them the ability to support full-on HD video. This article has more information about Animoto and their use of EC2 to generate professional-quality video.

At the same time, our customers are finding innovative ways to use the EC2 Spot Instances to get work done in an economical way.

Effective immediately, you can now use these two features together — you can now submit spot requests for Cluster Compute and Cluster GPU Instances. These instances are currently available in a pair of Availability Zones in the US East (Northern Virginia) Region. You can choose between SUSE Linux Enterprise Server and Amazon Linux AMIs, both of which are now available in HVM form.

You can request the instances using the EC2 Command Line tools, the EC2 APIs, or the AWS Management Console:

We’re looking forward to seeing the new and interesting ways that our customers will use Spot pricing and Cluster Compute instances, alone or (preferably!) together. Here are some of the application areas that should be a good fit:

  • Batch and background processing.
  • Web and data crawling.
  • Financial modeling and analytics.
  • MapReduce and Grid computing.
  • Video processing, especially transcoding.

What can you do with this new combination of features?

— Jeff;