Posted On: Jul 13, 2010

Article Excerpt:

“Businesses and researchers have long been utilizing Amazon EC2 to run highly parallel workloads ranging from genomics sequence analysis and automotive design to financial modeling,” said Peter De Santis, general manager of Amazon EC2, in a statement. “At the same time, these customers have told us that many of their largest, most complex workloads required additional network performance. Cluster Compute Instances provide network latency and bandwidth that previously could only be obtained with expensive, capital-intensive, custom-built compute clusters. For perspective, in our last pre-production test run, we saw an 880-server sub-cluster achieve a network rate of 40.62 TFlops – we’re excited that Amazon EC2 customers now have access to this type of HPC performance with the low per-hour pricing, elasticity, and functionality they have come to expect from Amazon EC2.”
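For readers who want to experiment, the following is a minimal sketch of launching a handful of Cluster Compute Instances from Python with the boto library. Cluster placement groups are the mechanism EC2 uses to co-locate instances on the low-latency, high-bandwidth network De Santis describes; the library choice, the group name, and the AMI ID here are illustrative assumptions, not details from Amazon's announcement.

    import boto.ec2

    # Connect to the US East region, where Cluster Compute Instances debuted.
    conn = boto.ec2.connect_to_region("us-east-1")

    # A cluster placement group asks EC2 to place instances together on the
    # low-latency, full-bandwidth network fabric.
    conn.create_placement_group("hpc-demo", strategy="cluster")

    # Launch a small sub-cluster of cc1.4xlarge instances into the group.
    # "ami-12345678" is a placeholder; a real HVM AMI ID would be required.
    reservation = conn.run_instances(
        "ami-12345678",
        min_count=8,
        max_count=8,
        instance_type="cc1.4xlarge",
        placement_group="hpc-demo",
    )
    print([inst.id for inst in reservation.instances])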

Cluster Compute Instances complement other AWS offerings designed to make large-scale computing easier and more cost-effective, Amazon said in its press release. For example, Public Data Sets on AWS provide a repository of useful public data sets that can be easily accessed from Amazon EC2, allowing fast, cost-effective data analysis by researchers and businesses. These large data sets are hosted on AWS at no charge to the community. Additionally, the Amazon Elastic MapReduce service enables low-friction, cost-effective implementation of the Hadoop framework on Amazon EC2. Hadoop is a popular tool for analyzing very large data sets in a highly parallel environment, and Amazon EC2 provides the scale-out environment to run Hadoop clusters of all sizes.
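As a rough illustration of that last point, the sketch below starts a small Hadoop job flow on Elastic MapReduce from Python, again using the boto library. The bucket paths, job name, and step parameters are hypothetical placeholders rather than details from the press release.

    import boto.emr
    from boto.emr.step import StreamingStep

    # Connect to the Elastic MapReduce service in US East.
    conn = boto.emr.connect_to_region("us-east-1")

    # A Hadoop Streaming step; the mapper script and input/output locations
    # are placeholders ("my-bucket" is hypothetical).
    step = StreamingStep(
        name="Word count",
        mapper="s3n://my-bucket/wordcount-mapper.py",
        reducer="aggregate",  # Hadoop's built-in aggregate reducer
        input="s3n://my-bucket/input",
        output="s3n://my-bucket/output",
    )

    # Start a small job flow; EMR provisions the EC2 instances, runs the
    # step, and by default shuts the cluster down when it finishes.
    jobflow_id = conn.run_jobflow(
        name="Hadoop on EC2 via EMR",
        log_uri="s3n://my-bucket/logs",
        hadoop_version="0.20",
        num_instances=4,
        master_instance_type="m1.small",
        slave_instance_type="m1.small",
        steps=[step],
    )
    print(jobflow_id)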
