Category: Compute


Amazon Linux AMI 2012.09 Now Available

Max Spevack of the Amazon EC2 team brings news of the latest Amazon Linux AMI.

— Jeff;


The Amazon Linux AMI 2012.09 is now available.

Since we removed the Public Beta tag from the Amazon Linux AMI last September, we've been on a six-month release cycle focused on making sure that EC2 customers have a stable, secure, and simple Linux-based AMI that integrates well with other AWS offerings.

There are several new features worth discussing, as well as a host of general updates to packages in the Amazon Linux AMI repositories and to the AWS command line tools. Here’s what’s new:

  • Kernel 3.2.30: We have upgraded the kernel to version 3.2.30, which follows the 3.2.x kernel series that we introduced in the 2012.03 AMI.
  • Apache 2.4 & PHP 5.4: This release supports multiple versions of both Apache and PHP, and they are engineered to work together in specific combinations.  The first combination is the default, Apache 2.2 in conjunction with PHP 5.3, which are installed by running yum install httpd php. Based on customer requests, we support Apache 2.4 in conjunction with PHP 5.4 in the package repositories.  These packages are accessed by running yum install httpd24 php54.
  • OpenJDK 7: While OpenJDK 1.6 is still installed by default on the AMI, OpenJDK 1.7 is included in the package repositories, and available for installation.  You can install it by running yum install java-1.7.0-openjdk.
  • R 2.15: Also in response to your requests, we have added the R language to the Amazon Linux AMI.  We are here to serve your statistical analysis needs!  Simply yum install R and off you go.
  • Multiple Interfaces & IP Addresses: Additional network interfaces attached while the instance is running are configured automatically. Secondary IP addresses are refreshed during DHCP lease renewal, and the related routing rules are updated.
  • Multiple Versions of GCC: The default version of GCC that is available in the package repositories is GCC 4.6, which is a change from the 2012.03 AMI, in which the default was GCC 4.4 and GCC 4.6 was shipped as an optional package.  Furthermore, GCC 4.7 is available in the repositories.  If you yum install gcc, you will get GCC 4.6.  For the other versions, run either yum install gcc44 or yum install gcc47.  (The install commands from this list are gathered together just below.)
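
For convenience, here are the install commands mentioned in the list above, gathered in one place (all package names are taken directly from this release):

    # Default web stack: Apache 2.2 + PHP 5.3
    sudo yum install httpd php
    # Newer combination: Apache 2.4 + PHP 5.4
    sudo yum install httpd24 php54
    # OpenJDK 7 (OpenJDK 6 remains the default)
    sudo yum install java-1.7.0-openjdk
    # The R language
    sudo yum install R
    # GCC 4.6 (the default), plus the optional 4.4 and 4.7 packages
    sudo yum install gcc
    sudo yum install gcc44
    sudo yum install gcc47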

The Amazon Linux AMI 2012.09 is available for launch in all regions. Users of 2012.03, 2011.09, and 2011.02 versions of the Amazon Linux AMI can easily upgrade using yum.
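
For example, moving a running instance from an earlier version to the 2012.09 package set is typically just (a sketch, assuming the default repository configuration):

    sudo yum clean all
    sudo yum update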

The Amazon Linux AMI is a rolling release, configured to deliver a continuous flow of updates that allow you to roll from one version of the Amazon Linux AMI to the next.  In other words, Amazon Linux AMIs are treated as snapshots in time, with a repository and update structure that gives you the latest packages that we have built and pushed into the repository.  If you prefer to lock your Amazon Linux AMI instances to a particular version, please see the Amazon Linux AMI FAQ for instructions.

As always, if you need any help with the Amazon Linux AMI, don't hesitate to post on the EC2 forum, and someone from the team will be happy to assist you.

— Max

PS – Help us to build the Amazon Linux AMI! We are actively hiring for Linux Systems Engineer, Linux Software Development Engineer, and Linux Kernel Engineer positions.

Scaling Science: 1 Million Compute Hours in 1 Week

For many scientists, the computer has become as important as the test tube, the centrifuge, or the grad student in delivering groundbreaking research. Whether screening for active cancer treatments or colliding atoms, the availability of compute cycles can significantly affect the time it takes for scientists to crunch their numbers. Indeed, compute resources are often so constrained that researchers have to scale back the scope of their work to fit the capacity available.

Not so with Amazon EC2, where the general purpose, utility computing model is a perfect fit for scientific workloads of any scale. Researchers (and their grad students) can access the computational resources they need to deliver on their scientific vision, while staying focused on their analysis and results.

Scaling up at the Morgridge Institute
Victor Ruotti faced this exact problem. His team at the Morgridge Institute at the University of Wisconsin-Madison is studying the genes expressed as template cells (stem cells) start to take on the various specialized functions our tissues need, such as absorbing nutrients or conducting nervous impulses. This is impressive and important work, and it has large computational requirements: millions of RNA sequence reads and a data footprint of 78 TB.

Victor’s research was selected as the winner of Cycle Computing’s inaugural Big Science Challenge, and, using Cycle’s software, his team ran through the 15,376 alignment runs on Amazon EC2, clocking up over a million compute hours in a week for just $116 an hour.

A Century of Compute
Over 1,000,000 compute hours, 115 years of work for a single processor, were used to build the genetic map the team needed to quickly identify which regions of the genome are important for establishing cell types that have clinical importance. The entire analysis started running on Spot Instances in just 20 minutes, on high-memory instance types (the M2 class), meaning that the team could use Cycle Server to stretch their budget further and build an extremely high resolution genetic map. The Spot price was typically 12 times lower than the equivalent On-Demand price, and their cluster ran across an average of 5,000 instances (8,000 at peak), for a total cost of $19,555. That’s less than the price of 20 lab pipettes.

Cycle Computing on the AWS Report
Our very own Jeff Barr was lucky enough to spend a few minutes chatting with Cycle Computing CEO Jason Stowe for the AWS Report. Here is the episode they recorded:

Cycle also have a blog post with some more information on this, and the 2012 Big Science Challenge.

We’re very happy to see the utility computing platform of AWS being used for such groundbreaking work. If you’re working with data and would like to discuss how to get up and running at this, or any other, scale, I do hope you’ll get in touch.

Upcoming Webinar
If you would like to know more, I’ll be hosting a webinar on big data and HPC on the 16th of October. We’ll discuss some customer success stories and common best practices for using tools such as Elastic MapReduce, DynamoDB and the broad range of services on the AWS Marketplace to accelerate your own applications and analytics.

Registration is free. See you there.

~ Matt

Customize Elastic Beanstalk Using Configuration Files

Saad Ladki of the Elastic Beanstalk team is back with another guest post. Today, Saad talks about a customization feature that will allow you to customize the run-time configuration of your applications by installing packages and libraries, running commands, and editing files.

— Jeff;


A few weeks ago, we announced Python support for AWS Elastic Beanstalk, including the ability to customize the Python environment using a declarative configuration file and integrating with Amazon RDS DB Instances.

Today, I'm excited that we're adding more customizations through the configuration files. The configuration files now allow you to declaratively install packages and libraries, configure software components (such as Apache Tomcat or the Apache Web Server), and run commands on the Amazon EC2 instances in your Elastic Beanstalk environment. You can also set environment variables across your fleet of EC2 instances, create users and groups, and start or stop daemons.
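
To give a sense of the syntax, here is a minimal, illustrative sketch of such a configuration file (the file name, package, and command below are examples of mine, not prescriptions); it installs a package, runs a command, and ensures a daemon is running:

    # .ebextensions/example.config -- a minimal, illustrative sketch
    packages:
      yum:
        mod_ssl: []            # install a package from the yum repositories
    commands:
      01-write-marker:
        command: echo "configured by .ebextensions" > /tmp/marker.txt
    services:
      sysvinit:
        httpd:                 # keep the Apache daemon enabled and running
          enabled: true
          ensureRunning: true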

So why is this big news? In the past, to make changes to an Elastic Beanstalk environment, you had to create and maintain custom Amazon Machine Images (AMIs). Now, a change as small as adding a font library or as involved as installing and configuring an agent entails only a few lines of YAML.

Example: Customize a Tomcat 7 configuration
Let's look at an example of customizing the Tomcat server.xml to increase the number of concurrent threads:

  1. Using the AWS Toolkit for Eclipse, create a new AWS Web Project:
  2. Create a new directory .ebextensions in the WebContent directory of your application. This ensures that the .ebextensions directory is at the root of your .war file.
    Tip: By default, the Eclipse Project Explorer view hides files whose names start with a period. You can modify this setting by clicking the menu at the top of the Project Explorer, then clicking Customize View and unchecking the .* resources box on the Filters tab.
  3. Add a file called server-update.config to the .ebextensions directory. The contents of the file should look like this:
     
    container_commands:
      replace-config:
        command: cp .ebextensions/server.xml /etc/tomcat7/server.xml
     
  4. Right-click the project, click Amazon Web Services, and then click Deploy to Elastic Beanstalk.

For more details on how to deploy applications from Eclipse, visit the AWS Elastic Beanstalk Developer Guide. For more details on the configuration system, read our new Customizing and Configuring Elastic Beanstalk Containers documentation.

Once the environment is up and running, your Tomcat installation on the EC2 instances will now be using the updated server.xml. Any new instances that are launched within the environment will have the same customizations applied.

Note: Configuration files can be used only with newly created Python and Java environments. If you have existing environments, here's how you can migrate them.

You can include multiple .config files inside the .ebextensions directory. This allows you to separate customizations into different files and share your favorite customizations with your co-workers.

You can also leverage the Amazon CloudWatch custom metrics example to easily set up monitoring through CloudWatch.

Add an Amazon RDS Database Instance
Additionally, you can now easily create a new Amazon Relational Database Service (RDS) DB Instance with your Elastic Beanstalk Tomcat environment. Simply log in to the Elastic Beanstalk console, select your application, and on the Actions menu for the environment, click Edit/Load Configuration. The Database tab allows you to add a new Amazon RDS DB Instance.

Once the database is created, you can retrieve the connection information and build the connection string using Java system properties:


String dbName   = System.getProperty("RDS_DB_NAME");
String userName = System.getProperty("RDS_USER");
String password = System.getProperty("RDS_PASSWORD");
String hostname = System.getProperty("RDS_HOSTNAME");
String port     = System.getProperty("RDS_PORT");

// Build the JDBC URL from the properties above (the MySQL prefix is
// shown for illustration; use the prefix that matches your DB engine):
String jdbcUrl = "jdbc:mysql://" + hostname + ":" + port + "/" + dbName;

For more information on how to set up an RDS DB Instance in the Elastic Beanstalk environment, visit the AWS Elastic Beanstalk Developer Guide.

— Saad

 

Amazon VPC – Additional VPN Features

The Amazon Virtual Private Cloud (VPC) gives you the power to create a private, isolated section of the AWS Cloud. You have full control of network addressing. Each of your VPCs can include subnets (with access control lists), route tables, and gateways to your existing network and to the Internet.

You can connect your VPC to the Internet via an Internet Gateway and enjoy all the flexibility of Amazon EC2 with the added benefits of Amazon VPC.  You can also set up an IPsec VPN connection to your VPC, extending your corporate data center into the AWS Cloud.  Today we are adding two options to give you additional VPN connection flexibility:

  1. You can now create Hardware VPN connections to your VPC using static routing. This means that you can establish connectivity using VPN devices that do not support BGP, such as Cisco ASA and Microsoft Windows Server 2008 R2. You can also use Linux to establish a Hardware VPN connection to your VPC. In fact, any IPSec VPN implementation should work.
  2. You can now configure automatic propagation of routes from your VPN and Direct Connect links (gateways) to your VPC’s routing tables. This will make your life easier, as you won’t need to create static route entries in your VPC route table for your VPN connections.  For instance, if you’re using dynamically routed (BGP) VPN connections, your BGP route advertisements from your home network can be automatically propagated into your VPC routing table.

If your VPN hardware is capable of supporting BGP, this is still the preferred way to go as BGP performs a robust liveness check on the IPSec tunnel. Each VPN connection uses two tunnels for redundancy; BGP simplifies the failover procedure that is invoked when one VPN tunnel goes down.

Static Routing
We added the static routing option for a number of reasons. First, BGP can be difficult to set up and to manage, and we don’t want to ask you to go to all of that trouble if all you want to do is set up a VPN connection to a VPC. Second, some firewalls and entry-level routers support IPSec but not BGP. These devices are very popular in corporate branch offices. As I mentioned above, this change dramatically increases the number of VPN devices that can be used to connect to a VPC. We have tested the static routing “No BGP” option with devices from Cisco, Juniper, Yamaha, Netgear, and Microsoft. We’ve assembled a list of VPN devices that we’ve tested for dynamically and statically routed VPN connections.

You can select this option when you create the VPN connection between your VPC and one of your customer gateways:

If you choose this option you must also enter one or more routes (CIDR addresses) to indicate which traffic is to be routed back to your customer gateways (your home network).

For client-side redundancy, you can use two customer gateway devices (two VPN connections).  That way, if your gateway device goes down, or needs maintenance, the other one can continue to carry your traffic into the VPC. On the AWS side, we have multiple redundant VPN concentrators to handle failover in case of device failure.

Route Propagation
You can automatically propagate your VPN Connection routes (whether statically entered or advertised via BGP) to your VPC route table:

In order to enable this option for a particular routing table, you must establish an association between the table and a gateway like this:

You can also arrange to update multiple routing tables from the same virtual private gateway.

As you can see, you can access these new VPN features from the AWS Management Console. They are also accessible through the VPC APIs and the command line tools.
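
As a rough sketch of the command line route (the flag names here are my assumptions, not confirmed syntax; consult the VPC command line reference before use), creating a statically routed VPN connection and adding a static route might look something like this:

    # Assumed flags -- verify against the VPC command line reference.
    # Create a VPN connection that uses static routing instead of BGP:
    ec2-create-vpn-connection -t ipsec.1 --customer-gateway cgw-1a2b3c4d \
        --vpn-gateway vgw-5e6f7a8b --static-routes-only
    # Tell AWS which traffic should be routed back to your home network:
    ec2-create-vpn-connection-route --vpn-connection vpn-9c0d1e2f --cidr 192.168.0.0/16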

— Jeff;


Amazon RDS for SQL Server – Now Available in VPC

As you can tell from some of my recent posts, we’re adding new features to the Virtual Private Cloud and to the Relational Database Service at a very rapid clip. Today’s release involves both of these services!

You can now launch RDS database instances running Microsoft SQL Server inside of a Virtual Private Cloud (VPC). You get all of the benefits of the Relational Database Service including backup management, automatic failure detection and recovery, software patching (both OS and database), and easy scaling, all within a private, isolated section of the AWS Cloud.

— Jeff;

 

AWS Growth – Adding a Third Availability Zone in Tokyo

We announced an AWS Region in Tokyo about 18 months ago. In the time since the launch, our customers have launched all sorts of interesting applications and businesses there. Here are a few examples:

  • Cookpad.com is the top recipe site in Japan. They are hosted entirely on AWS, and handle more than 15 million users per month.
  • KAO is one of Japan’s largest manufacturers of cosmetics and toiletries. They recently migrated their corporate site to the AWS cloud.
  • Fukuoka City launched the Kawaii Ward project to promote tourism to the virtual city. After a member of the popular Japanese idol group AKB48 raised awareness of the site, virtual residents flocked to it to sign up for an email newsletter. They expected 10,000 registrations in the first week and were pleasantly surprised to receive over 20,000.

Demand for AWS resources in Japan has been strong and steady, and we’ve been expanding the region accordingly. You might find it interesting to know that an AWS region can be expanded in two different ways. First, we can add additional capacity to an existing Availability Zone, spanning multiple datacenters if necessary. Second, we can create an entirely new Availability Zone. Over time, as we combine both of these approaches, a single AWS region can grow to encompass many datacenters. For example, the US East (Northern Virginia) region currently occupies more than ten datacenters structured as multiple Availability Zones.

Today, we are expanding the Tokyo region with the addition of a third Availability Zone. This will add capacity and will also provide you with additional flexibility. As is always the case with AWS, untargeted launches of EC2 instances will now make use of this zone with no changes to existing applications or configurations. If you are currently targeting specific Availability Zones, please make sure that your code can handle this new option.
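
If you want to check which zones your account sees, the EC2 API tools make this a one-liner (assuming the tools are installed and configured):

    # List the Availability Zones visible to your account in the Tokyo region:
    ec2-describe-availability-zones --region ap-northeast-1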

— Jeff;

 

Amazon RDS News – Oracle Data Pump

The Amazon RDS team is rolling out new features at a very rapid clip. Here’s the latest and greatest:

Oracle Data Pump
Customers have asked us to make it easier to import their existing databases into Amazon RDS. We are making it easy for you to move data on and off of the DB Instances by using Oracle Data Pump. A number of scenarios are supported including:

  • Transfer between an on-premises Oracle database and an RDS DB Instance.
  • Transfer between an Oracle database running on an EC2 instance and an RDS DB Instance.
  • Transfer between two RDS DB Instances.

These transfers can be run in either direction. We currently support the network mode of Data Pump where the job source is an Oracle database. Transfers using Data Pump should be considerably faster than those using the original Import and Export utilities. Oracle Data Pump is available on all new DB Instances running Oracle Database 11.2.0.2.v5. To use Data Pump with your existing v3 and v4 instances, please upgrade to v5 by following the directions in the RDS User Guide. To learn more about importing and exporting data from your Oracle databases, check out our new import/export guide.
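
As a rough illustration of network mode (the connect identifier, database link, and schema names below are placeholders of mine; follow the import/export guide for the authoritative procedure), an import pulled over a database link might look like this:

    # Placeholder names throughout -- see the RDS import/export guide.
    # Pull the MYAPP schema into an RDS DB Instance over the SOURCE_LINK database link:
    impdp admin@my-rds-instance \
        NETWORK_LINK=SOURCE_LINK \
        SCHEMAS=MYAPP \
        DIRECTORY=DATA_PUMP_DIR \
        LOGFILE=import.log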

 — Jeff;

Amazon EC2 Reserved Instance Marketplace

EC2 Options
I often tell people that cloud computing is equal parts technology and business model. Amazon EC2 is a good example of this; you have three options to choose from:

  • You can use On-Demand Instances, where you pay for compute capacity by the hour, with no upfront fees or long-term commitments. On-Demand instances are recommended for situations where you don’t know how much (if any) compute capacity you will need at a given time.
  • If you know that you will need a certain amount of capacity, you can buy an EC2 Reserved Instance. You make a low, one-time upfront payment, reserve the instance for a one- or three-year term, and pay a significantly lower hourly rate. You can choose between Light Utilization, Medium Utilization, and Heavy Utilization Reserved Instances to further align your costs with your usage.
  • You can also bid for unused EC2 capacity on the Spot Market with a maximum hourly price you are willing to pay for a particular instance type in the Region and Availability Zone of your choice. When the current Spot Price for the desired instance type is at or below the price you set, your application will run.

Reserved Instance Marketplace
Today we are increasing the flexibility of the EC2 Reserved Instance model even more with the introduction of the Reserved Instance Marketplace. If you have excess capacity, you can list it on the marketplace and sell it to someone who needs additional capacity. If you need additional capacity, you can compare the upfront prices and durations of Reserved Instances on the marketplace to the upfront prices of one and three year Reserved Instances available directly from AWS. The Reserved Instances in the Marketplace are functionally identical to other Reserved Instances and carry the then-current hourly rates; they simply have less than a full term remaining and a different upfront price. Transactions in the Marketplace are always between a buyer and a seller; the Reserved Instance Marketplace hosts the listings and allows buyers and sellers to locate and transact with each other.

You can use this newfound flexibility in a variety of ways. Here are a few ideas:

  1. Switch Instance Types. If you find that your application has put on a little weight (it happens to the best of us), and you need a larger instance type, sell the old RIs and buy new ones from the Marketplace or from AWS. This also applies to situations where we introduce a new instance type that is a better match for your requirements.
  2. Buy Reserved Instances on the Marketplace for your medium-term needs. Perhaps you are running a cost-sensitive marketing promotion that will last for 60-90 days. Purchase the Reserved Instances (which we sometimes call RIs), use them until the promotion is over, and then sell them. You’ll benefit from RI pricing without the need to own the RIs for the full one or three year term. Keep the RIs as long as they continue to save you money.
  3. Relocate. Perhaps you started to run your application in one AWS Region, only to find out later that another one would be a better fit for the majority of your customers. Again, sell the old ones and buy new ones.

In short, you get the pricing benefit of Reserved Instances and the flexibility to make changes as your application and your business evolve, grow, or (perish the thought) shrink.

Dave Tells All
I interviewed Dave Ward of the EC2 Spot Instances team to learn more about this feature and how it will benefit our users. Watch and learn:

The Details
Now that I’ve whetted your appetite, let’s take a look at the details. All of the functions described below are supported by the AWS Management Console, the EC2 API (command line) tools, and the EC2 APIs.

After registration, any AWS customer (US or non-US legal entity) can buy and sell Reserved Instances. Sellers will need to have a US bank account, and will need to complete an online tax interview before they reach 200 transactions or $20,000 in sales. You will need to verify your bank account as part of the registration process; this may take up to two weeks depending on your bank. You will not be able to receive funds until the verification process has succeeded.

Reserved Instances can be listed for sale after you have owned them for at least 30 days, and after we have received and processed your payment for them. The RI’s state must be displayed as Active in the Reserved Instance section of the AWS Management Console:

You can list the remainder of your Reserved Instance term, rounded down to the nearest month. If you have 11 months and 13 days remaining on an RI, you can list the 11 months. You can set the upfront payment that you are willing to accept for your RI, and you can also customize the month-over-month price adjustment for the listing. You will continue to own (and to benefit from) the Reserved Instance until it is sold.

As a seller, you will receive a disbursement report if you have activity on a particular day. This report is a digest of all Reserved Instance Marketplace activity associated with your account and will include new Reserved Instance listings, listings that are fully or partially fulfilled, and all sales proceeds, along with details of each transaction.

When your Reserved Instance is sold, funds will be disbursed to your bank account after the payment clears, less a 12% seller fee. You will be informed of the purchaser’s city, state, country, and zip code for tax purposes. As a seller, you are responsible for calculating and remitting any applicable transaction taxes such as sales tax or VAT.

As a buyer, you can search and browse the Marketplace for Reserved Instances that best suit your needs with respect to location, instance type, price, and remaining time. Once acquired, you will automatically gain the pricing and capacity assurance benefits of the instance. You can later turn around and resell the instance on the Marketplace if your needs change.

When you purchase a Reserved Instance through the Marketplace, you will be charged for Premium Support on the upfront fee. The upfront fees will also count toward future Reserved Instance purchases using the volume discount tiers, but the discounts do not apply to Marketplace purchases.

Visual Tour for Sellers
Here is a visual tour of the Reserved Instance Marketplace from the seller’s viewpoint, starting with the process of registering as a seller and listing an instance for sale. The Sell Reserved Instance button initiates the process:


The console outlines the entire selling process for you:

Here’s how you set the price for your Reserved Instances. As you can see, you have the ability to set the price on a month-by-month basis to reflect the declining value of the instance over time:


You will have the opportunity to finalize the listing, and it will become active within a few minutes. This is the perfect time to acquire new Reserved Instances to replace those that you have put up for sale:

Your listings are visible within the Reserved Instances section of the Console:

Here’s a video tutorial on the selling process:

Visual Tour for Buyers
Here is a similar tour for buyers. You can purchase Reserved Instances in the Console. You start by searching for instances with the characteristics that you need and adding the most attractive ones to your cart:

You can then review the contents of your cart and complete your purchase:

Here’s a video tutorial on the buying process:

I hope that you enjoy (and make good use of) the additional business flexibility of the Reserved Instance Marketplace.

Jeff;

AWS Elastic Beanstalk – Now Available in Singapore

AWS Elastic Beanstalk is now available in the Asia Pacific (Singapore) region. You can now use Elastic Beanstalk to deploy, manage, and scale Java, .NET, PHP, and Python applications in the following AWS regions:

  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Singapore)
  • EU (Ireland)

After you upload your application to Elastic Beanstalk, it will manage all of the details for you. It will take care of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

You can manage your Elastic Beanstalk applications in several different ways. You can use the AWS Management Console, Git deployment, the eb command line interface, the AWS Toolkit for Visual Studio, or the AWS Toolkit for Eclipse.
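
As a quick sketch of the command line path (assuming the eb tool is installed and your application lives in a Git repository; prompts and behavior may differ slightly by version):

    # Initialize the application, choosing Asia Pacific (Singapore) when prompted:
    eb init
    # Create and launch the environment:
    eb start
    # Deploy the current branch to Elastic Beanstalk:
    git aws.push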

Jeff;

PS – If you are interested in learning more about Elastic Beanstalk, plan on attending my online presentation at 1:00 PM (PT) on Monday, September 10th. This presentation is part of Bootstrap Week; you may also enjoy the other sessions.

EC2 Cluster Compute Instances now Available in US West (Oregon)

We’re adding a new region to our High Performance Computing service on EC2 today, and making a new template available to spin up clusters quickly and easily. But first, a quick recap.

The story of High Performance Computing on EC2

As the name suggests, High Performance Computing (HPC) users have an insatiable desire for compute, but are all too often constrained by the resources available to them. The scientists and engineers working in this field frequently have to restrict the scope of their research to fit those resources. So it was no surprise that almost as soon as Amazon EC2 arrived in 2006, we saw customers removing these constraints by taking advantage of the utility computing model, spinning up large-scale computing clusters to get their jobs done more quickly and running simulations with more molecules, higher resolutions, or bigger galaxies. And boy, did they fill them up, with everything from Monte Carlo simulations to R&D in the life sciences.

This area was given an extra set of tools in December 2009 with the arrival of the Spot Market, which opened up name-your-price supercomputing: customers could run at higher scale for the same price, or name the cost of their compute runs. Whether you’re running batch processing or Hadoop, this is a big win, but we heard that customers wanted to augment their large, scale-out systems with more tightly coupled, parallel jobs.

To that end, in July 2010 we added a new instance type to EC2 which was specifically designed for HPC clusters. The Cluster Compute instance was deployed on a high performance network to help run tightly coupled, parallel computations over non-blocking, full-bisection-bandwidth 10 Gigabit Ethernet. It used hardware virtualization and introduced the concept of a Placement Group, which physically locates instances close to one another for low latency communication. We added GPUs to the lineup in November 2010, rolled out the second generation in November 2011 (cc2.8xlarge instances, starring the Intel Xeon E5 chip), and entered the Top 500 list of the world’s fastest supercomputers at number 42. Cluster compute instances have been adopted by some of the finest research organisations in the world, as well as by customers with business applications that need high bandwidth networking and significant CPU cycles.

Extended availability

Today we’re expanding the availability of CC2 instances into a third region so that you can take advantage of high performance computing from the West Coast. US West (Oregon) joins US East (Virginia) and EU West (Ireland), allowing you to bring the power of cc2.8xlarge instances to your data and colleagues on the West Coast.

Easier to use

We’re also always looking for ways to make our HPC services easier to use, and to help the scientists and engineers get to their familiar tools quickly, so they can get straight to work. Today we’re adding a new script which will automatically provision, configure and install a fully functioning HPC cluster in just five clicks.

Your first click is here:

This will set up an AWS CloudFormation template in the console; from there you can choose the size of the cluster to spin up and the name of a key pair to use to log in. Review your settings, click Continue, and we’ll provision a fully operational HPC cluster, ready to rock with Open MPI, Open Grid Scheduler/Grid Engine, ATLAS libraries, NFS for data sharing, SciPy, NumPy, and IPython (courtesy of the wonderful StarCluster). I’ve also pre-loaded it with some example code and instructions on how to run it.

 


Once the stack is up, you’ll find the connection details in the ‘Outputs’ tab; use them to connect directly via SSH, or to jump to a web page with more details on running parallel tasks over MPI or via the scheduler. You can also scale your cluster up and down elastically using StarCluster, as sketched below.
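
As a rough illustration (the cluster name and node alias are placeholders, and the exact StarCluster syntax may vary by version; check the StarCluster documentation):

    # Grow a running cluster by one node, then remove a node you no longer need:
    starcluster addnode mycluster
    starcluster removenode mycluster node001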

Give it a try, and let us know how you’re putting all these FLOPS to use.

~ Matt