Category: Amazon EC2


New Screencast: Building a High Performance Cluster

From aeronautics to genomics to financial services, High Performance Computing is becoming a common requirement in many fields of industry and academia. Traditionally, the barrier to entry into this area has remained high, with the expertise and cost needed to provide such facilities proving to be prohibitive.

With Amazon EC2’s Cluster Compute instances, extremely high performance elastic computing is now available in just a few mouse clicks.

With fast network interconnects, high memory, and quick CPUs, these instances are extremely capable for tightly coupled tasks or batch processing, and very easy to use. I’ve recorded a short screencast that demonstrates how to build an 8-node, 64-core cluster and kick off a highly parallel analysis run, all in around 10 minutes.

You can read more about HPC in the cloud, including our new GPU enabled instances, on our HPC applications page. You may also be interested in the upcoming Analytics in the Cloud webinar.

~ Matt

New Webinar: High Availability Websites

As part of a new monthly series of hands-on webinars, I’ll be giving a technical review of building, managing, and maintaining high availability websites and web applications using Amazon’s cloud computing platform.

Hosting websites and web applications is a very common use of our services, and in this webinar we’ll take a hands-on approach to websites of all sizes, from personal blogs and static sites to complex multi-tier web apps.

Join us on January 28 at 10:00 AM (GMT) for this 60-minute technical web-based seminar, where we’ll aim to cover:

  • Hosting a static website on S3
  • Building highly available, fault tolerant websites on EC2
  • Adding multiple tiers for caching, reverse proxies and load balancing
  • Autoscaling and monitoring your website

Using real world case studies and tried and tested examples, we’ll explore key concepts and best practices for working with websites and on-demand infrastructure.

The session is free, but you’ll need to register!

See you there.

~ Matt


Run Oracle Applications on Amazon EC2 Now!

Earlier this year I discussed our plans to allow you to run a wide variety of Oracle applications on Amazon EC2 in the near future. The future is finally here; the following applications are now available as AMIs for use with EC2:

  • Oracle PeopleSoft CRM 9.1 PeopleTools
  • Oracle PeopleSoft CRM 9.1 Database
  • Oracle PeopleSoft ELM 9.1 PeopleTools
  • Oracle PeopleSoft ELM 9.1 Database
  • Oracle PeopleSoft FSCM 9.1 PeopleTools
  • Oracle PeopleSoft FSCM 9.1 Database
  • Oracle PeopleSoft PS 9.1 PeopleTools
  • Oracle PeopleSoft PS 9.1 Database
  • Oracle E-Business Suite 12.1.3 App Tier
  • Oracle E-Business Suite 12.1.3 DB
  • JD Edwards Enterprise One – ORCLVMDB
  • JD Edwards Enterprise One – ORCLVMHTML
  • JD Edwards Enterprise One – ORCLVMENT

The application AMIs are all based on Oracle Linux and run on 64-bit high-memory instances atop the Oracle VM. You can use them as-is or you can create derivative versions tuned to your particular needs. We’ll start out in one Region and add more in the near future.

As I noted in my original post, you can use your existing Oracle licenses at no additional license cost or you can acquire new licenses from Oracle. We implemented Oracle VM support on Amazon EC2 with hard partitioning so Oracle’s standard partitioned processor licensing models apply.

All of these applications are certified and supported by Oracle. Customers with active Oracle Support and Amazon Premium Support will be able to contact either Amazon or Oracle for support.

You can find the Oracle AMIs in the Oracle section of the AWS AMI Catalog.

— Jeff;

VM Import – Bring Your VMware Images to The Cloud

If you have invested in virtualization to meet IT security, compliance, or configuration management requirements and are now looking at the cloud as the next step toward the future, I’ve got some good news for you.

VM Import lets you bring existing VMware images (VMDK files) to Amazon EC2. You can import “system disks” containing bootable operating system images as well as data disks that are not meant to be booted.

This new feature opens the door to a number of migration and disaster recovery scenarios. For example, you could use VM Import to migrate from your on-premises data center to Amazon EC2.

You can start importing 32- and 64-bit Windows Server 2008 SP2 images right now (we support the Standard, Enterprise, and Datacenter editions). We are working to add support for other versions of Windows including Windows Server 2003 and Windows Server 2008 R2. We are also working on support for several Linux distributions including CentOS, RHEL, and SUSE. You can even import images into the Amazon Virtual Private Cloud (VPC).

The import process can be initiated using the VM Import APIs or the command line tools. You’ll want to spend some time preparing the image before you upload it. For example, you need to make sure that you’ve enabled remote desktop access and disabled any anti-virus or intrusion detection systems that are installed (you can enable them again after you are up and running in the cloud). Other image-based security rules should also be double-checked for applicability.

The ec2-import-instance command is used to start the import process for a system disk. You specify the name of the disk image along with the desired Amazon EC2 instance type and parameters (security group, availability zone, VPC, and so forth) and the name of an Amazon S3 bucket. The command will provide you with a task ID for use in the succeeding steps of the import process.

The ec2-upload-disk-image command uploads the disk image associated with the given task ID. You’ll get upload statistics as the bits make the journey into the cloud. The command will break the upload into multiple parts for efficiency and will automatically retry any failed uploads.

The next step in the import process takes place within the cloud; the time it takes will depend on the size of the uploaded image. You can use the ec2-describe-conversion-tasks command to monitor the progress of this step.

When the upload and subsequent conversion is complete you will have a lovely, gift-wrapped EBS-backed EC2 instance in the “stopped” state. You can then use the ec2-delete-disk-image command to clean up.

The ec2-import-volume command is used to import a data disk, in conjunction with ec2-upload-disk-image. The result of this upload process is an Amazon EBS volume that can be attached to any running EC2 instance in the same Availability Zone.
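Putting the commands together, an end-to-end system disk import looks roughly like this. The sketch below is shown as a dry run that simply prints each step; the flag spellings, the instance type, and the task ID are my assumptions, so check each tool’s --help output for the exact syntax.

```shell
# Dry-run sketch of the VM Import workflow: each command is printed, not
# executed. Flags, instance type, and the task ID are illustrative.
IMAGE="windows2008sp2.vmdk"
BUCKET="my-import-bucket"

# 1. Register the import task for a bootable system disk; this returns a
#    task ID used by the remaining steps.
echo "ec2-import-instance $IMAGE -f VMDK -t m1.large -b $BUCKET"

# 2. Upload the disk image; the tool splits it into parts and retries
#    failed uploads automatically.
TASK="import-i-xxxxxxxx"   # placeholder for the ID returned by step 1
echo "ec2-upload-disk-image $IMAGE -t $TASK -b $BUCKET"

# 3. Poll the server-side conversion until the stopped EBS-backed
#    instance appears.
echo "ec2-describe-conversion-tasks $TASK"

# 4. Clean up the image that was staged in S3.
echo "ec2-delete-disk-image -t $TASK"
```

A data disk import follows the same shape, with ec2-import-volume in place of ec2-import-instance.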

There’s no charge for the conversion process. Upload bandwidth, S3 storage, EBS storage, and Amazon EC2 time (to run the imported image) are all charged at the usual rates. When you import and run a Windows server you will pay the standard AWS prices for Windows instances.

As is often the case with AWS, we have a long roadmap for this feature. For example, we plan to add support for additional operating systems and virtualization formats along with a plugin for VMware’s vSphere console (if you would like to help us test the plugin prior to release, please let us know at ec2-vm-import-plugin-preview@amazon.com). We’ll use your feedback to help us to shape and prioritize our roadmap, so keep those cards and letters coming.

— Jeff;


FreeBSD on Amazon EC2

Colin Percival (developer of Tarsnap) wrote to tell me that the FreeBSD operating system is now running on Amazon EC2 in experimental fashion.

According to his FreeBSD on EC2 blog post, version 9.0-CURRENT of FreeBSD is now available in the US East (Northern Virginia) region and can be run on t1.micro instances. Colin expects to be able to expand to other regions and EC2 instance types over time.

The AMI is stable enough to build Apache and run it under light load for several days. FreeBSD 9.0-CURRENT is a bleeding-edge snapshot release. Plans are in place to back-port the changes made for this release to FreeBSD 8.0-STABLE in the future.

Congratulations to Colin and to the rest of the FreeBSD team for making this happen. I have received a number of requests for this operating system over the years and I am happy to see that this community-driven effort has made so much progress.

— Jeff;

New Features for Amazon CloudWatch

The Amazon CloudWatch team has put together a really impressive set of new features. Too many, in fact, to fit on this page. I’ve written a series of posts with all of the information. Here’s a summary, with links to each post:

  • Basic Monitoring of Amazon EC2 instances at 5-minute intervals at no additional charge.
  • Elastic Load Balancer Health Checks – Auto Scaling can now be instructed to automatically replace instances that have been deemed unhealthy by an Elastic Load Balancer.
  • Alarms – You can now monitor Amazon CloudWatch metrics, with notification to the Amazon SNS topic of your choice when the metric falls outside of a defined range.
  • Auto Scaling Suspend/Resume – You can now push a “big red button” in order to prevent scaling activities from being initiated.
  • Auto Scaling Follow the Line – You can now use scheduled actions to perform scaling operations at particular points in time, creating a time-based scaling plan.
  • Auto Scaling Policies – You now have more fine-grained control over the modifications to the size of your Auto Scaling groups.
  • VPC and HPC Support – You can now use Auto Scaling with Amazon EC2 instances that are running within your Virtual Private Cloud or as Cluster Compute instances.
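As a quick taste of the new alarm feature, here is what creating an alarm with the CloudWatch command line tools looks like, shown as a dry run. The alarm name, thresholds, and SNS topic ARN are illustrative placeholders; check the mon-put-metric-alarm documentation for the full option list.

```shell
# Dry-run sketch: an alarm that notifies an SNS topic when average CPU
# utilization exceeds 80% for two consecutive 5-minute periods. All names,
# thresholds, and the topic ARN here are illustrative placeholders.
ALARM_CMD="mon-put-metric-alarm --alarm-name high-cpu \
  --metric-name CPUUtilization --namespace AWS/EC2 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic"
echo "$ALARM_CMD"
```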

— Jeff;

Amazon Linux AMI 2010.11.1 Released

We have released a new version of the Amazon Linux AMI. The new version includes new features, security fixes, package updates, and additional packages. The AWS Management Console will be updated to use these AMIs in the near future.

Users of the existing Amazon Linux AMI can access the package additions and updates through our Yum repository.
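If you’d rather not re-launch, the usual yum workflow will pull everything in. Here is a quick sketch, shown as a dry run that just prints each command; the two packages named are taken from the new-additions list in these release notes.

```shell
# Dry-run sketch: updating an existing Amazon Linux AMI instance from the
# Amazon yum repositories. Each command is printed rather than executed.
UPDATE_CMD="sudo yum update"                    # security fixes and version updates
INSTALL_CMD="sudo yum install nginx memcached"  # two of the newly added packages
echo "$UPDATE_CMD"
echo "$INSTALL_CMD"
```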

New features include:

  • AMI size reduction to 8 GB to simplify usage of the AWS Free Usage Tier.
  • Security updates to the Amazon Linux AMI are automatically installed on the first launch by default. This can be disabled if necessary.
  • The AMI versioning system has changed to a YYYY.MM.# scheme.

The following packages were updated to address security issues:

  • glibc
  • kernel
  • java-1.6.0-openjdk
  • openssl

The following packages were updated to newer versions:

  • bash
  • coreutils
  • gcc44
  • ImageMagick
  • php
  • ruby
  • python
  • tomcat6

We have added a number of new packages including:

  • cacti
  • fping
  • libdmx
  • libmcrypt
  • lighttpd
  • memcached
  • mod_security
  • monit
  • munin
  • nagios
  • nginx
  • rrdtool
  • X11 applications, client utilities, and bitmaps

We also added a number of Perl libraries.

A full list of all changes and additions, along with the AMI IDs, can be found in the Amazon Linux AMI Release Notes.

— Jeff;


New EC2 Instance Type – The Cluster GPU Instance

If you have a mid-range or high-end video card in your desktop PC, it probably contains a specialized processor called a GPU or Graphics Processing Unit. The instruction set and memory architecture of a GPU are designed to handle the types of operations needed to display complex graphics at high speed. The instruction sets typically include instructions for manipulating points in 2D or 3D space and for performing advanced types of calculations. The architecture of a GPU is also designed to handle long streams (usually known as vectors) of points with great efficiency. This takes the form of a deep pipeline and wide, high-bandwidth access to memory.

A few years ago, advanced developers of numerical and scientific applications started to use GPUs to perform general-purpose calculations, termed GPGPU, for General-Purpose computing on Graphics Processing Units. Application development continued to grow as the demands of many additional applications were met with advances in GPU technology, including high performance double precision floating point and ECC memory. However, access to such high-end technology, particularly on HPC cluster infrastructure for tightly coupled applications, has been elusive for many developers. Today we are introducing our latest EC2 instance type (this makes eleven, if you are counting at home) called the Cluster GPU Instance. Now any AWS user can develop and run GPGPU applications on a cost-effective, pay-as-you-go basis.

Similar to the Cluster Compute Instance type that we introduced earlier this year, the Cluster GPU Instance (cg1.4xlarge if you are using the EC2 APIs) has the following specs:

  • A pair of NVIDIA Tesla M2050 “Fermi” GPUs.
  • A pair of quad-core Intel “Nehalem” X5570 processors offering 33.5 ECUs (EC2 Compute Units).
  • 22 GB of RAM.
  • 1690 GB of local instance storage.
  • 10 Gbps Ethernet, with the ability to create low latency, full bisection bandwidth HPC clusters.

Each of the Tesla M2050s contains 448 cores and 3 GB of ECC RAM and is designed to deliver up to 515 gigaflops of double-precision performance when pushed to the limit. Since each instance contains a pair of these processors, you can get slightly more than a trillion FLOPS per Cluster GPU instance. With the ability to cluster these instances over 10 Gbps Ethernet, the compute power delivered for highly data parallel HPC, rendering, and media processing applications is staggering. I like to think of it as a nuclear-powered bulldozer that’s about 1000 feet wide that you can use for just $2.10 per hour!
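The back-of-the-envelope arithmetic behind that “slightly more than a trillion FLOPS” claim works out as a quick sketch:

```shell
# Peak double-precision throughput of a single cg1.4xlarge instance:
# two Tesla M2050s, each rated at up to 515 gigaflops.
GPUS_PER_INSTANCE=2
GFLOPS_PER_GPU=515
echo "$(( GPUS_PER_INSTANCE * GFLOPS_PER_GPU )) gigaflops per instance"   # 1030 - just over a teraflop

# The same peak rate across the default limit of 8 instances:
INSTANCES=8
echo "$(( INSTANCES * GPUS_PER_INSTANCE * GFLOPS_PER_GPU )) gigaflops per 8-instance cluster"   # 8240
```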

Each AWS account can use up to 8 Cluster GPU instances by default, with more available upon request. Similar to Cluster Compute instances, this default limit exists to help us understand your needs for the technology early on and is not a technology limitation. For example, we have now removed this default limit on Cluster Compute instances and have long had users running clusters up through and above 128 nodes, as well as running multiple clusters at once at varied scale.

You’ll need to develop or leverage some specialized code in order to achieve optimal GPU performance, of course. The Tesla GPUs implement the CUDA architecture. After installing the latest NVIDIA driver on your instance, you can make use of the Tesla GPUs in a number of different ways:

  • You can write directly to the low-level CUDA Driver API.
  • You can use higher-level functions in the C Runtime for CUDA.
  • You can use existing higher-level languages such as FORTRAN, Python, C, C++, Java, or Ruby.
  • You can use CUDA versions of well-established packages such as CUBLAS (BLAS), CUFFT (FFT), and LAPACK.
  • You can build new applications in OpenCL (Open Compute Language), a new cross-vendor standard for heterogeneous computing.
  • You can run existing applications that have been adapted to make use of CUDA.
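If you take the C Runtime for CUDA route, the build step is a single nvcc invocation. Here is a sketch, shown as a dry run; the source file name is a placeholder, and the sm_20 architecture flag is my assumption based on the Fermi generation of the Tesla M2050.

```shell
# Dry-run sketch: compiling CUDA C with nvcc once the NVIDIA driver and
# CUDA toolkit are installed. sm_20 targets Fermi-generation GPUs such as
# the Tesla M2050; vector_add.cu is a placeholder source file.
NVCC_CMD="nvcc -arch=sm_20 -o vector_add vector_add.cu"
echo "$NVCC_CMD"
echo "./vector_add"
```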

Elastic MapReduce can now take advantage of the Cluster Compute and Cluster GPU instances, giving you the ability to combine Hadoop’s massively parallel processing architecture with high performance computing. You can focus on your application and Elastic MapReduce will handle workload parallelization, node configuration, scaling, and cluster management.

Here are some resources to help you to learn more about GPUs and GPU programming:


So, what do you think? Can you make use of this “bulldozer” in your application? What can you build with this much on-demand computing power at your fingertips? Leave a comment, let me know!

–Jeff;

Fedora 14 AMIs for Amazon EC2

Earlier this month the Fedora Community released Fedora 14. At that time they also released an Amazon Machine Image (AMI) for EC2.

This is pretty big news — Fedora is one of the most popular Linux distributions around, with millions of copies running worldwide. The new version of Fedora includes new desktop, system administration, and developer features.

Just six months ago it was not possible to launch the previous version of Fedora in the cloud due to kernel incompatibilities. As of this launch, the Fedora team is now treating Amazon EC2 as a tier 1 platform that must be supported for launch.

Here’s a table of Fedora 14 AMI IDs. Be sure to log in as ec2-user, not root!
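Connecting is a one-liner; here is a sketch with a placeholder key pair file and public DNS name:

```shell
# Sketch: log in to a Fedora 14 instance as ec2-user (not root) and use
# sudo once connected. Key pair file and public DNS name are placeholders.
SSH_CMD="ssh -i my-keypair.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com"
echo "$SSH_CMD"
```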

— Jeff;


Converting an S3-Backed Windows AMI to an EBS-Backed AMI

If you are running a Windows Server 2003 AMI it is most likely S3-backed. If you’d like to migrate it to an EBS-backed AMI so that you can take advantage of new features such as the ability to stop it and then restart it later, I’ve got some good news for you.

We’ve just put together a set of step-by-step instructions for converting an S3-backed Windows AMI to an EBS-backed AMI.

You will need to launch the existing AMI, create an EBS volume, and copy some data back and forth. It is pretty straightforward and you need only be able to count to 11 (in decimal) to succeed.

The document also includes information on the best way to resize an EBS-backed Windows instance and outlines some conversion approaches that may appear promising but are actually dead-ends.

— Jeff;