

Adding a Second AWS Availability Zone in Tokyo

by Jeff Barr | on | in Amazon EC2, APAC |

Our hearts go out to those who have suffered through the recent events in Japan. I was relieved to hear from my friends and colleagues there in the days following the earthquake. I’m very impressed by the work that the Japan AWS User Group (JAWS) has done to help some of the companies, schools, and government organizations affected by the disaster to rebuild their IT infrastructure.

We launched our Tokyo Region with a single Availability Zone (“AZ”) about a month ago. At that time we said we would be launching a second Tokyo AZ soon. After a very thorough review of our primary and backup power supplies, we have decided to open up that second Availability Zone, effective today.

As you may know, AWS currently operates in five separate Regions around the world: US East (Northern Virginia), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Each Region is home to one or more Availability Zones. Each Availability Zone in a Region is engineered to be operationally independent of the other Zones, with independent power, cooling, physical security, and network connectivity. As a developer or system architect, you have full control over the Regions and Availability Zones that your application uses.

A number of our customers are already up and running in Tokyo and have encouraged us to open up the second Availability Zone so that they can add fault tolerance by running in more than one AZ. For example, with the opening of the second AZ, developers can use the Amazon Relational Database Service (RDS) in Multi-AZ mode (see my blog post for more information about this), or load balance between web servers running on Amazon EC2 in both AZs.

— Jeff;

PS – We continue to monitor the power situation closely. The AWS Service Health Dashboard is the best place to go for information on any possible service issues.

Amazon EC2 Dedicated Instances

by Jeff Barr | on | in Amazon EC2, Amazon VPC |

We continue to listen to our customers, and we work hard to deliver the services, features, and business models based on what they tell us is most important to them. With hundreds of thousands of customers using Amazon EC2 in various ways, we are able to see trends and patterns in the requests, and to respond accordingly. Some of our customers have told us that they want more network isolation than is provided by “classic EC2.”  We met their needs with Virtual Private Cloud (VPC). Some of those customers wanted to go even further. They have asked for hardware isolation so that they can be sure that no other company is running on the same physical host.

We’re happy to oblige!

Today we are introducing a new EC2 concept: the Dedicated Instance. You can now launch Dedicated Instances within a Virtual Private Cloud on single-tenant hardware. Let’s take a look at the reasons why this might be desirable, and then dive into the specifics, including pricing.

Background
Amazon EC2 uses a technology commonly known as virtualization to run multiple operating systems on a single physical machine. A host operating system, sometimes known as a hypervisor, ensures that each guest operating system receives its fair share of CPU time, memory, and I/O bandwidth to the local disk and to the network. The hypervisor also isolates the guest operating systems from each other so that one guest cannot modify or otherwise interfere with another one on the same machine. We currently use a highly customized version of the Xen hypervisor. As noted in the AWS Security White Paper, we are active participants in the Xen community and track all of the latest developments.

While this logical isolation works really well for the vast majority of EC2 use cases, some of our customers have regulatory or other restrictions that require physical isolation. Dedicated Instances have been introduced to address these requests.

The Specifics

Each Virtual Private Cloud (VPC) and each EC2 instance running in a VPC now has an associated tenancy attribute. Leaving the attribute set to the value “default” specifies the existing behavior: a single physical machine may run instances launched by several different AWS customers.

Setting the tenancy of a VPC to “dedicated” when the VPC is created will ensure that all instances launched in the VPC will run on single-tenant hardware. The tenancy of a VPC cannot be changed after it has been created.

You can also launch Dedicated Instances in a non-dedicated VPC by setting the instance tenancy to “dedicated” when you call RunInstances. This gives you a lot of flexibility; you can continue to use the default tenancy for most of your instances, reserving dedicated tenancy for the subset of instances that have special needs.

This is supported for all EC2 instance types with the exception of Micro, Cluster Compute, and Cluster GPU.
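Here’s a minimal sketch of the two tenancy options described above, using boto3 (the AWS SDK for Python, which postdates this post). The region, CIDR blocks, AMI ID, and instance type are placeholders rather than recommendations.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Option 1: a VPC created with dedicated tenancy; every instance launched
    # into it runs on single-tenant hardware, and the setting cannot be changed later.
    dedicated_vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="dedicated")

    # Option 2: a default-tenancy VPC in which only selected instances are dedicated.
    default_vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")
    subnet = ec2.create_subnet(
        VpcId=default_vpc["Vpc"]["VpcId"], CidrBlock="10.1.0.0/24"
    )

    ec2.run_instances(
        ImageId="ami-12345678",              # placeholder AMI ID
        InstanceType="m1.large",             # Micro, Cluster Compute, and Cluster GPU are excluded
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet["Subnet"]["SubnetId"],
        Placement={"Tenancy": "dedicated"},  # per-instance dedicated tenancy
    )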

It is important to note that launching a set of instances with dedicated tenancy does not in any way guarantee that they’ll share the same hardware (they might, but you have no control over it). We actually go to some trouble to spread them out across several machines in order to minimize the effects of a hardware failure.

Pricing
When you launch a Dedicated Instance, we can’t use the remaining “slots” on the hardware to run instances for other AWS users. Therefore, we incur an opportunity cost when you launch a single Dedicated Instance. Put another way, if you run one Dedicated Instance on a machine that can support 10 instances, 9/10ths of the potential revenue from that machine is lost to us.

In order to keep things simple (and to keep you from wasting your time trying to figure out how many instances can run on a single piece of hardware), we add a $10/hour charge whenever you have at least one Dedicated Instance running in a Region. When figured as a per-instance cost, this charge will asymptotically approach $0 (per instance) for customers that run hundreds or thousands of instances in a Region.

We also add a modest premium to the On-Demand pricing for the instance to represent the added value of being able to run it in a dedicated fashion. You can use EC2 Reserved Instances to lower your overall costs in situations where at least part of your demand for EC2 instances is predictable.
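To see how the $10/hour regional fee amortizes across a fleet, here’s a small back-of-the-envelope calculation in Python. The per-instance hourly rate below is a placeholder, not an actual AWS price.

    # How the flat regional fee fades as the fleet grows.
    REGION_FEE_PER_HOUR = 10.00      # charged while at least one Dedicated Instance runs in a Region
    INSTANCE_RATE_PER_HOUR = 0.84    # placeholder On-Demand rate plus dedicated premium

    def effective_hourly_rate(instance_count: int) -> float:
        """Per-instance hourly cost, including the amortized regional fee."""
        total = REGION_FEE_PER_HOUR + instance_count * INSTANCE_RATE_PER_HOUR
        return total / instance_count

    for n in (1, 10, 100, 1000):
        print(f"{n:5d} instances -> ${effective_hourly_rate(n):.4f} per instance-hour")

With a single instance the regional fee dominates; with a thousand instances it adds only a penny to each instance-hour, which is the asymptotic behavior described above.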

— Jeff;

 

Build a Cluster Computing Environment in Under 10 minutes

by Jeff Barr | on | in Amazon EC2, Architecture, Hardware, HPC, Science |

We’ve created a new video tutorial, which describes how to set up a cluster of high-performance compute nodes in under 10 minutes. Follow along with the tutorial to get a feel for how to provision high performance systems with Amazon EC2 – we’ll even cover the cost of the resources you use, through a $20 free service credit.

Why HPC?

Data is at the heart of many modern businesses. The tools and products that we create in turn generate complex datasets which are increasing in size, scope and importance. Whether we are looking for meaning within the bases of our genomes, performing risk assessments on the markets or reporting on click-through traffic from our websites, these data hold valuable information which can drive the state of the art forward.

Constraints are everywhere when dealing with data and its associated analysis, but few are as restrictive as the time and effort it takes to procure, provision and maintain the high performance compute servers which drive that analysis.

The cluster compute instance sizes available on Amazon EC2 can greatly reduce this constraint, and give you the freedom to run high-specification analyses on demand, as and when you need them. Amazon EC2 takes care of provisioning and monitoring your compute cluster and storage, leaving you more time to dive into your data.

A guided tour

To demonstrate the agility this approach provides, I made a short video tutorial which guides you through how to provision, configure and run a tightly coupled molecular dynamics simulation using cluster compute instances. The whole cluster is up and running in under 10 minutes.

Start the tutorial!

To help get a feel for this environment, we’re also providing $20 of service credits (enough to cover the cost of the demo), so you can follow along with this tutorial for free. To register for your free credits, just follow the link on the tutorial page.

In addition to getting up and running quickly, each cluster compute instance is no slouch either. Hardware virtualisation lets your code get closer to the dual quad-core Nehalem processors, and full-bisection 10 Gbps networking provides high speed communication between instances. Multi-core GPUs are also available – a perfect fit for large scale computational simulation or rendering.

Just as in other fields, cloud infrastructure can help reduce the ‘muck’ and greatly lower the barrier to entry associated with high performance computing. We hope this short video will give you a flavour for things.

Get in touch

Feel free to drop me a line if you have any questions, or you can follow along on Twitter. I also made a longer form video, which includes a wider discussion on high performance computing with Amazon EC2.

~ Matt

Updated Amazon Linux AMI (2011.02) Released

by Jeff Barr | on | in Amazon EC2 |

We released an updated version of the Amazon Linux AMI earlier this week. It is available in all AWS Regions for all instance types.

Here’s what’s new:

  • Default compiler upgraded from GCC 4.1 to GCC 4.4.
  • The AMI kernel is now based on the 2.6.35.11 release.
  • An HVM AMI was released to support the Cluster Compute (cc1.4xlarge) and Cluster GPU (cg1.4xlarge) instance types.
  • Default filesystem type for the AMI root filesystem has been changed from ext3 to ext4.
  • The Amazon Linux AMI now uses upstart instead of sysvinit when booting.
  • The default Yum configuration on Amazon Linux AMI enables fail-over access to neighboring regions in case the repository in the local region is not accessible.

There’s more information and a complete list of new and updated packages in the Amazon Linux Release Notes.

 — Jeff;

 

Now Available: Windows Server 2008 R2 on Amazon EC2

by Jeff Barr | on | in Amazon EC2, Windows |

Today we are adding new options for our customers running Windows and SQL Server environments on Amazon EC2. In addition to running Windows Server 2003 and 2008, you can now run Windows Server 2008 R2. Sharing the kernel with Windows 7, this release of Windows includes additional Active Directory features, support for version 7.5 of IIS, new management tools, reduced boot time, and enhanced I/O performance. We are also adding support for SQL Server 2008 R2 and we are introducing Reserved Instances for SQL Server.

You can now launch instances of Windows Server 2008 R2 in four different flavors:

  • Core – A scaled-down version of Windows Server, with the minimum set of server roles.
  • Base – A basic installation of Windows Server 2008 R2.
  • Base with IIS and SQL Server Express – A starting point for Windows developers.
  • SQL Server Standard 2008 R2 – A base installation of Windows Server 2008 R2 with SQL Server Standard 2008 R2.

Here are the details:

  • All of these AMIs are available for immediate use in every Region and on most 64-bit instance types, excluding the t1.micro and Cluster Compute families.
  • We plan to add support for running Windows Server 2008 R2 in the Amazon Virtual Private Cloud (VPC).
  • The AMIs support English, Italian, French, Spanish, German, Traditional Chinese, Korean, and Japanese. The languages are supported only within the applicable regions — European languages in the EU and Asian languages in Singapore and Tokyo.
  • Windows Server 2008 R2 is available at the same price as previous versions of Windows on EC2. Reserved Instances and Spot Instances are also available.

Update: You can use the AWS VM Import feature to bring existing virtual machines to EC2. VM Import has been updated and now supports the 64-bit Standard, Datacenter, and Enterprise editions of Windows Server 2008 R2.

To get started, you can visit the Windows section of the AMI catalog or select “Windows 2008 R2” in the Quick Start menu when you launch a new instance. Microsoft has also posted additional Amazon Machine Images with Windows 2008 R2 in the Windows section of the AMI Catalog.
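If you prefer the API to the console, here’s a hedged sketch of finding and launching one of these AMIs with boto3 (which postdates this post). The name pattern in the filter is an assumption about how the Windows Server 2008 R2 AMIs are labeled, so check it against the AMI catalog before relying on it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Amazon-owned Windows AMIs whose name suggests Windows Server 2008 R2.
    # The name pattern is an assumption; verify against the AMI catalog.
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "platform", "Values": ["windows"]},
            {"Name": "name", "Values": ["*Windows*2008*R2*"]},
        ],
    )

    # Launch the most recently created match on a 64-bit instance type
    # (t1.micro and the Cluster Compute families are not supported).
    latest = max(images["Images"], key=lambda image: image["CreationDate"])
    ec2.run_instances(
        ImageId=latest["ImageId"],
        InstanceType="m1.large",
        MinCount=1,
        MaxCount=1,
    )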

 

I look forward to hearing from you as you put Windows 2008 R2 to use. Leave a comment or send email to awseditor@amazon.com.

— Jeff;

 

A New Approach to Amazon EC2 Networking

by Jeff Barr | on | in Amazon EC2, Amazon VPC |

You’ve been able to use the Amazon Virtual Private Cloud to construct a secure bridge between your existing IT infrastructure and the AWS cloud using an encrypted VPN connection. All communication between Amazon EC2 instances running within a particular VPC and the outside world (the Internet) was routed across the VPN connection.

Today we are releasing a set of features that expand the power and value of the Virtual Private Cloud. You can think of this new collection of features as virtual networking for Amazon EC2. While I would hate to be accused of hyperbole, I do think that today’s release legitimately qualifies as massive, one that may very well change the way that you think about EC2 and how it can be put to use in your environment.

The features include:

  • A new VPC Wizard to streamline the setup process for a new VPC.
  • Full control of network topology including subnets and routing.
  • Access controls at the subnet and instance level, including rules for outbound traffic.
  • Internet access via an Internet Gateway.
  • Elastic IP Addresses for EC2 instances within a VPC.
  • Support for Network Address Translation (NAT).
  • Option to create a VPC that does not have a VPN connection.

You can now create a network topology in the AWS cloud that closely resembles the one in your physical data center, including public, private, and DMZ subnets. Instead of dealing with cables, routers, and switches, you can design and instantiate your network programmatically. You can use the AWS Management Console (including a slick new wizard), the command line tools, or the APIs. This means that you could store your entire network layout in abstract form, and then realize it on demand.
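For example, here’s a minimal sketch of realizing a simple layout programmatically with boto3 (the AWS SDK for Python, which postdates this post): one VPC, one public subnet, an Internet Gateway, and a route table that sends Internet-bound traffic through the gateway. The CIDR blocks are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A VPC with a single public subnet.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]

    # An Internet Gateway attached to the VPC.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # A route table for the subnet: send 0.0.0.0/0 through the Internet Gateway.
    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)

Because the whole layout is a handful of API calls, it can be kept in version control and recreated on demand.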

VPC Wizard
The new VPC Wizard lets you get started with any one of four predefined network architectures in under a minute:

 

The following architectures are available in the wizard:

  • VPC with a single public subnet – Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network access control lists and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
  • VPC with public and private subnets – In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet.  Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation.
  • VPC with Internet and VPN access – This configuration adds an IPsec Virtual Private Network (VPN) connection between your VPC and your data center, effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your VPC.
  • VPC with VPN only access – Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec Virtual Private Network (VPN) tunnel.

You can start with one of these architectures and then modify it to suit your particular needs, or you can bypass the wizard and build your VPC piece-by-piece. The choice is yours, as is always the case with AWS.

After you choose an architecture, the VPC Wizard will prompt you for the IP addresses and other information that it needs to have in order to create the VPC:

Your VPC will be ready to go within seconds; you need only launch some EC2 instances within it (always on a specific subnet) to be up and running.

Route Tables
Your VPC will use one or more Route Tables to direct traffic to and from the Internet and VPN Gateways (and your NAT instance, which I haven’t told you about yet) as desired, based on the CIDR block of the destination. Each VPC has a default, or main, routing table. You can create additional routing tables and attach them to individual subnets if you’d like:


Internet Gateways
You can now create an Internet Gateway within your VPC in order to give you the ability to route traffic to and from the Internet using a Route Table (see above). It can also be used to streamline access to other parts of AWS, including Amazon S3 (in the absence of an Internet Gateway you’d have to send traffic out through the VPN connection and then back across the public Internet to reach S3).

Network ACLs
You can now create and attach a Network ACL (Access Control List) to your subnets if you’d like. You have full control (using a combination of Allow and Deny rules) of the traffic that flows into and out of each subnet and gateway. You can filter inbound and outbound traffic, and you can filter on any protocol that you’d like:


You can also use AWS Identity and Access Management to restrict access to the APIs and resources related to setting up and managing Network ACLs.
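As a concrete illustration, here’s a minimal sketch (boto3, which postdates this post, with placeholder IDs) of a subnet ACL that allows inbound HTTP and the matching outbound responses. Network ACLs are stateless, so both directions need explicit rules, and anything not explicitly allowed falls through to the ACL’s default deny rule.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    acl_id = ec2.create_network_acl(VpcId="vpc-12345678")["NetworkAcl"]["NetworkAclId"]

    # Rule 100 (inbound): allow TCP port 80 from anywhere.
    ec2.create_network_acl_entry(
        NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
        Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 80, "To": 80},
    )

    # Rule 100 (outbound): allow the ephemeral ports used for HTTP responses.
    ec2.create_network_acl_entry(
        NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
        Egress=True, CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
    )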

Security Groups
You can now use Security Groups on the EC2 instances that you launch within your VPC. When used in a VPC, Security Groups gain a number of powerful new features, including outbound traffic filtering and the ability to create rules that match any IP protocol, including TCP, UDP, and ICMP.

You can also change (add and remove) these security groups on running EC2 instances. The AWS Management Console sports a much-improved user interface for security groups; you can now make multiple changes to a group and then apply all of them in one fell swoop.
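Here’s a hedged sketch of the new outbound filtering in action, again with boto3 and placeholder IDs: a VPC security group that accepts inbound HTTPS and permits only outbound HTTPS.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    sg_id = ec2.create_security_group(
        GroupName="web-tier", Description="HTTPS in, HTTPS out", VpcId="vpc-12345678",
    )["GroupId"]

    # Inbound: HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )

    # Outbound: VPC security groups start with an allow-all egress rule;
    # remove it, then allow only HTTPS.
    ec2.revoke_security_group_egress(
        GroupId=sg_id,
        IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )
    ec2.authorize_security_group_egress(
        GroupId=sg_id,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )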

Elastic IP Addresses
You can now assign Elastic IP Addresses to the EC2 instances that are running in your VPC, with one small caveat: these addresses are currently allocated from a separate pool and you can’t assign an existing (non-VPC) Elastic IP Address to an instance running in a VPC.

NAT Addressing
You can now launch a special “NAT Instance” and route traffic from your private subnet to it. Doing this allows the private instances to initiate outbound connections to the Internet without revealing their IP addresses. A NAT Instance is really just an EC2 instance running a NAT AMI that we supply; you’ll pay the usual EC2 hourly rate for it.
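Wiring this up comes down to a route table entry that points the private subnet’s default route at the NAT Instance. Here’s a minimal sketch with boto3 and placeholder IDs; note that the source/destination check must be disabled on the NAT Instance so that it can forward traffic that isn’t addressed to it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    nat_instance_id = "i-0123abcd"         # the NAT Instance, launched in the public subnet
    private_subnet_id = "subnet-aaaa1111"  # placeholder

    # The NAT Instance forwards traffic that isn't addressed to it, so the
    # source/destination check has to be turned off.
    ec2.modify_instance_attribute(InstanceId=nat_instance_id, SourceDestCheck={"Value": False})

    # A route table for the private subnet whose default route points at the NAT Instance.
    rtb_id = ec2.create_route_table(VpcId="vpc-12345678")["RouteTable"]["RouteTableId"]
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=private_subnet_id)
    ec2.create_route(
        RouteTableId=rtb_id,
        DestinationCidrBlock="0.0.0.0/0",
        InstanceId=nat_instance_id,
    )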

ISV Support
Several companies have been working with these new features and have released (or are just about to release) some very powerful new tools. Here’s what I know about:

 

The OpenVPN Access Server is now available as an EC2 AMI and can be launched within a VPC. This is a complete, software-based VPN solution that you can run within a public subnet of your VPC. You can use the web-based administrative GUI to check status, control networking configuration, permissions, and other settings.

 

CohesiveFT’s VPN-Cubed product now supports a number of new scenarios.

By running the VPN-Cubed manager in the public section of a VPC, you can connect multiple IPsec gateways to your VPC. You can even do this using security appliances such as the Cisco ASA, Juniper NetScreen, and SonicWall devices, and you don’t need BGP.

VPN-Cubed also lets you run grid and clustering products that depend on support for multicast protocols.

 

CloudSwitch further enhances VPC’s security and networking capabilities. They support full encryption of data at rest and in transit, key management, and network encryption between EC2 instances and between a data center and EC2 instances. The net-net is complete isolation of virtual machines, data, and communications with no modifications to the virtual machines or the networking configuration.

 

The Riverbed Cloud Steelhead extends Riverbed’s WAN optimization solutions to the VPC, making it easier and faster to migrate and access applications and data in the cloud. It is available on an elastic, subscription-based pricing model with a portal-based management system.

 

Pricing

I think this is the best part of the Virtual Private Cloud: you can deploy a feature-packed private network at no additional charge! We don’t charge you for creating a VPC, subnet, ACLs, security groups, routing tables, or VPN Gateway, and there is no charge for traffic between S3 and your Amazon EC2 instances in VPC. Running Instances (including NAT instances), Elastic Block Storage, VPN Connections, Internet bandwidth, and unmapped Elastic IPs will incur our usual charges.

Internet Gateway support in VPC has been a high priority for our customers, and I’m excited about all the new ways VPC can be used. For example, VPC is a great place for applications that require the security provided by outbound filtering, network ACLs, and NAT functionality. Or you could use VPC to host public-facing web servers that have VPN-based network connectivity to your intranet, enabling you to use your internal authentication systems. I’m sure your ideas are better than mine; leave me a comment and let me know what you think!

— Jeff;

Even More EC2 Goodies in the AWS Management Console

by Jeff Barr | on | in Amazon EC2 |

We’ve added some new features to the EC2 tab of the AWS Management Console to make it even more powerful and even easier to use.

You can now change the instance type of a stopped, EBS-backed EC2 instance. This means that you can scale up or scale down as your needs change. The new instance type must be compatible with the AMI that you used to boot the instance, so you can’t change from 32-bit to 64-bit or vice versa.

The Launch Instances Wizard now flags AMIs that will not incur any additional charges when used with an EC2 instance running within the AWS free usage tier:

You can now control what happens when an EBS-backed instance shuts itself down. You can choose to stop the instance (so that it can be started again later) or to terminate the instance:

You can now modify the EC2 user data (a string passed to the instance on startup) while the instance is stopped:
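These console features map directly onto the ModifyInstanceAttribute API. Here’s a hedged sketch with boto3 (which postdates this post) against a stopped, EBS-backed instance; the instance ID, instance type, and user data are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123abcd"   # placeholder

    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Change the instance type (must stay compatible with the AMI, e.g. the same 32/64-bit architecture).
    ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m1.large"})

    # Choose what happens when the instance shuts itself down: "stop" or "terminate".
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceInitiatedShutdownBehavior={"Value": "stop"},
    )

    # Update the user data passed to the instance on startup.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        UserData={"Value": b"#!/bin/bash\necho 'refreshed user data'\n"},
    )

    ec2.start_instances(InstanceIds=[instance_id])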

We’ll continue to add features to the AWS Management Console to make it even more powerful and easier to use. Please feel free to leave us comments and suggestions.

— Jeff;

Run SUSE Linux Enterprise Server on Cluster Compute Instances

by Jeff Barr | on | in Amazon EC2 |

You can now run SUSE Linux Enterprise Server on EC2’s Cluster Compute and Cluster GPU instances. As I noted in the post that I wrote last year when this distribution became available on the other instance types, SUSE Linux Enterprise Server is a proven, commercially supported Linux platform that is ideal for development, test, and production workloads. This is the same operating system that runs the IBM Watson DeepQA application that competed against human opponents (and won) on Jeopardy just last month.

After reading Tony Pearson’s article (How to Build Your Own Watson Jr. In Your Basement), I set out to see how his setup could be replicated on an hourly, pay-as-you-go basis using AWS. Here’s what I came up with:

  1. Buy the Hardware. With AWS there’s nothing to buy. Simply choose from among the various EC2 instance types. A couple of Cluster Compute Quadruple Extra Large instances should do the trick:
  2. Establish Networking. Tony recommends 1 Gigabit Ethernet. Create an EC2 Placement Group, and launch the Cluster Compute instances within it to enjoy 10 Gigabit non-blocking connectivity between the instances (see the sketch after this list):

  3. Install Linux and Middleware. The article recommends SUSE Linux Enterprise Server. You can run it on a Cluster Compute instance by selecting it from the Launch Instances Wizard:

    Launch the instances within the placement group in order to get the 10 Gigabit non-blocking connectivity:

    You can use the local storage on the instance, or you can create a 300 GB Elastic Block Store volume for the reference data:

  4. Download Information Sources. Tony recommends the use of NFS to share files within the cluster. That will work just fine on EC2; see the Linux-NFS-HOWTO for more information. He also notes that you will need a relational database. You can use Apache Derby per his recommendation, or you can start up an Amazon RDS instance so that you don’t have to worry about backups, scaling or other administrative chores (if you do this you might not need the 300 GB EBS volume created in the previous step):

    You’ll need some information sources. Check out the AWS Public Data Sets to get started.

  5. The Query Panel – Parsing the Question. You can download and install OpenNLP and OpenCyc as described in the article. You can run most applications (open source and commercial) on an EC2 instance without making any changes.
  6. Unstructured Information Management Architecture. This part of the article is a bit hand-wavey. It basically boils down to “write a whole lot of code around the Apache UIMA framework.”
  7. Parallel Processing. The original Watson application ran in parallel across 2,880 cores. While this would be prohibitive for a basement setup, it is possible to get this much processing power from AWS in short order and (even more importantly) to put it to productive use. Tony recommends the use of the UIMA-AS package for asynchronous scale-out, all managed by Hadoop. Fortunately, Amazon Elastic MapReduce is based on Hadoop, so we are all set:
  8. Testing. Tony recommends a batch-based approach to testing, with questions stored in text files to allow for repetitive testing. Good enough, but you still need to evaluate all of the answers and decide if your tuning is taking you in the desired direction. I’d recommend that you use the Amazon Mechanical Turk instead. You could easily run A/B tests across multiple generations of results.
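The placement group, cluster instances, and EBS volume from steps 2 and 3 can be scripted in a few calls. Here’s a hedged sketch with boto3 (which postdates this post); the AMI ID and Availability Zone are placeholders, and the AMI would need to be a SUSE Linux Enterprise Server image that supports the cluster instance types.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Step 2: a placement group for 10 Gigabit non-blocking connectivity.
    ec2.create_placement_group(GroupName="watson-jr", Strategy="cluster")

    # Step 3: two Cluster Compute Quadruple Extra Large instances in the group.
    reservation = ec2.run_instances(
        ImageId="ami-12345678",          # placeholder SUSE Linux Enterprise Server AMI
        InstanceType="cc1.4xlarge",
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "watson-jr", "AvailabilityZone": "us-east-1a"},
    )
    instance_id = reservation["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # A 300 GB EBS volume for the reference data, attached to the first instance.
    volume_id = ec2.create_volume(Size=300, AvailabilityZone="us-east-1a")["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
    ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device="/dev/sdf")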

I really liked Tony’s article because it took something big and complicated and reduced it to a series of smaller and more approachable steps. I hope that you see from my notes above that you can easily create and manage the same types of infrastructure, run the same operating system, and the same applications using AWS, without the need to lift a screwdriver or to max out your credit cards. You could also use Amazon CloudFormation to automate the entire setup so that you could re-create it on demand or make copies for your friends.

Read more about features and pricing on our SUSE Linux Enterprise Server page.

— Jeff;

JumpBox for the AWS Free Usage Tier

by Jeff Barr | on | in Amazon EC2, Cool Sites |

We’ve teamed up with JumpBox to make it even easier and less expensive for you to host a WordPress blog, publish a web site with Drupal, run a Wiki with MediaWiki, or publish content with Joomla. You can benefit from two separate offers:

  • The new JumpBox free tier trial for AWS customers lets you launch and run the applications listed above at no charge. There will be a small charge for EBS storage (see below).
  • If you qualify for the AWS free usage tier, it will give you sufficient EC2 time, S3 storage space, and internet data transfer to host the application and to handle a meaningful amount of traffic.

Any AWS user (free or not) can take advantage of JumpBox’s offer, paying the usual rates for AWS. The AWS free usage tier is subject to the AWS Free Usage Tier Offer Terms; use of AWS in excess of free usage amounts will be charged standard AWS rates.

Note: The JumpBox machine images are larger than the 10 GB of EBS storage provided in the free usage tier; if you run them in the free usage tier, you’ll be charged $1.50 per month for an additional 10 GB of EBS storage.

The applications are already installed and configured; there’s nothing to set up. The application will run on an EC2 instance of its own; you have full control of the configuration and you can install themes, add-ins, and the like. Each application includes a configuration portal to allow you to configure the application and to make backups.

Here’s a tour, starting with the 1-page signup form:

After a successful signup, JumpBox launches the application:

The application will be ready to run in a very short time (less than a minute for me):

The next step is to configure the application (I chose to launch Joomla):

And I am up and running:

You can access all of the administrative and configuration options from a password-protected control panel that runs on the EC2 instance that’s hosting the application:

Here are the links that you need to get started:

As you can probably see from the tour, you can be up and running with any of these applications in minutes. As long as you are eligible for and stay within the provisions of the AWS free usage tier, you can do this for free. I’m looking forward to hearing your thoughts and success stories; leave me a comment below.

— Jeff;

ActivePython AMI from ActiveState

by Jeff Barr | on | in Amazon EC2, Developer Tools |

The folks at ActiveState have cooked up an ActivePython AMI to make it easy for you to build and deploy web applications written in Python. You can get started in minutes without having to download, install, or configure anything.

The AMI is based on the 64-bit version of Ubuntu and includes MySQL, SQLite, Apache, ActivePython, Django, Memcached, Nginx, and a lot of other useful components. You can run the AMI on the Micro, Large, and Extra Large instance types.

They have put together a nice suite of resources around the AMI including a tutorial (Building a Python-Centric Web Server in the Cloud) and a set of Best Practice Notes on Cloud Computing With Python.

Check it out, and let me know what you think!

— Jeff;