EC2 VM Import Connector

by Jeff Barr | in Amazon EC2 |

The new Amazon EC2 VM Import Connector is a virtual appliance (vApp) plug-in for VMware vCenter. Once installed, you can import virtual machines from your VMware vSphere infrastructure into Amazon EC2 using the GUI that you are already familiar with. This feature builds on top of the VM Import feature that I blogged about late last year.

The Connector stores separate AWS credentials for each vCenter user so that multiple users (each with separate AWS accounts) can use the same Connector. Each account must be subscribed to EC2 in order to use the Connector.

You can download the Connector from the AWS Developer Tools page. You’ll need to make sure that you have adequate disk space available, and you’ll also need to verify that certain network ports are open (see the EC2 User Guide for more information). The Connector is shipped as an OVF template that you will deploy with your vSphere Client.

After you’ve installed and configured the Connector, you can import any virtual machine that meets the following requirements:

  • Runs Windows Server 2008 SP2 (32 or 64 bit).
  • Is currently turned off.
  • Uses a single virtual hard drive (multiple partitions are OK) no larger than one terabyte.

 Importing is a simple matter of selecting a virtual machine and clicking on the Import to EC2 tab:

The import process can take a couple of hours, depending on the speed and utilization of your Internet connection. You can monitor the progress using the Tasks and Events tab of the vSphere Client.

As is always the case with AWS, we started out with a core feature (VM Import) and are now adding additional capabilities to it. Still on the drawing board (but getting closer every day) are additional features such as VM Export (create a virtual machine image from an EC2 instance or AMI), along with support for additional image formats and operating systems.

— Jeff;

 

 

Now Open: AWS Region in Tokyo

by Jeff Barr | in Amazon CloudFront, Amazon CloudWatch, Amazon EC2, Amazon Elastic Load Balancer, Amazon RDS, Amazon S3, Amazon Simple Notification Service, Amazon SimpleDB, Amazon SQS, APAC, AWS Identity and Access Management |

I have made many visits to Japan over the last several years to speak at conferences and to meet with developers. I really enjoy the people, the strong sense of community, and the cuisine.

Over the years I have learned that there’s really no substitute for sitting down, face to face, with customers and potential customers. You can learn things in a single meeting that might not be obvious after a dozen emails. You can also get a sense for the environment in which they (and their users or customers) have to operate. For example, developers in Japan have told me that latency and in-country data storage are of great importance to them.

Long story short, we’ve just opened up an AWS Region in Japan, Tokyo to be precise. The new Region supports Amazon EC2 (including Elastic IP Addresses, Amazon CloudWatch, Elastic Block Storage, Elastic Load Balancing, VM Import, and Auto Scaling), Amazon S3, Amazon SimpleDB, the Amazon Relational Database Service, the Amazon Simple Queue Service, the Amazon Simple Notification Service, Amazon Route 53, and Amazon CloudFront. All of the usual EC2 instance types are available, with the exception of the Cluster Compute and Cluster GPU instance types. The page for each service includes full pricing information for the Region.

Although I can’t share the exact location of the Region with you, I can tell you that private beta testers have been putting it to the test and have reported single-digit latency (under 10 ms) from locations in and around Tokyo. They were very pleased with the observed latency and performance.

Existing toolkits and tools can make use of the new Tokyo Region with a simple change of endpoints. The documentation for each service lists all of its endpoints.
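For example, here’s a quick sketch of pointing the EC2 command line tools at the new Region, either per-command with the --region option or for a whole session via the EC2_URL environment variable (the AMI ID and key pair name are placeholders):

ec2-describe-instances --region ap-northeast-1

export EC2_URL=https://ec2.ap-northeast-1.amazonaws.com
ec2-run-instances ami-12345678 -t m1.small -k tokyo-keypair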

This offering goes beyond the services themselves. We also have the following resources available:

Put it all together and developers in Japan can now build applications that respond very quickly and that store data within the country.

 

The JAWS-UG (Japan AWS User Group) is another important resource. The group is headquartered in Tokyo, with regional branches in Osaka and other cities. I have spoken at JAWS meetings in Tokyo and Osaka and they are always a lot of fun. I start the meeting with an AWS update. The rest of the meeting is devoted to short “lightning” talks related to AWS or to a product built with AWS. For example, the developer of the Cacoo drawing application spoke at the initial JAWS event in Osaka in late February. Cacoo runs on AWS and features real-time collaborative drawing.

We’ve been working with some of our customers to bring their apps to the new Region ahead of the official launch. Here is a sampling:

Zynga is now running a number of their applications here. In fact (I promise I am not making this up) I saw a middle-aged man playing FarmVille on his Android phone on the subway when I was in Japan last month. He was moving sheep and fences around with rapid-fire precision!

 

The enStratus cloud management and governance tools support the new region.

enStratus supports role-based access, management of encryption keys, intrusion detection and alerting, authentication, audit logging, and reporting.

All of the enStratus AMIs are available. The tools feature a fully localized user interface (Cloud Manager, Cluster Manager, User Manager, and Report) that can display text in English, Japanese, Korean, Traditional Chinese, and French.

enStratus also provides local currency support and can display estimated operational costs in JPY (Japanese yen) and a number of other currencies.

 

Sekai Camera is a very cool augmented reality application for iPhones and Android devices. It uses the built-in camera on each device to display a tagged, augmented version of what the camera is looking at. Users can leave “air tags” at any geographical location. The application is built on AWS and makes use of a number of services including EC2, S3, SimpleDB, SQS, and Elastic Load Balancing. Moving the application to the Tokyo Region will make it even more responsive and interactive.

 

G-Mode Games is running a multi-user version of Tetris in the new Region. The game is available for the iPhone and the iPod and allows you to play against another person.

 

Cloudworks is a management tool for AWS built in Japan, with a Japanese-language user interface. It includes a daily usage report, scheduled jobs, and a history of all user actions. It also supports AWS Identity and Access Management (IAM) and copying of AMIs from region to region.

 

Browser 3Gokushi is a well-established RPG (Role-Playing Game) that is now running in the new region.

 

Here’s some additional support that came in after the original post:

Here are some of the jobs that we have open in Japan:

— Jeff;

Note: Tetris ® & © 1985~2011 Tetris Holding. Tetris logos, Tetris theme song and Tetriminos are trademarks of Tetris Holding. The Tetris trade dress is owned by Tetris Holding. Licensed to The Tetris Company. Game Design by Alexey Pajitnov. Original Logo Design by Roger Dean. All Rights Reserved. Sub-licensed to Electronic Arts Inc. and G-mode, Inc.

Upcoming Event: AWS Tech Summit, London

by Jeff Barr | in Amazon EC2, Amazon Elastic Load Balancer, Amazon RDS, Amazon S3, Amazon SES, Amazon SimpleDB, Architecture, Auto Scaling, AWS Elastic Beanstalk, Developer Tools, Europe, Events |

I’m very pleased to invite you all to join the AWS team in London for our first Tech Summit of 2011. We’ll take a quick, high-level tour of the Amazon Web Services cloud platform before diving into the technical details of how to build highly available, fault tolerant systems, host databases and deploy Java applications with Elastic Beanstalk.

We’re also delighted to be joined by three expert customers who will be discussing their own real-world use of our services:

So if you’re a developer, architect, sysadmin or DBA, we look forward to welcoming you to the Congress Centre in London on the 17th of March.

We had some great feedback from our last summit in November, and this event looks set to be our best yet.

The event is free, but you’ll need to register.

~ Matt

New AWS Console Features: Forced Detach, Termination Protection

by Jeff Barr | in Amazon EC2 |

We’ve added two new features to the AWS Management Console: forced detach of EBS volumes and termination protection.

Forced Detach
From time to time an Elastic Block Storage (EBS) volume will refuse to cleanly detach itself from an EC2 instance. This can occur for several reasons, including continued system or user activity on the volume. Under normal circumstances you should always log in to the instance, terminate any processes that are reading or writing data to the volume, and unmount the volume before detaching it. If the volume fails to detach after you have taken all of these precautions, you can forcibly detach it. This is a last-resort option since there may still be some write operations pending for the volume. The console now includes a new option that allows you to forcibly detach a volume:

This function has been present in the EC2 APIs and in the command line tools for some time. You can now access it from the console.
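For reference, the command line version looks something like this (the volume and instance IDs are placeholders):

ec2-detach-volume vol-12345678 --instance i-87654321 --force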

Termination Protection
EC2 termination protection has been around for a while and is now accessible from the console:

Once activated for an EC2 instance, this feature blocks attempts to terminate an instance by way of the command line tools or the EC2 API. This gives you an extra measure of protection for those “precious” instances that you would prefer not to shut down by accident. For example, my primary EC2 instance has been running for 949 days and I would hate to terminate it by accident:

This is more for sentimental value than anything else, but I enjoy having this instance around from the early days of EC2. Here’s what happens if you try to terminate a protected instance:

Termination protection is also useful when several users share a single AWS account.
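If you prefer to script things, the console checkbox corresponds to the disableApiTermination instance attribute; here’s a sketch with a placeholder instance ID:

ec2-modify-instance-attribute i-12345678 --disable-api-termination true

Pass false instead of true when you eventually do want to terminate the instance.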

— Jeff;

Rack and the Beanstalk

by Jeff Barr | in Amazon EC2, Amazon RDS, AWS Elastic Beanstalk, Developer Tools |

AWS Elastic Beanstalk deploys and manages your web application using Java, Tomcat and the Amazon cloud infrastructure. This means that, in addition to Java, Elastic Beanstalk can host applications written in any language that is compatible with the Java VM.

This includes languages such as Clojure, Scala and JRuby – in this post we start to think out of the box, and show you how to run any Rack-based Ruby application (including Rails and Sinatra) on the Elastic Beanstalk platform. You get all the benefits of deploying to Elastic Beanstalk (autoscaling, load balancing, versions and environments) with the joys of developing in Ruby.

Getting started

We’ll package a new Rails app into a Java .war file that will run through JRuby on the Tomcat application server. There’s no smoke and mirrors here – Rails runs natively on JRuby, a Ruby implementation written in Java.

Java up

If you’ve not used Java or JRuby before, you’ll need to install them. Java is available for download, or via your favourite package repository and is usually already installed on Mac OS X. The latest version of JRuby is available here. It’s just a case of downloading the latest binaries for your platform (or source, if you are so inclined), and unpacking them into your path – full details here. I used v1.5.6 for this post.

Gem cutting

Ruby applications and modules are often distributed as Rubygems. JRuby maintains a separate Rubygem library, so we’ll need to install a few gems to get started, including Rails, the Java database adaptors and Warbler, which we’ll use to package our application for deployment to AWS Elastic Beanstalk. Assuming you added the JRuby binaries to your path, you can run the following on your command line:

jruby -S gem install rails

jruby -S gem install warbler

jruby -S gem install jruby-openssl

jruby -S gem install activerecord-jdbcsqlite3-adapter

jruby -S gem install activerecord-jdbcmysql-adapter

To skip the lengthy documentation generation, just throw '--no-ri --no-rdoc' on the end of each of these commands.

A new hope

We can now create a new Rails application, and set it up for deployment under the JVM application container of Elastic Beanstalk. We can use a preset template, provided by jruby.org, to get us up and running quickly. Again, on the command line, run:

jruby -S rails new aws_on_rails -m http://jruby.org/rails3.rb

This will create a new Rails application in a directory called ‘aws_on_rails’. Since it’s so easy with Rails, let’s make our example app do something interesting. For this, we’ll first need to set up our database configuration to use our Java database drivers. To do this, define the gems in the application’s Gemfile, just beneath the line that starts gem ‘jdbc-sqlite3’:

gem 'activerecord-jdbcmysql-adapter', :require => false

gem 'jruby-openssl'

Now we set up the database configuration details – add these to your app’s config/database.yml file.

development:  
  adapter: jdbcsqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000 

production:
  adapter: jdbcmysql
  driver: com.mysql.jdbc.Driver
  username: admin
  password: <password>
  pool: 5
  timeout: 5000
  url: jdbc:mysql://<hostname>/<db-name>

If you don’t have a MySQL database, you can create one quickly using the Amazon Relational Database Service. Just log into the AWS Management Console, go to the RDS tab, and click ‘Launch DB Instance’. You can find more details about Amazon RDS here. The hostname for the production settings above is listed in the console as the database ‘endpoint’. Be sure to create the RDS database in the same Region as Elastic Beanstalk (US East), and set up the appropriate security group access.
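If you’d rather script this step, the RDS command line tools can create the database as well. Here’s a rough sketch (the instance identifier and credentials are placeholders, and the flag spellings are worth double-checking against your version of the tools):

rds-create-db-instance checkins-db --allocated-storage 5 --db-instance-class db.m1.small --engine MySQL5.1 --master-username admin --master-user-password mysecretpassword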

Application

We’ll create a very basic application that lets us check in to a location. We’ll use Rails’ scaffolding to generate a simple interface, a controller and a new model.

jruby -S rails g scaffold Checkin name:string location:string

Then we just need to migrate our production database, ready for the application to be deployed to Elastic Beanstalk:

jruby -S rake db:migrate RAILS_ENV=production

Finally, we just need to set up the default route. Add the following to config/routes.rb:

root :to => "checkins#index"

This tells Rails how to respond to the root URL, which is used by the Elastic Beanstalk load balancer by default to monitor the health of your application.

Deployment

We’re now ready to package our application, and send it to Elastic Beanstalk. First of all, we’ll use Warbler to package our application into a Java war file.

jruby -S warble

This will create a new war file, named after your application, located in the root directory of your application. Head over to the AWS Management Console, click on the Elastic Beanstalk tab, and select ‘Create New Application’. Set up your Elastic Beanstalk application with a name, URL and container type, then upload the Rails war file.

After Elastic Beanstalk has provisioned your EC2 instances, load balancer and autoscaling groups, your application will start under Tomcat’s JVM. This step can take some time but once your app is launched, you can view it at the Elastic Beanstalk URL.

Congrats! You are now running Rails on AWS Elastic Beanstalk.

By default, your application will launch under Elastic Beanstalk in production mode, but you can change this and a wide range of other options using the warbler configuration settings. You can adjust the number of instances and autoscaling settings from the Elastic Beanstalk console.
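For example, running jruby -S warble config generates a config/warble.rb file where these settings live. Here’s a minimal sketch that switches the Rails environment:

# config/warble.rb
Warbler::Config.new do |config|
  # JRuby-Rack will boot Rails in this environment instead of the default, production
  config.webxml.rails.env = 'staging'
end

Re-run jruby -S warble after changing the configuration to bake the new settings into a fresh war file.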

Since Elastic Beanstalk is also API driven, you can automate the configuration, packaging and deployment as part of your standard build and release process. 

 ~ Matt

New Screencast: Building a High Performance Cluster

by Jeff Barr | in Amazon CloudWatch, Amazon EC2, Science, Screencast |

From aeronautics to genomics to financial services, High Performance Computing is becoming a common requirement in many fields of industry and academia. Traditionally, the barrier to entry into this area has remained high, with the expertise and cost needed to provide such facilities proving to be prohibitive.

With Amazon EC2’s Cluster Compute instances, extremely high performance elastic computing is now available in just a few mouse clicks.

With fast network interconnects, high memory and quick CPUs, these instances are extremely capable for tightly coupled tasks or batch processing, and very easy to use. I’ve recorded a short screencast that demonstrates how to build an 8-node, 64-core cluster and kick off a highly parallel analysis run, all in around 10 minutes.
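To give you a feel for what’s involved, here’s a sketch of the command line version: create a cluster placement group for the low-latency interconnect, then launch eight cc1.4xlarge instances (eight cores each) into it. The AMI ID, group name and key pair are placeholders:

ec2-create-placement-group hpc-demo --strategy cluster

ec2-run-instances ami-12345678 -n 8 -t cc1.4xlarge -k my-keypair --placement-group hpc-demo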

You can read more about HPC in the cloud, including our new GPU enabled instances, on our HPC applications page. You may also be interested in the upcoming Analytics in the Cloud webinar.

~ Matt

New Webinar: High Availability Websites

by Jeff Barr | in Amazon CloudWatch, Amazon EC2, Amazon Elastic Load Balancer, Amazon RDS, Amazon S3, Architecture, Auto Scaling, Webinars |

As part of a new monthly series of hands-on webinars, I’ll be giving a technical review of building, managing and maintaining high availability websites and web applications using Amazon’s cloud computing platform.

Hosting websites and web applications is a very common use of our services, and in this webinar we’ll take a hands-on approach to websites of all sizes, from personal blogs and static sites to complex multi-tier web apps.

Join us on January 28 at 10:00 AM (GMT) for this 60-minute technical web-based seminar, where we’ll aim to cover:

  • Hosting a static website on S3
  • Building highly available, fault tolerant websites on EC2
  • Adding multiple tiers for caching, reverse proxies and load balancing
  • Autoscaling and monitoring your website

Using real-world case studies and tried-and-tested examples, we’ll explore key concepts and best practices for working with websites and on-demand infrastructure.

The session is free, but you’ll need to register!

See you there.

~ Matt

 

Run Oracle Applications on Amazon EC2 Now!

by Jeff Barr | in Amazon EC2 |

Earlier this year I discussed our plans to allow you to run a wide variety of Oracle applications on Amazon EC2 in the near future. The future is finally here; the following applications are now available as AMIs for use with EC2:

  • Oracle PeopleSoft CRM 9.1 PeopleTools
  • Oracle PeopleSoft CRM 9.1 Database
  • Oracle PeopleSoft ELM 9.1 PeopleTools
  • Oracle PeopleSoft ELM 9.1 Database
  • Oracle PeopleSoft FSCM 9.1 PeopleTools
  • Oracle PeopleSoft FSCM 9.1 Database
  • Oracle PeopleSoft PS 9.1 PeopleTools
  • Oracle PeopleSoft PS 9.1 Database
  • Oracle E-Business Suite 12.1.3 App Tier
  • Oracle E-Business Suite 12.1.3 DB
  • JD Edwards Enterprise One – ORCLVMDB
  • JD Edwards Enterprise One – ORCLVMHTML
  • JD Edwards Enterprise One – ORCLVMENT

The application AMIs are all based on Oracle Linux and run on 64-bit high-memory instances atop Oracle VM. You can use them as-is or you can create derivative versions tuned to your particular needs. We’ll start out in one Region and add more in the near future.

As I noted in my original post, you can use your existing Oracle licenses at no additional license cost or you can acquire new licenses from Oracle. We implemented Oracle VM support on Amazon EC2 with hard partitioning so Oracle’s standard partitioned processor licensing models apply.

All of these applications are certified and supported by Oracle. Customers with active Oracle Support and Amazon Premium Support will be able to contact either Amazon or Oracle for support.

You can find the Oracle AMIs in the Oracle section of the AWS AMI Catalog.

— Jeff;

VM Import – Bring Your VMware Images to The Cloud

by Jeff Barr | in Amazon EC2 |

If you have invested in virtualization to meet IT security, compliance, or configuration management requirements and are now looking at the cloud as the next step toward the future, I’ve got some good news for you.

VM Import lets you bring existing VMware images (VMDK files) to Amazon EC2. You can import “system disks” containing bootable operating system images as well as data disks that are not meant to be booted.

This new feature opens the door to a number of migration and disaster recovery scenarios. For example, you could use VM Import to migrate from your on-premises data center to Amazon EC2.

You can start importing 32- and 64-bit Windows Server 2008 SP2 images right now (we support the Standard, Enterprise, and Datacenter editions). We are working to add support for other versions of Windows including Windows Server 2003 and Windows Server 2008 R2. We are also working on support for several Linux distributions including CentOS, RHEL, and SUSE. You can even import images into the Amazon Virtual Private Cloud (VPC).

The import process can be initiated using the VM Import APIs or the command line tools. You’ll want to spend some time preparing the image before you upload it. For example, you need to make sure that you’ve enabled remote desktop access and disabled any anti-virus or intrusion detection systems that are installed (you can enable them again after you are up and running in the cloud). Other image-based security rules should also be double-checked for applicability.

The ec2-import-instance command is used to start the import process for a system disk. You specify the name of the disk image along with the desired Amazon EC2 instance type and parameters (security group, availability zone, VPC, and so forth) and the name of an Amazon S3 bucket. The command will provide you with a task ID for use in the subsequent steps of the import process.

The ec2-upload-disk-image command uploads the disk image associated with the given task ID. You’ll get upload statistics as the bits make the journey into the cloud. The command will break the upload into multiple parts for efficiency and will automatically retry any failed uploads.

The next step in the import process takes place within the cloud; the time it takes will depend on the size of the uploaded image. You can use the ec2-describe-conversion-tasks command to monitor the progress of this step.

When the upload and subsequent conversion are complete you will have a lovely, gift-wrapped EBS-backed EC2 instance in the “stopped” state. You can then use the ec2-delete-disk-image command to clean up.
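To make the flow concrete, here’s a sketch of a complete system-disk session (the image file, bucket, credentials, and task ID are all placeholders):

# start the import task: image format, target instance type, architecture, platform, and S3 bucket
ec2-import-instance windows2008.vmdk -f VMDK -t m1.large -a x86_64 -p Windows -b my-import-bucket -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY

# upload the image bits, quoting the task ID returned by the previous command
ec2-upload-disk-image windows2008.vmdk -t import-i-abcd1234 -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY

# monitor the conversion
ec2-describe-conversion-tasks import-i-abcd1234

# clean up the uploaded image once the instance is ready
ec2-delete-disk-image -t import-i-abcd1234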

The ec2-import-volume command is used to import a data disk, in conjunction with ec2-upload-disk-image. The result of this upload process is an Amazon EBS volume that can be attached to any running EC2 instance in the same Availability Zone.

There’s no charge for the conversion process. Upload bandwidth, S3 storage, EBS storage, and Amazon EC2 time (to run the imported image) are all charged at the usual rates. When you import and run a Windows server you will pay the standard AWS prices for Windows instances.

As is often the case with AWS, we have a long roadmap for this feature. For example, we plan to add support for additional operating systems and virtualization formats along with a plugin for VMware’s vSphere console (if you would like to help us test the plugin prior to release, please let us know at ec2-vm-import-plugin-preview@amazon.com). We’ll use your feedback to help us to shape and prioritize our roadmap, so keep those cards and letters coming.

— Jeff;

 

FreeBSD on Amazon EC2

by Jeff Barr | in Amazon EC2 |

Colin Percival (developer of Tarsnap) wrote to tell me that the FreeBSD operating system is now running on Amazon EC2 in experimental fashion.

According to his FreeBSD on EC2 blog post, version 9.0-CURRENT of FreeBSD is now available in the US East (Northern Virginia) region and can be run on t1.micro instances. Colin expects to be able to expand to other regions and EC2 instance types over time.
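If you’d like to give it a spin, launching follows the usual pattern once you have the AMI ID from Colin’s post (the AMI ID and key pair below are placeholders):

ec2-run-instances ami-12345678 -t t1.micro -k my-keypair --region us-east-1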

The AMI is stable enough to build and run Apache under light load for several days. FreeBSD 9.0-CURRENT is a bleeding-edge snapshot release. Plans are in place to back-port the changes made to this release to FreeBSD 8.0-STABLE in the future.

Congratulations to Colin and to the rest of the FreeBSD team for making this happen. I have received a number of requests for this operating system over the years and I am happy to see that this community-driven effort has made so much progress.

— Jeff;