

A New Approach to Amazon EC2 Networking

You’ve been able to use the Amazon Virtual Private Cloud to construct a secure bridge between your existing IT infrastructure and the AWS cloud using an encrypted VPN connection. All communication between Amazon EC2 instances running within a particular VPC and the outside world (the Internet) was routed across the VPN connection.

Today we are releasing a set of features that expand the power and value of the Virtual Private Cloud. You can think of this new collection of features as virtual networking for Amazon EC2. While I would hate to be accused of hyperbole, I do think that today’s release legitimately qualifies as massive, one that may very well change the way that you think about EC2 and how it can be put to use in your environment.

The features include:

  • A new VPC Wizard to streamline the setup process for a new VPC.
  • Full control of network topology including subnets and routing.
  • Access controls at the subnet and instance level, including rules for outbound traffic.
  • Internet access via an Internet Gateway.
  • Elastic IP Addresses for EC2 instances within a VPC.
  • Support for Network Address Translation (NAT).
  • Option to create a VPC that does not have a VPN connection.

You can now create a network topology in the AWS cloud that closely resembles the one in your physical data center including public, private, and DMZ subnets. Instead of dealing with cables, routers, and switches you can design and instantiate your network programmatically. You can use the AWS Management Console (including a slick new wizard), the command line tools, or the APIs. This means that you could store your entire network layout in abstract form, and then realize it on demand.
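Here’s a rough sketch of that idea from the command line. This is illustrative only (the syntax shown is the aws CLI, and the IDs and CIDR blocks are hypothetical), but it shows how little it takes to describe a network in code:

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 10.0.0.0/24   # public subnet
aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 10.0.1.0/24   # private subnet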

VPC Wizard
The new VPC Wizard lets you get started with any one of four predefined network architectures in under a minute:

 

The following architectures are available in the wizard:

  • VPC with a single public subnet – Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network access control lists and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
  • VPC with public and private subnets – In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet.  Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation.
  • VPC with Internet and VPN access – This configuration adds an IPsec Virtual Private Network (VPN) connection between your VPC and your data center effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your VPC.
  • VPC with VPN only access – Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec Virtual Private Network (VPN) tunnel.

You can start with one of these architectures and then modify it to suit your particular needs, or you can bypass the wizard and build your VPC piece-by-piece. The choice is yours, as is always the case with AWS.

After you choose an architecture, the VPC Wizard will prompt you for the IP addresses and other information that it needs to have in order to create the VPC:

Your VPC will be ready to go within seconds; you need only launch some EC2 instances within it (always on a specific subnet) to be up and running.

Route Tables
Your VPC will use one or more route tables to direct traffic to and from the Internet and VPN gateways (and your NAT instance, which I haven’t told you about yet), based on the CIDR block of the destination. Each VPC has a default, or main, route table. You can create additional route tables and attach them to individual subnets if you’d like.
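From the command line, the same operations look roughly like this (a sketch in aws CLI syntax; the route table, subnet, and gateway IDs are hypothetical):

aws ec2 create-route-table --vpc-id vpc-1a2b3c4d
aws ec2 associate-route-table --route-table-id rtb-1a2b3c4d --subnet-id subnet-1a2b3c4d
# send traffic destined for your data center's CIDR block to the VPN gateway
aws ec2 create-route --route-table-id rtb-1a2b3c4d --destination-cidr-block 10.1.0.0/16 --gateway-id vgw-1a2b3c4d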


Internet Gateways
You can now create an Internet Gateway within your VPC in order to gain the ability to route traffic to and from the Internet using a route table (see above). It can also be used to streamline access to other parts of AWS, including Amazon S3 (in the absence of an Internet Gateway you’d have to send traffic out through the VPN connection and then back across the public Internet to reach S3).
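A sketch of the Internet Gateway setup from the command line (again, aws CLI syntax with hypothetical IDs):

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-1a2b3c4d --vpc-id vpc-1a2b3c4d
# make the gateway the default route for a public subnet's route table
aws ec2 create-route --route-table-id rtb-1a2b3c4d --destination-cidr-block 0.0.0.0/0 --gateway-id igw-1a2b3c4d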

Network ACLs
You can now create and attach a Network ACL (Access Control List) to your subnets if you’d like. You have full control (using a combination of Allow and Deny rules) of the traffic that flows into and out of each subnet and gateway. You can filter inbound and outbound traffic, and you can filter on any protocol that you’d like.
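For example, a pair of ACL entries that admit inbound HTTP and allow the outbound replies on the ephemeral ports might look like this (a sketch in aws CLI syntax; the ACL ID and rule numbers are hypothetical):

aws ec2 create-network-acl --vpc-id vpc-1a2b3c4d
aws ec2 create-network-acl-entry --network-acl-id acl-1a2b3c4d --ingress --rule-number 100 \
  --protocol tcp --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow
aws ec2 create-network-acl-entry --network-acl-id acl-1a2b3c4d --egress --rule-number 100 \
  --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow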


You can also use AWS Identity and Access Management to restrict access to the APIs and resources related to setting up and managing Network ACLs.

Security Groups
You can now use Security Groups on the EC2 instances that you launch within your VPC. When used in a VPC, Security Groups gain a number of powerful new features, including outbound traffic filtering and the ability to create rules that can match any IP protocol, including TCP, UDP, and ICMP.
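As a quick example, here’s how an outbound rule and an inbound rule might be added from the command line (aws CLI syntax; the group ID is hypothetical):

# allow instances in the group to make outbound HTTPS connections
aws ec2 authorize-security-group-egress --group-id sg-1a2b3c4d --protocol tcp --port 443 --cidr 0.0.0.0/0
# allow inbound SSH from within the VPC
aws ec2 authorize-security-group-ingress --group-id sg-1a2b3c4d --protocol tcp --port 22 --cidr 10.0.0.0/16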

You can also change (add and remove) these security groups on running EC2 instances. The AWS Management Console sports a much-improved user interface for security groups; you can now make multiple changes to a group and then apply all of them in one fell swoop.

Elastic IP Addresses
You can now assign Elastic IP Addresses to the EC2 instances that are running in your VPC, with one small caveat: these addresses are currently allocated from a separate pool and you can’t assign an existing (non-VPC) Elastic IP Address to an instance running in a VPC.
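Allocating and attaching a VPC Elastic IP looks like this (a sketch with hypothetical IDs; the --domain vpc flag is what requests an address from the VPC pool):

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-1a2b3c4d --instance-id i-1a2b3c4d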

NAT Addressing
You can now launch a special “NAT Instance” and route traffic from your private subnet to it. Doing this allows the private instances to initiate outbound connections to the Internet without revealing their IP addresses. A NAT Instance is really just an EC2 instance running a NAT AMI that we supply; you’ll pay the usual EC2 hourly rate for it.
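Wiring the private subnet to a NAT Instance takes two steps: turn off source/destination checking (so the instance may forward traffic it didn’t originate), then point the private route table’s default route at it. A sketch with hypothetical IDs:

aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --no-source-dest-check
aws ec2 create-route --route-table-id rtb-4d3c2b1a --destination-cidr-block 0.0.0.0/0 --instance-id i-1a2b3c4d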

ISV Support
Several companies have been working with these new features and have released (or are just about to release) some very powerful new tools. Here’s what I know about:

 

The OpenVPN Access Server is now available as an EC2 AMI and can be launched within a VPC. This is a complete, software-based VPN solution that you can run within a public subnet of your VPC. You can use the web-based administrative GUI to check status, control networking configuration, permissions, and other settings.

 

CohesiveFT’s VPN-Cubed product now supports a number of new scenarios.

By running the VPN-Cubed manager in the public section of a VPC, you can connect multiple IPsec gateways to your VPC. You can even do this using security appliances such as the Cisco ASA, Juniper Netscreen, and SonicWall devices, and you don’t need BGP.

VPN-Cubed also lets you run grid and clustering products that depend on support for multicast protocols.

 

CloudSwitch further enhances VPC’s security and networking capabilities. They support full encryption of data at rest and in transit, key management, and network encryption between EC2 instances and between a data center and EC2 instances. The net result is complete isolation of virtual machines, data, and communications, with no modifications to the virtual machines or the networking configuration.

 

The Riverbed Cloud Steelhead extends Riverbed’s WAN optimization solutions to the VPC, making it easier and faster to migrate and access applications and data in the cloud. It is available on an elastic, subscription-based pricing model with a portal-based management system.

 

Pricing

I think this is the best part of the Virtual Private Cloud: you can deploy a feature-packed private network at no additional charge! We don’t charge you for creating VPCs, subnets, ACLs, security groups, route tables, or VPN gateways, and there is no charge for traffic between S3 and your Amazon EC2 instances in VPC. Running instances (including NAT instances), Elastic Block Storage, VPN connections, Internet bandwidth, and unmapped Elastic IPs will incur our usual charges.

Internet Gateways in VPC have been a high priority for our customers, and I’m excited about all the new ways VPC can be used. For example, VPC is a great place for applications that require the security provided by outbound filtering, network ACLs, and NAT functionality. Or you could use VPC to host public-facing web servers that have VPN-based network connectivity to your intranet, enabling you to use your internal authentication systems. I’m sure your ideas are better than mine; leave me a comment and let me know what you think!

— Jeff;

Even More EC2 Goodies in the AWS Management Console

We’ve added some new features to the EC2 tab of the AWS Management Console to make it even more powerful and even easier to use.

You can now change the instance type of a stopped, EBS-backed EC2 instance. This means that you can scale up or scale down as your needs change. The new instance type must be compatible with the AMI that you used to boot the instance, so you can’t change from 32 bit to 64 bit or vice versa.

The Launch Instances Wizard now flags AMIs that will not incur any additional charges when used with an EC2 instance running within the AWS free usage tier:

You can now control what happens when an EBS-backed instance shuts itself down. You can choose to stop the instance (so that it can be started again later) or to terminate the instance:

You can now modify the EC2 user data (a string passed to the instance on startup) while the instance is stopped:
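All three of these operations are also available outside the console. Here’s a rough command line equivalent (aws CLI syntax; the instance ID and file name are hypothetical):

aws ec2 stop-instances --instance-ids i-1a2b3c4d
# change the instance type of the stopped, EBS-backed instance
aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --instance-type "{\"Value\": \"m1.large\"}"
# terminate (rather than stop) when the instance shuts itself down
aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --attribute instanceInitiatedShutdownBehavior --value terminate
# replace the user data (the file's contents must be base64-encoded)
aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --attribute userData --value file://user-data.b64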

We’ll continue to add features to the AWS Management Console to make it even more powerful and easier to use. Please feel free to leave us comments and suggestions.

— Jeff;

Run SUSE Linux Enterprise Server on Cluster Compute Instances

You can now run SUSE Linux Enterprise Server on EC2’s Cluster Compute and Cluster GPU instances. As I noted in the post that I wrote last year when this distribution became available on the other instance types, SUSE Linux Enterprise Server is a proven, commercially supported Linux platform that is ideal for development, test, and production workloads. This is the same operating system that runs the IBM Watson DeepQA application that competed against human opponents (and won) on Jeopardy just last month.

After reading Tony Pearson’s article (How to Build Your Own Watson Jr. In Your Basement), I set out to see how his setup could be replicated on an hourly, pay as you go basis using AWS. Here’s what I came up with:

  1. Buy the Hardware. With AWS there’s nothing to buy. Simply choose from among the various EC2 instance types. A couple of Cluster Compute Quadruple Extra Large instances should do the trick:
  2. Establish Networking. Tony recommends 1 Gigabit Ethernet. Create an EC2 Placement Group, and launch the Cluster Compute instances within it to enjoy 10 Gigabit non-blocking connectivity between the instances (see the command sketch after this list):

  3. Install Linux and Middleware. The article recommends SUSE Linux Enterprise Server. You can run it on a Cluster Compute instance by selecting it from the Launch Instances Wizard:

    Launch the instances within the placement group in order to get the 10 Gigabit non-blocking connectivity:

    You can use the local storage on the instance, or you can create a 300 GB Elastic Block Store volume for the reference data:

  4. Download Information Sources. Tony recommends the use of NFS to share files within the cluster. That will work just fine on EC2; see the Linux-NFS-HOWTO for more information. He also notes that you will need a relational database. You can use Apache Derby per his recommendation, or you can start up an Amazon RDS instance so that you don’t have to worry about backups, scaling or other administrative chores (if you do this you might not need the 300 GB EBS volume created in the previous step):

    You’ll need some information sources. Check out the AWS Public Data Sets to get started.

  5. The Query Panel – Parsing the Question. You can download and install OpenNLP and OpenCyc as described in the article. You can run most applications (open source and commercial) on an EC2 instance without making any changes.
  6. Unstructured Information Management Architecture. This part of the article is a bit hand-wavey. It basically boils down to “write a whole lot of code around the Apache UIMA framework.”
  7. Parallel Processing. The original Watson application ran in parallel across 2,880 cores. While this would be prohibitive for a basement setup, it is possible to get this much processing power from AWS in short order and (even more importantly) to put it to productive use. Tony recommends the use of the UIMA-AS package for asynchronous scale-out, all managed by Hadoop. Fortunately, Amazon Elastic MapReduce is based on Hadoop, so we are all set:
  8. Testing. Tony recommends a batch-based approach to testing, with questions stored in text files to allow for repetitive testing. Good enough, but you still need to evaluate all of the answers and decide if your tuning is taking you in the desired direction. I’d recommend that you use the Amazon Mechanical Turk instead. You could easily run A/B tests across multiple generations of results.
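Here’s the command sketch promised in step 2, covering the placement group, the instance launches, and the optional 300 GB volume (aws CLI syntax; the AMI, volume, and instance IDs are hypothetical):

aws ec2 create-placement-group --group-name watson-jr --strategy cluster
aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type cc1.4xlarge --count 2 --placement GroupName=watson-jr
aws ec2 create-volume --size 300 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-1a2b3c4d --instance-id i-1a2b3c4d --device /dev/sdf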

I really liked Tony’s article because it took something big and complicated and reduced it to a series of smaller and more approachable steps. I hope that you see from my notes above that you can easily create and manage the same types of infrastructure, run the same operating system, and the same applications using AWS, without the need to lift a screwdriver or to max out your credit cards. You could also use Amazon CloudFormation to automate the entire setup so that you could re-create it on demand or make copies for your friends.

Read more about features and pricing on our SUSE Linux Enterprise Server page.

— Jeff;

JumpBox for the AWS Free Usage Tier

We’ve teamed up with JumpBox to make it even easier and less expensive for you to host a WordPress blog, publish a web site with Drupal, run a Wiki with MediaWiki, or publish content with Joomla. You can benefit from two separate offers:

  • The new JumpBox free tier trial for AWS customers lets you launch and run the applications listed above at no charge. There will be a small charge for EBS storage (see below).
  • If you qualify for the AWS free usage tier it will give you sufficient EC2 time, S3 storage space, and internet data transfer to host the application and to handle a meaningful amount of traffic.

Any AWS user (free or not) can take advantage of JumpBox’s offer, paying the usual rates for AWS. The AWS free usage tier is subject to the AWS Free Usage Tier Offer Terms; use of AWS in excess of free usage amounts will be charged standard AWS rates.

Note: The JumpBox machine images are larger than the 10 GB of EBS storage provided in the free usage tier; if you run them in the free usage tier you’ll be charged $1.50 per month for the additional 10 GB of EBS storage.

The applications are already installed and configured; there’s nothing to set up. The application will run on an EC2 instance of its own; you have full control of the configuration and you can install themes, add-ins, and the like. Each application includes a configuration portal to allow you to configure the application and to make backups.

Here’s a tour, starting with the 1-page signup form:

After a successful signup, JumpBox launches the application:

The application will be ready to run in a very short time (less than a minute for me):

The next step is to configure the application (I chose to launch Joomla):

And I am up and running:

You can access all of the administrative and configuration options from a password-protected control panel that runs on the EC2 instance that’s hosting the application:


As you can probably see from the tour, you can be up and running with any of these applications in minutes. As long as you are eligible for and stay within the provisions of the AWS free usage tier, you can do this for free. I’m looking forward to hearing your thoughts and success stories; leave me a comment below.

— Jeff;

ActivePython AMI from ActiveState

The folks at ActiveState have cooked up an ActivePython AMI to make it easy for you to build and deploy web applications written in Python. You can get started in minutes without having to download, install, or configure anything.

The AMI is based on the 64-bit version of Ubuntu and includes MySQL, SQLite, Apache, ActivePython, Django, Memcached, Nginx, and a lot of other useful components. You can run the AMI on the Micro, Large, and Extra Large instance types.

They have put together a nice suite of resources around the AMI including a tutorial (Building a Python-Centric Web Server in the Cloud) and a set of Best Practice Notes on Cloud Computing With Python.

Check it out, and let me know what you think!

— Jeff;

 

EC2 VM Import Connector

The new Amazon EC2 VM Import Connector is a virtual appliance (vApp) plug-in for VMware vCenter. Once installed, you can import virtual machines from your VMware vSphere infrastructure into Amazon EC2 using the GUI that you are already familiar with. This feature builds on top of the VM Import feature that I blogged about late last year.

The Connector stores separate AWS credentials for each vCenter user so that multiple users (each with separate AWS accounts) can use the same Connector. Each account must be subscribed to EC2 in order to use the Connector.

You can download the Connector from the AWS Developer Tools page. You’ll need to make sure that you have adequate disk space available, and you’ll also need to verify that certain network ports are open (see the EC2 User Guide for more information). The Connector is shipped as an OVF template that you will deploy with your vSphere Client.

After you’ve installed and configured the Connector, you can import any virtual machine that meets the following requirements:

  • Runs Windows Server 2008 SP2 (32 or 64 bit).
  • Is currently turned off.
  • Uses a single virtual hard drive (multiple partitions are OK) no larger than one terabyte.

Importing is a simple matter of selecting a virtual machine and clicking on the Import to EC2 tab:

The import process can take a couple of hours, depending on the speed and utilization of your Internet connection. You can monitor the progress using the Tasks and Events tab of the vSphere Client.
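If you’d rather skip the GUI, the underlying VM Import service can also be driven from the EC2 command line tools. The flags below are a sketch from memory (the bucket name is hypothetical); check the EC2 User Guide for the authoritative syntax:

ec2-import-instance WinSvr8-disk1.vmdk -f VMDK -t m1.large -a x86_64 -b my-import-bucket -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY
ec2-describe-conversion-tasks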

As is always the case with AWS, we started out with a core feature (VM Import) and are now adding additional capabilities to it. Still on the drawing board (but getting closer every day) are additional features such as VM Export (create a virtual machine image from an EC2 instance or AMI) and support for additional image formats and operating systems.

— Jeff;

 

 

Now Open: AWS Region in Tokyo

I have made many visits to Japan over the last several years to speak at conferences and to meet with developers. I really enjoy the people, the strong sense of community, and the cuisine.

Over the years I have learned that there’s really no substitute for sitting down, face to face, with customers and potential customers. You can learn things in a single meeting that might not be obvious after a dozen emails. You can also get a sense for the environment in which they (and their users or customers) have to operate. For example, developers in Japan have told me that latency and in-country data storage are of great importance to them.

Long story short, we’ve just opened up an AWS Region in Japan, Tokyo to be precise. The new Region supports Amazon EC2 (including Elastic IP Addresses, Amazon CloudWatch, Elastic Block Storage, Elastic Load Balancing, VM Import, and Auto Scaling), Amazon S3, Amazon SimpleDB, the Amazon Relational Database Service, the Amazon Simple Queue Service, the Amazon Simple Notification Service, Amazon Route 53, and Amazon CloudFront. All of the usual EC2 instance types are available, with the exception of Cluster Compute and Cluster GPU instances. The page for each service includes full pricing information for the Region.

Although I can’t share the exact location of the Region with you, I can tell you that private beta testers have been putting it to the test and have reported single digit latency (e.g. 1-10 ms) from locations in and around Tokyo. They were very pleased with the observed latency and performance.

Existing toolkits and tools can make use of the new Tokyo Region with a simple change of endpoints; the documentation for each service lists all of its endpoints.
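For example, the Tokyo Region’s code is ap-northeast-1, so pointing a command line tool at it is a one-flag change (aws CLI syntax; the AMI ID is hypothetical):

aws ec2 describe-availability-zones --region ap-northeast-1
aws ec2 run-instances --region ap-northeast-1 --image-id ami-1a2b3c4d --instance-type m1.small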

This offering goes beyond the services themselves; we also have a number of additional local resources available.

Put it all together and developers in Japan can now build applications that respond very quickly and that store data within the country.

 

The JAWS-UG (Japan AWS User Group) is another important resource. The group is headquartered in Tokyo, with regional branches in Osaka and other cities. I have spoken at JAWS meetings in Tokyo and Osaka and they are always a lot of fun. I start the meeting with an AWS update. The rest of the meeting is devoted to short “lightning” talks related to AWS or to a product built with AWS. For example, the developer of the Cacoo drawing application spoke at the initial JAWS event in Osaka in late February. Cacoo runs on AWS and features real-time collaborative drawing.

We’ve been working with some of our customers to bring their apps to the new Region ahead of the official launch. Here is a sampling:

Zynga is now running a number of their applications here. In fact (I promise I am not making this up) I saw a middle-aged man playing Farmville on his Android phone on the subway when I was in Japan last month. He was moving sheep and fences around with rapid-fire precision!

 

The enStratus cloud management and governance tools support the new region.

enStratus supports role-based access, management of encryption keys, intrusion detection and alerting, authentication, audit logging, and reporting.

All of the enStratus AMIs are available. The tools feature a fully localized user interface (Cloud Manager, Cluster Manager, User Manager, and Report) that can display text in English, Japanese, Korean, Traditional Chinese, and French.

enStratus also provides local currency support and can display estimated operational costs in JPY (Japanese yen) and a number of other currencies.

 

Sekai Camera is a very cool augmented reality application for iPhones and Android devices. It uses the built-in camera on each device to display a tagged, augmented version of what the camera is looking at. Users can leave “air tags” at any geographical location. The application is built on AWS and makes use of a number of services including EC2, S3, SimpleDB, SQS, and Elastic Load Balancing. Moving the application to the Tokyo Region will make it even more responsive and interactive.

 

G-Mode Games is running a multi-user version of Tetris in the new Region. The game is available for the iPhone and the iPod and allows you to play against another person.

 

Cloudworks is a management tool for AWS built in Japan, and with a Japanese language user interface. It includes a daily usage report, scheduled jobs, and a history of all user actions. It also supports AWS Identity and Access Management (IAM) and copying of AMIs from region to region.

 

Browser 3Gokushi is a well-established RPG (Role-Playing Game) that is now running in the new region.

 


— Jeff;

Note: Tetris ® & © 1985~2011 Tetris Holding. Tetris logos, Tetris theme song and Tetriminos are trademarks of Tetris Holding. The Tetris trade dress is owned by Tetris Holding. Licensed to The Tetris Company. Game Design by Alexey Pajitnov. Original Logo Design by Roger Dean. All Rights Reserved. Sub-licensed to Electronic Arts Inc. and G-mode, Inc.

Upcoming Event: AWS Tech Summit, London

I’m very pleased to invite you all to join the AWS team in London, for our first Tech Summit of 2011. We’ll take a quick, high level tour of the Amazon Web Services cloud platform before diving into the technical detail of how to build highly available, fault tolerant systems, host databases and deploy Java applications with Elastic Beanstalk.

We’re also delighted to be joined by three expert customers, who will be discussing their own real-world use of our services.

So if you’re a developer, architect, sysadmin or DBA, we look forward to welcoming you to the Congress Centre in London on the 17th of March.

We had some great feedback from our last summit in November, and this event looks set to be our best yet.

The event is free, but you’ll need to register.

~ Matt

New AWS Console Features: Forced Detach, Termination Protection

We’ve added two new features to the AWS Management Console: forced detach of EBS volumes and termination protection.

Forced Detach
From time to time an Elastic Block Storage (EBS) volume will refuse to cleanly detach itself from an EC2 instance. This can occur for several reasons, including continued system or user activity on the volume. Under normal circumstances you should always log in to the instance, terminate any processes that are reading or writing data to the volume, and unmount the volume before detaching it. If the volume fails to detach after you have taken all of these precautions, you can forcibly detach it. This is a last-resort option since there may still be some write operations pending for the volume. The console now includes a new option that allows you to forcibly detach a volume:

This function has been present in the EC2 APIs and in the command line tools for some time. You can now access it from the console.
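For reference, here’s the command line version (the volume ID is hypothetical):

aws ec2 detach-volume --volume-id vol-1a2b3c4d           # normal detach
aws ec2 detach-volume --volume-id vol-1a2b3c4d --force   # last resort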

Termination Protection
EC2 termination protection has been around for a while and is now accessible from the console:

Once activated for an EC2 instance, this feature blocks attempts to terminate an instance by way of the command line tools or the EC2 API. This gives you an extra measure of protection for those “precious” instances that you would prefer not to shut down by accident. For example, my primary EC2 instance has been running for 949 days and I would hate to terminate it by accident:

This is more for sentimental value than anything else, but I enjoy having this instance around from the early days of EC2. Here’s what happens if you try to terminate a protected instance:

Termination protection is also useful when several users share a single AWS account.
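Under the hood this is the disableApiTermination attribute, so you can also toggle it from the command line (the instance ID is hypothetical):

aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --disable-api-termination
aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --no-disable-api-termination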

— Jeff;

Rack and the Beanstalk

AWS Elastic Beanstalk deploys and manages your web application using Java, Tomcat, and the Amazon cloud infrastructure. This means that, in addition to Java itself, Elastic Beanstalk can host applications developed in any language that runs on the Java VM.

This includes languages such as Clojure, Scala, and JRuby. In this post we start to think outside the box and show you how to run any Rack-based Ruby application (including Rails and Sinatra) on the Elastic Beanstalk platform. You get all the benefits of deploying to Elastic Beanstalk (autoscaling, load balancing, versions, and environments) with the joys of developing in Ruby.

Getting started

We’ll package a new Rails app into a Java .war file which will run natively through JRuby on the Tomcat application server. There is no smoke and mirrors here: Rails will run natively on JRuby, a Ruby implementation written in Java.

Java up

If you’ve not used Java or JRuby before, you’ll need to install them. Java is available for download or via your favourite package repository, and is usually already installed on Mac OS X. The latest version of JRuby is available here; it’s just a case of downloading the latest binaries for your platform (or source, if you are so inclined) and unpacking them into your path (full details here). I used v1.5.6 for this post.

Gem cutting

Ruby applications and modules are often distributed as RubyGems. JRuby maintains a separate gem library, so we’ll need to install a few gems to get started, including Rails, the Java database adapters, and Warbler, which we’ll use to package our application for deployment to AWS Elastic Beanstalk. Assuming you added the JRuby binaries to your path, you can run the following on your command line:

jruby -S gem install rails

jruby -S gem install warbler

jruby -S gem install jruby-openssl

jruby -S gem install activerecord-jdbcsqlite3-adapter

jruby -S gem install activerecord-jdbcmysql-adapter

To skip the lengthy documentation generation, just add --no-ri --no-rdoc to the end of each of these commands.

A new hope

We can now create a new Rails application, and set it up for deployment under the JVM application container of Elastic Beanstalk. We can use a preset template, provided by jruby.org, to get us up and running quickly. Again, on the command line, run:

jruby -S rails new aws_on_rails -m http://jruby.org/rails3.rb

This will create a new Rails application in a directory called ‘aws_on_rails’. Since it’s so easy with Rails, let’s make our example app do something interesting. For this, we’ll first need to set up our database configuration to use our Java database drivers. To do this, define the gems in the application’s Gemfile, just beneath the line that starts gem 'jdbc-sqlite3':

gem 'activerecord-jdbcmysql-adapter', :require => false

gem 'jruby-openssl'

Now we set up the database configuration details; add these to your app’s config/database.yml file.

development:  
  adapter: jdbcsqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000 

production:
  adapter: jdbcmysql
  driver: com.mysql.jdbc.Driver
  username: admin
  password: <password>
  pool: 5
  timeout: 5000
  url: jdbc:mysql://<hostname>/<db-name>

If you don’t have a MySQL database, you can create one quickly using the Amazon Relational Database Service. Just log into the AWS Management Console, go to the RDS tab, and click ‘Launch DB Instance’. You can find more details about Amazon RDS here. The hostname for the production settings above is listed in the console as the database ‘endpoint’. Be sure to create the RDS database in the same Region as Elastic Beanstalk (US East), and set up the appropriate security group access.
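If you prefer the command line to the console, the same database can be created with the RDS tools; a sketch in aws CLI syntax (the identifier, instance class, and credentials are illustrative):

aws rds create-db-instance --db-instance-identifier checkins --db-instance-class db.m1.small --engine mysql \
  --allocated-storage 5 --master-username admin --master-user-password <password>
aws rds describe-db-instances --db-instance-identifier checkins   # the Endpoint field is your hostname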

Application

We’ll create a very basic application that lets us check in to a location. We’ll use Rails’ scaffolding to generate a simple interface, a controller and a new model.

jruby -S rails g scaffold Checkin name:string location:string

Then we just need to migrate our production database, ready for the application to be deployed to Elastic Beanstalk:

jruby -S rake db:migrate RAILS_ENV=production

Finally, we just need to set up the default route. Add the following to config/routes.rb:

root :to => "checkins#index"

This tells Rails how to respond to the root URL, which is used by the Elastic Beanstalk load balancer by default to monitor the health of your application.

Deployment

We’re now ready to package our application and send it to Elastic Beanstalk. First of all, we’ll use Warbler to package our application into a Java war file.

jruby -S warble

This will create a new war file, named after your application, located in the root directory of your application. Head over to the AWS Management Console, click on the Elastic Beanstalk tab, and select ‘Create New Application’. Set up your Elastic Beanstalk application with a name, URL and container type, then upload the Rails war file.

After Elastic Beanstalk has provisioned your EC2 instances, load balancer, and Auto Scaling groups, your application will start under Tomcat’s JVM. This step can take some time, but once your app is launched, you can view it at the Elastic Beanstalk URL.

Congrats! You are now running Rails on AWS Elastic Beanstalk.

By default, your application will launch under Elastic Beanstalk in production mode, but you can change this and a wide range of other options using the Warbler configuration settings, as sketched below. You can adjust the number of instances and autoscaling settings from the Elastic Beanstalk console.
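For example, here’s one way to generate and edit a Warbler configuration (a sketch; see the Warbler documentation for the full list of settings):

jruby -S warble config
# this generates config/warble.rb; inside it, a setting such as
#   config.webxml.rails.env = "production"
# controls the mode your application boots in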

Since Elastic Beanstalk is also API driven, you can automate the configuration, packaging and deployment as part of your standard build and release process. 
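A sketch of what that automation might look like (the application, environment, and bucket names are hypothetical):

jruby -S warble
aws elasticbeanstalk create-application-version --application-name aws_on_rails --version-label v1 \
  --source-bundle S3Bucket=my-deploy-bucket,S3Key=aws_on_rails.war
aws elasticbeanstalk update-environment --environment-name aws-on-rails-env --version-label v1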

 ~ Matt