Category: Compute

Additional IP Address Flexibility in the Virtual Private Cloud

VPC So Far
The Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. Earlier this year, in my post Virtual Private Clouds for Everyone, I outlined some important changes to Amazon EC2 which combine its ease of use with the advanced networking features of Amazon VPC.

As part of that launch, we introduced the concept of the default subnet for default VPCs. EC2 instances created in the default subnet automatically received public IP addresses; instances in the other subnets did not. This was a very sensible default behavior because it made the new VPC feature more or less transparent.

More Control
Today we are launching a new feature that gives you additional control of public IP addresses in VPC at launch time. Here’s the scoop:

  • When you launch an instance into a default subnet, you now have the ability to decide if the instance is given a public IP address. Until now, launches into a default subnet were always assigned a public IP address and there was no way to remove it.
  • When you launch an instance into a nondefault subnet, you can now choose to assign a public IP address as part of the launch. In the past you had to launch the instance and then allocate and attach an Elastic IP address after the instance became available.
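As a hedged sketch of how this choice is expressed programmatically (the AMI and subnet IDs are placeholders, and the parameter names follow the EC2 RunInstances network-interface specification rather than anything shown in this post):

```javascript
// Illustrative sketch: build RunInstances parameters that request (or
// suppress) a public IP address at launch time. IDs are placeholders.
function buildLaunchParams(subnetId, wantPublicIp) {
  return {
    ImageId: 'ami-12345678',     // placeholder AMI
    InstanceType: 'm1.small',
    MinCount: 1,
    MaxCount: 1,
    NetworkInterfaces: [{
      DeviceIndex: 0,
      SubnetId: subnetId,
      // The new launch-time switch: true requests a public IP address,
      // false suppresses one (even in a default subnet).
      AssociatePublicIpAddress: wantPublicIp
    }]
  };
}
```

Either way, the decision is baked into the launch request itself, which is exactly what was not possible before today.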

The AWS Management Console includes complete support for this new feature. If you are launching into a default VPC and you don’t select a subnet, or if you are making a Spot Instance request, you can check or uncheck a single check box in the launch wizard to select the desired outcome:

If you choose a specific subnet for your launch, you will see a similar checkbox in the ENI (Elastic Network Interface) section of the launch wizard:

The initial setting of the checkbox (checked or unchecked) reflects the behavior introduced when we launched VPCs for everyone. In other words, if you want the existing behavior, leave the checkbox the way you found it.

Some Notes
I should note that this feature is available only at launch time. If you do not assign a public IP address to an instance during the launch, you can associate an Elastic IP address after the instance has been launched.

Also, if, for some reason, you decide to attach an Elastic IP address to an instance that already has a public IP address, the Elastic IP address will replace the public IP address. When you detach the Elastic IP address from the instance, a new public IP address will be assigned to the instance.

We expect to add CloudFormation and Auto Scaling support before too long.

To learn more about this new feature, read the Using Instance Addressing topic in the EC2 documentation.

— Jeff;

Running Couchbase on AWS – New White Paper

As the third installment in our series of white papers designed to show you how to run popular relational and NoSQL databases on AWS, I am pleased to tell you that our new Couchbase on AWS white paper is now available.

Written by AWS Solutions Architects Kyle Lichtenberg and Miles Ward, this detailed white paper contains everything you'll need to know to design, set up, monitor, and maintain your Couchbase installation. You will learn how to design for performance, durability, and scalability, and how to choose the right EC2 instance types and EBS volume settings. The paper includes a detailed discussion of Couchbase's Cross Datacenter Replication (XDCR) and shows you how to use it for disaster recovery and geographic replication.

— Jeff;

PS – If Couchbase isn’t your thing, what about PostgreSQL or Riak?

Elastic Load Balancing adds Support for Proxy Protocol

My colleague Lesley Mbogo is a Senior Product Manager on the Elastic Load Balancing team. She sent along the post below to tell you all about an important new feature — support for the Proxy Protocol.

— Jeff;

Starting today, Elastic Load Balancing (ELB) supports Proxy Protocol version 1. You can now identify the originating IP address of a client connecting to your servers using TCP load balancing. Client connection information, such as IP address and port, is typically lost when requests are proxied through a load balancer. This is because the load balancer sends requests to the server on behalf of the client, making your load balancer appear as though it is the requesting client. Having the originating client IP address is useful if you need more information about visitors to your applications. For example, you may want to gather connection statistics, analyze traffic logs, or manage whitelists of IP addresses.

Until today, ELB allowed you to obtain the client's IP address only if you used HTTP(S) load balancing, which adds this information in the X-Forwarded-For header. Since X-Forwarded-For is used only in HTTP headers, you could not obtain the client's IP address if the ELB was configured for TCP load balancing. Many of you told us that you wanted similar functionality for TCP traffic, so we added support for Proxy Protocol. It simply prepends a human-readable header containing the client's connection information to the TCP data sent to your server. The advantage of Proxy Protocol is that it can be used with any protocol layered above TCP, since it has no knowledge of the higher-level protocol used on top of the connection. Proxy Protocol is useful when you are serving non-HTTP traffic. Alternatively, you can use it if you are sending HTTPS requests and do not want to terminate the SSL connection on the load balancer. For more information, please visit the Elastic Load Balancing Guide.
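A Proxy Protocol version 1 header is a single human-readable line prepended to the stream, for example `PROXY TCP4 198.51.100.22 203.0.113.7 35646 80` followed by CRLF. As an illustrative sketch (hand-rolled for clarity; a real application would typically use a parsing library), here is how such a header could be pulled apart in Node.js:

```javascript
// Illustrative parser for a Proxy Protocol v1 header. The header is one
// ASCII line, terminated by CRLF, that precedes the client's own data:
//   PROXY TCP4 <client-ip> <proxy-ip> <client-port> <proxy-port>\r\n
function parseProxyProtocolV1(data) {
  var text = data.toString('ascii');
  var end = text.indexOf('\r\n');
  if (end === -1 || text.slice(0, 6) !== 'PROXY ') {
    return null;                      // not a Proxy Protocol v1 header
  }
  var parts = text.slice(0, end).split(' ');
  if (parts.length !== 6) return null;
  return {
    protocol: parts[1],               // 'TCP4' or 'TCP6'
    clientAddress: parts[2],          // the originating client's IP
    proxyAddress: parts[3],           // the address the client connected to
    clientPort: parseInt(parts[4], 10),
    proxyPort: parseInt(parts[5], 10),
    rest: text.slice(end + 2)         // the payload that follows the header
  };
}
```

Once the header is consumed, everything after it is the client's original byte stream, untouched.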

Creating a Simple Web Application Running Behind an ELB with Proxy Protocol
I'd like to show you how we can use the Proxy Protocol feature in a simple Node.js application running behind an ELB. This application retrieves the client IP address and port number from the Proxy Protocol header in the TCP connection and outputs the information in an HTML response.

We'll use AWS Elastic Beanstalk to quickly deploy and manage the application. Elastic Beanstalk automatically provisions an environment that includes Elastic Load Balancing, a set of EC2 instances with all the necessary software, and more. Elastic Beanstalk supports many languages and platforms; for this example, we chose to use Node.js.

Our sample application (click to download) is a simple Node.js server bundled in a zip archive. Inside you'll find the following files:

  • server.js – a simple Node.js server that receives and responds to TCP connections from the ELB.
  • package.json – declares the node-proxy-protocol package dependency that parses the Proxy Protocol header inserted by the ELB. Elastic Beanstalk installs these dependencies automatically.
  • .ebextensions/ – a directory containing two YAML files that we created to customize our environment. Elastic Beanstalk automatically detects these files and applies the customizations.

The first file, .ebextensions/01_elb.config, configures the ELB to listen for TCP connections on port 80 and forward requests to back-end instances on port 80, and finally enables Proxy Protocol. To enable Proxy Protocol for an existing ELB in your account, please see the Elastic Load Balancing Guide.
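The file's exact contents aren't reproduced in this post, so here is a hedged sketch of what it might look like using CloudFormation-style resource overrides (the resource name, policy name, and attribute keys are my assumptions; consult the Elastic Load Balancing Guide for the authoritative syntax):

```yaml
# Hypothetical sketch of .ebextensions/01_elb.config -- names and keys
# are illustrative, not copied from the sample application.
Resources:
  AWSEBLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      Listeners:
        - LoadBalancerPort: 80   # ELB listens for TCP connections on port 80
          InstancePort: 80       # ...and forwards them to port 80 on the instances
          Protocol: TCP
      Policies:
        - PolicyName: EnableProxyProtocol
          PolicyType: ProxyProtocolPolicyType
          Attributes:
            - Name: ProxyProtocol
              Value: true
          InstancePorts:
            - 80                 # apply Proxy Protocol on the back-end port
```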

The second file, .ebextensions/02_container.config, customizes Node.js to listen for requests directly on port 80. The Node.js container can be configured to proxy traffic locally through Apache or Nginx before sending requests to our application. However, we've chosen to disable this feature and let our Node.js application act as the server, because neither Apache nor Nginx currently supports the Proxy Protocol header inserted by the ELB. To learn more about customizing your environment resources, visit the Elastic Beanstalk Developer Guide.
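Again as a hedged sketch (the namespace and option name are my assumptions rather than the file's actual contents; see the Elastic Beanstalk Developer Guide), the container customization might look something like this:

```yaml
# Hypothetical sketch of .ebextensions/02_container.config -- option
# names are illustrative; check the Elastic Beanstalk Developer Guide.
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: ProxyServer
    value: none    # disable the local Apache/Nginx proxy; the Node.js
                   # application listens on port 80 itself
```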

We are now ready to deploy the sample application to Elastic Beanstalk using the AWS Management Console.

  1. Log into the Elastic Beanstalk Console, choose Node.js, and then click Get Started.
  2. Wait for the default environment to spin up and turn green, then click Upload and Deploy to upload the Node.js application. (Beanstalk creates a sample application in the default environment, so we need to upload our new version).
  3. Choose the file that you downloaded, and deploy the new version in the default environment.
  4. Wait for the application to deploy and for the default environment to update. When the environment turns green, click the environment's URL to access the Node.js application.
  5. The Node.js application parses the Proxy Protocol data from the ELB and responds with HTML output showing your original Source IP and Port, as well as the IP of the ELB that proxied the request to the application.

I hope that you find this useful. If you have any feature requests for Elastic Load Balancing, please leave a note in the EC2 forum.

Lesley Mbogo

A New Elastic Beanstalk Management Console…

With AWS Elastic Beanstalk, you can deploy, monitor, and grow your application quickly and easily. Its management console is an essential piece of the overall experience and helps make complex tasks simple. Today, I'm happy to introduce a redesign of the Elastic Beanstalk management console that further streamlines common tasks and adds new functionality that you've requested through our feedback mechanism.

Starting today, you can also create a new type of Elastic Beanstalk environment that runs your application on a single EC2 instance. This new environment type is ideal for development workloads or for non-critical, low-traffic applications, and it reduces the overall cost of an Elastic Beanstalk environment. It uses the same software stacks, so you can easily migrate to a load-balanced, auto-scaled environment when your application is ready for takeoff.

Here's a quick list of my favorite console features:

  • A redesigned environment dashboard that provides you with a snapshot of your environment and puts the most common actions at your fingertips:
  • A customizable monitoring and alarming experience that shows how your application is doing. You can customize this dashboard with relevant CloudWatch metrics and even add alarms in case you want to be notified of significant changes. For example, you may want to monitor the number of requests to your application and set an alarm when they exceed an unexpected value.

  • A VPC configuration page to help secure your environment inside an existing VPC. Be sure to check out the Elastic Beanstalk Developer Guide for the requirements to run your Elastic Beanstalk environment in a VPC.

You can try out the new Elastic Beanstalk Console today!

–Jeff, with lots of help from Saad;

EC2 Dedicated Instance Price Reduction

I’m happy to announce that we are reducing the prices for Amazon EC2 Dedicated Instances.

Launched in 2011, Dedicated Instances run on hardware dedicated to a single customer account. They are ideal for workloads where corporate policies or industry regulations dictate physical isolation from instances run by other customers at the host hardware level.

Like our multi-tenant EC2 instances, Dedicated Instances let you take full advantage of On-Demand and Reserved Instance purchasing options. Today's price drop continues the AWS tradition of innovating to reduce costs and passing the savings on to our customers. This reduction applies to both the dedicated per-region fee and the per-instance On-Demand and Reserved Instance fees across all supported instance types and all AWS Regions. Here are the details:

  • Dedicated Per Region Fee – An 80% price reduction, from $10 per hour to $2 per hour, in any Region where at least one Dedicated Instance of any type is running.
  • Dedicated On-Demand Instances – A reduction of up to 37% in hourly costs. For example, the price of an m1.xlarge Dedicated Instance in the US East (Northern Virginia) Region will drop from $0.840 per hour to $0.528 per hour.
  • Dedicated Reserved Instances – A reduction of up to 57% on the Reserved Instance upfront fee and the hourly instance usage fee. Dedicated Reserved Instances also provide additional savings of up to 65% compared to Dedicated On-Demand Instances.
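As a quick sanity check on the numbers above (a sketch using only the prices quoted in this post):

```javascript
// Verify the quoted percentage reductions from the before/after prices.
function percentReduction(oldPrice, newPrice) {
  return Math.round(((oldPrice - newPrice) / oldPrice) * 100);
}

// Per-region fee: $10/hour down to $2/hour.
console.log(percentReduction(10, 2) + '%');        // 80%
// m1.xlarge On-Demand: $0.840/hour down to $0.528/hour.
console.log(percentReduction(0.840, 0.528) + '%'); // 37%
```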

These changes are effective July 1, 2013, and will automatically be reflected in your AWS charges.

To launch a Dedicated Instance via the AWS Management Console, simply choose a target VPC and select the Dedicated Tenancy option when you configure your instance. You can also create a Dedicated VPC to ensure that all instances launched within it are Dedicated Instances.

To learn more about Dedicated Instances and to see a complete list of prices, please visit the Dedicated Instances page.

— Jeff;

Resource-Level Permissions for EC2 and RDS Resources

With AWS being put to use in an ever-widening set of use cases across organizations of all shapes and sizes, the need for additional control over the permissions granted to users and to applications has come through loud and clear. This need becomes especially pronounced at the enterprise level. You don't want the developers who are building cloud applications to have the right to make changes to the cloud resources used by production systems, and you don't want the operators of one production system to have access to the cloud resources used by another.

The Story So Far
With the launch of IAM in the middle of 2010, we gave you the ability to create and apply policies that control which users within your organization were able to access AWS APIs.

Later, we gave you the ability to use policies to control access to individual DynamoDB, Elastic Beanstalk, Glacier, Route 53, SNS, SQS, S3, SimpleDB, and Storage Gateway resources.

Today we are making IAM even more powerful with the introduction of resource-level permissions for Amazon EC2 and Amazon RDS. This feature is available for the RDS MySQL, RDS Oracle, and RDS SQL Server engines.

On the EC2 side, you can now construct and use IAM policies to control access to EC2 instances, EBS volumes, images, and Elastic IP addresses. On the RDS side, you can use similar policies to control access to DB instances, Read replicas, DB Event subscriptions, DB option groups, DB parameter groups, DB security groups, DB snapshots, and subnet groups.

Let’s take a closer look!

Resource-Level Permissions for EC2
You can now use IAM policies to support a number of important EC2 use cases. Here are just a few of things that you can do:

  • Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
  • Set different permissions for “development” and “test” resources.
  • Control which users can terminate which instances.
  • Require additional security measures, such as MFA authentication, when acting on certain resources.

This is a complex and far-reaching feature and we’ll be rolling it out in stages. In the first stage, the following actions on the indicated resources now support resource-level permissions:

  • Instances – Reboot, Start, Stop, Terminate.
  • EBS Volumes – Attach, Delete, Detach.
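As an illustration (the account ID, Region, tag name, and tag value are placeholders, and the action list mirrors the first-stage support described above), a policy granting those instance actions only on instances tagged environment=dev might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/environment": "dev" }
      }
    }
  ]
}
```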

EC2 actions not listed above will not be governed by resource-level permissions at this time. We plan to add support for additional APIs throughout the rest of 2013.

We are also launching specific and wildcard ARNs (Amazon Resource Names) for all EC2 resources. You can refer to individual resources using ARNs such as arn:aws:ec2:us-east-1:1234567890:instance/i-96d811fe and to groups of resources using ARNs of the form arn:aws:ec2:us-east-1:1234567890:instance/*.

EC2 policy statements can include references to tags on EC2 resources. This gives you the power to use the same tagging model and schema for permissions and for billing reports.

In order to make it easier for you to test and verify the efficacy of your policies, we are extending the EC2 API with a new flag and a couple of new functions. The new flag is the DryRun flag, available as a general option on the EC2 APIs. If you specify this flag, the API request performs an authorization determination but does not actually process the request (for example, you can determine whether a user has permission to terminate an instance without actually terminating it).
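Since the flag is just another request parameter, a small helper can turn any request into a permissions check. This is an illustrative sketch (the helper is my own, not part of the EC2 tooling, and the instance ID is a placeholder):

```javascript
// Copy a request's parameters and add DryRun, so the call performs an
// authorization check without taking effect.
function withDryRun(params) {
  var copy = {};
  for (var key in params) copy[key] = params[key];
  copy.DryRun = true;   // authorization determination only
  return copy;
}

// e.g. check terminate permission without terminating anything:
var check = withDryRun({ InstanceIds: ['i-96d811fe'] });
```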

In addition, when using API version 2013-06-15 or later, authorization denied errors will now include encoded authorization messages. You can decode these with a new STS API, DecodeAuthorizationMessage, to learn more about the IAM policies and evaluation context that led to a particular authorization determination. (Permission to call the new STS API can itself be controlled using an IAM policy for the sts:DecodeAuthorizationMessage action.)

The final piece of the EC2 side of this new release is an expanded set of condition tags that you can use in your policies. You can reference a number of aspects of each request including ec2:Region, ec2:Owner, and ec2:InstanceType (consult the EC2 documentation for a complete list of condition tags).

Resource Permissions for RDS
You can also use policies to support a set of important RDS use cases. Here’s a sampling:

  • Apply DB engine and instance usage policies to specific groups of users. For example, you might limit the use of the m2.4xlarge instance type and Provisioned IOPS to users in the Staging users or Production users groups.
  • Permit a user to create a DB instance that uses specific DB parameter groups and security groups. For example, you could restrict Web application developers to DB instances that use Web application parameter groups and Web DB security groups. These groups may contain specific DB options and security group settings that you have configured.
  • Restrict a user or user group from using a specific parameter group to create a DB instance. For example, you might prevent members of the Test users group from using Production parameter groups when creating test DB instances.
  • Allow only specific users to operate DB instances that carry a specific tag (e.g. DB instances tagged as “production”). For example, you might grant only Production DBAs access to DB instances tagged with the Production label.

As you might have guessed from my final example, you can now make references to tags on any of the RDS resources (see my other post for information on the newly expanded RDS tagging facility), and you can use the same tags and tagging schema for billing purposes.

As we did for EC2, we have added a set of RDS condition tags for additional flexibility. You can reference values such as rds:DatabaseClass, rds:DatabaseEngine, and rds:Piops in your policies.
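For example (a hedged sketch; the action and the class value are illustrative placeholders rather than a recommended policy), a condition on rds:DatabaseClass can restrict a group to creating only small DB instances:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds:CreateDBInstance",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "rds:DatabaseClass": "db.m1.small" }
      }
    }
  ]
}
```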

For more information, consult the Managing Access to Your Amazon RDS Resources and Database chapter in the RDS documentation.

The AWS Security Blog takes an even more detailed look at this feature and shows you how to use it!

Go For It
These new features are available now and you can start using them today.

— Jeff;

Running Riak on AWS – New White Paper

Continuing with our theme of publishing white papers to show you how to run popular relational and NoSQL databases on AWS, I am pleased to tell you that our new Riak on AWS white paper is now available.

Authored by AWS Solutions Architect Brian Holcomb (a one-time member of the Obama for America campaign), this 14-page paper shows you how to launch Riak from the AWS Marketplace and set up clustering.

It covers architecture and scale, data distribution, replication, and node failure. The paper provides detailed guidelines on a number of operational considerations, including EC2 instance sizing, storage configuration, and network configuration. It provides handy tips for simulating upgrades, scaling, and failure states, and also addresses security.

— Jeff;

Running PostgreSQL on AWS – New White Paper

You have a plethora of options when you want to run a relational or NoSQL database on AWS.

On the relational side, you can use the Relational Database Service (RDS) to run a MySQL, Oracle, or SQL Server database. RDS will take care of the scaling, backup, maintenance, patching, and failover for you so that you can focus on your application.

On the NoSQL side, DynamoDB can scale to accommodate any amount of data and any request rate, making it ideal for applications with an unlimited potential to grow.

The AWS Marketplace also contains a very well-curated selection of relational and NoSQL databases and caches.

You also have the option to launch the database of your choice on an Amazon EC2 instance. In order to provide you with the information that you need to do this while taking advantage of all of the flexibility that AWS has to offer, we have worked with our partners to write some very detailed white papers.

The first such white paper, RDBMS in the Cloud: PostgreSQL on AWS, was released a few days ago. This 23-page document was authored by AWS Solutions Architect Miles Ward and the senior staffers at PalominoDB.

The document covers installation, use of SSD storage, and use of EBS. It also addresses the operational side with detailed information about maintenance, backup and restore, storage of backup files, replication, and monitoring. It concludes with a detailed look at security, touching on disk encryption, row-level encryption, and SSL.

Stay tuned for additional white papers in this series!

— Jeff;



EBS Snapshot Copy Performance Improvement

The EBS Snapshot Copy feature gives you the power to copy EBS snapshots across AWS Regions. Effective today, we have made the snapshot copy even faster than before with support for incremental copies between Regions. It is now practical to copy snapshots to other regions more frequently, making it easier for you to develop applications that are highly available.

The first time you copy an EBS snapshot of a volume to a particular Region, all of the data will be copied. The second and subsequent copies of snapshots from the same volume to the same destination Region will be incremental: only the data that has changed since the last copy will be transferred. As a result, the snapshot will transfer less data and complete more quickly than before.
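To make the incremental idea concrete, here is a toy model (my own illustration, not AWS code): treat a snapshot as a map from block ID to content hash, so that only blocks whose hashes changed since the last copied snapshot need to cross the Region boundary.

```javascript
// Toy model of incremental snapshot copy: snapshots map block IDs to
// content hashes; only changed blocks are transferred on later copies.
function blocksToTransfer(currentSnapshot, lastCopiedSnapshot) {
  if (!lastCopiedSnapshot) {
    return Object.keys(currentSnapshot);   // first copy: everything moves
  }
  return Object.keys(currentSnapshot).filter(function (blockId) {
    return currentSnapshot[blockId] !== lastCopiedSnapshot[blockId];
  });
}
```

If only one block in a large volume changed between copies, only that one block is transferred, which is where the speedup comes from.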

The magnitude of the improvement will depend on the amount of data that has been changed since the last snapshot copy. To give you a sense for how much of a benefit you can expect, we measured the amount of change between snapshots across a wide variety of EBS volumes running a number of applications. Based on our findings, we expect to see a 50x speedup for the second and subsequent incremental copies of an EBS volume snapshot.

AWS customer Aptean uses EBS Snapshot Copy as part of their enterprise disaster recovery offering.  Aptean Vice President Mario Baldasserini told me that:

Aptean has been using EBS Snapshot Copy since its launch in providing innovative disaster recovery solutions for our worldwide customers.  We are thrilled with the incremental support availability as it will allow us to further reduce our recovery objectives in providing a worldwide product solution on AWS.

I look forward to hearing more about how you leverage the faster cross-Region EBS Snapshot Copy in your own applications.

Earlier this year, we launched the cross-region EC2 AMI Copy feature, which builds on the EBS Snapshot Copy.  Today’s enhancement also makes the AMI Copy faster when you copy EBS-backed AMIs.

Other than the speed and efficiency benefits mentioned above, this change is transparent and you need not do anything special in order to take advantage of it if you are making copies using the AWS Management Console, the ec2-copy-snapshot or ec2-copy-image commands, or the CopySnapshot or CopyImage API.

— Jeff;


Amazon EC2 Expansion – Additional Instance Types in Japan

I’m happy to announce that the following EC2 instance types are now available in the Asia Pacific (Tokyo) Region and that you can start using them today:

Cluster Compute Eight Extra Large (cc2.8xlarge) – With 60.5 GiB of RAM, a pair of Intel Xeon E5-2670 processors, and 3.3 TB of instance storage, the very high CPU performance and cluster networking features of this instance type make it a great fit for applications such as analytics, encoding, rendering, and High Performance Computing (HPC).

High Memory Cluster Eight Extra Large (cr1.8xlarge) – Featuring 244 GiB of RAM, dual Intel Xeon E5-2670 processors, and 240 GB of SSD instance storage, these instances let you run memory-intensive analytics, databases, HPC workloads, and other memory-bound applications.

High I/O Quadruple Extra Large (hi1.4xlarge) – 60.5 GiB of RAM and 2 TB of SSD storage, along with 16 virtual cores make this instance a perfect host for transactional systems and NoSQL databases like Cassandra and MongoDB that can benefit from very high random I/O performance.

High Storage Eight Extra Large (hs1.8xlarge) – 117 GiB of RAM, 48 TB of instance storage (24 drives, each with 2 TB), and 16 virtual cores provide high sequential I/O performance across very large data sets. You can build a data warehouse, run Hadoop jobs, and host cluster file systems on these instances.

All of the instances listed above also include 10 Gigabit Ethernet networking and feature very high network I/O performance. You can learn more about them on the EC2 Instance Types page. You may also find the EC2 Instance Types table handy.

— Jeff;

PS – We are also launching Amazon Redshift in the Tokyo Region.