Category: Amazon EC2


Amazon Linux AMI 2013.03 Now Available

Max Spevack runs the team that produces the Amazon Linux AMI. Today he’s here to tell you about the newest version of this popular AMI.

— Jeff;


Following our usual six month release cycle, the Amazon Linux AMI 2013.03 is now available.

As always, our goal with the Amazon Linux AMI is to ensure that EC2 customers have a stable, secure, and simple Linux-based AMI that integrates well with other AWS offerings.

Here are some of the highlights of this release of the Amazon Linux AMI:

  • Kernel 3.4.37: We have upgraded the kernel to version 3.4.37 which is part of the long-term stable release 3.4 kernel series. This is a change from the previous releases of the Amazon Linux AMI, which were on the 3.2 kernel series.
  • OpenSSH 6: In response to customer requests, we have moved to OpenSSH 6 for this release of the Amazon Linux AMI. This enables the configuration option of AuthenticationMethods for requiring multi-factor authentication.
  • OpenSSL 1.0.1: Also based on customer requests, we have added OpenSSL 1.0.1 to this release while retaining compatibility with OpenSSL 1.0.0.
  • New AWS command line tools: We are excited to include the Developer Preview of the new AWS Command Line Interface. The aws-cli tool is written in Python, and provides a one-stop shop for controlling multiple AWS services through the command line (see the example after this list).
  • New and updated packages: We have included a number of new packages based on customer requests as well as updates to existing packages. Please see our Amazon Linux AMI 2013.03 release notes for more information.
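
If you want to kick the tires on the new CLI, here is a minimal sketch, assuming you install the Developer Preview from PyPI and have your AWS credentials configured in the environment:

    # Install the Developer Preview of the unified, Python-based AWS CLI
    pip install awscli

    # Describe your running instances in a Region; output is JSON
    aws ec2 describe-instances --region us-east-1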

The Amazon Linux AMI 2013.03 is available for launch in all regions. Users of 2012.09, 2012.03, and 2011.09 versions of the Amazon Linux AMI can easily upgrade using yum.
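
Upgrading in place takes just two commands from an instance running one of those earlier versions:

    # Refresh the repository metadata and roll forward to 2013.03
    sudo yum clean all
    sudo yum update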

The Amazon Linux AMI is a rolling release, configured to deliver a continuous flow of updates that allow you to roll from one version of the Amazon Linux AMI to the next. In other words, Amazon Linux AMIs are treated as snapshots in time, with a repository and update structure that gives you the latest packages that we have built and pushed into the repository. If you prefer to lock your Amazon Linux AMI instances to a particular version, please see the Amazon Linux AMI FAQ for instructions.
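
As a rough illustration of the locking mechanism (the FAQ is the authoritative reference; the exact file and variable shown here are assumptions based on standard yum behavior):

    # Hypothetical sketch: pin yum to the 2013.03 repositories instead of
    # following the rolling "latest" release
    echo "releasever=2013.03" | sudo tee -a /etc/yum.conf
    sudo yum clean all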

As always, if you need any help with the Amazon Linux AMI, don’t hesitate to post on the EC2 forum, and someone from the team will be happy to assist you.

— Max

PS – Help us to build the Amazon Linux AMI! We are actively hiring for Linux Systems Engineer, Linux Software Development Engineer, and Linux Kernel Engineer positions.

Additional EBS-Optimized Instance Types for Amazon EC2

We are adding EBS-optimized support to four additional EC2 instance types. You can now request dedicated throughput between EC2 instances and your EBS (Elastic Block Store) volumes at launch time:

Instance Type       Dedicated Throughput
m1.large            500 Mbps
m1.xlarge           1000 Mbps
m2.2xlarge (new)    500 Mbps
m2.4xlarge          1000 Mbps
m3.xlarge (new)     500 Mbps
m3.2xlarge (new)    1000 Mbps
c1.xlarge (new)     1000 Mbps

The dedicated network throughput that you get when you request EBS-optimized support makes volume performance more predictable and more consistent. You can use EBS-optimized instances with both Standard and Provisioned IOPS EBS volumes, as your needs dictate.
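
For example, here is a minimal sketch of requesting EBS-optimized support at launch time using the new aws-cli (the AMI ID is a placeholder):

    # Launch an m1.large with 500 Mbps of dedicated EBS throughput
    aws ec2 run-instances --image-id ami-12345678 --instance-type m1.large --ebs-optimized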

With this change you now have additional choices to make when you design and build your AWS-powered applications. You could, for example, use Standard volumes for reference data and low-volume log files, stepping up to Provisioned IOPS volumes for more demanding, I/O-intensive loads such as databases. I would also advise you to spend a few minutes studying the implications of the EBS pricing models for the two volume types. When you use Standard volumes you pay a modest charge for each I/O request ($0.10 per 1 million requests in the US East Region, for example). When you use Provisioned IOPS volumes, there is no charge for individual requests; instead, you pay a monthly fee based on the number of IOPS provisioned for the volume (e.g. $0.10 per provisioned IOPS per month, again in US East). To make that concrete, a volume provisioned at 1,000 IOPS costs $100 per month, while a Standard volume serving 100 million I/O requests over the same month incurs $10 in request charges (per-gigabyte storage charges apply to both). The AWS Calculator has been updated and you can use it to explore your options and the related costs.

EBS-optimized support is now available in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU West (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (São Paulo) Regions.

— Jeff;

 

Cross Region EC2 AMI Copy

We know that you want to build applications that span AWS Regions and we’re working to provide you with the services and features needed to do so. We started out by launching the EBS Snapshot Copy feature late last year. This feature gave you the ability to copy a snapshot from Region to Region with just a couple of clicks. In addition, last month we made a significant reduction (26% to 83%) in the cost of transferring data between AWS Regions, making it less expensive to operate in more than one AWS region.

Today we are introducing a new feature: Amazon Machine Image (AMI) Copy. AMI Copy enables you to easily copy your Amazon Machine Images between AWS Regions. AMI Copy helps enable several key scenarios including:

  • Simple and Consistent Multi-Region Deployment – You can copy an AMI from one region to another, enabling you to easily launch consistent instances based on the same AMI into different regions.
  • Scalability – You can more easily design and build world-scale applications that meet the needs of your users, regardless of their location.
  • Performance – You can increase performance by distributing your application and locating critical components of your application in closer proximity to your users. You can also take advantage of region-specific features such as instance types or other AWS services.
  • Even Higher Availability – You can design and deploy applications across AWS regions, to increase availability.

Console Tour
You can initiate copies from the AWS Management Console, the command line tools, the EC2 API, or the AWS SDKs. Let’s walk through the process of copying an AMI using the Console.

From the AMIs view of the AWS Management Console, select the AMI and click on Copy:

Choose the Copy AMI operation and the Console will ask you where you would like to copy the AMI:

After you have made your selections and started the copy you will be provided with the ID of the new AMI in the destination region:

Once the new AMI is in an Available state, the copy is complete.

A Few Important Notes
Here are a few things to keep in mind about this new feature:

  • You can copy any AMI that you own, EBS or instance store (S3) backed, and with any operating system.
  • The copying process doesn’t copy permissions or tags so you’ll need to make other arrangements to bring them over to the destination region.
  • The copy process will result in a separate and new AMI in the destination region which will have a unique AMI ID.
  • The copy process will update the Amazon Kernel Image (AKI) and Amazon Ramdisk Image (ARI) references to point to equivalent images in the destination region.
  • You can copy the same AMI to multiple regions simultaneously.
  • The console-based interface is push-based; you log in to the source region and select where you’d like the AMI to end up. The API and the command line tools are, by contrast, pull-based: you run them against the destination region and specify the source region (see the example below).
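
Here is a minimal sketch of the pull-based CLI flow using the new aws-cli (the AMI ID is a placeholder; note that the command runs against the destination region):

    # Copy an AMI from US East to US West (Oregon)
    aws ec2 copy-image --region us-west-2 --source-region us-east-1 \
        --source-image-id ami-12345678 --name "my-ami-copy"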

— Jeff;

AWS Elastic Beanstalk for Node.js

I’m happy to be able to tell you that AWS Elastic Beanstalk now supports Node.js applications. You can now build event-driven Node.js applications and then use Elastic Beanstalk to deploy and manage them on AWS.

Elastic Beanstalk automatically configures the environment and resources using sensible defaults in order to run your Node.js application. You can focus on writing your application and let Elastic Beanstalk run it and scale it automatically.

Elastic Beanstalk for Node.js includes a bunch of features that are specific to the Node.js environment. Here are some of my favorites:

  • Choose Nginx or Apache as the reverse proxy to your Node.js application. You can even choose not to use any proxy if your application requires that the client establish a direct connection.
  • Configure HTTP and TCP load balancing depending on what your application needs. If your application uses WebSockets, then TCP load balancing might be more appropriate for your workload.
  • Configure the Node.js stack by using the specific version of Node.js that your application needs or by providing the command that is used to launch your Node.js application. You can also manage dependencies using npm.
  • Help improve performance by configuring gzip compression and static files when using Nginx or Apache. With gzip compression, you can reduce the size of your response to the client to help create faster transfer speeds. With static files, you can let Nginx or Apache quickly serve your static assets (such as images or CSS) without having these requests take time away from the data-intensive processing that your Node.js application might be performing.
  • Seamlessly integrate your app with Amazon RDS to store and retrieve data from a relational data store.
  • Customize your EC2 instances or connect your app to AWS resources using Elastic Beanstalk configuration files (visit the AWS Elastic Beanstalk Developer Guide to learn more about configuration files).
  • Run your Node.js application inside an Amazon Virtual Private Cloud for additional networking control.

To get started, simply create a new Elastic Beanstalk application and select the Node platform:

You can configure all of the options for your Node.js environment from within Elastic Beanstalk:

To learn more about Elastic Beanstalk for Node.js, visit the AWS Elastic Beanstalk Developer Guide. The documentation also includes step-by-step guides for using the Express and Geddy frameworks with Elastic Beanstalk.
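
If you prefer to work from the command line, here is a minimal sketch of the deployment workflow, assuming you have installed the Elastic Beanstalk command line tool and your Node.js application is committed to a Git repository (the directory name is a placeholder):

    cd my-node-app   # your application's working directory
    eb init          # choose the Node.js platform and a Region when prompted
    eb start         # create and launch the environment
    git aws.push     # deploy the latest commit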

— Jeff;

 

Amazon EC2 Update – Virtual Private Clouds for Everyone!

If you use or plan to use Amazon EC2, you need to read this post!

History Lesson
Way back in the beginning (OK, 2006) we launched Amazon EC2. When you launched an instance we gave it a public IP address and a DNS hostname. You had the ability to create a Security Group for ingress filtering and attach it to the instance at launch time. Each instance could have just one public IP address, which could be an Elastic IP if desired. Later, we added private IP addresses and an internal DNS hostname to each instance. Let’s call this platform “EC2-Classic” (that will be on the quiz, so remember it).

In 2009 we introduced the Amazon Virtual Private Cloud, better known as the VPC. The VPC lets you create a virtual network of logically isolated EC2 instances and an optional VPN connection to your own datacenter.

In 2011 we announced a big upgrade to EC2’s networking features, with enhanced Security Groups (ingress and egress filtering and the ability to change membership on running instances), direct Internet connectivity, routing tables, and network ACLs to control the flow of traffic between subnets.

We also added lots of other features to VPC in the past couple of years including multiple IP addresses, multiple network interfaces, dedicated instances, and statically routed VPN connections.

How to Make it Easier to Get the Power of VPC and the Simplicity of EC2
We want every EC2 user to be able to benefit from the advanced networking and other features of Amazon VPC that I outlined above. To enable this, starting soon, instances for new AWS customers (and existing customers launching in new Regions) will be launched into the “EC2-VPC” platform. We are currently in the process of enabling this feature, one Region at a time, starting with the Asia Pacific (Sydney) and South America (São Paulo) Regions. We expect these roll-outs to occur over the next several weeks. We will update this post in the EC2 forum each time we enable a Region.

You don’t need to create a VPC beforehand – simply launch EC2 instances or provision Elastic Load Balancers, RDS databases, or ElastiCache clusters as you would in EC2-Classic, and we’ll create a VPC for you at no extra charge. We’ll launch your resources into that VPC and assign each EC2 instance a public IP address. You can then start taking advantage of the features I mentioned earlier: assigning multiple IP addresses to an instance, changing security group membership on the fly, and adding egress filters to your security groups. These VPC features will be ready for you to use, but you need not do anything new or different until you decide to do so.

We refer to the automatically provisioned VPCs as default VPCs. They are designed to be compatible with your existing shell scripts, CloudFormation templates, AWS Elastic Beanstalk applications, and Auto Scaling configurations. You shouldn’t need to modify your code just because you’re launching into a default VPC.

Default VPCs for (Almost) Everyone
The default VPC features are available to new AWS customers and to existing customers launching instances in a Region for the first time. If you’ve previously launched an EC2 instance in a Region, or provisioned ELB, RDS, or ElastiCache in a Region, we won’t create a default VPC for you in that Region.

If you are an existing AWS customer and you want to start gaining experience with this new behavior, you have two options. You can create a new AWS account or you can pick a Region that you haven’t used (as defined above). You can see the set of available platforms in the AWS Management Console (this information is also available through the EC2 APIs and from the command line). Be sure to check the Supported Platforms and Default VPC values for your account to see how your account is configured in a specific Region.

You can determine if your account is configured for default VPC within a particular Region by glancing at the top right corner of the EC2 Dashboard in the AWS Management Console. Look for the Supported Platforms item.  EC2-VPC means your instances will be launched into Amazon VPC.

Here is what you will see if your AWS account is configured for EC2 Classic and EC2-VPC (without a default VPC):

You can also see the supported platforms and the default VPC values using the EC2 API and the Command Line tools.
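
For example, here is a minimal sketch using the new aws-cli (the Region is a placeholder):

    # Check how your account is configured in a given Region
    aws ec2 describe-account-attributes --region ap-southeast-2 \
        --attribute-names supported-platforms default-vpc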

All Set Up
As I noted earlier in this post, we’ll create a default VPC for you when you perform certain actions in a Region. It will have the following components:

  • One default subnet per Availability Zone.
  • A default route table, preconfigured to send traffic from the default subnets to the Internet.
  • An Internet Gateway to allow traffic to flow to and from the Internet.

Each VPC will have its own private IP range (172.31.0.0/16 to be precise); each subnet will be a “/20” (4096 IP addresses, minus a few that are reserved for the VPC).

EC2 instances created in the default VPC will also receive a public IP address (this turns out to be a very sensible default given the preconfigured default route table and Internet Gateway). This is a change from the existing VPC behavior, and is specified by a new PublicIP attribute on each subnet. We made this change so that we can support the EC2-Classic behavior in EC2-VPC. The PublicIP attribute can’t currently be set for existing subnets but we’ll consider allowing this in the future (let us know if you think that you would find this to be of use).

You can modify your default VPC as you see fit (e.g., creating or deleting subnets, creating or modifying route tables, adding VPN connections, etc.)  You can also create additional, nondefault VPCs just like the VPCs you can create today.
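
For example, here is a minimal sketch of adding a subnet to a default VPC with the aws-cli (the VPC ID, CIDR block, and Availability Zone are placeholders; pick a /20 that doesn’t overlap your existing subnets):

    aws ec2 create-subnet --vpc-id vpc-12345678 \
        --cidr-block 172.31.64.0/20 --availability-zone us-east-1b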

Once you are up and running in a VPC within an AWS Region, you’ll have access to all of the AWS services and instance types that are available in that Region (see the List of AWS Offerings by Region page for more information). This includes new and important services such as Amazon Redshift, AWS OpsWorks, and AWS Elastic Beanstalk.

New VPC Features
We’re also adding new features to VPC. These are available to all VPC users:

DNS Hostnames – All instances launched in a default VPC will have private and public DNS hostnames. DNS hostnames are disabled for existing VPCs, but can be enabled as needed. If you’re resolving a public hostname for another instance in the same VPC, it will resolve to the private IP address of the target instance. If you’re resolving a public hostname for an instance outside of your VPC, it will resolve to the public IP address of that instance.
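
Here is a minimal sketch of enabling DNS hostnames on an existing VPC with the aws-cli (the VPC ID is a placeholder):

    aws ec2 modify-vpc-attribute --vpc-id vpc-12345678 \
        --enable-dns-hostnames "{\"Value\":true}"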

DNS Name Resolution – DNS resolution is enabled in all VPCs, but we’ve added the ability for you to disable use of the Amazon-provided DNS service in your VPC as needed.

ElastiCache – You can now create ElastiCache cache nodes within VPC (both default and nondefault VPCs).

RDS IP Addresses – RDS database instances in VPC can now be provisioned as Internet-facing or private-facing instances. Internet-facing RDS database instances have public IP addresses so that they can be accessed from EC2 Classic instances or on-premises systems. For more information on this feature, read the documentation on Amazon RDS and the Amazon Virtual Private Cloud Service.

Learning About VPC
To learn more about Amazon VPC, please consult the Amazon VPC documentation.

We’re happy to give you the advanced features of Amazon VPC coupled with the simplicity of Amazon EC2. If you have any questions, the AWS Support Team is ready, willing, and able to help.

— Jeff;

 

Modulus – Scalable Hosting of Node.js Apps With MongoDB Support

Charlie from Modulus wrote in to tell me that their new platform is now available, and that it runs on AWS.

Modulus was designed to let you build and host Node.js applications in a scalable fashion. Node.js is a server-side JavaScript environment built around an event-driven, non-blocking I/O model. Web applications built on  Node.js are able to handle a multitude of simultaneous client connections (e.g. HTTP requests) without forcing you to use threads or other explicit concurrency management models. Instead, Node.js uses a lightweight callback model which minimizes the per-connection memory and processing overhead.

Your Modulus application runs on one or more mini-servers which they call Servos. You can add or remove Servos from your application as needed as your load and your needs change; Modulus will automatically load balance traffic across your Servos, with built-in session affinity to make things run even more efficiently. Each Servo is allocated 396 MB of memory and 512 MB of swap space.

Your application can use MongoDB for database storage. Modulus includes a set of administrative and user management tools and also supports data export, all through their web portal. The triple-redundant database storage scales without bounds, and you won’t need to take your application offline to scale up. Similarly, your application has access to a disk-based file system that is accessible from all of your Servos and scales as needed to accommodate your data.

All requests coming in to Modulus are tracked, stored, and made available for analysis so that you can locate bottlenecks and boost the efficiency and performance of your application. You can also access application log files.

You can communicate with code running in your user’s browser by using the built-in WebSocket support. Your application can also make use of other AWS services such as DynamoDB, SQS, and SNS.

You can host the application at the domain name of your choice, you can create and use your own SSL certificates, and you can define a set of environment variables that are made available to your code.

Modulus is ready to use now and you can sign up here to start out with a $25 usage credit. You can also use the Modulus pricing calculator to get a better idea of how much it will cost to host and run your application.

This looks really cool and I think it has a bright future. Give it a shot and let me know what you think!

— Jeff;

 

Reserved Instance Price Reduction for Amazon EC2

The AWS team is always exploring ways to reduce costs and to pass the savings along to our customers. We’re more than happy to continue this tradition with our latest price reduction.

Starting today, we are reducing prices for new EC2 Reserved Instances running Linux/UNIX, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server by up to 27%. This reduction applies to the Standard (m1), Second-Generation Standard (m3), High-Memory (m2), and High-CPU (c1) instance families. As always, if you reserve more, you will save more. To be more specific, you will automatically receive additional savings when you have more than $250,000 in active upfront Reserved Instance fees.

With this price reduction, Reserved Instances will provide savings of up to 65% in comparison to On-Demand instances. Here are the price decreases by instance family and Region:

Region                           Price Decrease (%)
                                 m1      m2      m3      c1
US East (Northern Virginia)      13.0%   23.2%   13.2%   10.1%
US West (Northern California)    13.3%   27.7%   13.3%   10.0%
US West (Oregon)                 13.0%   23.2%   13.2%   10.1%
AWS GovCloud (US)                0.6%    13.9%   1.1%    2.1%
Europe (Ireland)                 13.3%   27.7%   13.5%   10.0%
Asia Pacific (Singapore)         4.9%    19.8%   4.9%    2.4%
Asia Pacific (Tokyo)             4.9%    20.8%   5.0%    2.2%
Asia Pacific (Sydney)            4.9%    19.8%   4.9%    2.4%
South America (São Paulo)        4.9%    21.1%   4.9%    0.0%

These new prices apply to all three Reserved Instance models (Light,  Medium, and Heavy Utilization) for purchases made on or after March 5, 2013.

We recommend that you review your usage once a month to determine whether you should alter your Reserved Instance footprint by buying additional Reserved Instances or selling some on the AWS Reserved Instance Marketplace. If you haven’t done this lately, now is a perfect opportunity to review your existing usage and decide whether to purchase new Reserved Instances. Here are some general guidelines to help you choose the most economical model (a worked comparison follows the list):

  • If your server is running less than 15% of the time, use an On-Demand instance.
  • If your server is running between 15% and 40% of the time, use a Light Utilization Reserved Instance.
  • If your server is running 40% to 80% of the time, use a Medium Utilization Reserved Instance.
  • If your server is running 80% to 100% of the time, use a Heavy Utilization Reserved Instance.
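
To see how the break-even math works, here is an illustrative sketch with hypothetical prices (check the EC2 pricing page for real numbers before deciding):

    # Compare three-year totals for an instance running 40% of the time.
    # All dollar figures below are made up for illustration only.
    HOURS=$((3 * 8760))    # hours in a three-year term
    UTIL=40                # percentage of time the instance runs
    OD_RATE=0.10           # hypothetical On-Demand $/hour
    RI_UPFRONT=250         # hypothetical Light Utilization upfront fee
    RI_RATE=0.06           # hypothetical discounted $/hour
    echo "On-Demand: \$$(echo "$HOURS * $UTIL / 100 * $OD_RATE" | bc -l)"
    echo "Light RI:  \$$(echo "$RI_UPFRONT + $HOURS * $UTIL / 100 * $RI_RATE" | bc -l)"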

For more information on making the choice that is right for you, see my blog post on Additional Reserved Instance Options for Amazon EC2.

During the month of March, you can take advantage of a free trial of AWS Trusted Advisor to generate a personalized report on how you can optimize your bill by taking advantage of the new, lower Reserved Instance prices. For more information about Trusted Advisor, see my post about the AWS Trusted Advisor Update + Free Trial.

To learn more about this price reduction and other Amazon EC2 pricing options, please visit the Amazon EC2 Pricing page and the Amazon EC2 Reserved Instance page.

— Jeff;

Available Now: Beta release of AWS Diagnostics for Microsoft Windows Server

Over the past few years, we have seen tremendous adoption of Microsoft Windows Server in AWS.

Customers such as the Department of Treasury, the United States Tennis Association, and Lionsgate Film and Entertainment are building and running interesting Windows Server solutions on AWS. To further our efforts to make AWS the best place to run Windows Server and Windows Server workloads, we are happy to announce today the beta release of AWS Diagnostics for Microsoft Windows Server.

AWS Diagnostics for Microsoft Windows Server addresses a common customer request: making the intersection between AWS and Windows Server easier to analyze and troubleshoot. For example, a customer’s AWS security groups may allow access to certain Windows Server applications, but inside of their Windows Server instances, the built-in Windows firewall may deny that access. Rather than leaving the customer to track down the cause of the issue, the diagnostics tool will collect and analyze the relevant information from Windows Server and AWS, and suggest troubleshooting steps and fixes.

The diagnostics tool can work on running Windows Server instances. You can also attach your Windows Server EBS volumes to an existing instance and the diagnostics tool will collect the relevant logs for troubleshooting Windows Server from the EBS volume.  In the end, we want to help customers spend more time using, rather than troubleshooting, their deployments.

To use the diagnostics tool, please visit http://aws.amazon.com/windows/awsdiagnostics. There you will find more information about the feature set and documentation about how to use the diagnostics tool.

As this is a beta release, please provide feedback on how we can make this tool more useful for you. You can fill out a survey here.

To help get you started, we have created a short video that shows the tool in action troubleshooting a Windows Server instance running in AWS.

Shankar Sivadasan, Senior Product Manager

CloudWatch Monitoring Scripts Updated

We have added three new features to the CloudWatch Monitoring Scripts for Linux. These scripts can be run in the background to periodically report system metrics to Amazon CloudWatch, where they will be stored for two weeks.

When you install the scripts you can choose to report any desired combination of the following metrics:

  • Memory Utilization – Memory allocated by applications and the operating system, exclusive of caches and buffers, as a percentage.
  • Memory Used – Memory allocated by applications and the operating system, exclusive of caches and buffers, in megabytes.
  • Memory Available – System memory available for applications and the operating system, in megabytes.
  • Disk Space Utilization – Disk space usage as a percentage.
  • Disk Space Used – Disk space usage in gigabytes.
  • Disk Space Available – Available disk space in gigabytes.
  • Swap Space Utilization – Swap space usage as a percentage.
  • Swap Space Used – Swap space usage in megabytes.

You can measure and report on disk space for one or more mount points or directories.
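
For example, here is a minimal sketch of a crontab entry that reports memory and root-volume metrics every five minutes (the install path is a placeholder for wherever you unzipped the scripts):

    */5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --disk-space-util --disk-path=/ --from-cron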

Here’s what’s new:

  • IAM Role Support – The CloudWatch monitoring scripts can now use AWS Identity and Access Management (IAM) roles to submit memory and disk metrics to CloudWatch, so you don’t need to store credentials on the instance.
  • Auto Scaling – You can now use the metrics generated by the scripts to drive scaling decisions in conjunction with EC2’s Auto Scaling feature. For example, you could choose to scale up when average memory utilization reaches a predetermined percentage.
  • Aggregate Metrics – The scripts can now report aggregate metrics. Metrics of this type allow you to monitor memory and disk usage across multiple EC2 instances. You could, for example, monitor total memory utilization across all of your instances as a single aggregate metric (see the example below).
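
Here is a minimal sketch of reporting aggregated metrics, using flags from the scripts’ documentation (credentials come from an IAM role or a credential file on the instance; the install path is a placeholder):

    # Submit memory utilization plus Auto Scaling group and fleet-wide aggregates
    ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --auto-scaling --aggregated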

You can download the CloudWatch Monitoring Scripts for Linux, read the documentation, and start using these new features today.

— Jeff;

 

AWS Marketplace – Support for Red Hat Enterprise Linux (RHEL)

The AWS Marketplace now supports software running on Red Hat Enterprise Linux, commonly known as RHEL. If you use RHEL on AWS, you can now find, buy, and then one-click deploy an ever-growing set of applications from top-tier software vendors.

The AWS Marketplace
As you may know, the AWS Marketplace makes it easy for you to get started with the software packages of your choice. You don’t have to worry about hardware provisioning or software installation. You simply locate the desired package, choose the location (an AWS Region) where you’d like to run it, select an EC2 instance type, and click to launch:

RHEL on AWS
If you choose to run RHEL applications on AWS, you get some important benefits.

You can easily upgrade to new versions of RHEL as they are released, and you can purchase and leverage AWS Premium Support, backed by Red Hat’s own support program.

If you are an ISV and you ship products for RHEL, you can now list those products in the AWS Marketplace and sell them to hundreds of thousands of active AWS customers all over the world. This can help shrink your sales cycle, and we’ll take care of all of the billing and disbursements for you.

In the Marketplace
The following products run on RHEL and are now available in the AWS Marketplace:

You can also view all RHEL products in the AWS Marketplace.

— Jeff;