Category: Amazon EC2


Now Available: Amazon Linux AMI 2015.03

The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We release new versions of the Amazon Linux AMI every six months after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum.

Launching 2015.03 Today
Today we are releasing the 2015.03 Amazon Linux AMI for use in PV and HVM mode, with support for EBS-backed and Instance Store-backed AMIs.

This AMI uses kernel 3.14.35 and is available in all AWS regions.

You can launch this new version of the AMI in the usual ways. You can also upgrade an existing EC2 instance by running

sudo yum clean all
sudo yum update

and then rebooting it.

New Features
The roadmap for the Amazon Linux AMI is driven in large part by customer requests. During this release cycle, we have added a number of features as a result of these requests; here’s a sampling:

  • Python 2.7 is now the default for core system packages, including yum and cloud-init; versions 2.6 and 3.4 are also available in the repositories as python26 and python34, respectively.
  • The nvidia package (required when you run the appropriate AMI on a G2 instance) is now DKMS-enabled. Updating to a new kernel will trigger an nvidia module rebuild for both the running kernel and the newly installed kernel.
  • Ruby 2.2 is now available in the repositories as ruby22; Ruby 2.0 is still the default.
  • PHP 5.6 is now available in the repositories as php56; it can run side-by-side with PHP 5.5.
  • Docker 1.5 is now included in the repositories.
  • Puppet 3.7 is now included. The Puppet 2 and Puppet 3 packages conflict with each other and cannot be installed at the same time.
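For example, here's how you might install several of the newly added packages (a sketch; the package names come from the list above, while the versioned binary names such as python3.4 and ruby2.2 are assumptions based on Amazon Linux's usual naming):

sudo yum install -y python34 ruby22 php56 docker
# The versioned interpreters install alongside the defaults
python3.4 --version
ruby2.2 --version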

The release notes contain a longer discussion of the new features and updated packages.

Things to Know
As we have discussed in the past, we are no longer producing new 32-bit AMIs. We are still producing 32-bit packages for customers who are using the 2014.09 and earlier AMIs. We recommend the use of 64-bit AMIs for new projects.

We are no longer producing new “GPU” AMIs for the CG1 instance type. Once again, package updates are available and the G2 instance type should be used for new projects.

Jeff;

Now Available: 16 TB and 20,000 IOPS Elastic Block Store (EBS) Volumes

Last year I told you about Larger and Faster EBS Volumes and asked you to stay tuned for availability. Starting today you can create Provisioned IOPS (SSD) volumes that store up to 16 TB (terabytes) and process up to 20,000 IOPS, with a maximum throughput of 320 MBps (megabytes per second). You can also create General Purpose (SSD) volumes that store up to 16 TB and process up to 10,000 IOPS with a maximum throughput of 160 MBps.

To get started, simply specify the desired Size and IOPS using the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS Tools for Windows PowerShell, or by calling the EC2 API (no SDK or tool updates are needed):
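For example, here's what this might look like with the CLI (a sketch; the Availability Zone is illustrative, and sizes are expressed in GiB, so 16 TB is specified as 16384):

# A Provisioned IOPS (SSD) volume at the new maximums
aws ec2 create-volume --availability-zone us-east-1a --volume-type io1 --size 16384 --iops 20000

# A General Purpose (SSD) volume at the new maximum size
aws ec2 create-volume --availability-zone us-east-1a --volume-type gp2 --size 16384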

As a refresher, EBS supports two SSD volume types:

  • Provisioned IOPS (SSD) volumes, introduced in 2012, are designed for I/O-intensive workloads that require consistent performance, such as relational and NoSQL databases. This volume type allows you to provision the exact level of consistent performance that you need and pay for only what you provision.
  • General Purpose (SSD) volumes, launched last June, are the default EBS volume type for Amazon EC2 instances and are suitable for a broad range of bursty workloads, including small- to medium-sized databases (either NoSQL or relational), dev and test environments, and boot volumes.

Both SSD volume types are designed to offer single-digit millisecond latencies and five 9s (99.999%) of availability.

This enhancement is a continuation of our promise to help customers to focus on their core business rather than on managing resources. With this release, you no longer need to stripe together several smaller volumes in order to run applications requiring large amounts of storage or high performance, including large transactional databases, big data analytics, and log processing systems. The volumes also make backing up your data easier, since you no longer need to coordinate snapshots across many striped volumes.

The following picture illustrates the reduction in complexity that is now possible. Instead of creating a RAID set composed of 16 EBS volumes (Before), you can now create a single, larger volume (After). This volume can host a 16 TB and 20,000 IOPS I/O-intensive transactional database requiring single digit millisecond latencies with consistent performance using Provisioned IOPS (SSD) on AWS:

Larger & Faster Volumes
With today’s launch, the General Purpose (SSD) volumes are now designed to deliver a consistent baseline performance of 3 IOPS/GB to a maximum of 10,000 IOPS, and provide up to 160 MBps of throughput per volume. Here are the rules:

  • Volumes smaller than 1 TB (1,000 GB) can still burst beyond their baseline IOPS to 3,000 IOPS. For example, a 100 GB volume has a baseline of 300 IOPS and the ability to burst to 3,000 IOPS.
  • Volumes larger than 1,000 GB can have a baseline of up to 10,000 IOPS. For example, a 2,000 GB volume will have a baseline of 6,000 IOPS, and volumes from 3,334 GB up to 16,384 GB will all get the maximum baseline of 10,000 IOPS.
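In other words, the baseline can be computed as baseline IOPS = min(3 × volume size in GB, 10,000), which is why the 10,000 IOPS ceiling is first reached at 3,334 GB (3 × 3,334 ≈ 10,000).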

My colleague Dave Veith illustrated the relationship between the values as follows:

The ability to burst IOPS has proven to be extremely useful to our customers. In fact, after looking at historical data, we found that the vast majority of our customers have never emptied their burst buckets!

Here’s another helpful picture from Dave:

The throughput values listed above apply when the volumes are attached to EBS-Optimized EC2 instances. The actual throughput that you will see in practice can vary based on instance type, file system type, and your application’s unique usage pattern. For more information, please take a look at Amazon EBS Volume Performance on Linux Instances and Amazon EBS Volume Performance on Windows Instances.

Looking Back, Moving Ahead
To recap a bit, in the past nine months, we have released several products and feature enhancements for our customers. In June 2014 we released our new default volume type, General Purpose (SSD), and three months later we doubled the maximum achievable throughput for Provisioned IOPS and GP2 volumes.  In addition to these releases, we offered additional data protection via seamless encryption of EBS data volumes and snapshots, and followed up with the ability for customers to create and manage their volume encryption keys.

Now we are increasing the maximum size and performance of these volume types. To say the least, it’s been a busy few months, but we have a lot more in store!

Now Hiring
If you are excited about the incredible opportunity cloud computing represents, have experience with distributed systems, and thrive on building and driving great teams to deliver high quality software, this is the team for you. For more information please visit the EBS Careers page.

EBS at the AWS Summits
Members of the EBS team will be available to meet with you at the upcoming AWS Summits. If you would like to schedule some time with one of them, please reach out to your AWS sales rep.

Available Now
The larger and faster EBS volumes are available now and you can start using them today in all AWS regions. For more information, please visit our technical documentation.

Jeff;

PS – In order to keep this post clean and straightforward, I have used the more common terabytes (1000^4 bytes) instead of the more accurate tebibytes (1024^4 bytes).

EC2 Container Service (ECS) Update – Access Private Docker Repos & Mount Volumes in Containers

Amazon EC2 Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of Amazon EC2 instances.

My colleague Chris Barclay sent a guest post to spread the word about two additions to the service. As Chris explains below, you can use images stored in private Docker repositories. You can also store and share information between containers using data volumes from the host. Let’s see what Chris has to say!

Jeff;


Use Images Stored in Private Docker Repositories
The Amazon ECS agent can now authenticate with Docker registries, including Docker Hub. Registry authentication lets you use Docker images from private repositories in your Task Definitions. Here’s how to set it up:

  1. Create a private S3 object named ecs.config in an S3 bucket with your repository’s credentials (you can get the credentials from your .dockercfg file):
    ECS_ENGINE_AUTH_TYPE=dockercfg
    ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"YOUR_AUTH_CODE","email":"email@example.com"}}
    
    
  2. Add a policy to the IAM role used by the Container Instances in your ECS cluster to provide access to the object created in step 1:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Stmt0123456789",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject"
          ],
          "Resource": [
            "arn:aws:s3:::YOUR_BUCKET/YOUR_OBJECT"
          ]
        }
      ]
    }
    
  3. When launching an EC2 instance, in the Advanced Details drop-down, paste the following script into the User data text box:
    #!/bin/bash
    yum install -y aws-cli
    aws s3 cp s3://YOUR_BUCKET/YOUR_OBJECT /etc/ecs/ecs.config
    

The container instances launched in step 3 can now pull private images referenced in an ECS Task Definition.

Mount Volumes in Containers
ECS Task Definitions now provide a way for containers to store and share information using data volumes. For data that should persist between tasks, such as a download cache, you can reference a location on the host as shown in the following volume definition:

"volumes": [
   {
    "name": "cache",
    "host": {
      "sourcePath": "/var/lib/MyApp/cache/"
    }
  } ]

You can then reference the volume by name and specify the path to mount the volume in the container definition:

"containerDefinitions":[
  {
    "name": "webserver",
     ...
    "mountPoints": [
    {
      "sourceVolume": "cache",
      "containerPath": "/usr/src/app/cache"
    } ]
    ...

If you don’t need your data to persist between task runs then you can specify an “empty” volume. By letting Docker manage the host storage, you don’t need to worry about creating or deleting the volume on the host; Docker creates a volume that persists for the lifetime of the task. Here is a Task Definition snippet that creates a volume that is managed by Docker:

"volumes": [
    {
      "name": "logs",
      "host": {}
    }

Docker also supports the ability to share volumes between containers. For example, you may take the logs your Apache server writes to /var/log/www and push them to a central repository using a cron job running in the Apache container. Another option is to create a backup job container with a backup daemon that references the shared logs volume using the volumesFrom attribute. Now the backup daemon can tar the logs in the shared volume periodically and store them in Amazon Simple Storage Service (S3). Here’s how you would reference a shared volume:

"containerDefinitions":[
  {
    "name": "backupManager",
     ...    
    "volumesFrom": [
        {
            "sourceContainer": "Apache",
            "readOnly": true
        }
      ]
    }

Available Now
These features are available now and you can start using them today (to borrow Jeff’s phrase). For more information, read the documentation on private Docker repositories and mounting volumes in containers.

Chris Barclay, Principal Product Manager

Seamlessly Join EC2 Instances to a Domain

Way back in 2008 I announced that you could run Microsoft Windows on Amazon EC2. Since that time, we have made many additions to the initial offering. You now have your choice of several different versions of Windows Server including 2003 R2, 2008, 2008 R2, 2012, and 2012 R2.  You can build AWS-powered applications using the AWS SDK for .NET and you can use the AWS Tools for Windows PowerShell to script and automate your Windows-hosted, AWS-centric activities.

Today we are making Windows on EC2 even more powerful by giving you the ability to seamlessly join EC2 instances to a domain that you have configured with AWS Directory Service. After you configure this new feature using the AWS Management Console, the EC2 API, or the AWS Tools for Windows PowerShell you can choose which domain a new instance will join when it launches. You can also seamlessly join existing instances to a domain.

After you have joined your EC2 instances to a domain, you can use Domain Administrator credentials to access the instances via RDP (the generated local administrator password can still be used).

Joining a Domain at Launch Time
Here’s how you can choose to join a domain when you launch a new EC2 instance that’s running Windows. You will need to create a new IAM role (or modify an existing one) to allow the instance to access the EC2 SSM (Simple Systems Manager) API. I created a new IAM policy called allow-all-ssm and then used it to create a role called allow-ssm. Here’s the policy that I used:
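A minimal version of such a policy might look like this (a sketch; the exact statement isn't reproduced above, but the allow-all-ssm name suggests a blanket grant on the SSM actions):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:*",
      "Resource": "*"
    }
  ]
}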

Then I selected the VPC with my directory, requested an auto-assigned public IP address, and chose the role (all of these are prerequisites for this feature):

Simply choose one of your directories and the instance will seamlessly join it as part of the launch process.

For more information, read about joining a domain in the EC2 Documentation:

This feature will work with Windows AMIs released on or after February 2015.

Joining a Domain for a Running Instance
The domain join functionality is implemented by the newest version (3.0 and above) of the EC2 Config Service (EC2Config for short). This service runs in the LocalSystem account and performs certain tasks that are best handled from within the instance.

You’ll need to upgrade your instances to the newest version of the service in order to be able to join them to a domain. To do this, read the documentation on Installing the Latest Version of EC2Config. If you launched your instances using one of the most recent (February 2015 or newer) Windows AMIs, the service is already installed and up to date.

Then you need to set some IAM permissions, create a configuration document (a very simple JSON file), and associate the configuration document with the desired instances. You can do this using the EC2 API or the Tools for Windows PowerShell.
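To make those steps concrete, here is a hedged sketch. The configuration document uses the aws:domainJoin plug-in described in the documentation; the directory ID, domain name, DNS addresses, instance ID, and document name are all placeholders:

{
  "schemaVersion": "1.0",
  "description": "Join instances to the corp.example.com domain",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "d-1234567890",
        "directoryName": "corp.example.com",
        "dnsIpAddresses": ["198.51.100.1", "198.51.100.2"]
      }
    }
  }
}

# Register the document, then associate it with the desired instance
aws ssm create-document --content file://domain-join.json --name "corp-domain-join"
aws ssm create-association --name "corp-domain-join" --instance-id i-1a2b3c4d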

To learn more, read the new documentation on Managing Windows Instance Configuration.

Available Now
This feature is available now in the US East (Northern Virginia) region and you can start using it today!

Jeff;

PS – Domain Join is just one of a number of features provided by the newest version of EC2Config. It can also run PowerShell scripts, and it can install, repair, or uninstall MSI packages. See the Simple Systems Manager documentation for more information.

System Center Virtual Machine Manager Add-In Update – Import & Launch Instances

We launched the AWS Systems Manager for Microsoft System Center Virtual Machine Manager (SCVMM) last fall. This add-in allows you to monitor and manage your on-premises VMs (Virtual Machines), as well as your Amazon Elastic Compute Cloud (EC2) instances (running either Windows or Linux) from within Microsoft System Center Virtual Machine Manager. As a refresher, here’s the main screen:

Today we are updating this add-in with new features that allow you to import existing virtual machines and to launch new EC2 instances without having to use the AWS Management Console.

Import Virtual Machines
Select an existing on-premises VM and choose Import to Amazon EC2 from the right-click menu. The VM must be running atop the Hyper-V hypervisor and it must be using a VHD (dynamically sized) disk no larger than 1 TB. These conditions, along with a couple of others, are verified as part of the import process. You will need to specify the architecture (32-bit or 64-bit) in order to proceed:

Launch EC2 Instances
Click on the Create Instance button to launch a new EC2 instance. Select the region and an AMI (Amazon Machine Image), an instance type, and a key pair:

You can click on Advanced Settings to reveal additional options:

Click on the Create button to launch the instance.

Available Now
This add-in is available now (download it at no charge) and you can start using it today!

Jeff;

New – Auto Recovery for Amazon EC2

An important rule when building a highly available and highly reliable system is to design for failure. In other words, your design model should assume that, as Amazon CTO Werner Vogels has said, “everything fails all the time.” Fortunately, modern data centers, networks, and servers are highly reliable, and failures are the exception rather than the rule. Nevertheless, you can build great systems if you take the occasional failure as a given and simply build a system that picks itself up and keeps going after something goes wrong.

New Auto Recovery
Today I would like to tell you about a new EC2 feature that will make it even easier for you to build systems that respond as desired when the hardware that hosts a particular EC2 instance becomes impaired. Behind the scenes, a number of system status checks (first introduced in 2012 and enhanced a couple of times since then) monitor the instance and the other components that need to be running in order for your instance to function as expected. Among other things, the checks look for loss of network connectivity, loss of system power, software issues on the physical host, and hardware issues on the physical host.

With this week’s launch, you can now arrange for automatic recovery of an EC2 instance when a system status check of the underlying hardware fails. The instance will be rebooted (on new hardware if necessary) but will retain its instance ID, IP address, Elastic IP addresses, EBS volume attachments, and other configuration details. In order for the recovery to be complete, you’ll need to make sure that the instance automatically starts up any services or applications as part of its initialization process.

Arranging for Auto Recovery
You can arrange for auto recovery of an existing instance with a couple of clicks (see the notes below for information on supported instance types and environments).  Simply create a CloudWatch alarm for the metric StatusCheckFailed_System and choose the Recover this instance action.

First, find and select the metric for the instance of interest:

Next, click on the Create Alarm button:

Delete the Notification action (unless you need it for some other reason) and add an EC2 action, then choose Recover this instance. Set the threshold value to 1, set the Statistic to Minimum, and set the number of consecutive periods to an appropriate value (two or three minutes is a good starting point, assuming that you are collecting metrics at one minute intervals):
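If you prefer to script the alarm, here is an equivalent CLI sketch (the instance ID is a placeholder; the recover action uses the documented arn:aws:automate:<region>:ec2:recover form):

aws cloudwatch put-metric-alarm --alarm-name recover-my-instance \
    --metric-name StatusCheckFailed_System --namespace AWS/EC2 \
    --statistic Minimum --period 60 --evaluation-periods 2 \
    --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
    --dimensions Name=InstanceId,Value=i-1a2b3c4d \
    --alarm-actions arn:aws:automate:us-east-1:ec2:recover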

Applicable Instance Types and Environments
This feature is currently available for the C3, C4, M3, R3, and T2 instance types running in the US East (Northern Virginia) region; we plan to make it available in other regions as quickly as possible. The instances must be running within a VPC and must use EBS-backed storage, but cannot be Dedicated Instances.

There is no extra charge for the EC2 aspect of this feature. The usual CloudWatch charges apply (see the CloudWatch Pricing page for more information).

To learn more, read the Recover Your Instance documentation!

Jeff;

Now Available – New C4 Instances

Late last year I told you about the New Compute-Optimized EC2 Instances and asked you to stay tuned for additional pricing and technical information. I am happy to announce that we are launching these instances today in seven AWS Regions!

The New C4 Instance Type
The new C4 instances are based on the Intel Xeon E5-2666 v3 (code name Haswell) processor. This custom processor, optimized for EC2, runs at a base speed of 2.9 GHz, and can achieve clock speeds as high as 3.5 GHz with Intel® Turbo Boost (complete specifications are available here). These instances are designed to deliver the highest level of processor performance on EC2. Here’s the complete lineup:

Instance Name   vCPU Count   RAM        Network Performance   Dedicated EBS Throughput   Linux On-Demand Price
c4.large        2            3.75 GiB   Moderate              500 Mbps                   $0.116/hour
c4.xlarge       4            7.5 GiB    Moderate              750 Mbps                   $0.232/hour
c4.2xlarge      8            15 GiB     High                  1,000 Mbps                 $0.464/hour
c4.4xlarge      16           30 GiB     High                  2,000 Mbps                 $0.928/hour
c4.8xlarge      36           60 GiB     10 Gbps               4,000 Mbps                 $1.856/hour

The prices listed above are for the US East (Northern Virginia) and US West (Oregon) regions (the instances are also available in the Europe (Ireland), Asia Pacific (Tokyo), US West (Northern California), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions). For more pricing information, take a look at the EC2 Pricing page.

As I noted in my original post, EBS Optimization is enabled by default for all C4 instance sizes. This feature provides 500 Mbps to 4,000 Mbps of dedicated throughput to EBS above and beyond the general purpose network throughput provided to the instance, and is available to you at no extra charge. Like the existing C3 instances, the new C4 instances also provide Enhanced Networking for higher packet per second (PPS) performance, lower network jitter, and lower network latency. You can also run two or more C4 instances within a placement group in order to arrange for low-latency connectivity within the group.
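For example, here's how you might set up a placement group and launch a pair of C4 instances into it using the CLI (a sketch; the AMI ID and group name are placeholders):

# Create a cluster placement group for low-latency connectivity
aws ec2 create-placement-group --group-name c4-cluster --strategy cluster
# Launch two c4.8xlarge instances into the group
aws ec2 run-instances --image-id ami-12345678 --instance-type c4.8xlarge \
    --count 2 --placement GroupName=c4-cluster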

c4.8xlarge Goodies
EC2 uses virtualization technology to provide secure compute, network, and block storage resources that are easy to manage through Web APIs. For a compute optimized instance family like C4, our goal is to provide as much of the performance of the underlying hardware as safely possible, while still providing virtualized I/O with very low jitter. We are always working to make our systems more efficient, and through that effort we are able to deliver more cores in the form of 36 vCPUs on the c4.8xlarge instance type (some operating systems have a limit of 32 vCPUs and may not be compatible with the c4.8xlarge instance type. For more information, refer to our documentation on Operating System Support).

Like earlier Intel processors, the Intel Xeon E5-2666 v3 in the C4 instances supports Turbo Boost. This technology allows the processor to run faster than the rated speed (2.9 GHz) as long as it stays within its design limits for power consumption and heat generation. The effect depends on the number of cores in use and the exact workload, and can boost the clock speed to as high as 3.5 GHz under optimal conditions. In general, workloads that use just a few cores are the most likely to benefit from this speedup. Turbo Boost is enabled by default and your applications can benefit from it with no effort on your part.

Here’s an inside look at an actual Haswell microarchitecture die (this photo is of a version of the die that is similar to, but not an exact match for, the one used in the C4 instances). The cache is in the middle, flanked to the top and the bottom by the CPU cores:

If your workload is able to take advantage of all of those cores, you’ll get the rated 2.9 GHz speed, with help from Turbo Boost whenever the processor decides that it is able to raise the clock speed without exceeding any of the processor’s design limits for heat generation and dissipation.

In some cases, your workload might not need all 18 of the cores (each of which runs two hyperthreads, for a total of 36 vCPUs on c4.8xlarge). To tune your application for better performance, you can manage the power consumption on a per-core basis. This is known as C-state management, and gives you control over the sleep level that a core may enter when idle. Let’s say that your code needs just two cores. Your operating system can set the other 16 cores to a state that draws little or no power, thereby creating some thermal headroom that will give the remaining cores an opportunity to Turbo Boost. You also have control over the desired performance (CPU clock frequency); this is known as P-state management.  You should consider changing C-state settings to decrease CPU latency variability (cores in a sleep state consume less power, but deeper sleep states require longer to become active when needed) and consider changing P-state settings to adjust the variability in CPU frequency in order to best meet the needs of your application. Please note that C-state and P-state management requires operating system support and is currently available only when running Linux.

You can use the turbostat command (available on the Amazon Linux AMI) to display the processor frequency and C-state information.
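For example (a sketch; turbostat reports the statistics gathered while the given command runs, and the kernel parameter shown is one documented way to limit deep C-states on Linux):

# Summarize per-core frequency and C-state residency over a ten-second window
sudo turbostat sleep 10

# To keep cores out of deep sleep states, append a parameter such as
#   intel_idle.max_cstate=1
# to the kernel line in /boot/grub/grub.conf and reboot.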

Helpful resources for C-State and P-State management include Jeremy Eder’s post on processor.max_cstate, intel_idle.max_cstate and /dev/cpu_dma_latency, Dell’s technical white paper, Controlling Processor C-State Usage in Linux, and the discussion of Are hardware power management features causing latency spikes in my application? You should also read our new documentation on Processor State Control.

Intel® Xeon® Processor (E5-2666 v3) in Depth
The Intel Haswell microarchitecture is a notable improvement on its predecessors. It is better at predicting branches and more efficient at prefetching instructions and data. It can also do a better job of taking advantage of opportunities to execute multiple instructions in parallel. This improves performance on integer math and on branches. This new processor also incorporates Intel’s Advanced Vector Extensions 2. AVX2 supports 256-bit integer vectors and can process 32 single precision or 16 double precision floating point operations per cycle. It also supports instructions for packing and extracting bit fields, decoding variable-length bit streams, gathering bits, arbitrary precision arithmetic, endian conversion, hashing, and cryptography. The AVX2 instructions and the updated microarchitecture can double the floating-point performance for compute-intensive workloads. The improvements to the microarchitecture can boost the performance of existing applications by 30% or more. In order to take advantage of these new features, you will need to use a development toolchain that knows how to generate code that makes use of these new instructions; see the Intel Developer Zone article, Write your First Program with Haswell new Instructions for more info.
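For example, with GCC you can target the Haswell instruction set explicitly (a sketch; the file names are placeholders, and older toolchains use the alternate flag shown):

# GCC 4.9 and later accept the Haswell target by name
gcc -O3 -march=haswell -o myapp myapp.c
# GCC 4.8 uses the core-avx2 name for the same target
gcc -O3 -march=core-avx2 -o myapp myapp.c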

Launch a C4 Instance Today
As I noted earlier, the new C4 instances are available today in seven AWS regions (and coming soon to the others). You can launch them On-Demand, purchase them as Reserved Instances, or access them via the Spot Market. You can also launch applications from the AWS Marketplace on C4 instances in any Region where they are supported.

We are always interested in hearing from our customers. If you have feedback on the C4 instance type and would like to share it with us, please send it to ec2-c4-feedback@amazon.com.

Jeff;

ClassicLink – Private Communication Between Classic EC2 Instances & VPC Resources

Amazon Virtual Private Cloud lets you create and run a logically isolated section of the Cloud. Running within a VPC combines the benefits of the cloud with the flexibility of a network topology designed to fit the unique needs of your in-house IT department. For example:

  • Isolation – You can create a network and exercise fine-grained control over internal and external connectivity.
  • Flexibility – You have full control over the IP address range, routing, subnets, and ACLs.
  • Features – Certain AWS features, including Enhanced Networking and the new T2 instances, are available only within a VPC. The powerful C3 instances can make use of Enhanced Networking when run within a VPC.
  • Private Communication – You can connect to your existing on-premises or colo’ed infrastructure using AWS Direct Connect and a VPN connection.

You define a virtual network by specifying an IP address range using a CIDR block, partitioning the range into one or more subnets, and setting up Access Control Lists (ACLs) to allow network traffic to flow between the subnets. After you define your virtual network, you can launch Amazon Elastic Compute Cloud (EC2) instances, Amazon Relational Database Service (RDS) DB instances, Amazon ElastiCache nodes, and other AWS resources, each on a designated subnet.

Up until now, EC2 instances that were not running within a VPC (commonly known as EC2-Classic) had to use public IP addresses or tunneling to communicate with AWS resources in a VPC. They could not take advantage of the higher throughput and lower latency connectivity available for inter-instance communication. This model also resulted in additional bandwidth charges and had some undesirable security implications.

Hello, ClassicLink
In order to allow EC2-Classic instances to communicate with these resources, we are introducing a new feature known as ClassicLink. You can now enable this feature for any or all of your VPCs and then put your existing Classic instances into VPC security groups.

ClassicLink will allow you to learn about and adopt VPC features, even if you are currently making good use of EC2-Classic. For example, you can use a new Amazon RDS T2 Instance (available only within a VPC) to launch a cost-effective DB instance that can easily accommodate bursts of traffic and queries. You can also take advantage of the additional control and flexibility available to you when you make use of the VPC security groups. For example, you can make use of outbound traffic filtering rules and you can change the security groups associated with a running instance.

Enabling & Using ClassicLink
You can enable ClassicLink on a per-VPC basis. Simply open up the VPC tab of the AWS Management Console, select the desired VPC, right-click, and choose Enable ClassicLink:

Now you can link any or all of your EC2 instances to the VPC by right-clicking and choosing Link to VPC from the ClassicLink menu:

Choose the appropriate security group and you will be good to go:

The new setting takes effect immediately; the instance is now part of the chosen group(s). You can remove the security group(s) from the instance at a later time if you no longer have a need for private communication from the EC2-Classic instance to the AWS resources in the VPC.
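If you would rather use the CLI, the equivalent calls look like this (a sketch; the VPC, instance, and security group IDs are placeholders):

# Enable ClassicLink on the VPC
aws ec2 enable-vpc-classic-link --vpc-id vpc-1a2b3c4d
# Link a Classic instance to the VPC and place it in a VPC security group
aws ec2 attach-classic-link-vpc --instance-id i-1a2b3c4d \
    --vpc-id vpc-1a2b3c4d --groups sg-1a2b3c4d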

Cost and Availability
ClassicLink is accessible from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the AWS SDKs. To learn more, click here.

ClassicLink is available at no charge. If you are currently running in EC2-Classic and have been looking for an easy way to start taking advantage of VPC resources, please take a closer look.

Jeff;

New – EC2 Spot Instance Termination Notices

When potential users of AWS ask me about ways that it differs from their existing on-premises systems, I like to tell them about EC2 Spot Instances and the EC2 Spot Market. When they learn that they can submit bids for spare EC2 instances at the price of their choice, their eyes widen and they start to think about the ways that they can put this unique, powerful, and economical feature to use in their own applications.

Before we dive in, let’s review the life cycle of a Spot Instance:

  1. You (or an application running on your behalf) submit a bid to run a desired number of EC2 instances of a particular type. The bid includes the price that you are willing to pay to use the instance for an hour.
  2. When your bid price exceeds the current Spot price (which varies based on supply and demand), your instances are run.
  3. When the current Spot price rises above your bid price, the Spot instance is reclaimed by AWS so that it can be given to another customer.

New Spot Instance Termination Notice
Today we are improving the reclamation process with the addition of a two-minute warning, formally known as a Spot Instance Termination Notice.  Your application can use this time to save its state, upload final log files, or remove itself from an Elastic Load Balancer. This change will allow more types of applications to benefit from the scale and low price of Spot Instances.

The Termination Notice is accessible to code running on the instance via the instance’s metadata at http://169.254.169.254/latest/meta-data/spot/termination-time. This field will become available when the instance has been marked for termination (step 3, above), and will contain the time when a shutdown signal will be sent to the instance’s operating system. At that time, the Spot Instance Request’s bid status will be set to marked-for-termination. The bid status is accessible via the DescribeSpotInstanceRequests API for use by programs that manage Spot bids and instances.

We recommend that interested applications poll for the termination notice at five-second intervals. This will give the application almost two full minutes to complete any desired processing before it is reclaimed. Here’s a timeline to help you to understand the termination process (the “+” indicates a time relative to the start of the timeline):

  • +00:00 – Your Spot instance is marked for termination because the current Spot price has risen above the bid price. The bid status of your Spot Instance Request is set to marked-for-termination and the /spot/termination-time metadata is set to a time precisely two minutes in the future.
  • Between +00:00 and +00:05 – Your instance (assuming that it is polling at five-second intervals) learns that it is scheduled for termination.
  • Between +00:05 and +02:00 – Your application makes all necessary preparation for shutdown. It can checkpoint work in progress, upload final log files, and remove itself from an Elastic Load Balancer.
  • +02:00 – The instance’s operating system will be told to shut down and the bid status will be set to instance-terminated-by-price (be sure to read the documentation on Tracking Spot Requests with Bid Status Codes before writing code that depends on the values in this field).
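Here is a minimal sketch of the five-second polling loop recommended above (the cleanup script is a placeholder for your own checkpoint logic; the grep pattern simply detects that a timestamp, rather than a 404 message, came back):

#!/bin/bash
while true
do
  if curl -s http://169.254.169.254/latest/meta-data/spot/termination-time | grep -q '.*T.*Z'
  then
    # Termination notice received; roughly two minutes remain
    /usr/local/bin/checkpoint-and-drain.sh
    break
  fi
  sleep 5
done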

Spot Instances in Action
Many AWS customers are making great use of Spot Instances and I’d like to encourage you to do the same! Here are a couple of examples:

Available Now
This feature is available now and you can start using it today! There is no charge for the HTTP requests that you will use to retrieve the instance metadata or for the calls to the DescribeSpotInstanceRequests API.

Jeff;

AWS GovCloud (US) Update – Glacier, VM Import, CloudTrail, and More

I am pleased to be able to announce a set of updates and additions to AWS GovCloud (US). We are making a number of new services available including Amazon Glacier, AWS CloudTrail, and VM Import. We are also enhancing the AWS Management Console with support for Auto Scaling and the Service Limits Report. As you may know, GovCloud (US) is an isolated AWS Region designed to allow US Government agencies and customers to move sensitive workloads into the cloud. It adheres to the U.S. International Traffic in Arms Regulations (ITAR) as well as the Federal Risk and Authorization Management Program (FedRAMP). AWS GovCloud (US) has received an Agency Authorization to Operate (ATO) from the US Department of Health and Human Services (HHS) utilizing a FedRAMP-accredited Third Party Assessment Organization (3PAO) for the following services: EC2, S3, EBS, VPC, and IAM.

AWS customers host a wide variety of web and enterprise applications in GovCloud (US). They also run HPC workloads and count on the cloud for storage and disaster recovery.

Let’s take a look at the new features!

Amazon Glacier
Amazon Glacier is a secure and durable storage service designed for data archiving and online backup. With prices that start at $0.013 per gigabyte per month in this Region, you can store any amount of data and retrieve it within hours. Glacier is ideal for digital media archives, financial and health care records, and long-term database backups. It is also a perfect place to store data that must be retained for regulatory compliance. You can store data directly in a Glacier vault or you can make use of lifecycle rules to move data from Amazon Simple Storage Service (S3) to Glacier.
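As an illustration, here is what such a lifecycle rule might look like via the CLI (a sketch; the bucket name, prefix, and transition age are placeholders):

aws s3api put-bucket-lifecycle-configuration --bucket my-log-bucket \
    --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "archive-to-glacier",
      "Prefix": "logs/",
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}'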

AWS CloudTrail
AWS CloudTrail records calls made to the AWS APIs and publishes the resulting log files to S3. The log files can be used as a compliance aid, allowing you to demonstrate that AWS resources have been managed according to rules and regulatory standards (see my blog post, AWS CloudTrail – Capture AWS API Activity, for more information). You can also use the log files for operational troubleshooting and to identify activities on AWS resources which failed due to inadequate permissions. As you can see from the blog post, you simply enable CloudTrail from the Console and point it at the S3 bucket of your choice. Events will be delivered to the bucket and stored in encrypted form, typically within 15 minutes after they take place. Within the bucket, events are organized by AWS Account ID, Region, Service Name, Date, and Time:

Our white paper, Security at Scale: Logging in AWS, will help you to understand how CloudTrail works and how to put it to use in your organization.

VM Import
VM Import allows you to import virtual machine images from your existing environment for use on Amazon Elastic Compute Cloud (EC2). This allows you to build on your existing investment in images that meet your IT security, configuration management, and compliance requirements.

You can import VMware ESX and VMware Workstation VMDK images, Citrix Xen VHD images, and Microsoft Hyper-V VHD images for Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2, as well as CentOS 5.1-6.5, Ubuntu 12.04, 12.10, 13.04, and 13.10, and Debian 6.0.0-6.0.8 and 7.0.0-7.2.0.

Console Updates
The AWS Management Console in the GovCloud Region now supports Auto Scaling and the Service Limits Report.

Auto Scaling allows you to build systems that respond to changes in demand by scaling capacity up or down as needed.

The Service Limits Report makes it easy for you to view and manage the limits associated with your AWS account. It includes links that let you make requests for increases in a particular limit with a couple of clicks:

All of these new features are operational now and are available to GovCloud users today!

Jeff;