Category: Amazon EC2


EC2 Reserved Instance Update – Convertible RIs and Regional Benefit

We launched EC2 Reserved Instances almost eight years ago. The model that we originated in 2009 provides you with two separate benefits: capacity reservations and a significant discount on the use of specific instances in an Availability Zone. Over time, based on customer feedback, we have refined the model and made additional options available including Scheduled Reserved Instances, the ability to modify Reserved Instance reservations, and the ability to buy and sell Reserved Instances (RIs) on the Reserved Instance Marketplace.

Today we are enhancing the Reserved Instance model once again. Here’s what we are launching:

Regional Benefit – Many customers have told us that the discount is more important than the capacity reservation, and that they would be willing to trade the reservation for increased flexibility. Starting today, you can choose to waive the capacity reservation associated with a Standard RI, run your instance in any AZ in the Region, and have your RI discount automatically applied.

Convertible Reserved Instances – Convertible RIs give you even more flexibility and offer a significant discount (typically 45% compared to On-Demand). They allow you to change the instance family and other parameters associated with a Reserved Instance at any time. For example, you can convert C3 RIs to C4 RIs to take advantage of a newer instance type, or convert C4 RIs to M4 RIs if your application turns out to need more memory. You can also use Convertible RIs to take advantage of EC2 price reductions over time.

Let’s take a closer look…

Regional Benefit
Reserved Instances (either Standard or Convertible) can now be set to automatically apply across all Availability Zones in a Region, broadening the application of your RI discounts. When this benefit is used, capacity is not reserved, since the selection of an Availability Zone is required to provide a capacity reservation. In dynamic environments where you frequently launch, use, and then terminate instances, this new benefit will expand your options and reduce the amount of time you spend seeking optimal alignment between your RIs and your instances. It can be of considerable value in horizontally scaled architectures that use instances launched via Auto Scaling and connected via Elastic Load Balancing.

After you click on Purchase Reserved Instances in the AWS Management Console, clicking on Search will display RIs that have this new benefit:

You can check Only show offerings that reserve capacity if you want to shop for RIs that apply to a single Availability Zone and also reserve capacity:
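If you prefer the AWS Command Line Interface (CLI), here’s a rough sketch of how you might search for Region-scoped offerings. The scope filter name and value reflect our reading of the EC2 API, so treat them as assumptions and verify against the DescribeReservedInstancesOfferings documentation:

# List Standard RI offerings whose discount applies Region-wide
# (the "scope" filter and its "Region" value are assumptions):
$ aws ec2 describe-reserved-instances-offerings \
      --instance-type m4.large \
      --offering-class standard \
      --filters Name=scope,Values=Region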

Convertible RIs
Perhaps you, like many of our customers, purchase RIs to get the best pricing for your workloads. However, if you don’t yet have a good understanding of your long-term requirements, you may be able to make use of our new Convertible RIs. If your needs change, you simply exchange your Convertible Reserved Instances for other ones. You can exchange into Convertible RIs that have a new instance type, operating system, or tenancy without resetting the term. There’s no fee for making an exchange, and you can do so as often as you like.

When you make the exchange, you must acquire new RIs that are of equal or greater value than those you started with; in some cases you’ll need to make a true-up payment in order to balance the books. The exchange process is based on the list value of each Convertible RI; this value is simply the sum of all payments you’ll make over the remaining term of the original RI.
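Exchanges can be scripted as well. Here’s a hedged sketch of the new exchange calls; the RI ID and offering ID below are hypothetical placeholders:

# Preview the exchange; the quote shows the list values on both sides
# and any true-up payment required (IDs are placeholders):
$ aws ec2 get-reserved-instances-exchange-quote \
      --reserved-instance-ids ri-1234567890abcdef0 \
      --target-configurations OfferingId=abcd1234-ef56-7890-abcd-ef1234567890,InstanceCount=4

# If the quote looks good, perform the exchange with the same arguments:
$ aws ec2 accept-reserved-instances-exchange-quote \
      --reserved-instance-ids ri-1234567890abcdef0 \
      --target-configurations OfferingId=abcd1234-ef56-7890-abcd-ef1234567890,InstanceCount=4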

You can shop for a Convertible RI by setting the Offering Class to Convertible before clicking on Search:

Convertible RIs offer capacity assurance, are typically priced at a 45% discount compared to On-Demand, and are available for all current EC2 instance types with a three-year term. All three payment options (No Upfront, Partial Upfront, and All Upfront) are available.

Available Now
All of the purchasing and exchange options that I described above can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Reserved Instance APIs (DescribeReservedInstances, PurchaseReservedInstances, ModifyReservedInstances, and so forth).

Convertible RIs and the regional benefit are available in all public AWS Regions, excluding AWS GovCloud (US) and China (Beijing), which are coming soon.

Jeff;

 

Expanding the M4 Instance Type – New M4.16xlarge

EC2’s M4 instances offer a balance of compute, memory, and networking resources and are a good choice for many different types of applications.

We launched the M4 instances last year (read The New M4 Instance Type to learn more) and gave you a choice of five sizes, from large up to 10xlarge. Today we are expanding the range with the introduction of a new m4.16xlarge with 64 vCPUs and 256 GiB of RAM. Here’s the complete set of specs:

Instance Name    vCPU Count    RAM        Instance Storage    Network Performance    Dedicated EBS Bandwidth
m4.large         2             8 GiB      EBS Only            Moderate               450 Mbps
m4.xlarge        4             16 GiB     EBS Only            High                   750 Mbps
m4.2xlarge       8             32 GiB     EBS Only            High                   1,000 Mbps
m4.4xlarge       16            64 GiB     EBS Only            High                   2,000 Mbps
m4.10xlarge      40            160 GiB    EBS Only            10 Gbps                4,000 Mbps
m4.16xlarge      64            256 GiB    EBS Only            20 Gbps                10,000 Mbps

The new instances are based on Intel Xeon E5-2686 v4 (Broadwell) processors that are optimized specifically for EC2. When used with Elastic Network Adapter (ENA) inside of a placement group, the instances can deliver up to 20 Gbps of low-latency network bandwidth. To learn more about the ENA, read my post, Elastic Network Adapter – High Performance Network Interface for Amazon EC2.
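To sketch what this looks like in practice, here’s one way to launch a pair of m4.16xlarge instances into a cluster placement group from the CLI; the AMI ID and key pair name are placeholders:

# Create a cluster placement group for low-latency, high-bandwidth networking:
$ aws ec2 create-placement-group --group-name my-m4-pg --strategy cluster

# Launch two m4.16xlarge instances into the group (use an ENA-enabled AMI;
# the AMI ID and key pair name are placeholders):
$ aws ec2 run-instances \
      --image-id ami-12345678 \
      --instance-type m4.16xlarge \
      --count 2 \
      --key-name my-key \
      --placement GroupName=my-m4-pg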

Like the m4.10xlarge, the m4.16xlarge allows you to control the C states to enable higher turbo frequencies when you are using just a few cores. You can also control the P states to lower performance variability (read my extended description in New C4 Instances to learn more about both of these features).

You can purchase On-Demand Instances, Spot Instances, and Reserved Instances; visit the EC2 Pricing page for more information.

Available Now
As part of today’s launch we are also making the M4 instances available in the China (Beijing), South America (São Paulo), and AWS GovCloud (US) regions.

Jeff;

Now Available – Amazon Linux AMI 2016.09

My colleague Sean Kelly is part of the team that produces the Amazon Linux AMI. He shared the guest post below in order to introduce you to the newest version!

Jeff;


The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We offer new major versions of the Amazon Linux AMI after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and we welcome feedback on them.

Launching 2016.09 Today
Today we are launching the 2016.09 Amazon Linux AMI, which is supported in all regions and on all current-generation EC2 instance types. The Amazon Linux AMI supports both HVM and PV modes, as well as both EBS-backed and Instance Store-backed AMIs.

You can launch this new version of the AMI in the usual ways. You can also upgrade an existing EC2 instance by running the following commands:

$ sudo yum clean all
$ sudo yum update

And then rebooting the instance.
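If you’d like to confirm that the upgrade took effect, a quick check of the release file should do the trick:

# After the reboot, the release file should report the new version:
$ cat /etc/system-release
Amazon Linux AMI release 2016.09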

New Features
The Amazon Linux AMI’s roadmap is driven in large part by customer requests. We’ve added a number of features in this release in response to these requests and to keep our existing feature set up-to-date:

Nginx 1.10 – Based on numerous customer requests, the Amazon Linux AMI 2016.09 repositories include the latest stable Nginx 1.10 release. You can install or upgrade to the latest version with sudo yum install nginx.

PostgreSQL 9.5 – Many customers have asked for PostgreSQL 9.5, and it is now available as a separate package from our other PostgreSQL offerings. PostgreSQL 9.5 is available via sudo yum install postgresql95.

Python 3.5 – Python 3.5, the latest in the Python 3.x series, has been integrated with our existing Python experience and is now available in the Amazon Linux AMI repositories. This includes the associated virtualenv and pip packages, which can be used to install and manage dependencies. The default python version for /usr/bin/python can be managed via alternatives, just like our existing Python packages. Python 3.5 and the associated pip and virtualenv packages can be installed via sudo yum install python35 python35-virtualenv python35-pip.

Amazon SSM Agent – The Amazon SSM Agent allows you to use Run Command in order to configure and run scripts on your EC2 instances and is now available in the Amazon Linux 2016.09 repositories (read Remotely Manage Your Instances to learn more). Install the agent by running sudo yum install amazon-ssm-agent and start it with sudo /sbin/start amazon-ssm-agent.

Learn More
To learn more about all of the new features of the new Amazon Linux AMI, take a look at the release notes.

Sean Kelly, Amazon Linux AMI Team

PS – If you would like to work on future versions of the Amazon Linux AMI, check out our Linux jobs!

 

New – Auto Scaling for EC2 Spot Fleets

The EC2 Spot Fleet model (see Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with one Request for more information) allows you to create a fleet of EC2 instances with a single request. You simply specify the fleet’s target capacity, enter a bid price per hour, and choose the instance types that you would like to have as part of your fleet.

Behind the scenes, AWS will maintain the desired target capacity (expressed in terms of instances or a vCPU count) by launching Spot instances that result in the best prices for you. Over time, as instances in the fleet are terminated due to rising prices, replacement instances will be launched using the specifications that result in the lowest price at that point in time.

New Auto Scaling
Today we are enhancing the Spot Fleet model with the addition of Auto Scaling. You can now arrange to scale your fleet up and down based on an Amazon CloudWatch metric. The metric can originate from an AWS service such as EC2, Amazon EC2 Container Service, or Amazon Simple Queue Service (SQS). Alternatively, your application can publish a custom metric and you can use it to drive the automated scaling. Either way, using these metrics to control the size of your fleet gives you very fine-grained control over application availability, performance, and cost even as conditions and loads change. Here are some ideas to get you started (a CLI sketch follows the list):

  • Containers – Scale container-based applications running on Amazon ECS using CPU or memory usage metrics.
  • Batch Jobs – Scale queue-driven batch jobs based on the number of messages in an SQS queue.
  • Spot Fleets – Scale a fleet based on Spot Fleet metrics such as MaxPercentCapacityAllocation.
  • Web Service – Scale web services based on measured response time and average requests per second.
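To make the batch job case concrete, here’s a hedged CLI sketch using the Application Auto Scaling API; the Spot Fleet request ID, role ARN, and policy values are hypothetical:

# Register the fleet's target capacity as a scalable dimension
# (the fleet ID and role ARN are placeholders):
$ aws application-autoscaling register-scalable-target \
      --service-namespace ec2 \
      --resource-id spot-fleet-request/sfr-12345678-90ab-cdef-1234-567890abcdef \
      --scalable-dimension ec2:spot-fleet-request:TargetCapacity \
      --min-capacity 2 --max-capacity 10 \
      --role-arn arn:aws:iam::123456789012:role/SpotFleetAutoscaleRole

# Create a scale-up policy; point a CloudWatch alarm at the returned policy ARN:
$ aws application-autoscaling put-scaling-policy \
      --service-namespace ec2 \
      --resource-id spot-fleet-request/sfr-12345678-90ab-cdef-1234-567890abcdef \
      --scalable-dimension ec2:spot-fleet-request:TargetCapacity \
      --policy-name ScaleUp \
      --policy-type StepScaling \
      --step-scaling-policy-configuration 'AdjustmentType=ChangeInCapacity,Cooldown=300,StepAdjustments=[{MetricIntervalLowerBound=0,ScalingAdjustment=3}]'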

You can set up Auto Scaling using the Spot Fleet Console, the AWS Command Line Interface (CLI), AWS CloudFormation, or by making API calls using one of the AWS SDKs.

I started by launching a fleet. I used the request type Request and maintain in order to be able to scale the fleet up and down:

My fleet was up and running within a minute or so:

Then (for illustrative purposes) I created an SQS queue, put some messages in it, and defined a CloudWatch alarm (AppQueueBackingUp) that would fire if there were 10 or more messages visible in the queue:

I also defined an alarm (AppQueueNearlyEmpty) that would fire if the queue was just about empty (2 or fewer messages).
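For reference, here’s roughly what the first alarm looks like from the CLI; the queue name is hypothetical, and the alarm action would be the ARN of the ScaleUp policy:

# Fire when 10 or more messages are visible in the queue
# (queue name is a placeholder; $SCALE_UP_POLICY_ARN comes from put-scaling-policy):
$ aws cloudwatch put-metric-alarm \
      --alarm-name AppQueueBackingUp \
      --namespace AWS/SQS \
      --metric-name ApproximateNumberOfMessagesVisible \
      --dimensions Name=QueueName,Value=my-app-queue \
      --statistic Average --period 300 --evaluation-periods 1 \
      --threshold 10 --comparison-operator GreaterThanOrEqualToThreshold \
      --alarm-actions "$SCALE_UP_POLICY_ARN"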

Finally, I attached the alarms to the ScaleUp and ScaleDown policies for my fleet:

Before I started writing this post, I put 5 messages into the SQS queue. With the fleet launched and the scaling policies in place, I added 5 more, and then waited for the alarm to fire:

Then I checked in on my fleet, and saw that the capacity had been increased as expected. This was visible in the History tab (“New targetCapacity: 5”):

To wrap things up I purged all of the messages from my queue, watered my plants, and returned to find that my fleet had been scaled down as expected (“New targetCapacity: 2”):

Available Now
This new feature is available now and you can start using it today in all regions where Spot instances are supported.

Jeff;

 

New – Run SAP HANA on Clusters of X1 Instances

My colleague Steven Jones wrote the guest post below in order to tell you about an impressive new way to use SAP HANA for large-scale workloads.

Jeff;


Back in May we announced the availability of our new X1 instance type x1.32xlarge, our latest addition to the Amazon EC2 memory-optimized instance family with 2 TB of RAM, purpose built for running large-scale, in-memory applications and in-memory databases like SAP HANA in the AWS cloud.

At the same time, we announced SAP certification for single-node deployments of SAP HANA on X1. Since then, many AWS customers across the globe have been making use of X1 for a broad range of HANA OLTP use cases, including S/4HANA, Suite on HANA, Business Warehouse on HANA, and other OLAP-based BI strategies. Even so, many customers have been asking for the ability to use SAP HANA with X1 instances clustered together in scale-out fashion.

After extensive testing and benchmarking of scale-out HANA clusters in accordance with SAP’s certification processes, we’re pleased to announce that our X1 instances are now certified by SAP for large scale-out OLAP deployments of up to 7 nodes or 14 TB of RAM. This certification arrives in conjunction with today’s announcement of BW/4HANA, SAP’s highly optimized next-generation business warehouse, and we are excited to support the launch of SAP’s new flagship Business Warehouse offering with flexible, scalable, and cost-effective deployment options.

Here’s a screenshot from HANA Studio showing a large (14 TB) scale-out cluster running on seven X1 instances:

And this is just the beginning; as we have indicated, we have plans to make X1 instances available in other sizes, and we are testing even larger clusters, in the range of 50 TB, in our lab. If you need scale-out clusters larger than 14 TB, please contact us; we’d like to work with you.

Reduced Cost and Complexity
Many AWS customers have also been running SAP HANA in scale-out fashion across multiple R3 instances. This new certification brings the ability to consolidate larger scale-out deployments onto fewer larger instances, reducing both cost and complexity. See our SAP HANA Migration guide for details on consolidation strategies.

Flexible High-Availability Options
The AWS platform brings a wide variety of options depending on your needs for ensuring critical SAP HANA deployments like S/4HANA and BW/4HANA are highly available. In fact, customers who have run scale-out deployments of SAP HANA on premises, or with traditional hosting providers, tell us they often have to pay expensive maintenance contracts in addition to purchasing standby nodes or spare hardware to be able to rapidly respond to hardware failures. Others unfortunately forgo this extra hardware and hope nothing happens.

One particularly useful option customers are leveraging on the AWS platform is a solution called Amazon EC2 Auto Recovery. Customers simply create an Amazon CloudWatch alarm that monitors an EC2 instance and automatically recovers it to a healthy host if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. A recovered instance is identical to the original instance, including attached EBS storage volumes as well as other configuration such as the hostname, IP address, and instance ID. Standard pricing for Amazon CloudWatch alarms applies (for example, $0.10 per alarm per month in US East (Northern Virginia)). Essentially, this allows you to leverage our spare capacity for rapid recovery while we take care of the unhealthy hardware.
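As a rough sketch (the instance ID is a placeholder; check the EC2 Auto Recovery documentation for the exact settings), such an alarm can be created like this:

# Recover the instance if the system status check fails for two consecutive minutes:
$ aws cloudwatch put-metric-alarm \
      --alarm-name recover-hana-node1 \
      --namespace AWS/EC2 \
      --metric-name StatusCheckFailed_System \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --statistic Minimum --period 60 --evaluation-periods 2 \
      --threshold 0 --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:automate:us-east-1:ec2:recover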

Getting Started
You can deploy your own production-ready single-node or scale-out SAP HANA solution on X1 in less than an hour, using the well-tested configurations in the updated AWS Quick Start Reference Deployment for SAP HANA.

Be sure to also review our SAP HANA Implementation and Operations Guide for other guidance and best practices when planning your SAP HANA implementation on Amazon Web Services.

Are you in the Bay Area on September 7 and want to join us for an exciting AWS and SAP announcement? Register here and we’ll see you in San Francisco!

Can’t make it? Join our livestream on September 7 at 9 AM PST and learn how AWS and SAP are working together to provide value for SAP customers.

We look forward to serving you.

Steven Jones, Senior Manager, AWS Solutions Architecture

Powerful AWS Platform Features, Now for Containers

Containers are great but they come with their own management challenges. Our customers have been using containers on AWS for quite some time to run workloads ranging from microservices to batch jobs. They told us that managing a cluster, including the state of the EC2 instances and containers, can be tricky, especially as the environment grows. They also told us that integrating the capabilities you get with the AWS platform, such as load balancing, scaling, security, monitoring, and more, with containers is a key requirement. Amazon ECS was designed to meet all of these needs and more.

We created Amazon ECS to make it easy for customers to run containerized applications in production. There is no container management software to install and operate because it is all provided to you as a service. You just add the EC2 capacity you need to your cluster and upload your container images. Amazon ECS takes care of the rest, deploying your containers across a cluster of EC2 instances and monitoring their health. Customers such as Expedia and Remind have built Amazon ECS into their development workflow, creating PaaS platforms on top of it. Others, such as Prezi and Shippable, are leveraging ECS to eliminate operational complexities of running containers, allowing them to spend more time delivering features for their apps.

AWS has highly reliable and scalable fully-managed services for load balancing, auto scaling, identity and access management, logging, and monitoring. Over the past year, we have continued to natively integrate the capabilities of the AWS platform with your containers through ECS, giving you the same capabilities you are used to on EC2 instances.

Amazon ECS recently delivered container support for application load balancing (Today), IAM roles (July), and Auto Scaling (May). We look forward to bringing more of the AWS platform to containers over time.

Let’s take a look at the new capabilities!

Application Load Balancing
Load balancing and service discovery are essential parts of any microservices architecture. Because Amazon ECS uses Elastic Load Balancing, you don’t need to manage and scale your own load balancing layer. You also get direct access to other AWS services that support ELB such as AWS Certificate Manager (ACM) to automatically manage your service’s certificates and Amazon API Gateway to authenticate callers, among other features.

Today, I am happy to announce that ECS supports the new application load balancer, a high-performance load balancing option that operates at the application layer and allows you to define content-based routing rules. The application load balancer includes two features that simplify running microservices on ECS: dynamic ports and the ability for multiple services to share a single load balancer.

Dynamic port mapping makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
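Here’s a minimal sketch of such a task definition from the CLI; the family, container name, and image are placeholders:

# Register a task whose container port 80 maps to a dynamic host port;
# a hostPort of 0 tells ECS to pick an unused port on the instance:
$ aws ecs register-task-definition \
      --family web \
      --container-definitions '[{"name":"web","image":"nginx","memory":128,"portMappings":[{"containerPort":80,"hostPort":0}]}]'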

Previously, there was a one-to-one mapping between ECS services and load balancers. Now, a load balancer can be shared with multiple services, using path-based routing. Each service can define its own URI, which can be used to route traffic to that service. In addition, you can create an environment variable with the service’s DNS name, supporting basic service discovery. For example, a stock service could be http://example.com/stock and a weather service could be http://example.com/weather, both served from the same load balancer. A news portal could then use the load balancer to access both the stock and weather services.
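Putting the pieces together, a service is attached to an application load balancer’s target group when the service is created. Here’s a hedged sketch; the cluster, names, and target group ARN are placeholders:

# Create a service that registers its tasks with an ALB target group:
$ aws ecs create-service \
      --cluster default \
      --service-name stock-service \
      --task-definition web \
      --desired-count 2 \
      --role ecsServiceRole \
      --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/stock/0123456789abcdef,containerName=web,containerPort=80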

IAM Roles for ECS Tasks
In Amazon ECS, you have always been able to use IAM roles for your Amazon EC2 container instances to simplify the process of making API requests from your containers. This also allows you to follow AWS best practices by not storing your AWS credentials in your code or configuration files, as well as providing benefits such as automatic key rotation.

With the introduction of the recently launched IAM roles for ECS tasks, you can secure your infrastructure by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This way, you can have one task that uses a specific IAM role for access to, let’s say, S3 and another task that uses an IAM role to access a DynamoDB table, both running on the same EC2 instance.
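In practice, the role is attached when you register the task definition. A sketch, with a hypothetical role ARN and image:

# Give this task's containers credentials scoped to an S3-access role:
$ aws ecs register-task-definition \
      --family stock-service \
      --task-role-arn arn:aws:iam::123456789012:role/StockServiceS3Access \
      --container-definitions '[{"name":"stock","image":"my-org/stock-service","memory":128}]'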

Service Auto Scaling
The third feature I want to highlight is Service Auto Scaling. With Service Auto Scaling and Amazon CloudWatch alarms, you can define scaling policies to scale your ECS services in the same way that you scale your EC2 instances up and down. With Service Auto Scaling, you can achieve high availability by scaling up when demand is high, and optimize costs by scaling down your service and the cluster when demand is lower, all automatically and in real time.

You simply choose the desired, minimum, and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling handles the rest. The service scheduler is also Availability Zone-aware, so you don’t have to worry about distributing your ECS tasks across multiple zones.
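The CLI flow uses the same Application Auto Scaling API shown in the Spot Fleet post above; here’s a sketch with hypothetical cluster, service, and role names:

# Make the service's DesiredCount scalable between 2 and 10 tasks:
$ aws application-autoscaling register-scalable-target \
      --service-namespace ecs \
      --resource-id service/default/stock-service \
      --scalable-dimension ecs:service:DesiredCount \
      --min-capacity 2 --max-capacity 10 \
      --role-arn arn:aws:iam::123456789012:role/ecsAutoscaleRole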

Available Now
These features are available now and you can start using them today!

Jeff;

New – AWS Application Load Balancer

We launched Elastic Load Balancing (ELB) for AWS in the spring of 2009 (see New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch to see just how far AWS has come since then). Elastic Load Balancing has become a key architectural component for many AWS-powered applications. In conjunction with Auto Scaling, Elastic Load Balancing greatly simplifies the task of building applications that scale up and down while maintaining high availability.

On the Level
Per the well-known OSI model, load balancers generally run at Layer 4 (transport) or Layer 7 (application).

A Layer 4 load balancer works at the network protocol level and does not look inside of the actual network packets, remaining unaware of the specifics of HTTP and HTTPS. In other words, it balances the load without necessarily knowing a whole lot about it.

A Layer 7 load balancer is more sophisticated and more powerful. It inspects packets, has access to HTTP and HTTPS headers, and (armed with more information) can do a more intelligent job of spreading the load out to the target.

Application Load Balancing for AWS
Today we are launching a new Application Load Balancer option for ELB. This option runs at Layer 7 and supports a number of advanced features. The original option (now called a Classic Load Balancer) is still available to you and continues to offer Layer 4 and Layer 7 functionality.

Application Load Balancers support content-based routing and applications that run in containers. They support a pair of industry-standard protocols (WebSocket and HTTP/2) and also provide additional visibility into the health of the target instances and containers. Web sites and mobile apps, running in containers or on EC2 instances, will benefit from the use of Application Load Balancers.

Let’s take a closer look at each of these features and then create a new Application Load Balancer of our very own!

Content-Based Routing
An Application Load Balancer has access to HTTP headers and allows you to route requests to different backend services accordingly. For example, you might want to send requests that include /api in the URL path to one group of servers (we call these target groups) and requests that include /mobile to another. Routing requests in this fashion allows you to build applications that are composed of multiple microservices that can run and be scaled independently.

As you will see in a moment, each Application Load Balancer allows you to define up to 10 URL-based rules to route requests to target groups. Over time, we plan to give you access to other routing methods.

Support for Container-Based Applications
Many AWS customers are packaging up their microservices into containers and hosting them on Amazon EC2 Container Service. This allows a single EC2 instance to run one or more services, but can present some interesting challenges for traditional load balancing with respect to port mapping and health checks.

The Application Load Balancer understands and supports container-based applications. It allows one instance to host several containers that listen on multiple ports behind the same target group and also performs fine-grained, port-level health checks.

Better Metrics
Application Load Balancers can perform and report on health checks on a per-port basis. The health checks can specify a range of acceptable HTTP responses, and are accompanied by detailed error codes.

As a byproduct of the content-based routing, you also have the opportunity to collect metrics on each of your microservices. This is a really nice side effect of the fact that each microservice can run in its own target group, on a specific set of EC2 instances. This increased visibility will allow you to do a better job of scaling up and down in response to the load on individual services.

The Application Load Balancer provides several new CloudWatch metrics including overall traffic (in GB), number of active connections, and the connection rate per hour.

Support for Additional Protocols & Workloads
The Application Load Balancer supports two additional protocols: WebSocket and HTTP/2.

WebSocket allows you to set up long-standing TCP connections between your client and your server. This is a more efficient alternative to the old-school method which involved HTTP connections that were held open with a “heartbeat” for very long periods of time. WebSocket is great for mobile devices and can be used to deliver stock quotes, sports scores, and other dynamic data while minimizing power consumption. ALB provides native support for WebSocket via the ws:// and wss:// protocols.

HTTP/2 is a significant enhancement of the original HTTP 1.1 protocol. The newer protocol supports multiplexed requests across a single connection. This reduces network traffic, as does the binary nature of the protocol.

The Application Load Balancer is designed to handle streaming, real-time, and WebSocket workloads in an optimized fashion. Instead of buffering requests and responses, it handles them in streaming fashion. This reduces latency and increases the perceived performance of your application.

Creating an ALB
Let’s create an Application Load Balancer and get it all set up to process some traffic!

The Elastic Load Balancing Console lets me create either type of load balancer:

I click on Application load balancer, enter a name (MyALB), and choose internet-facing. Then I add an HTTPS listener:

On the same screen, I choose my VPC (this is a VPC-only feature) and one subnet in each desired Availability Zone, tag my Application Load Balancer, and proceed to Configure Security Settings:

Because I created an HTTPS listener, my Application Load Balancer needs a certificate. I can choose an existing certificate that’s already in IAM or AWS Certificate Manager (ACM), upload a local certificate, or request a new one:

Moving right along, I set up my security group. In this case I decided to create a new one. I could have used one of my existing VPC or EC2 security groups just as easily:

The next step is to create my first target group (main) and to set up its health checks (I’ll take the defaults):

Now I am ready to choose the targets—the set of EC2 instances that will receive traffic through my Application Load Balancer. Here, I chose the targets that are listening on port 80:

The final step is to review my choices and to Create my ALB:

After I click on Create, the Application Load Balancer is provisioned and becomes active within a minute or so:

I can create additional target groups:

And then I can add a new rule that routes /api requests to that target:
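For automation, the same setup can be scripted using the CLI’s new elbv2 commands. Here’s a hedged sketch; the subnet, security group, VPC, certificate, and target group identifiers are placeholders:

# Create the load balancer, a target group, an HTTPS listener, and a path-based rule:
$ aws elbv2 create-load-balancer --name MyALB \
      --subnets subnet-aaaa1111 subnet-bbbb2222 \
      --security-groups sg-cccc3333

$ aws elbv2 create-target-group --name main \
      --protocol HTTP --port 80 --vpc-id vpc-dddd4444

$ aws elbv2 create-listener \
      --load-balancer-arn "$ALB_ARN" \
      --protocol HTTPS --port 443 \
      --certificates CertificateArn="$CERT_ARN" \
      --default-actions Type=forward,TargetGroupArn="$MAIN_TG_ARN"

$ aws elbv2 create-rule \
      --listener-arn "$LISTENER_ARN" \
      --priority 10 \
      --conditions Field=path-pattern,Values='/api/*' \
      --actions Type=forward,TargetGroupArn="$API_TG_ARN"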

Application Load Balancers work with multiple AWS services including Auto Scaling, Amazon ECS, AWS CloudFormation, AWS CodeDeploy, and AWS Certificate Manager (ACM). Support for and within other services is in the works.

Moving on Up
If you are currently using a Classic Load Balancer and would like to migrate to an Application Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create an Application Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Availability & Pricing
The Application Load Balancer is available now in all commercial AWS regions and you can start using it today!

The hourly rate for the use of an Application Load Balancer is 10% lower than the cost of a Classic Load Balancer.

When you use an Application Load Balancer, you will be billed by the hour and for the use of Load Balancer Capacity Units, also known as LCUs. An LCU measures the number of new connections per second, the number of active connections, and data transfer. We measure on all three dimensions, but bill based on the highest one. One LCU is enough to support either:

  • 25 connections/second with a 2 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer or
  • 5 connections/second with a 4 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer.

Billing for LCU usage is fractional, and is charged at $0.008 per LCU per hour. For example, a workload that handles 100 new connections per second (4 LCUs on that dimension), 3,000 active connections (1 LCU), and 2 Mbps of data transfer (less than 1 LCU) would be billed on the highest dimension: 4 LCUs, or $0.032 per hour. Based on our calculations, we believe that virtually all of our customers can obtain a net reduction in their load balancer costs by switching from a Classic Load Balancer to an Application Load Balancer.

Jeff;

 

 

 

EC2 Run Command Update – Monitor Execution Using Notifications

We launched EC2 Run Command late last year and have enjoyed seeing our customers put it to use in their cloud and on-premises environments. After the launch, we quickly added Support for Linux Instances, the power to Manage & Share Commands, and the ability to do Hybrid & Cross-Cloud Management. Earlier today we made EC2 Run Command available in the China (Beijing) and Asia Pacific (Seoul) Regions.

Our customers are using EC2 Run Command to automate and encapsulate routine system administration tasks. They are creating local users and groups, scanning for and then installing applicable Windows updates, managing services, checking log files, and the like. Because these customers are using EC2 Run Command as a building block, they have told us that they would like to have better visibility into the actual command execution process. They would like to know, quickly and often in detail, when each command and each code block in the command begins executing, when it completes, and how it completed (successfully or unsuccessfully).

In order to support this really important use case, you can now arrange to be notified when the status of a command or a code block within a command changes. In order to provide you with several different integration options, you can receive notifications via CloudWatch Events or via Amazon Simple Notification Service (SNS).

These notifications will allow you to use EC2 Run Command in true building block fashion. You can programmatically invoke commands and then process the results as they arrive. For example, you could create and run a command that captures the contents of important system files and metrics on each instance. When the command is run, EC2 Run Command will save the output in S3. Your notification handler can retrieve the object from S3, scan it for items of interest or concern, and then raise an alert if something appears to be amiss.

Monitoring Execution Using Amazon SNS
Let’s run a command on some EC2 instances and monitor the progress using SNS.

Following the directions (Monitoring Commands), I created an S3 bucket (jbarr-run-output), an SNS topic (command-status), and an IAM role (RunCommandNotifySNS) that allows the on-instance agent to send notifications on my behalf. I also subscribed my email address to the SNS topic, and entered the command:

And specified the bucket, topic, and role (further down on the Run a command page):

I chose All so that I would be notified of every possible status change (In Progress, Success, Timed Out, Cancelled, and Failed) and Invocation so that I would receive notifications as the status of each instance changes. I could have chosen to receive notifications at the command level (representing all of the instances) by selecting Command instead of Invocation.
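The same run can be scripted. Here’s a sketch of the equivalent send-command call; the instance ID and account number are placeholders:

# Run a shell command, write output to S3, and publish per-invocation
# status changes to the SNS topic:
$ aws ssm send-command \
      --document-name AWS-RunShellScript \
      --instance-ids i-0123456789abcdef0 \
      --parameters commands="df -h" \
      --output-s3-bucket-name jbarr-run-output \
      --service-role-arn arn:aws:iam::123456789012:role/RunCommandNotifySNS \
      --notification-config '{"NotificationArn":"arn:aws:sns:us-east-1:123456789012:command-status","NotificationEvents":["All"],"NotificationType":"Invocation"}'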

I clicked on Run and received a sequence of emails as the commands were executed on each of the instances that I selected. Here’s a sample:

In a real-world environment you would receive and process these notifications programmatically.

Monitoring Execution Using CloudWatch Events
I can also monitor the execution of my commands using CloudWatch Events. I can send the notifications to an AWS Lambda function, an SQS queue, or an Amazon Kinesis stream.

For illustrative purposes, I used a very simple Lambda function:

I created a rule that would invoke the function for all notifications issued by Run Command (as you can see below, I could have been more specific if necessary):
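A roughly equivalent rule can also be created from the CLI. The broad source-only event pattern below is a sketch (the Lambda function ARN is a placeholder); consult the CloudWatch Events documentation for the exact event shape if you need finer matching:

# Match events emitted by SSM and send them to a Lambda function:
$ aws events put-rule --name RunCommandStatus \
      --event-pattern '{"source":["aws.ssm"]}'
$ aws events put-targets --rule RunCommandStatus \
      --targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:HandleRunCommandStatus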

I saved the rule and ran another command, and then checked the CloudWatch metrics a few seconds later:

I also checked the CloudWatch log and inspected the output from my code:

Available Now
This feature is available now and you can start using it today.

Monitoring via SNS is available in all AWS Regions except Asia Pacific (Mumbai) and AWS GovCloud (US). Monitoring via CloudWatch Events is available in all AWS Regions except Asia Pacific (Mumbai), China (Beijing), and AWS GovCloud (US).

Jeff;

 

EC2 Run Command Update – Hybrid and Cross-Cloud Management

We launched EC2 Run Command late last year (read my post, New EC2 Run Command – Remote Instance Management at Scale to learn more). This feature was designed to allow developers, system administrators, and other IT professionals to easily and efficiently manage multiple EC2 instances running Windows or Linux. As I explained in my original post, you can simply choose the desired command, select the desired instances by attributes, tags, or keywords, and then run the command on the selected instances. EC2 Run Command provides access to the output of the command and also retains a log so that you can see which commands were run on which instances. Last month we made EC2 Run Command even more useful by giving you the ability to create, manage, and share command documents with your colleagues or with all AWS users.

Our customers have taken a liking to EC2 Run Command and are making great use of it. Here are a few of the use cases that have been shared with us:

  • Create local users and groups.
  • Scan for missing Windows updates and install them.
  • Install all applicable Windows updates.
  • Manage (start, stop, restart) services.
  • Install packages and applications.
  • Access local log files.

Hybrid and Cross-Cloud Management
Many AWS customers also have some servers on-premises or on another cloud, and have been looking for a single, unified way to manage their hybrid environment at scale. In order to address this very common use case, we are now opening up Run Command to servers running outside of EC2.

We call these external servers Managed Instances. You can install the AWS SSM Agent on your external servers, activate the agent on each server, and then use your existing commands and command documents to manage them (you can also create new documents, of course).

The agent runs on the following operating systems:

  • Windows Server (32 and 64 bit) – 2003-2012, including R2 versions (more info).
  • Linux (64 bit) – Red Hat Enterprise Linux 7.1+, CentOS 7.1+ (more info).

If you run a virtualized environment using VMware ESXi, Microsoft Hyper-V, KVM or another hypervisor, you can install the agent on the guest operating system(s) as desired.

For simplicity, the agent needs nothing more than the ability to make HTTPS requests to the SSM endpoint in your desired region. These requests can be direct, or can be routed through a proxy or a gateway, as dictated by your network configuration. When the agent makes a request to AWS, it uses an IAM role to access the SSM API. You’ll set up this role when you activate your first set of servers.

The agent sends some identifying information to AWS. This information includes the fully qualified host name, the platform name and version, the agent version, and the server’s IP address. All of these values are stored securely within AWS, and will be deleted if you choose to unregister the server at some point in the future.

Setting up Managed Instances
The setup process is simple and you should be up and running pretty quickly. Here are the steps (a CLI sketch of the equivalent calls follows the list):

  1. Open up the EC2 Console, locate the Commands section, and click on Activations to create your first activation code. As part of this process the Console will prompt you to create the IAM role that I described above:
  2. Enter a description for the activation, choose a limit (you can activate up to 1000 servers at a time), set an expiration date, and assign a name that will help you to track the Managed Instances in the Console, then click on Create Activation:
  3. Capture the Activation Code and the Activation ID:
  4. Install the SSM Agent on the desired servers, and configure it using the values that you saved in the previous step. You simply download the agent, install it, and then enter the values, as detailed in the installation instructions.
  5. Return to the console and click on Managed Instances to verify that everything is working as expected:
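If you prefer to script the setup, here’s a hedged CLI sketch of steps 1 through 4; the role name, instance name, and dates are examples:

# Steps 1-3: create an activation and capture the code and ID that it returns:
$ aws ssm create-activation \
      --default-instance-name WebServers \
      --iam-role SSMServiceRole \
      --registration-limit 10 \
      --expiration-date 2016-12-31T00:00:00Z

# Step 4: on each server, register the agent using the values from above
# (Linux example; the code and ID are placeholders):
$ sudo amazon-ssm-agent -register \
      -code "activation-code" -id "activation-id" -region us-east-1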

Running Commands on Managed Instances
Now that your instances are managed by AWS, you can run commands on them. For example:
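Managed Instances are identified by IDs that begin with mi- rather than i-, but the commands work the same way. A quick sketch (the ID is a placeholder):

# Run a command on an on-premises Managed Instance:
$ aws ssm send-command \
      --document-name AWS-RunShellScript \
      --instance-ids mi-0123456789abcdef0 \
      --parameters commands="uptime"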
The status of the commands, along with the output, is available from the Console:

To learn more, read Manage Amazon EC2 Instances Remotely.

Available Now
This feature is available now and you can start using it today in all AWS Regions where Run Command is available (see the Run Command page for details). I am looking forward to hearing how you have put it to use in your environment; leave me a comment and let me know how it works out for you!

Jeff;

 

Amazon Elastic File System – Production-Ready in Three Regions

The portfolio of AWS storage products has grown increasingly rich and diverse over time. Amazon S3 started out with a single storage class and has grown to include storage classes for regular, infrequently accessed, and archived objects. Similarly, Amazon Elastic Block Store (EBS) began with a single volume type and now offers a choice of four types of SAN-style block storage, each designed to be a great fit for a particular set of access patterns and data types.

With object storage and block storage capably addressed by S3 and EBS, we turned our attention to the file system. We announced the Amazon Elastic File System (EFS) last year in order to provide multiple EC2 instances with shared, low-latency access to a fully-managed file system.

I am happy to announce that EFS is now available for production use in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions.

We are launching today after an extended preview period that gave us insights into an extraordinarily wide range of customer use cases. The EFS preview was a great fit for large-scale, throughput-heavy processing workloads, along with many forms of content and web serving. During the preview we received a lot of positive feedback about the performance of EFS for these workloads, along with requests to provide equally good support for workloads that are sensitive to latency and/or make heavy use of file system metadata. We’ve been working to address this feedback and today’s launch is designed to handle a very wide range of use cases. Based on what I have heard so far, our customers are really excited about EFS and plan to put it to use right away.

Why We Built EFS
Many AWS customers have asked us for a way to more easily manage file storage on a scalable basis. Some of these customers run farms of web servers or content management systems that benefit from a common namespace and easy access to a corporate or departmental file hierarchy. Others run HPC and Big Data applications that create, process, and then delete many large files, resulting in storage utilization and throughput demands that vary wildly over time. Our customers also insisted on high availability and durability, along with a strongly consistent model for access and modification.

Amazon Elastic File System
EFS lets you create POSIX-compliant file systems and attach them to one or more of your EC2 instances via NFS. The file system grows and shrinks as necessary (there’s no fixed upper limit and you can grow to petabyte scale) and you don’t pre-provision storage space or bandwidth. You pay only for the storage that you use.

EFS protects your data by storing copies of your files, directories, links, and metadata in multiple Availability Zones.

In order to provide the performance needed to support large file systems accessed by multiple clients simultaneously, Elastic File System performance scales with storage (I’ll say more about this later).

Each Elastic File System is accessible from a single VPC, and is accessed by way of mount targets that you create within the VPC. You have the option to create a mount target in any desired subnet of your VPC. Access to each mount target is controlled, as usual, via Security Groups.

EFS offers two distinct performance modes. The first mode, General Purpose, is the default. You should use this mode unless you expect to have tens, hundreds, or thousands of EC2 instances access the file system concurrently. The second mode, Max I/O, is optimized for higher levels of aggregate throughput and operations per second, but incurs slightly higher latencies for file operations. In most cases, you should start with general purpose mode and watch the relevant CloudWatch metric (PercentIOLimit). When you begin to push the I/O limit of General Purpose mode, you can create a new file system in Max I/O mode, migrate your files, and enjoy even higher throughput and operations per second.
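For the CLI-inclined, here’s a hedged sketch of creating a General Purpose mode file system and a mount target; the IDs shown are placeholders:

# Create the file system; the creation token makes the request idempotent:
$ aws efs create-file-system \
      --creation-token my-efs-1 \
      --performance-mode generalPurpose

# Expose it in one subnet of the VPC via a mount target:
$ aws efs create-mount-target \
      --file-system-id fs-12345678 \
      --subnet-id subnet-aaaa1111 \
      --security-groups sg-cccc3333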

Elastic File System in Action
It is very easy to create, mount, and access an Elastic File System. I used the AWS Management Console; I could have used the EFS API, the AWS Command Line Interface (CLI), or the AWS Tools for Windows PowerShell as well.

I opened the console and clicked on the Create file system button:

Then I selected one of my VPCs and created a mount target in my public subnet:

My security group (corp-vpc-mount-target) allows my EC2 instance to access the mount point on port 2049. Here’s the inbound rule; the outbound one is the same:

I added Name and Owner tags, and opted for the General Purpose performance mode:

Then I confirmed the information and clicked on Create File System:

My file system was ready right away (the mount targets took another minute or so):

I clicked on EC2 mount instructions to learn how to mount my file system on an EC2 instance:
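The instructions boil down to something like this (the file system ID and Region are placeholders):

# Install the NFS client, then mount the file system via its DNS name:
$ sudo yum install -y nfs-utils
$ sudo mkdir /efs
$ sudo mount -t nfs4 -o nfsvers=4.1 \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /efs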

I mounted my file system as /efs, and there it was:

I copied a bunch of files over, and spent some time watching the NFS stats:

The console reports on the amount of space consumed by my file systems (this information is collected every hour and is displayed 2-3 hours after it is collected):

CloudWatch Metrics
Each file system delivers the following metrics to CloudWatch:

  • BurstCreditBalance – The amount of data that can be transferred at the burst level of throughput.
  • ClientConnections – The number of clients that are connected to the file system.
  • DataReadIOBytes – The number of bytes read from the file system.
  • DataWriteIOBytes – The number of bytes written to the file system.
  • MetadataIOBytes – The number of bytes of metadata read and written.
  • TotalIOBytes – The sum of the preceding three metrics.
  • PermittedThroughput – The maximum allowed throughput, based on file system size.
  • PercentIOLimit – The percentage of the available I/O utilized in General Purpose mode.

You can see the metrics in the CloudWatch Console:

EFS Bursting, Workloads, and Performance
The throughput available to each of your EFS file systems will grow as the file system grows. Because file-based workloads are generally spiky, with demands for high levels of throughput for short amounts of time and low levels the rest of the time, EFS is designed to burst to high throughput levels on an as-needed basis.

All file systems can burst to 100 MB per second of throughput. Those over 1 TB can burst to 100 MB per second per TB stored. For example, a 2 TB file system can burst to 200 MB per second and a 10 TB file system can burst to 1,000 MB per second of throughput. File systems larger than 1 TB can always burst for 50% of the time if they are inactive for the other 50%.

EFS uses a credit system to determine when a file system can burst. Each one accumulates credits at a baseline rate (50 MB per second per TB of storage) that is determined by the size of the file system, and spends them whenever it reads or writes data. The accumulated credits give the file system the ability to drive throughput beyond the baseline rate. As a quick sanity check on the first example below: a 100 GB file system accrues credits at 5 MB per second, or about 432 GB per day, which is enough to drive 100 MB per second for about 72 minutes.

Here are some examples to give you a better idea of what this means in practice:

  • A 100 GB file system can burst up to 100 MB per second for up to 72 minutes each day, or drive up to 5 MB per second continuously.
  • A 10 TB file system can burst up to 1 GB per second for 12 hours each day, or drive 500 MB per second continuously.

To learn more about how the credit system works, read about File System Performance in the EFS documentation.

In order to gain a better understanding of this feature, I spent a couple of days copying and concatenating files, ultimately using well over 2 TB of space on my file system. I watched the PermittedThroughput metric grow in concert with my usage as soon as my file collection exceeded 1 TB. Here’s what I saw:

As is the case with any file system, the throughput you’ll see is dependent on the characteristics of your workload. The average I/O size, the number of simultaneous connections to EFS, the file access pattern (random or sequential), the request model (synchronous or asynchronous), the NFS client configuration, and the performance characteristics of the EC2 instances running the NFS clients each have an effect (positive or negative). Briefly:

  • Average I/O Size – The work associated with managing the metadata associated with small files via the NFS protocol, coupled with the work that EFS does to make your data highly durable and highly available, combine to create some per-operation overhead. In general, overall throughput will increase in concert with the average I/O size since the per-operation overhead is amortized over a larger amount of data. Also, reads will generally be faster than writes.
  • Simultaneous Connections – Each EFS file system can accommodate connections from thousands of clients. Environments that can drive highly parallel behavior (from multiple EC2 instances) will benefit from the ability that EFS has to support a multitude of concurrent operations.
  • Request Model – If you enable asynchronous writes to the file system by including the async option at mount time, pending writes will be buffered on the instance and then written to EFS asynchronously. Accessing a file system that has been mounted with the sync option or opening files using an option that bypasses the cache (e.g. O_DIRECT) will, in turn, issue synchronous requests to EFS.
  • NFS Client Configuration – Some NFS clients use laughably small (by today’s standards) default values for the read and write buffers. Consider increasing them to 1 MiB (again, this is an option to the mount command; see the sketch after this list). You can use an NFS 4.0 or 4.1 client with EFS; the latter will provide better performance.
  • EC2 Instances – Applications that perform large amounts of I/O sometimes require a large amount of memory and/or compute power as well. Be sure that you have plenty of both; choose an appropriate instance size and type. If you are performing asynchronous reads and writes, the kernel uses additional memory for caching. As a side note, the performance characteristics of EFS file systems are not dependent on the use of EBS-optimized instances.
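Here’s a sketch of a mount command that applies the buffer-size and asynchronous-write suggestions from the list above; the values are illustrative and the file system ID and Region are placeholders:

# Mount with 1 MiB read/write buffers and asynchronous writes:
$ sudo mount -t nfs4 \
      -o nfsvers=4.1,rsize=1048576,wsize=1048576,async \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /efs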

Benchmarking of file systems is a blend of art and science. Make sure that you use mature, reputable tools, run them more than once, and make sure that you examine your results in light of the considerations listed above. You can also find some detailed data regarding expected performance on the Amazon Elastic File System page.

Available Now
EFS is available now in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions and you can start using it today. Pricing is based on the amount of data that you store, sampled several times per day and charged by the gigabyte-month, pro-rated as usual, starting at $0.30 per GB per month in the US East (Northern Virginia) Region. There are no minimum fees and no setup costs (see the EFS Pricing page for more information). If you are eligible for the AWS Free Tier, you can use up to 5 GB of EFS storage per month at no charge.

Jeff;