New Spot Fleet Option – Distribute Your Fleet Across Multiple Capacity Pools

Last week I spoke to a technically-oriented audience at the Pacific Northwest PHP Conference. As part of my talk, I described cloud computing as a mix of technology and business, and made my point by talking about Spot instances. The audience looked somewhat puzzled at first, but as I explained further I could see their eyes light up as they started to think about the ways that they could save money for their companies by way of creative coding!

Earlier this year I wrote about the Spot Fleet API, and showed you how to use it to manage thousands of Spot instances with a single call to the RequestSpotFleet function. Today we are introducing a new “allocation strategy” option for that API. This option will allow you to create a Spot fleet that contains instances drawn from multiple capacity pools (a set of instances of a given type, within a particular region and Availability Zone).

As part of your call to RequestSpotFleet, you can include up to 20 launch specifications. If you make an untargeted request (by not specifying an Availability Zone or a subnet), you can target multiple capacity pools within an AWS region. This gives you access to a lot of EC2 capacity, and allows you to set up fleets that are a good match for your application.

You can set the allocation strategy to either of the following values:

  • lowestPrice – This is the default strategy. It will result in a Spot fleet that contains instances drawn from the lowest priced pool(s) specified in your request.
  • diversified – This is the new strategy, and it must be specified as part of your request. It will result in a Spot fleet that contains instances drawn from all of the pools specified in your request, with the exception of those where the current Spot price is above the On-Demand price.

This option allows you to choose the strategy that most closely matches your goals for each Spot fleet. The following table can be used as a guide:

| | lowestPrice | diversified |
| --- | --- | --- |
| Fleet Size | Fine for modest-sized fleets. However, a request for a large fleet can affect pricing in the pool with the lowest price. | Works well with larger fleets. |
| Total Fleet Operating Cost | Can be unexpectedly high if pricing in the pool spikes. | Should average 70%-80% off of On-Demand over time. |
| Consequence of Capacity Fluctuation in a Pool | Entire fleet subject to possible interruption and subsequent replenishment. | Fraction of fleet (1/Nth of total capacity) subject to possible interruption and subsequent replenishment. |
| Application Characteristics | Short-running. Not time sensitive. | Long-running. Time sensitive. |
| Typical Applications | Scientific simulations, research computations. | Transcoding, customer-facing web servers, HPC, CI/CD. |

If you create a fleet using the diversified strategy and use it to host your web servers, it is a good idea to select multiple pools and to have a fallback option in case all of them become unavailable.

Diversified allocation works really well in conjunction with the new resource-oriented bidding feature that we launched last month. When you use resource-oriented bidding and specify diversified allocation, each of the capacity pools in your launch specification will include the same number of capacity units.

To make use of this new strategy, simply include it in your CLI or API-driven request. If you are using the CLI, add the following entry to your configuration file:

"AllocationStrategy": "diversified"

If you are using the API, specify the same value in your SpotFleetRequestConfigData.
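
For reference, here is what an equivalent API request might look like using the AWS SDK for Python (boto3). This is a minimal sketch; the AMI ID, IAM role ARN, target capacity, and bid price are all placeholder values:

import boto3

ec2 = boto3.client("ec2")

# Minimal diversified Spot fleet request; all IDs and prices are placeholders.
response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "AllocationStrategy": "diversified",
        "TargetCapacity": 100,
        "SpotPrice": "0.08",  # bid price per instance-hour
        "IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
        "LaunchSpecifications": [
            # One launch specification per capacity pool to draw from.
            {"ImageId": "ami-12345678", "InstanceType": "c3.xlarge"},
            {"ImageId": "ami-12345678", "InstanceType": "c4.xlarge"},
            {"ImageId": "ami-12345678", "InstanceType": "m3.xlarge"},
        ],
    }
)
print(response["SpotFleetRequestId"])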

This option is available now and you can start using it today.

Jeff;

Elastic Load Balancing Update – More Ports & Additional Fields in Access Logs

Many AWS applications use Elastic Load Balancing to distribute traffic to a farm of EC2 instances. An architecture of this type is highly scalable since instances can be added, removed, or replaced in a non-disruptive way. Using a load balancer also gives the application the ability to keep on running if an instance encounters an application or system problem of some sort.

Today we are making Elastic Load Balancing even more useful with the addition of two new features: support for all ports and additional fields in access logs.

Support for All Ports
When you create a new load balancer, you need to configure one or more listeners for it. Each listener accepts connection requests on a specific port. Until now, you had the ability to configure listeners for a small set of low-numbered, well-known ports (25, 80, 443, 465, and 587) and for a much larger set of ephemeral ports (1024-65535).

Effective today, load balancers that run within a Virtual Private Cloud (VPC) can have listeners for any port (1-65535). This will give you the flexibility to create load balancers in front of services that must run on a specific, low-numbered port.

You can set this up in all of the usual ways: the ELB API, AWS Command Line Interface (CLI) / AWS Tools for Windows PowerShell, a CloudFormation template, or the AWS Management Console. Here’s how you would define a load balancer for port 143 (the IMAP protocol):
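
If you prefer to script this, here's a rough boto3 equivalent (the load balancer name, subnet ID, and security group ID are placeholders):

import boto3

elb = boto3.client("elb")

# Create a VPC load balancer with a listener on port 143 (IMAP).
elb.create_load_balancer(
    LoadBalancerName="my-imap-balancer",
    Listeners=[
        {
            "Protocol": "TCP",
            "LoadBalancerPort": 143,  # any port from 1-65535 is now allowed in a VPC
            "InstancePort": 143,
        }
    ],
    Subnets=["subnet-12345678"],
    SecurityGroups=["sg-12345678"],
)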

To learn more, read about Listeners for Your Load Balancer in the Elastic Load Balancing Documentation.

Additional Fields in Access Logs
You already have the ability to log the traffic flowing through your load balancers to a location in S3:

In order to allow you to know more about this traffic, and to give you some information that will be helpful as you contemplate possible configuration changes, the access logs now include some additional information that is specific to a particular protocol. Here’s the scoop:

  • User Agent – This value is logged for TCP requests that arrive via the HTTP and HTTPS ports.
  • SSL Cipher and Protocol – These values are logged for TCP requests that arrive via the HTTPS and SSL ports.

You can use this information to make informed decisions when you think about adding or removing support for particular web browsers, ciphers, or SSL protocols. Here’s a sample log entry:

2015-05-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 10.0.0.1:80 0.000086 0.001048 0.001337 200 200 0 57 "GET https://www.example.com:443/ HTTP/1.1" "curl/7.38.0" DHE-RSA-AES128-SHA TLSv1.2
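
If you want to process these logs yourself, here's a quick, unofficial sketch that pulls the new fields out of an entry like the one above (the field positions follow the sample format shown):

import shlex

entry = ('2015-05-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 '
         '10.0.0.1:80 0.000086 0.001048 0.001337 200 200 0 57 '
         '"GET https://www.example.com:443/ HTTP/1.1" "curl/7.38.0" '
         'DHE-RSA-AES128-SHA TLSv1.2')

# shlex.split keeps the quoted request and user-agent fields intact.
fields = shlex.split(entry)
user_agent, ssl_cipher, ssl_protocol = fields[12], fields[13], fields[14]
print(user_agent)    # curl/7.38.0
print(ssl_cipher)    # DHE-RSA-AES128-SHA
print(ssl_protocol)  # TLSv1.2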

You can also use tools from AWS Partners to view and analyze this information. For example, Splunk shows it like this:

And Sumo Logic shows it like this:

 

To learn more about access logging, read Monitor Your Load Balancer Using Elastic Load Balancing Access Logs.

Both of these features are available now and you can start using them today!

Jeff;

New – Resource-Oriented Bidding for EC2 Spot Instances

Earlier this year we introduced the EC2 Spot fleet API. As I noted in my earlier post, this API allows you to launch and manage an entire fleet of Spot instances with one request. You specify the fleet’s target capacity, a bid price per hour, and tell Spot what instance type(s) you would like to launch. Spot fleet will find the lowest price spare EC2 capacity available, and then work to maintain the requested target capacity.

Today we are making the Spot fleet API even more powerful with the addition of bidding weights. This new feature allows you to create and place bids that are better aligned with the instance types and Availability Zones that are the best fit for your application. Each call to RequestSpotFleet includes a single bid price (expressed on a per-instance basis). This was simple to use, but the simplicity disallowed certain desirable features. For example, there was no way to launch a fleet that had at least 488 GiB of memory spread across two or more R3 (Memory Optimized) instances or least 128 vCPUs spread across a combination of C3 and C4 (both Compute Optimized) instances.

New Resource-Oriented Bidding
Our new resource-oriented bidding model will allow you to make Spot fleet requests of this type. Think of each instance in a fleet as having a number of “capacity units” of some resource that affects how many instances you need to create a fleet of the proper size. In my examples above, the resources might be GiBs of RAM or vCPUs. They could also represent EBS-optimized bandwidth, compute power, networking performance, or another (perhaps more abstract) application-specific unit. You can now request a certain number of capacity units in your Spot fleet, and you can indicate how many such units each instance type contributes to the total.

As a result, you can now use resources of multiple instance types, possibly spread across multiple Availability Zones, without having to be concerned about the instance types that are actually provisioned. Each call to RequestSpotFleet already includes one or more launch specifications, each one requesting instances of a particular type, running a specified AMI. You can now include one or both of the following values in each launch specification:

  • WeightedCapacity – How many capacity units an instance of the specified type contributes to the fleet. This value is multiplied by the bid price per unit when Spot fleet makes bids for Spot instances on your behalf. You can use this, for example, to indicate that you are willing to pay twice as much for an r3.xlarge instance with 30.5 GiB of memory as for an r3.large instance with 15.25 GiB of memory.
  • SpotPrice – An override of the bid price per unit specified in the request. You can use this value to introduce a bias into the selection process, allowing you to express a preference (for or against) particular instance types, Availability Zones, or subnets.

Here’s a launch specification that would represent the memory-centric example that I introduced earlier (I have omitted the other values for clarity):

| Instance Type | Instance Weight |
| --- | --- |
| r3.large | 15.25 |
| r3.xlarge | 30.5 |
| r3.2xlarge | 61 |
| r3.4xlarge | 122 |
| r3.8xlarge | 244 |

You would then specify a Target Capacity of 488 (representing the desired fleet capacity in GiB) in your call to RequestSpotFleet, along with a Bid Price that represents the price (per GiB-hour) that you are willing to pay for the capacity. In this example, you are indicating that you are willing to pay 16 times as much for an r3.8xlarge instance as for an r3.large instance.
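
Here's a hedged boto3 sketch of such a request; the AMI ID, IAM role ARN, and the per GiB-hour bid of $0.005 are placeholder values:

import boto3

ec2 = boto3.client("ec2")

# Weights are GiB of memory contributed by each instance type.
weights = {"r3.large": 15.25, "r3.xlarge": 30.5, "r3.2xlarge": 61,
           "r3.4xlarge": 122, "r3.8xlarge": 244}

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "TargetCapacity": 488,  # desired fleet capacity, in GiB of memory
        "SpotPrice": "0.005",   # bid price per GiB-hour (placeholder)
        "IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
        "LaunchSpecifications": [
            {"ImageId": "ami-12345678", "InstanceType": t, "WeightedCapacity": w}
            for t, w in weights.items()
        ],
    }
)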

EC2 will use this information to build the fleet using the most economical combination of available instance types, looking for instances that have the lowest Spot price per unit. This could be as simple as one of the following, using a single instance type:

  • 2 x r3.8xlarge
  • 4 x r3.4xlarge
  • 8 x r3.2xlarge
  • 16 x r3.xlarge
  • 32 x r3.large

Or something more complex and heterogeneous, such as:

  • 1 x r3.8xlarge and 2 x r3.4xlarge
  • 2 x r3.4xlarge and 8 x r3.xlarge
  • 8 x r3.xlarge and 16 x r3.large

Over time, as prices change and instances are interrupted due to rising prices, replacement instances will be launched as needed. This example assumes that your application is able to sense the instance type (and the amount of memory available to it) and to adjust accordingly. Note that the fleet might be overprovisioned by a maximum of one instance in order to meet your target capacity using the available resources. In my example above, this would happen if you requested a fleet capable of storing 512 GiB. It could also happen if you make a small request and the cheapest price (per unit) is available on a large instance.

About Those Units
The units are arbitrary, and need not map directly to a physical attribute of the instance. Suppose you did some benchmarking and measured the transaction rate (in TPS) for a number of different instance types over time. You could then request a fleet capable of processing the desired number of transactions per second, knowing that EC2 will give you the mix of instance types that is the most economical at any given point in time. As I have pointed out in the past, the Spot mechanism sits at the intersection of technology and business, and gives you the power to build systems and to write code that improves the bottom-line economics of your business! There’s a lot of room to be creative and innovative (and to save up to 90% over On-Demand prices) here.

You can also use this mechanism to prioritize specific Availability Zones by specifying a higher WeightedCapacity value in the desired zone. In this case, your launch specification would include two or more entries for the same instance type, with distinct Availability Zones and weights.

Requests can be submitted using the AWS Command Line Interface (CLI) or via calls to RequestSpotFleet.

Available Now
This new functionality is available now and you can start using it today in all public AWS regions where Spot is available.

Jeff;

PS – For more information about Spot instances, take a look at two of my recent posts: Focusing on Spot Instances and Building Price-Aware Applications.

Subscribe to AWS Public IP Address Changes via Amazon SNS

Last year we announced that the AWS Public IP Address Ranges Were Available in JSON Form. This was a quick, Friday afternoon post that turned out to be incredibly popular! Many AWS users are now polling this file on a regular basis and using it to manage their on-premises firewall rules or to track the growth of the AWS network footprint over time.  If you are using AWS Direct Connect, you can use the file to update your route tables to reflect the prefixes that are advertised for Direct Connect public connections.

Today we are making it even easier for you to make use of this file. You can now subscribe to an Amazon Simple Notification Service (SNS) topic and receive notifications when the file is updated. Your code can then retrieve the file, parse it, and make any necessary updates to your local environment.

Simply subscribe to topic arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged and confirm the subscription in the usual way (you can use any protocol supported by SNS):

You will receive a notification that looks like this each time the IP addresses are changed:

{
  "create-time":"yyyy-mm-ddThh:mm:ss+00:00",
  "synctoken":"0123456789",
  "md5":"6a45316e8bc9463c9e926d5d37836d33",
  "url":"https://ip-ranges.amazonaws.com/ip-ranges.json"
}

You can also build an AWS Lambda function that responds to the changes:
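
As a minimal sketch (assuming a Python handler; the function and variable names are my own), such a function could fetch and parse the file like this:

import json
import urllib.request

def handler(event, context):
    # The SNS message body carries the notification JSON shown above.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    with urllib.request.urlopen(message["url"]) as f:
        ip_ranges = json.loads(f.read().decode("utf-8"))
    # Example: collect all of the EC2 IPv4 prefixes.
    ec2_prefixes = [p["ip_prefix"] for p in ip_ranges["prefixes"]
                    if p["service"] == "EC2"]
    print("Found %d EC2 prefixes" % len(ec2_prefixes))
    return ec2_prefixes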

In either case, your app will be responsible for fetching the file, parsing the JSON, and extracting the desired information. To learn more about the file, read about AWS IP Address Ranges.

If you build something useful (environment updates) and/or cool (an intriguing visualization) that you would like to share with the other readers of this blog, please feel free to leave me a comment!

Jeff;

PS – My count shows 13,065,550 IP addresses in the EC2 range as of August 25, 2015.

Building Price-Aware Applications Using EC2 Spot Instances

Last month I began writing what I hope will be a continuing series of posts about EC2 Spot Instances by talking about some Spot Instance Best Practices. Today I spoke to two senior members of the EC2 Spot Team to learn how to build price-aware applications using Spot Instances. I met with Dmitry Pushkarev (Head of Tech Development) and Joshua Burgin (General Manager) and would like to recap our conversation in interview form!

Me: What does price really mean in the Spot world?

Joshua: Price and price history are important considerations when building Spot applications. Using price as a signal about availability helps our customers to deploy applications in the most available capacity pools, reduces the chance of interruption and improves the overall price-performance of the application.

Prices for instances on the Spot Market are determined by supply and demand. A low price means that there is more capacity in the pool than demand for it. Consistently low prices and low price variance mean that the pool is consistently underutilized. This is often the case for older generations of instances such as m1.small, c1.xlarge, and cc2.8xlarge.


Me: How do our customers build applications that are at home in this environment?

Dmitry: It is important to architect your application for fault tolerance and to make use of historical price information. There are probably as many placement strategies as there are customers, but generally we see two very successful usage patterns: one is to choose capacity pools (instance type and Availability Zone) with low price variance, and the other is to distribute capacity across multiple capacity pools.

There is a good analogy with the stock market – you can either search for a “best performing” capacity pool and periodically revisit your choice, or diversify your capacity across multiple uncorrelated pools and greatly reduce your exposure to the risk of interruption.


Me: Tell me a bit more about these placement strategies.

Joshua: The idea here is to analyze the recent Spot price history in order to find pools with consistently low price variance. One way to do this is by ordering capacity pools by the amount of time that has elapsed since the Spot price last exceeded your preferred bid – the maximum amount you’re willing to pay per hour. Even though past performance certainly doesn’t guarantee future results, it is a good starting point. This strategy can be used to make bids on instances that can be used for dev environments and long-running analysis jobs. It is also good for adding supplemental capacity to Amazon EMR clusters. We also recommend that our customers revisit their choices over time in order to ensure that they continue to use the pools that provide them with the most benefit.

Me: How can our customers access this price history?

Dmitry: It’s available through the console as well as programmatically through SDKs and  the AWS Command Line Interface (CLI).

We’ve also created a new web-based Spot Bid Advisor that can be accessed from the Spot page. This tool presents the relevant statistics averaged across multiple Availability Zones, making it easy to find instance types with low price volatility. You can choose the region, operating system, and bid price (25%, 50%, or 100% of On-Demand) and then view the historical frequency of being outbid over the last week or month.

Another example can be found in the aws-spot-labs repo on GitHub. The get_spot_duration.py script demonstrates how spot price information can be obtained programmatically and used to order instance types and availability zones based on the duration since the price last exceeded your preferred bid price.
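
(For a rough idea of the API call that a script like get_spot_duration.py builds on, here's a hedged boto3 sketch that fetches a week of price history and finds the last time each Availability Zone's price exceeded a bid; the instance type and bid are placeholders, and pagination is omitted:)

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")
bid = 0.05  # preferred bid, in dollars per hour (placeholder)

history = ec2.describe_spot_price_history(
    InstanceTypes=["c3.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

# Most recent time the price exceeded the bid, per Availability Zone.
last_exceeded = {}
for point in history["SpotPriceHistory"]:
    if float(point["SpotPrice"]) > bid:
        zone = point["AvailabilityZone"]
        if zone not in last_exceeded or point["Timestamp"] > last_exceeded[zone]:
            last_exceeded[zone] = point["Timestamp"]

print(last_exceeded)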


Me: Ok, and then I pick one of the top instance pools and periodically revisit my choice?

Dmitry: Yes, that’s a great way to get started. As you get more comfortable with Spot, the typical next step is to start using multiple pools at the same time and to distribute capacity equally among them. Because capacity pools are physically separate, prices often do not correlate among them, and it’s very rare for more than one capacity pool to experience a price increase within a short period of time.

This will reduce the impact of interruptions and give you plenty of time to restore the desired level of capacity.

Joshua: Distributing capacity this way also improves long-term price/performance: if capacity is distributed evenly across multiple instance types and/or availability zones then the hourly price is averaged across multiple pools which results in really good overall price performance.


Me: Ok, sounds great.  Now let’s talk about the second step, bidding strategies.

Joshua: It is important to place a reasonable bid at a price that you are willing to pay. It’s better to achieve higher availability by carefully selecting multiple capacity pools and distributing your application across the instances therein than by placing unreasonably high spot bids. When you see increasing prices within a capacity pool, this is a sign that demand is increasing. You should start migrating your workload to less expensive pools or shut down idle instances with high prices in order to avoid getting interrupted.

Me: Do you often see our customers use more sophisticated bidding tactics?

Dmitry: For many of our customers the ability to leverage Spot is an important competitive advantage, and some of them run their entire production stacks on it – which certainly requires additional engineering to hit their SLA. One interesting way to think about Spot is to view it as a significant reward for engineering applications that are “cloud friendly.” By that I mean fault tolerant by design, flexible, and price aware. Being price aware allows the application to deploy itself to the pools with the most spare capacity available. Startups in particular often get very creative with how they use Spot, which allows them to scale faster and spend less on compute infrastructure.

Joshua: Tools like Auto Scaling, Spot fleet, and Elastic MapReduce offer Spot integration and allow our customers to use multiple capacity pools simultaneously without adding significant development effort.


Stay tuned for even more information about Spot Instances! In the meantime, please feel free to leave your own tips (and questions) in the comments.

Jeff;

 

New Metrics for EC2 Container Service: Clusters & Services

The Amazon EC2 Container Service helps you to build, run, and scale Docker-based applications. As I noted in an earlier post (EC2 Container Service – Latest Features, Customer Successes, and More), you will benefit from easy cluster management, high performance, flexible scheduling, extensibility, portability, and AWS integration while running in an AWS-powered environment that is secure and efficient.

Container-based applications are built from tasks. A task is one or more Docker containers that run together on the same EC2 instance; instances are grouped into a cluster. The instances form a pool of resources that can be used to run tasks.

This model creates some new measuring and monitoring challenges. In order to keep the cluster at an appropriate size (not too big and not too small), you need to watch memory and CPU utilization for the entire cluster rather than for individual instances. This becomes even more challenging when a single cluster contains EC2 instances with varied amounts of compute power and memory.

New Cluster Metrics
In order to allow you to properly measure, monitor, and scale your clusters, we are introducing new metrics that are collected from individual instances, normalized based on the instance size and the container configuration, and then reported to Amazon CloudWatch. You can observe the metrics in the AWS Management Console and you can use them to drive Auto Scaling activities.

The ECS Container Agent runs on each of the instances. It collects the CPU and memory metrics at the instance and task level, and sends them to a telemetry service for normalization. The normalization process creates blended metrics that represent CPU and memory usage for the entire cluster. These metrics give you a picture of overall cluster utilization.

Let’s take a look! My cluster is named default and it has one t2.medium instance:

At this point no tasks are running and the cluster is idle:

I ran two tasks (as a service) with the expectation that they would consume all of the CPU:

I took a short break to water my garden while the task burned some CPU and the metrics accumulated! I came back and here’s what the CPU Utilization looked like:

 

Then I launched another t2.medium instance into my cluster, and checked the utilization again. The additional processing power reduced the overall utilization to 50%:

 

The new metrics (CPUUtilization and MemoryUtilization) are available via CloudWatch and can also be used to create alarms. Here’s how to find them:
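
You can also retrieve the metrics programmatically. Here's a hedged boto3 sketch that pulls an hour of the cluster-level CPU metric (the cluster name matches the example above):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Fetch the blended, cluster-level CPU utilization for the "default" cluster.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterName", "Value": "default"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,  # five-minute data points
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])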

New Service Metrics
Earlier this year we announced that the EC2 Container Service supports long-running applications and load balancing. The Service scheduler allows you to manage long-running applications and services by keeping them healthy and scaled to the desired level. CPU and memory utilization metrics are now collected and processed on a per-service basis, and are visible in the Console:

The new cluster and service metrics are available now and you can start using them today!

Jeff;

New – Monitor Your AWS Free Tier Usage

I strongly believe that you need to make a continuous investment in learning about new tools and technologies that will enhance your career. When I began my career in the software industry, the release cycles for hardware and software were measured in months, quarters, or years. Back then (the 1980s, to be precise) you could spend some time learning about a new language, database, or operating system and then make use of that knowledge for quite some time. Today, the situation is different. Not only has the pace of innovation increased, but the model has changed. In the old days, physical distribution via tapes, floppy disks, or CDs ruled the day. The need to produce and ship these items in volume led to a model where long periods of stasis were punctuated by short, infrequent bursts of change.

Today’s cloud-based distribution model means that new features can be deployed and made available to you in days. Punctuated equilibrium (to borrow a term from evolutionary biology) has given way to gradualism. Systems become a little bit better every day, sometimes in incremental steps that can mask major changes if you are not paying attention. If you are a regular reader of this blog, you probably have a good sense for the pace of AWS innovation. We add incremental features almost every day (see the AWS What’s New for more info), and we take bigger leaps into the future just about every month. If you want to stay at the top of your game, you should plan to spend some time using these new features and gaining direct, personal experience with them.

Use the Free Tier
The AWS Free Tier should help you in this regard. You can create and use EC2 instances, EBS volumes, S3 storage, DynamoDB tables, Lambda functions, RDS databases, transcoding, messaging, load balancing, caching, and much more. Some of these benefits are available to you for 12 months after you sign up for a free AWS account; others are available to you regardless of the age of your account. You can use these AWS resources to build and host a static website, deploy a web app (on Linux or Node.js), host a .NET application, learn about the AWS APIs via our AWS SDKs, create interesting demos, and explore our newest services. If you are new to AWS, our Getting Started with AWS page should point you in the right direction.

New Free Tier Monitoring
You receive a fairly generous allowance of AWS resources as part of the Free Tier (enough to host and run a website for a year with enough left over for additional experimentation); you will not be billed unless your usage exceeds those allowances.

Today we are adding a new feature that will allow you to keep better track of your AWS usage and to see where you are with respect to the Free Tier allowances for each service. You can easily view your actual usage (month to date) and your forecasted usage (through the end of the month) for the Free Tier-eligible services that you are using. This feature applies to the offerings that are made available to you during your first year of AWS usage, and will be visible to you only if your account is less than one year old.

You can also see your consumption on a percentage basis. All of this information is available to you in the Billing and Cost Management Dashboard. Simply click on your name in the Console’s menu bar and choose Billing and Cost Management:

You will see your Free Tier usage for the top 5 services:

You can hover your mouse over any of the values to learn more via a tooltip:

You can also see your usage across all services by clicking on View All:

You can also get tooltips for the items on this page.

Using the Information
You can look at and interpret this page in two ways. If you must stay within the Free Tier for budgetary reasons, you can use it to restrain your enthusiasm. If you are interested in getting as much value as you can from the Free Tier and learning as much as possible, you can spend some time looking for services that you have not yet used, and focus your efforts there. If the last screen shot above represented your actual account, you might want to dive into AWS Lambda to learn more about server-less computing!

Getting Started with AWS
I sometimes meet with developers who have read about AWS and cloud computing, but who have yet to experience it first-hand. There’s a general sense that cloud computing is nothing more than a different form of hosting or colocation, and that they can simply learn on the job when it is time for them to move their career forward. That might be true, but I am confident that they’ll be in a far better position to progress in their career if they proactively decide to learn about and gain hands-on experience now. Reading about how you can create a server or a database in minutes is one thing; doing it for yourself (and seeing just how quick and easy it is) is another. If you are ready, I would encourage you to sign up for AWS, read our getting started with AWS tutorials, watch some of our instructional videos, and consider our self-paced hands-on labs.

Available Now
This information is available now in all public AWS Regions!

Jeff;

AWS OpsWorks Update – Provision & Manage ECS Container Instances; Run RHEL 7

AWS OpsWorks makes it easy for you to deploy applications of all shapes and sizes. It provides you with an integrated management experience that spans the entire application lifecycle including resource provisioning, EBS volume setup, configuration management, application deployment, monitoring, and access control (read my introductory post, AWS OpsWorks – Flexible Application Management in the Cloud Using Chef for more information).

Amazon EC2 Container Service is a highly scalable container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon Elastic Compute Cloud (EC2) instances (again, I have an introductory post if you’d like to learn more: Amazon EC2 Container Service (ECS) – Container Management for the AWS Cloud).

ECS and RHEL Support
Today, in the finest “peanut butter and chocolate” tradition, we are adding support for ECS Container Instances to OpsWorks. You can now provision and manage ECS Container Instances that are running Ubuntu 14.04 LTS or the Amazon Linux 2015.03 AMI.

We are also adding support for Red Hat Enterprise Linux (RHEL) 7.1.

Let’s take a closer look at both features!

Support for ECS Container Instances
The new ECS Cluster layer type makes it easy for you to provision and configure ECS Container Instances.  You simply create the layer, specify the name and instance type for the cluster (which must already exist), define and attach EBS volumes as desired, and you are good to go. The instances will be provisioned with Docker, the ECS agent, and the OpsWorks agent, and will be registered with the ECS cluster associated with the ECS Cluster layer.
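
If you prefer to script this step, here's a sketch using boto3; the stack ID and cluster ARN are placeholders, and the EcsClusterArn attribute key reflects my reading of the OpsWorks CreateLayer API:

import boto3

opsworks = boto3.client("opsworks")

# Create an ECS Cluster layer that points at an existing ECS cluster.
opsworks.create_layer(
    StackId="2f18b4cb-4de5-4429-a149-ff7da9f0d8ee",  # placeholder
    Type="ecs-cluster",
    Name="ECS Cluster",
    Shortname="ecs-cluster",
    Attributes={
        "EcsClusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/default"
    },
)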

It is really easy to get started. Simply add a new layer and select the ECS Cluster Layer type:

Then choose a cluster and a profile:

The next step is to add instances to the cluster. This takes just a couple of clicks per instance:

As is always the case with OpsWorks, the instances are initially in the Stopped state, and can be started with a click on Start All Instances (individual instances can also be started):

Once the instances are up and running you can run Chef recipes on them. You can also install operating system updates (Linux only) and package updates (read Run AWS OpsWorks Stack Commands to learn more) on the instances in the cluster. Finally, take a look at Using OpsWorks to Perform Operational Tasks to learn how to envelop shell commands in a simple JSON wrapper and run them.

For more information on this and other features, take a look at the OpsWorks User Guide. To learn more about how to run ECS tasks on Container Instances that have been provisioned by OpsWorks, read the ECS Getting Started Guide.

RHEL 7.1 Support
OpsWorks now supports version 7.1 of Red Hat Enterprise Linux (RHEL). Many AWS customers have asked us to support this OS and we are happy to oblige, as we did earlier this year when we announced OpsWorks support for Windows. You can launch and manage EC2 instances running RHEL 7. You can also manage existing, on-premises instances that are running RHEL 7.

You have several launch options. You can choose RHEL 7 as the default when you launch a new stack, and you can set it as the default for an existing stack. You can also leave the default as-is and choose to run RHEL 7 when you launch new instances. Here’s how you select RHEL 7 as the default when you launch a new stack:

As you probably know already, you can manage instances that are not running on EC2 for a modest hourly fee. You can take advantage of the monitoring and management tools provided by OpsWorks while managing all of your instances using a single user interface. To do this, you add additional compute power to a layer by registering an existing instance instead of launching a new one:

Step through the wizard; the final step will show you how to install the OpsWorks agent on your instance and register it with OpsWorks:

When you run the command it will download the agent, install any necessary packages, and start the agent. The agent will register itself with OpsWorks and the instance will become part of the stack specified on the command line, although it will not yet be assigned to a layer or configured in any particular way. You can use the OpsWorks user-management feature to create users, manage their permissions, and provide them with SSH access if necessary.

Installing the agent also sets up one-minute CloudWatch metrics:

After you have configured the instances and verified that they are being monitored, you can assign them to a layer:

Available Now
These features are available now and you can start using them today.

Jeff;

PS – Special thanks are due to my colleagues Mark Rambow and Cyrus Amiri for their help with this post.

Joining a Linux Instance to a Simple AD (AWS Directory Service)

If you are tasked with providing and managing user logins to a fleet of Amazon Elastic Compute Cloud (EC2) instances running Linux, I have some good news for you!

You can now join these instances to an AWS Directory Service Simple AD directory and manage credentials for your user logins using standard Active Directory tools and techniques. Your users will be able to log in to all of the instances in the domain using the same set of credentials. You can exercise additional control by creating directory groups.

We have published complete, step-by-step instructions to help you get started. You’ll need to be running a recent version of the Amazon Linux AMI, Red Hat Enterprise Linux, Ubuntu Server, or CentOS on EC2 instances that reside within an Amazon Virtual Private Cloud, and you’ll need to have an AWS Directory Service Simple AD therein.

You simply create a DHCP Options Set for the VPC and point it at the directory, install and configure a Kerberos client, join the instance to the domain, and reboot it. After you have done this you can SSH to it and log in using an identity from the directory. The documentation also shows you how to log in using domain credentials, add domain administrators to the sudoers list, and limit access to members of specific groups.
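
As an illustration of the first step, here's a hedged boto3 sketch that creates a DHCP options set pointing at the directory's DNS servers and attaches it to the VPC (the domain name, DNS addresses, and VPC ID are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Use your directory's domain name and DNS server addresses here.
options = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name", "Values": ["corp.example.com"]},
        {"Key": "domain-name-servers", "Values": ["10.0.0.10", "10.0.1.10"]},
    ]
)

ec2.associate_dhcp_options(
    DhcpOptionsId=options["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-12345678",
)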

Jeff;

New Amazon CloudWatch Action – Reboot EC2 Instance

Amazon CloudWatch monitors your cloud resources and applications, including Amazon Elastic Compute Cloud (EC2) instances. You can track cloud, system, and application metrics, see them in graphical form, and arrange to be notified (via a CloudWatch alarm) if they cross a threshold value that you specify. You can also stop, terminate, or recover an EC2 instance when an alarm is triggered (see my blog post, Amazon CloudWatch – Alarm Actions for more information on alarm actions).

New Action – Reboot Instance
Today we are giving you a fourth action. You can now arrange to reboot an EC2 instance when a CloudWatch alarm is triggered. Because you can track and alarm on cloud, system, and application metrics, this new action gives you a lot of flexibility.

You could reboot an instance if an instance status check fails repeatedly. Perhaps the instance has run out of memory due to a runaway application or service that is leaking memory. Rebooting the instance is a quick and easy way to remedy this situation; you can easily set this up using the new alarm action. In contrast to the existing recovery action which is specific to a handful of EBS-backed instance types and is applicable only when the instance state is considered impaired, this action is available on all instance types and is effective regardless of the instance state.

If you are using the CloudWatch API or the AWS Command Line Interface (CLI) to track application metrics, you can reboot an instance if the application repeatedly fails to respond as expected. Perhaps a process has gotten stuck or an application server has lost its way. In many cases, hitting the (virtual) reset switch is a clean and simple way to get things back on track.

Creating an Alarm
Let’s walk through the process of creating an alarm that will reboot one of my instances if the CPU Utilization remains above 90% for an extended period of time. I simply locate the instance in the AWS Management Console, focus my attention on the Alarm Status column, and click on the icon:

Then I click on Take the action, choose Reboot this instance, and set the parameters (90% or more CPU Utilization for 15 minutes in this example):

If necessary, the console will ask me to confirm the creation of an IAM role as part of this step (this is a new feature):

The role will have permission to call the “Describe” functions in the CloudWatch and EC2 APIs. It also has permission to reboot, stop, and terminate instances.

I click on Create Alarm and I am all set!
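
If you would rather set the alarm up programmatically, here's a hedged boto3 sketch of an equivalent alarm; the instance ID is a placeholder, and the reboot action follows the arn:aws:automate naming used by the other alarm actions:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Reboot the instance if average CPU stays at or above 90% for 15 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="reboot-on-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-12345678"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,  # 3 x 5 minutes = 15 minutes
    Threshold=90.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)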

This feature is available now and you can start using it today in all public AWS regions.

Jeff;