Category: Amazon Elastic Load Balancer


New – Host-Based Routing Support for AWS Application Load Balancers

Last year I told you about the new AWS Application Load Balancer (an important part of Elastic Load Balancing) and showed you how to set it up to route incoming HTTP and HTTPS traffic based on the path element of the URL in the request. This path-based routing allows you to route requests to, for example, /api to one set of servers (also known as target groups) and /mobile to another set. Segmenting your traffic in this way gives you the ability to control the processing environment for each category of requests. Perhaps /api requests are best processed on Compute Optimized instances, while /mobile requests are best handled by Memory Optimized instances.

Host-Based Routing & More Rules
Today we are giving you another routing option. You can now create Application Load Balancer rules that route incoming traffic based on the domain name specified in the Host header. Requests to api.example.com can be sent to one target group, requests to mobile.example.com to another, and all others (by way of a default rule) can be sent to a third. You can also create rules that combine host-based routing and path-based routing. This would allow you to route requests to api.example.com/production and api.example.com/sandbox to distinct target groups.
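If you prefer to work from the AWS Command Line Interface (CLI), here’s a hedged sketch of a host-based rule; the listener and target group ARNs below are placeholders that you would swap for your own:

$ aws elbv2 create-rule \
  --listener-arn <listener ARN> \
  --priority 10 \
  --conditions Field=host-header,Values=api.example.com \
  --actions Type=forward,TargetGroupArn=<target group ARN>

A rule that combines host-based and path-based routing simply lists two conditions, for example:

  --conditions Field=host-header,Values=api.example.com Field=path-pattern,Values='/production*'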

In the past, some of our customers set up and ran a fleet of proxy servers and used them for host-based routing. With today’s launch, the proxy server fleet is no longer needed since the routing can be done using Application Load Balancer rules. Getting rid of this layer of processing will simplify your architecture and reduce operational overhead.

Application Load Balancer already provides several features that support container-based applications including port mapping, health checks, and service discovery. The ability to route on both host and path allows you to build and efficiently scale applications that are composed of multiple microservices running in individual Amazon EC2 Container Service containers. You can use host-based routing to further simplify your service discovery mechanism by aligning your service names and your container names.

As part of today’s launch we are raising the maximum number of rules per Application Load Balancer from 10 to 75, and also introducing a new rule editor. I’ll start with the following target groups:

The Load Balancing Console shows the listeners that are associated with my Application Load Balancer. From there I simply click on View/edit rules to access the new rule editor:

I already have a default rule that forwards all requests to my web-target-production target:

I click on the Insert icon (the “+” sign) and then select a location. Rules are processed in the order that they are displayed:

I click on Insert Rule and define my new rule. Rules can reference a host, a path, or both. I’ll start with just a host:

I add two rules for host-based routing and the editor now looks like this:

If I want to route production and sandbox traffic to distinct targets, I can create new rules that reference the path. Here’s the first one:

With a few more clicks and a little typing I can create a powerful set of rules:

Rules that match the Host header can include up to three “*” (match 0 or more characters) or “?” (match 1 character) wildcards. Let’s say that I give each of my large customers a unique host name for tracking purposes. I can write rules that route all of the requests to the same target group, regardless of the final portion of the host name. Here’s a simple example:
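Expressed via the CLI, a rule like that might look as follows (the ARNs are placeholders and the host pattern is just an example):

$ aws elbv2 create-rule \
  --listener-arn <listener ARN> \
  --priority 20 \
  --conditions Field=host-header,Values='customer-*.example.com' \
  --actions Type=forward,TargetGroupArn=<customer target group ARN>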

The pencil icon in the rule editor allows me to make changes to the rule sequence. I select rules, move them to a new position, and then save the updated sequence:

I can also edit existing rules or delete unneeded ones.

Available Now
This feature is available today in all 15 public AWS regions.

There is no extra charge for the first 10 rules (host-based, path-based, or both) evaluated by each load balancer. After that you will be charged based on the number of rule evaluations (this is a new dimension added to the Load Balancer Capacity Unit (LCU) model that I described in an earlier post). Each LCU supports up to 1,000 rule evaluations. We measure on all four dimensions of the LCU, but you are charged only for the dimension with the highest usage in the given hour. Rules that are configured but not processed will not be charged.

Jeff;

 

AWS IPv6 Update – Global Support Spanning 15 Regions & Multiple AWS Services

We’ve been working to add IPv6 support to many different parts of AWS over the last couple of years, starting with Elastic Load Balancing, AWS IoT, AWS Direct Connect, Amazon Route 53, Amazon CloudFront, AWS WAF, and S3 Transfer Acceleration, all building up to last month’s announcement of IPv6 support for EC2 instances in Virtual Private Clouds (initially available for use in the US East (Ohio) Region).

Today I am happy to share the news that IPv6 support for EC2 instances in VPCs is now available in a total of fifteen regions, along with Application Load Balancer support for IPv6 in nine of those regions.

You can now build and deploy applications that can use IPv6 addresses to communicate with servers, object storage, load balancers, and content distribution services. In accord with the latest guidelines for IPv6 support from Apple and other vendors, your mobile applications can now make use of IPv6 addresses when they communicate with AWS.

IPv6 Now in 15 Regions
IPv6 support for EC2 instances in new and existing VPCs is now available in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), South America (São Paulo), Canada (Central), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Mumbai), and AWS GovCloud (US) Regions and you can start using it today!

You can enable IPv6 from the AWS Management Console when you create a new VPC:
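If you prefer the CLI, here’s a hedged sketch; the IPv4 CIDR is just an example, and the flag asks AWS to assign an Amazon-provided IPv6 block to the new VPC:

$ aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --amazon-provided-ipv6-cidr-block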

Application Load Balancer
Application Load Balancers in the US East (Northern Virginia), US West (Northern California), US West (Oregon), South America (São Paulo), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and AWS GovCloud (US) Regions now support IPv6 in dual-stack mode, making them accessible via IPv4 or IPv6 (we expect to add support for the remaining regions within a few weeks).

Simply enable the dualstack option when you configure the ALB and then make sure that your security groups allow or deny IPv6 traffic in accord with your requirements. Here’s how you select the dualstack option:

You can also enable this option by running the set-ip-address-type command or by making a call to the SetIpAddressType function. To learn more about this new feature, read the Load Balancer Address Type documentation.
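For example (with a placeholder ARN):

$ aws elbv2 set-ip-address-type \
  --load-balancer-arn <load balancer ARN> \
  --ip-address-type dualstack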

IPv6 Recap
Here are the IPv6 launches that we made in the run-up to the launch of IPv6 support for EC2 instances in VPCs:

CloudFront, WAF, and S3 Transfer Acceleration – This launch let you enable IPv6 support for individual CloudFront distributions. Newly created distributions supported IPv6 by default and existing distributions could be upgraded with a couple of clicks (if you are using Route 53 alias records, you also need to add an AAAA record to the domain). With IPv6 support enabled, the new addresses will show up in the CloudFront Access Logs. The launch also let you use AWS WAF to inspect requests that arrive via IPv4 or IPv6 addresses and to use a new, dual-stack endpoint for S3 Transfer Acceleration.

Route 53 – This launch added support for DNS queries over IPv6 (support for the requisite AAAA records was already in place). A subsequent launch added support for Health Checks of IPv6 Endpoints, allowing you to monitor the health of the endpoints and to arrange for DNS failover.

IoT – This product launch included IPv6 support for message exchange between devices and AWS IoT.

S3 – This launch added support for access to S3 buckets via dual-stack endpoints.

Elastic Load Balancing – This launch added publicly routable IPv6 addresses for Elastic Load Balancers.

Direct Connect – Supports single and dualstack configurations on public and private VIFs (virtual interfaces).

Jeff;

 

AWS Web Application Firewall (WAF) for Application Load Balancers

I’m still catching up on a couple of launches that we made late last year!

Today’s post covers two services that I’ve written about in the past — AWS Web Application Firewall (WAF) and AWS Application Load Balancer:

AWS Web Application Firewall (WAF) – Helps to protect your web applications from common application-layer exploits that can affect availability or consume excessive resources. As you can see in my post (New – AWS WAF), WAF allows you to use access control lists (ACLs), rules, and conditions that define acceptable or unacceptable requests or IP addresses. You can selectively allow or deny access to specific parts of your web application and you can also guard against various SQL injection attacks. We launched WAF with support for Amazon CloudFront.

AWS Application Load Balancer (ALB) – This load balancing option for the Elastic Load Balancing service runs at the application layer. It allows you to define routing rules that are based on content that can span multiple containers or EC2 instances. Application Load Balancers support HTTP/2 and WebSocket, and give you additional visibility into the health of the target containers and instances (to learn more, read New – AWS Application Load Balancer).

Better Together
Late last year (I told you I am still catching up), we announced that WAF can now help to protect applications that are running behind an Application Load Balancer. You can set this up pretty quickly and you can protect both internal and external applications and web services.

I already have three EC2 instances behind an ALB:

I simply create a Web ACL in the same region and associate it with the ALB. I begin by naming the Web ACL. I also instruct WAF to publish to a designated CloudWatch metric:

Then I add any desired conditions to my Web ACL:

For example, I can easily set up several SQL injection filters for the query string:

After I create the filter I use it to create a rule:

And then I use the rule to block requests that match the condition:

To pull it all together I review my settings and then create the Web ACL:

Seconds after I click on Confirm and create, the new rule is active and WAF is protecting the application behind my ALB:

And that’s all it takes to use WAF to protect the EC2 instances and containers that are running behind an Application Load Balancer!
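If you want to script the association instead of clicking through the console, here’s a hedged sketch using the waf-regional CLI; the Web ACL id and ALB ARN are placeholders, and you would create the conditions and rules the same way before associating the Web ACL:

$ TOKEN=$(aws waf-regional get-change-token --query ChangeToken --output text)
$ aws waf-regional create-web-acl \
  --name MyWebACL \
  --metric-name MyWebACL \
  --default-action Type=ALLOW \
  --change-token $TOKEN
$ aws waf-regional associate-web-acl \
  --web-acl-id <Web ACL id> \
  --resource-arn <ALB ARN>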

Learn More
To learn more about how to use WAF and ALB together, plan to attend the Secure Your Web Applications Using AWS WAF and Application Load Balancer webinar at 10 AM PT on January 26th.

You may also find the Secure Your Web Application With AWS WAF and Amazon CloudFront presentation from re:Invent to be of interest.

Jeff;

Application Performance Percentiles and Request Tracing for AWS Application Load Balancer

In today’s guest post, my colleague Colm MacCárthaigh discusses the new CloudWatch Percentile Statistics and their implications and benefits for load balancing. He also introduces a new request tracing feature that will make it easier for you to track the progress of individual requests as they arrive at a load balancer and make their way to the application.

Jeff;


Application Performance Percentile Metrics
AWS Elastic Load Balancers have long included Amazon CloudWatch metrics that provide visibility into the health and performance of the instances and software running behind the load balancer. For instance, ELBs measure the response time of every request sent to the application being load balanced. Many customers use the built-in average and maximum latency metrics to monitor the performance of their application. It’s also common to use these metrics to trigger Auto Scaling to add or remove instances or containers as load fluctuates.

Based on the new CloudWatch Percentile support, newly created Application Load Balancers (ALBs) now provide more fine-grained data for application response time. These measurements allow you to get a better sense of performance across the full range of requests. For example, if the 10th percentile value is 2 milliseconds, then 10% of requests took 2 milliseconds or less; if the 99.9th percentile value is 50 ms, then 99.9% of requests took 50 ms or less; and so on.

For a real-world example of how useful percentiles are, suppose an application serves the majority of requests from a fast cache in one or two milliseconds, but when the cache is empty the requests are served much more slowly, perhaps in the 100 ms to 200 ms range. The traditional maximum metric will reflect only the slowest case, around 200 ms, and the average will reflect a blend of the fast and slow responses without giving much information about how many requests are fast, how many are slow, and what the spread within each group is. Percentile metrics illuminate the application’s performance across the range of requests and give a more comprehensive picture.

Percentile metrics can be used to set more meaningful and more aggressive performance targets for your application. For example, by using the 99th percentile as an Auto Scaling trigger or CloudWatch alarm, you can set a scaling target that will provide enough resources so that no more than 1% of requests take longer than 2 ms to process.
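As a hedged illustration, a CloudWatch alarm on the 99th percentile of the ALB’s TargetResponseTime metric (which is reported in seconds, so 2 ms is a threshold of 0.002) might look like this; the load balancer dimension value is a placeholder, and you would wire the alarm to a scaling policy via --alarm-actions:

$ aws cloudwatch put-metric-alarm \
  --alarm-name alb-p99-latency \
  --namespace AWS/ApplicationELB \
  --metric-name TargetResponseTime \
  --dimensions Name=LoadBalancer,Value=app/MyALB/1234567890abcdef \
  --extended-statistic p99 \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 0.002 \
  --comparison-operator GreaterThanThreshold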

Integrated Request Tracing
While percentiles provide greater visibility into your application with a “big picture” summary of performance across all requests, AWS Application Load Balancer’s new request tracing feature allows you to dive deep into application performance and health at the nitty-gritty level of individual requests.

ALBs now look for a new custom X-Amzn-Trace-Id HTTP header on all requests to an ALB. If this header is absent on an incoming request, then an ALB will generate one that looks like this:

X-Amzn-Trace-Id: Root=1-58211399-36d228ad5d99923122bbe354 

and pass it on to the application. Because the header was absent on the incoming request, this request is considered a “root” request and the new header that ALB sends will include a randomly-generated Root attribute.

If the header is present on an incoming request, then its contents will be preserved, so the header can also be used to pass and preserve tracing and diagnostic state, such as ancestry and timing data, between requests. For example, a request that is generated like this:

$ curl -H \
  "X-Amzn-Trace-Id: Root=1-58211399-36d228ad5d99923122bbe354; TimeSoFar=112ms; CalledFrom=Foo" \
  request-tracing-demo-1290185324.elb.amazonaws.com 

will cause ALB to generate a slightly different header:

X-Amzn-Trace-Id: Self=1-582113d1-1e48b74b3603af8479078ed6;\
  Root=1-58211399-36d228ad5d99923122bbe354;\
  TotalTimeSoFar=112ms;CalledFrom=Foo 

If, as in this example, the incoming header included its own Root attribute, then ALB will preserve it and add a new randomly-generated “Self” attribute. The TotalTimeSoFar and CalledFrom attributes in this example are arbitrary and optional, and are actually meaningless to the Load Balancer itself, but are preserved as-is to allow applications and tracing frameworks to pass data between requests.

For applications that don’t set the header on incoming requests, ALB’s request tracing acts as a simple unique identifier for every request. For applications that pass the header between requests, tracing allows you to easily identify the entry point into the overall system, to find related requests with a Root attribute, and it enables applications to create a trail of “breadcrumbs” that spans requests.

The contents of the X-Amzn-Trace-Id header are logged to Application Load Balancer access logs, which contain error indicators and performance measurements for each request, so the unique identifiers can be used to pinpoint issues with individual requests.

http 2016-11-08T00:04:13.109451Z app/request-tracing-demo/f2c895fd3ea253c5 \
  72.21.217.129:20555 172.31.51.164:80 0.001 0.001 0.000 200 200 276 550 \
  "GET http://request-tracing-demo-1290185324.elb.amazonaws.com:80/ HTTP/1.1" \
  "curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 NSS/3.16.1 \
  Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2" - - \
  arn:aws:elasticloadbalancing:us-east-1:99999999999:targetgroup/echoheader/21a6f1501d2df426 \
  "Root=1-5821167d-1d84f3d73c47ec4e58577259"

Percentile statistics are available today for all new Application Load Balancers and can be accessed from the CloudWatch Console, AWS SDKs, and API. Support for existing Application Load Balancers and all Classic Load Balancers will be available in the coming weeks. Both new features are included at no extra cost.

Colm MacCárthaigh, AWS Elastic Load Balancer

New – AWS Application Load Balancer

We launched Elastic Load Balancing (ELB) for AWS in the spring of 2009 (see New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch to see just how far AWS has come since then). Elastic Load Balancing has become a key architectural component for many AWS-powered applications. In conjunction with Auto Scaling, Elastic Load Balancing greatly simplifies the task of building applications that scale up and down while maintaining high availability.

On the Level
Per the well-known OSI model, load balancers generally run at Layer 4 (transport) or Layer 7 (application).

A Layer 4 load balancer works at the network protocol level and does not look inside of the actual network packets, remaining unaware of the specifics of HTTP and HTTPS. In other words, it balances the load without necessarily knowing a whole lot about it.

A Layer 7 load balancer is more sophisticated and more powerful. It inspects packets, has access to HTTP and HTTPS headers, and (armed with more information) can do a more intelligent job of spreading the load out to the target.

Application Load Balancing for AWS
Today we are launching a new Application Load Balancer option for ELB. This option runs at Layer 7 and supports a number of advanced features. The original option (now called a Classic Load Balancer) is still available to you and continues to offer Layer 4 and Layer 7 functionality.

Application Load Balancers support content-based routing and applications that run in containers. They support a pair of industry-standard protocols (WebSocket and HTTP/2) and also provide additional visibility into the health of the target instances and containers. Web sites and mobile apps, running in containers or on EC2 instances, will benefit from the use of Application Load Balancers.

Let’s take a closer look at each of these features and then create a new Application Load Balancer of our very own!

Content-Based Routing
An Application Load Balancer has access to HTTP headers and allows you to route requests to different backend services accordingly. For example, you might want to send requests that include /api in the URL path to one group of servers (we call these target groups) and requests that include /mobile to another. Routing requests in this fashion allows you to build applications that are composed of multiple microservices that can run and be scaled independently.

As you will see in a moment, each Application Load Balancer allows you to define up to 10 URL-based rules to route requests to target groups. Over time, we plan to give you access to other routing methods.

Support for Container-Based Applications
Many AWS customers are packaging up their microservices into containers and hosting them on Amazon EC2 Container Service. This allows a single EC2 instance to run one or more services, but can present some interesting challenges for traditional load balancing with respect to port mapping and health checks.

The Application Load Balancer understands and supports container-based applications. It allows one instance to host several containers that listen on multiple ports behind the same target group and also performs fine-grained, port-level health checks.

Better Metrics
Application Load Balancers can perform and report on health checks on a per-port basis. The health checks can specify a range of acceptable HTTP responses, and are accompanied by detailed error codes.

As a byproduct of the content-based routing, you also have the opportunity to collect metrics on each of your microservices. This is a really nice side effect of the fact that each microservice can run in its own target group, on a specific set of EC2 instances. This increased visibility will allow you to do a better job of scaling up and down in response to the load on individual services.

The Application Load Balancer provides several new CloudWatch metrics including overall traffic (in GB), number of active connections, and the connection rate per hour.

Support for Additional Protocols & Workloads
The Application Load Balancer supports two additional protocols: WebSocket and HTTP/2.

WebSocket allows you to set up long-standing TCP connections between your client and your server. This is a more efficient alternative to the old-school method which involved HTTP connections that were held open with a “heartbeat” for very long periods of time. WebSocket is great for mobile devices and can be used to deliver stock quotes, sports scores, and other dynamic data while minimizing power consumption. ALB provides native support for WebSocket via the ws:// and wss:// protocols.

HTTP/2 is a significant enhancement of the original HTTP 1.1 protocol. The newer protocol supports multiplexed requests across a single connection. This reduces network traffic, as does the binary nature of the protocol.

The Application Load Balancer is designed to handle streaming, real-time, and WebSocket workloads in an optimized fashion. Instead of buffering requests and responses, it handles them in streaming fashion. This reduces latency and increases the perceived performance of your application.

Creating an ALB
Let’s create an Application Load Balancer and get it all set up to process some traffic!

The Elastic Load Balancing Console lets me create either type of load balancer:

I click on Application load balancer, enter a name (MyALB), and choose internet-facing. Then I add an HTTPS listener:

On the same screen, I choose my VPC (this is a VPC-only feature) and one subnet in each desired Availability Zone, tag my Application Load Balancer, and proceed to Configure Security Settings:

Because I created an HTTPS listener, my Application Load Balancer needs a certificate. I can choose an existing certificate that’s already in IAM or AWS Certificate Manager (ACM), upload a local certificate, or request a new one:

Moving right along, I set up my security group. In this case I decided to create a new one. I could have used one of my existing VPC or EC2 security groups just as easily:

The next step is to create my first target group (main) and to set up its health checks (I’ll take the defaults):

Now I am ready to choose the targets—the set of EC2 instances that will receive traffic through my Application Load Balancer. Here, I chose the targets that are listening on port 80:

The final step is to review my choices and to Create my ALB:

After I click on Create the Application Load Balancer is provisioned and becomes active within a minute or so:

I can create additional target groups:

And then I can add a new rule that routes /api requests to that target:
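If you prefer to script these steps, here’s a hedged CLI sketch of the same flow; every id and ARN below is a placeholder:

$ aws elbv2 create-load-balancer \
  --name MyALB \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-cccc3333
$ aws elbv2 create-target-group \
  --name main \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-dddd4444
$ aws elbv2 register-targets \
  --target-group-arn <target group ARN> \
  --targets Id=i-eeee5555 Id=i-ffff6666
$ aws elbv2 create-listener \
  --load-balancer-arn <load balancer ARN> \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=<ACM certificate ARN> \
  --default-actions Type=forward,TargetGroupArn=<target group ARN>
$ aws elbv2 create-rule \
  --listener-arn <listener ARN> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api*' \
  --actions Type=forward,TargetGroupArn=<api target group ARN>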

Application Load Balancers work with multiple AWS services including Auto Scaling, Amazon ECS, AWS CloudFormation, AWS CodeDeploy, and AWS Certificate Manager (ACM). Support for and within other services is in the works.

Moving on Up
If you are currently using a Classic Load Balancer and would like to migrate to an Application Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create an Application Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Availability & Pricing
The Application Load Balancer is available now in all commercial AWS regions and you can start using it today!

The hourly rate for the use of an Application Load Balancer is 10% lower than the cost of a Classic Load Balancer.

When you use an Application Load Balancer, you will be billed by the hour and for the use of Load Balancer Capacity Units, also known as LCUs. An LCU measures the number of new connections per second, the number of active connections, and data transfer. We measure on all three dimensions, but bill based on the highest one. One LCU is enough to support either:

  • 25 connections/second with a 2 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer or
  • 5 connections/second with a 4 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer.

Billing for LCU usage is fractional, and is charged at $0.008 per LCU per hour. Based on our calculations, we believe that virtually all of our customers can obtain a net reduction in their load balancer costs by switching from a Classic Load Balancer to an Application Load Balancer.
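To make the math concrete, consider a hypothetical hour with a 2 KB certificate: 50 new connections/second is 50 / 25 = 2 LCUs, 6,000 active connections is 6,000 / 3,000 = 2 LCUs, and 4.44 Mbps of data transfer is 4.44 / 2.22 = 2 LCUs. You would be billed for the highest dimension, 2 LCUs, or 2 x $0.008 = $0.016 for that hour, on top of the hourly rate.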

Jeff;


Elastic Load Balancing Update – More Ports & Additional Fields in Access Logs

Many AWS applications use Elastic Load Balancing to distribute traffic to a farm of EC2 instances. An architecture of this type is highly scalable since instances can be added, removed, or replaced in a non-disruptive way. Using a load balancer also gives the application the ability to keep on running if an instance encounters an application or system problem of some sort.

Today we are making Elastic Load Balancing even more useful with the addition of two new features: support for all ports and additional fields in access logs.

Support for All Ports
When you create a new load balancer, you need to configure one or more listeners for it. Each listener accepts connection requests on a specific port. Until now, you had the ability to configure listeners for a small set of low-numbered, well-known ports (25, 80, 443, 465, and 587) and for a much larger set of ephemeral ports (1024-65535).

Effective today, load balancers that run within a Virtual Private Cloud (VPC) can have listeners for any port (1-65535). This will give you the flexibility to create load balancers in front of services that must run on a specific, low-numbered port.

You can set this up in all of the usual ways: the ELB API, AWS Command Line Interface (CLI) / AWS Tools for Windows PowerShell, a CloudFormation template, or the AWS Management Console. Here’s how you would define a load balancer for port 143 (the IMAP protocol):
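For example, from the CLI (the subnet and security group ids below are placeholders):

$ aws elb create-load-balancer \
  --load-balancer-name my-imap-lb \
  --listeners Protocol=TCP,LoadBalancerPort=143,InstanceProtocol=TCP,InstancePort=143 \
  --subnets subnet-aaaa1111 \
  --security-groups sg-bbbb2222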

To learn more, read about Listeners for Your Load Balancer in the Elastic Load Balancing Documentation.

Additional Fields in Access Logs
You already have the ability to log the traffic flowing through your load balancers to a location in S3:
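If you haven’t turned on access logging yet, here’s a hedged CLI sketch; the bucket name is a placeholder, and the bucket needs a policy that allows Elastic Load Balancing to write to it:

$ aws elb modify-load-balancer-attributes \
  --load-balancer-name my-loadbalancer \
  --load-balancer-attributes "AccessLog={Enabled=true,S3BucketName=my-lb-logs,EmitInterval=60}"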

In order to allow you to know more about this traffic, and to give you some information that will be helpful as you contemplate possible configuration changes, the access logs now include some additional information that is specific to a particular protocol. Here’s the scoop:

  • User Agent – This value is logged for TCP requests that arrive via the HTTP and HTTPS ports.
  • SSL Cipher and Protocol – These values are logged for TCP requests that arrive via the HTTPS and SSL ports.

You can use this information to make informed decisions when you think about adding or removing support for particular web browsers, ciphers, or SSL protocols. Here’s a sample log entry:

2015-05-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 10.0.0.1:80 0.000086 0.001048 0.001337 200 200 0 57 "GET https://www.example.com:443/ HTTP/1.1" "curl/7.38.0" DHE-RSA-AES128-SHA TLSv1.2

You can also use tools from AWS Partners to view and analyze this information. For example, Splunk shows it like this:

And Sumo Logic shows it like this:

 

To learn more about access logging, read Monitor Your Load Balancer Using Elastic Load Balancing Access Logs.

Both of these features are available now and you can start using them today!

Jeff;

Attach and Detach Elastic Load Balancers from Auto Scaling Groups

I enjoy reading the blog posts that I wrote in the early days of AWS. Way back in 2009, I wrote a post to launch Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch. Here’s what I said at the time:

“As soon as you launch some EC2 instances, you want visibility into resource utilization and overall performance. You want your application to be able to scale on demand based on traffic and system load. You want to spread the incoming traffic across multiple web servers for high availability and better performance.”

All of these requirements remain. In the six years since that blog post, we have added many features to each of these services. Since this post focuses on Elastic Load Balancing and Auto Scaling, I thought I’d start with a quick recap of some of the features that we have recently added to those services.

Many of these features were added in response to customer feedback (we love to hear from you; don’t be shy). Today’s feature is no exception!

Attach and Detach Load Balancers
You can now attach and detach Elastic Load Balancers from Auto Scaling groups. This gives you additional operational flexibility. Attaching a load balancer to an Auto Scaling group allows the load balancer to send traffic to the EC2 instances in the group. Detaching a load balancer from a group stops it from sending traffic.

The ability to easily attach and detach load balancers from your Auto Scaling groups will simplify your fleet management tasks. For example, you can do blue-green deployments and upgrade SSL certificates more easily and with less downtime.

You can access this feature from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the EC2 API. Let’s take a look at the console. I have two load balancers:

Initially, the first load balancer (MyLB-v1) is attached to my Auto Scaling group:

To make a change, I simply select the Auto Scaling group and click on the Edit action in the menu:

Then I make any desired changes, and click on Save:

The changes take effect within a minute or so. You can check the Activity History to confirm that the change is complete:
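If you’d rather script the swap, a hedged CLI sketch looks like this (group and load balancer names are placeholders):

$ aws autoscaling attach-load-balancers \
  --auto-scaling-group-name my-asg \
  --load-balancer-names MyLB-v2
$ aws autoscaling detach-load-balancers \
  --auto-scaling-group-name my-asg \
  --load-balancer-names MyLB-v1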

This feature is available now and you can start using it today in all public AWS regions (support for AWS GovCloud (US) is on the way).

Jeff;

Tag Your Elastic Load Balancers

Elastic Load Balancing helps you to build applications that are resilient and easy to scale. You can create both public-facing and internal load balancers in the AWS Management Console with a couple of clicks.

Today we are launching a helpful new feature for Elastic Load Balancing. You can now add up to ten tags (name/value pairs) to each of your load balancers. You can add tags to new load balancers when you create them. You can also add, remove, and change tags on existing load balancers. Tag names can consist of up to 128 Unicode characters; values can have up to 256.

Tags can be used for a number of different purposes including tracking of identity, role or owner. Tagging items also allows them to be grouped and segregated for billing and cost tracking. Once you tag your load balancers, you can visualize your spending patterns and analyze costs by tags using the Cost Explorer in the AWS Management Console.

You can manage tags from the AWS Management Console, the Elastic Load Balancing API, or the AWS Command Line Interface (CLI). Here’s how you add tags from the Console when you create a new Elastic Load Balancer:

You can see the tags on each of your Elastic Load Balancers at a glance:

You can edit (add, remove, and change) tags just as easily:
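From the CLI, the equivalent operations might look like this (the load balancer name and tags are just examples):

$ aws elb add-tags \
  --load-balancer-names MyLB \
  --tags Key=owner,Value=jeff Key=environment,Value=production
$ aws elb describe-tags --load-balancer-names MyLB
$ aws elb remove-tags --load-balancer-names MyLB --tags Key=owner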

This new feature is available now and you can start using it today. To learn more, read about ELB Tagging in the Elastic Load Balancing Developer Guide.

Jeff;

Elastic Load Balancing Connection Timeout Management

When your web browser or your mobile device makes a TCP connection to an Elastic Load Balancer, the connection is used for the request and the response, and then remains open for a short amount of time for possible reuse. This time period is known as the idle timeout for the Load Balancer and is set to 60 seconds. Behind the scenes, Elastic Load Balancing also manages TCP connections to Amazon EC2 instances; these connections also have a 60 second idle timeout.

In most cases, a 60 second timeout is long enough to allow for the potential reuse that I mentioned earlier. However, in some circumstances, different idle timeout values are more appropriate. Some applications can benefit from a longer timeout because they create a connection and leave it open for polling or extended sessions. Other applications tend to have short, non-recurring requests to AWS and the open connection will hardly ever end up being reused.

In order to better support a wide variety of use cases, you can now set the idle timeout for each of your Elastic Load Balancers to any desired value between 1 and 3600 seconds (the default will remain at 60). You can set this value from the command line or through the AWS Management Console.

Here’s how to set it from the command line:


$ elb-modify-lb-attributes myTestELB --connection-settings "idletimeout=120" --headers
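With the newer AWS Command Line Interface (CLI), the equivalent would look something like this:

$ aws elb modify-load-balancer-attributes \
  --load-balancer-name myTestELB \
  --load-balancer-attributes "ConnectionSettings={IdleTimeout=120}"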

And here is how to set it from the AWS Management Console:

This new feature is available now and you can start using it today! Read the documentation to learn more.

Jeff;

AWS CloudTrail Update – Seven New Services & Support From CloudCheckr

AWS CloudTrail records the API calls made in your AWS account and publishes the resulting log files to an Amazon S3 bucket in JSON format, with optional notification to an Amazon SNS topic each time a file is published.
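As a quick, hedged sketch, setting up a trail from the CLI might look like this; the bucket and topic names are placeholders, and the bucket needs a policy that allows CloudTrail to write to it:

$ aws cloudtrail create-trail \
  --name my-trail \
  --s3-bucket-name my-cloudtrail-logs \
  --sns-topic-name my-cloudtrail-topic
$ aws cloudtrail start-logging --name my-trail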

Our customers use the log files generated by CloudTrail in many different ways. Popular use cases include operational troubleshooting, analysis of security incidents, and archival for compliance purposes. If you need to meet the requirements posed by ISO 27001, PCI DSS, or FedRAMP, be sure to read our new white paper, Security at Scale: Logging in AWS, to learn more.

Over the course of the last month or so, we have expanded CloudTrail with support for additional AWS services. I would also like to tell you about the work that AWS partner CloudCheckr has done to support CloudTrail.

New Services
At launch time, CloudTrail supported eight AWS services. We have added support for seven additional services over the past month or so. Here’s the full list:

Here’s an updated version of the diagram that I published when we launched CloudTrail:

News From CloudCheckr
CloudCheckr (an AWS Partner) integrates with CloudTrail to provide visibility and actionable information for your AWS resources. You can use CloudCheckr to analyze, search, and understand changes to AWS resources and the API activity recorded by CloudTrail.

Let’s say that an AWS administrator needs to verify that a particular AWS account is not being accessed from outside a set of dedicated IP addresses. They can open the CloudTrail Events report, select the month of April, and group the results by IP address. This will display the following report:

As you can see, the administrator can use the report to identify all the IP addresses that are being used to access the AWS account. If any of the IP addresses were not on the list, the administrator could dig in further to determine the IAM user name being used, the calls being made, and so forth.

CloudCheckr is available in Freemium and Pro versions. You can try CloudCheckr Pro for 14 days at no charge. At the end of the evaluation period you can upgrade to the Pro version or stay with CloudCheckr Freemium.

— Jeff;