Category: Amazon EC2


EC2 Maintenance Update

Today I've received a few questions about a maintenance update we're performing late this week through early next week, so I thought it would be useful to share some additional information.

Yesterday we started notifying some of our customers of a timely security and operational update we need to perform on a small percentage (less than 10%) of our EC2 fleet globally. AWS customers know that security and operational excellence are our top two priorities. These updates must be completed by October 1st before the issue is made public as part of an upcoming Xen Security Announcement (XSA).

Following security best practices, the details of this update are embargoed until then. The issue in that notice affects many Xen environments, and is not specific to AWS. As we explained in emails to the small percentage of our customers who are affected and on our forums, the instances that need the update require a system restart of the underlying hardware and will be unavailable for a few minutes while the patches are being applied and the host is being rebooted.

While most software updates are applied without a reboot, certain limited types of updates require a restart. Instances requiring a reboot will be staggered so that no two regions or availability zones are impacted at the same time and they will restart with all saved data and all automated configuration intact. Most customers should experience no significant issues with the reboots.

We understand that for a small subset of customers the reboot will be more inconvenient; we wouldn’t inconvenience our customers if it wasn’t important and time-critical to apply this update. Customers who aren’t sure if they are impacted should go to the Events page on the EC2 console, which will list any pending instance reboots for their AWS account. As always, we are here to help walk customers through this or to answer questions after the maintenance update completes. Just open a support case.
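
If you prefer to check from the command line, the same scheduled events are visible through the EC2 API. Here's a minimal sketch using the AWS CLI; the event codes shown are the ones I would expect for reboot-related events, so treat the exact filter values as an assumption and consult the documentation for your case:

$ aws ec2 describe-instance-status \
    --filters Name=event.code,Values=system-reboot,instance-reboot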

P.S. Note that this update is not in any way associated with what is being called the Bash Bug in the news today. For information on that issue, see this security bulletin on the AWS security center.

Jeff;

Amazon Linux AMI 2014.09 Now Available

The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We release new versions of the Amazon Linux AMI every six months after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and are available to all EC2 users in all AWS Regions.

Launching 2014.09 Today
Today we are releasing the 2014.09 Amazon Linux AMI for use in PV and HVM mode, with support for EBS-backed and Instance Store-backed AMIs. This AMI is supported on all EC2 instance types and in all AWS Regions.

You can launch this new version of the AMI in the usual ways. You can also upgrade existing EC2 instances by running yum update and then rebooting your instance.
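
For example, on an existing instance, the upgrade is a two-step sketch:

$ sudo yum update
$ sudo reboot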

Updated Kernel
The Amazon Linux AMI uses version 3.14.19 of the Linux kernel. This is the latest long-term-supported upstream Linux kernel; it includes plenty of new features and fixes. Important features added since the last release of the AMI include low-latency network polling, zswap (compressed swap), zram (in-kernel memory compression module), RAID multithreading (more IOPS on fast devices), numerous memory management scalability improvements, numerous improvements to popular file systems (ext4, xfs, btrfs), support for nftables (a successor to iptables), and numerous networking improvements (for example, TCP Fast Open is now enabled by default). In order to fully appreciate all of the changes that have taken place between the 3.10 and 3.14 kernels, you will need to spend some time studying the Linux 3.11, Linux 3.12, Linux 3.13, and Linux 3.14 release notes.

Instances that run HVM AMIs will now restart 30 seconds after encountering a kernel panic instead of hanging indefinitely. By contrast, PV AMIs have always restarted in this situation. If your system is dependent on the old behavior, set kernel.panic to 0 in /etc/sysctl.conf.
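
If you do depend on the old behavior, here's a quick sketch of one way to apply that setting (adjust to fit your own configuration management practices):

$ echo "kernel.panic = 0" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p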

New Features
The roadmap for the Amazon Linux AMI is driven in large part by customer requests, and we have added a number of features and package updates during this release cycle as a result.

Updates include Docker 1.2, Nginx 1.6.1, and the latest versions of PHP 5.3, 5.4, and 5.5. Many of these updates were made as a result of customer requests in the EC2 Forum. If you need an updated package (or an entirely new one) for the Amazon Linux AMI, don't hesitate to let us know.

Other New Packages
Based on customer requests, we have added the following packages to the Amazon Linux AMI:

  • NCDU – A disk space usage analyzer with ncurses support.
  • ClamAV – An open source antivirus engine.
  • LLVM – Compiler infrastructure (libraries, a C/C++/Objective-C compiler, C++ runtime, a debugger, and much more).
  • Shorewall – A gateway / firewall configuration tool to simplify the use of iptables.
  • Stress – A simple, configurable workload generator.
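
If any of these are useful to you, they can be installed with yum. A minimal sketch, assuming the package names match the project names listed above:

$ sudo yum install -y ncdu clamav llvm shorewall stress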

More Info
For additional information on this release of the Amazon Linux AMI, please check out the release notes.

Jeff;

Five More EC2 Instance Types for AWS GovCloud (US)

AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud. Today we are enhancing this Region with the addition of five more EC2 instance types. Instances of these types can be launched directly or through Auto Scaling groups.

Let’s take a look at the newly available instance types and review the use cases for each one.

HS1 – High Storage Density & Sequential I/O
EC2's HS1 instances provide very high storage density and high sequential read and write performance per instance, along with the lowest cost per GB of storage of any EC2 instance type. These instances are ideal for data warehousing, Hadoop/MapReduce applications, and parallel file systems. To learn more about this instance type, read my blog post, The New EC2 High Storage Instance Family.

C3 – High Compute Capacity
The C3 instances are ideal for applications that benefit from a higher amount of compute capacity relative to memory (in comparison to the General Purpose instances), and are recommended for high performance web servers, and other scale-out compute-intensive applications. To learn more about the C3 instances, read A New Generation of EC2 Instances for Compute-Intensive Workloads.

R3 – Memory Optimized
R3 instances are the latest generation of memory-optimized instances. We recommend them for high performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, and larger deployments of Microsoft SharePoint and other enterprise applications. The R3 instances support Hardware Virtualization (HVM) Amazon Machine Images (AMIs) only. My recent post, Now Available – New Memory-Optimized EC2 Instances, contains more information.

I2 – High Storage & Random I/O
EC2's I2 instances are high storage instances that provide very fast SSD-backed instance storage optimized for very high random I/O performance, delivering high IOPS at a low cost. You can use I2 instances for transactional systems and high performance NoSQL databases like Cassandra and MongoDB. Like the R3 instances, the I2 instances currently support Hardware Virtualization (HVM) Amazon Machine Images (AMIs) only. I described these instances in considerable detail last year in Amazon EC2's New I2 Instance Type – Available Now!.

T2 – Economical Base + Full-Core Burst
Finally, the T2 instances are built around a processing allocation model that provides you with a generous, assured baseline amount of processing power coupled with the ability to automatically and transparently scale up to a full core when you need more compute power. Your ability to burst is based on the concept of "CPU Credits" that you accumulate during quiet periods and spend when things get busy. You can provision an instance of modest size and cost and still have more than adequate compute power in reserve to handle peak demands. To learn more about these instances, read my recent blog post, New Low Cost EC2 Instances with Burstable Performance.
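
You can keep an eye on your credit balance by way of the CPUCreditBalance CloudWatch metric. Here's a hypothetical sketch using the AWS CLI; the instance id and time range are placeholders:

$ aws cloudwatch get-metric-statistics --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-1a2b3c4d \
    --start-time 2014-09-29T00:00:00Z --end-time 2014-09-29T06:00:00Z \
    --period 300 --statistics Average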

Available Now
These instance types are available now to everyone who uses AWS GovCloud (US). Visit the AWS GovCloud (US) EC2 Pricing Page to learn more.

Jeff;

Query Your EC2 Instances Using Tag and Attribute Filtering

As an Amazon Elastic Compute Cloud (EC2) user, you probably know just how simple and easy it is to launch EC2 instances on an as-needed basis. Perhaps you got your start by manually launching an instance or two, and later moved to a model where you launch instances through an AWS CloudFormation template, Auto Scaling, or in Spot form.

Today we are launching an important new feature for the AWS Management Console. You can now find the instance or instances that you are looking for by filtering on tags and attributes, with some advanced options including inverse search, partial search, and regular expressions.

Instance Tags
Regardless of the manner in which you launch them, you probably want to track the role (development, test, production, and so forth), internal owner, and other attributes of each instance. This becomes especially important as your fleet grows to hundreds or thousands of instances. We have supported tagging of EC2 instances (and other resources) for many years. As you probably know already, you can add up to ten tags (name/value pairs) to many types of AWS resources. While you can sort by the tags to group like-tagged instances together, there's clearly room to do even better! With today's launch, you can use the tags that you assign, along with the instance attributes, to locate the instance or instances that you are looking for.
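
Tags can also be applied and queried from the AWS CLI. The console filtering described below goes further (inverse search, partial matches, and regular expressions), but a basic sketch of tagging an instance and then filtering on those tags looks like this (the instance id and tag values are hypothetical):

$ aws ec2 create-tags --resources i-1a2b3c4d \
    --tags Key=Mode,Value=Production Key=Owner,Value=jeff
$ aws ec2 describe-instances \
    --filters "Name=tag:Mode,Values=Production" "Name=tag:Owner,Values=jeff"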

Query With Tags & Attributes
As I was writing this post, I launched ten EC2 instances, added Mode and Owner tags to each one (supplementing the default Name tag), and then configured the console to show the tags and their values:

The new filter box offers many options. I’ll do my best to show them all to you!

In the examples that follow, I will filter my instances using the tags that I assigned to them. I'll start with simple examples and work up to some more complex ones. I can filter by keyword. Let's say that I am looking for an instance and can only recall part of the instance id (this turns out to be a very popular way to search). I enter the partial id ("2a27") into the filter box and press Enter to find it:

Let’s say that I want to find all of the instances where I am listed as Owner. I click in the Filter box for some guidance:

I select the Owner tag and select from among the values presented to me:

Here are the results:

I can add a second filter if I want to see only the instances where I am the owner and the Mode is “Production”:

I can also filter by any of the attributes of the instance. For example, I can easily find all of the instances that are in the Stopped state:

And I can, of course, combine this with a filter on a tag. I can find all of my stopped instances:

I can use an inverse search to find everyone else’s stopped instances (I simply prefix the value with an exclamation mark):

I can also use regular expressions to find instances owned by Kelly or Andy:

And I can do partial matches to compensate for inconsistent naming:

I can even filter by launch date to find instances that are newer or older than a particular time:

Finally, the filter information is represented in the console URL so that you can bookmark your filters or share them with your colleagues:

Filter Now
This feature is available now and you can start using it today. It works for EC2 instances now; we expect to make it available for other types of EC2 resources before too long.

Jeff;

Enhanced Throughput for Provisioned IOPS (SSD) and General Purpose (SSD) EBS Volumes

Back in the old, pre-cloud days, updating your data center to use the latest and greatest hardware was expensive, somewhat risky, and resource intensive. You would have to make the capital investment to acquire new hardware based on your usual 3 or 5 year refresh cycle, field test it, and then migrate your systems and applications. The time between “I saw this cool thing and it could benefit our work” and “we are using this cool thing and it is benefitting our work” was often measured in quarters or years. Delays or inefficiencies in this process have the potential to affect the competitive position, health, and overall viability of your organization.

As I have said before, the cloud changes this model for the better. First of all, your cloud provider has an incentive to bring the newest and most powerful technology to market on a timely basis. Second, the dynamic nature of the cloud makes it easy for you to launch, test, and measure the performance of your existing applications on the new technology without disrupting your production systems.

General Purpose (SSD) Adoption is Strong
I would like to share some interesting numbers with you that illustrate the game-changing nature of the Cloud. In mid-June we announced SSD-Backed Elastic Block Storage and made it available in all AWS Regions. We knew that our customers would find this new offering attractive but we were not quite sure (despite plenty of market research and modeling) just how popular it would turn out to be.

In less than three months, General Purpose (SSD) EBS storage has grown to the extent that it is now one of the fastest-adopted services in the history of AWS! Here are two data points:

  1. Within a few weeks of the launch, over 25% of the EBS customer base was already making use of the new General Purpose (SSD) EBS volumes in some way.
  2. Today, most of our customers are using General Purpose (SSD) volumes to meet their need for general purpose block storage. In fact, about 90% of newly created block storage is now on SSD volumes.

Looking at this another way, the easy and capital-free migration made possible by the cloud has allowed the vast majority of our customers to move to a new generation of storage in a little over two months. Any way you look at it, this is a rapid upgrade cycle!

Throughput Enhancement
To celebrate this huge step forward (and because we love to innovate), we are improving data transfer throughput for General Purpose (SSD) and Provisioned IOPS (SSD) volumes. Here’s what’s new:

  1. The maximum attainable throughput to each volume has been doubled. Each General Purpose (SSD) and Provisioned IOPS (SSD) volume can now sustain up to 128 megabytes per second of read or write traffic.
  2. An I/O request of up to 256 kilobytes is now counted as a single I/O operation (IOP). In other words, a single IOP is now up to 16 times as cost-effective and performant as before (prior to this enhancement, each IOP represented at most 16 kilobytes of data transfer). If you attach multiple General Purpose (SSD) or Provisioned IOPS (SSD) volumes to a single c3.8xlarge EC2 instance, you can achieve up to 800 megabytes per second of aggregate throughput per instance.

These changes will improve your I/O performance and can also dramatically reduce your storage costs. If your application has a need for 128 megabytes per second of data transfer, you can now meet this need by provisioning 500 IOPS instead of 8000 IOPS.
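
Provisioning at the lower IOPS level uses the same API call as before. Here's a hypothetical sketch with the AWS CLI; the size and Availability Zone are placeholders:

$ aws ec2 create-volume --availability-zone us-east-1a \
    --size 100 --volume-type io1 --iops 500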

As I noted above, an I/O request for up to 256 kilobytes is now counted as a single I/O operation. In some cases you can configure your application or your operating environment to make large read and write requests. For example, you can configure the size of requests made by Hadoop by altering the dfs.blocksize parameter. If you are building your own applications, you can read or write large blocks.
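
If you would like to see the effect of larger requests for yourself, here's a quick benchmark sketch using dd (the file path is hypothetical; oflag=direct bypasses the page cache so that the 256 kilobyte writes actually reach the volume):

$ sudo dd if=/dev/zero of=/data/ebs-test bs=256k count=4096 oflag=direct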

As part of this launch we have also updated the EC2 AMIs for Microsoft Windows. The new AMIs ("2014.08.13") will use SSD volumes exclusively and have an updated PV driver for increased performance. The Microsoft Security Updates are current to August 2014, and the PowerShell Tools have been updated.

This enhancement is now in effect in all AWS Regions. If you are using General Purpose (SSD) or Provisioned IOPS (SSD) volumes then you are already reaping the benefits. We expect this enhancement to improve performance on many types of I/O-intensive workloads including those which involve database loads and scans across large tables.

Jeff;

Rapidly Deploy SharePoint on AWS With New Deployment Guide and Templates

Building on top of our earlier work to bring Microsoft SharePoint to AWS, I am happy to announce that we have published a comprehensive Quick Start Reference and a set of AWS CloudFormation templates.

As part of today’s launch, you get a reference deployment, architectural guidance, and a fully automated way to deploy a production-ready installation of SharePoint with a couple of clicks in under an hour, all in true self-service form.

Before you run this template, you need to run our SQL Quick Start (also known as “Microsoft Windows Server Failover Clustering and SQL Server AlwaysOn Availability Groups”). It will set up Microsoft SQL Server 2012 or 2014 instances configured as a Windows Server Failover Cluster.

The template we are announcing today runs on top of this cluster. The template deploys and configures all of the "moving parts" including the Microsoft Active Directory Domain Services infrastructure and a SharePoint farm comprised of multiple Amazon Elastic Compute Cloud (EC2) instances spread across several Availability Zones within an Amazon Virtual Private Cloud.

Reference Deployment Architecture
The Reference Deployment document will walk you through all of the steps necessary to end up with a highly available SharePoint Server 2013 environment! If you use the default parameters, you will end up with the following environment, all running in the AWS Cloud.

The reference deployment incorporates the best practices for SharePoint deployment and AWS security. It contains the following AWS components:

  • An Amazon Virtual Private Cloud spanning multiple Availability Zones, containing a pair of private subnets and a DMZ on a pair of public subnets.
  • An Internet Gateway to allow external connections to the public subnets.
  • EC2 instances in the DMZ with support for RDP to allow for remote administration.
  • An Elastic Load Balancer to route traffic to the EC2 instances running the SharePoint front-end.
  • Additional EC2 instances to run the SharePoint back-end.
  • Additional EC2 instances to run the Active Directory Domain Controller.
  • Preconfigured VPC security groups and Network ACLs.

The document walks you through each component of the architecture and explains what it does and how it works. It also details an optional "Streamlined" deployment topology that can be appropriate for certain use cases, along with an "Office Web Apps" model that supports browser-based editing of Office documents that are stored in SharePoint libraries. There's even an option to create an Intranet deployment that does not include an Internet-facing element.

The entire setup process is automated and needs almost no manual intervention. You will need to download SharePoint from a source that depends on your current licensing agreement with Microsoft. By default, the installation uses a trial key for deployment. In order to deploy a licensed version of SharePoint Server, you can use License Mobility Through Software Assurance.

CloudFormation Template
The CloudFormation template will prompt you for all of the information needed to start the setup process:

The template is fairly complex (over 4600 lines of JSON) and is a good place to start when you are looking for examples of best practices for the use of CloudFormation to automate the instantiation of complex systems.
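
If you would rather launch the stack from the command line than from the console, a sketch along these lines should work; the template URL, stack name, and parameter names are placeholders, so substitute the values from the Quick Start guide:

$ aws cloudformation create-stack --stack-name SharePointQuickStart \
    --template-url https://s3.amazonaws.com/<quick-start-bucket>/<sharepoint-template>.json \
    --parameters ParameterKey=KeyPairName,ParameterValue=my-key-pair \
    --capabilities CAPABILITY_IAM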

Jeff;

Auto Scaling Update – Lifecycle Management, Standby State, and DetachInstances

Auto Scaling is a key AWS service. You can use it to build resilient, highly scalable applications that react to changes in load by launching or terminating Amazon EC2 instances as needed, all driven by system or user-defined metrics collected and tracked by Amazon CloudWatch.

Today we are enhancing Auto Scaling with the addition of three features that give you additional control over the EC2 instances managed by each of your Auto Scaling Groups. You can now exercise additional control over the instance launch and termination process using Lifecycle Hooks, remove instances from an Auto Scaling Group with DetachInstances, and put instances into the new Standby state for troubleshooting or maintenance.

Lifecycle Actions & Hooks
Each EC2 instance in an Auto Scaling Group goes through a defined set of states and state transitions during its lifetime. In response to a Scale Out Event, instances are launched, attached to the group, and become operational. Later, in response to a Scale In Event, instances are removed from the group and then terminated. With today’s launch we are giving you additional control of the instance lifecycle at the following times:

  • After it has been launched but before it is attached to the group (Auto Scaling calls this state Pending). This is your opportunity to perform any initialization operations that are needed to fully prepare the instance. You can install and configure software; create, format, and attach EBS volumes; connect the instance to message queues; and so forth.
  • After it has been detached from the group but before it has been terminated (Auto Scaling calls this state Terminating). You can do any additional work that is needed to fully decommission the instance. You can capture a final snapshot of any work in progress, move log files to long-term storage, or hold malfunctioning instances off to the side for debugging.

You can configure a set of Lifecycle actions for each of your Auto Scaling Groups. Messages will be sent to a notification target for the group (an SQS queue or an SNS topic) each time an instance enters the Pending or Terminating state. Your application is responsible for handling the messages and implementing the appropriate initialization or decommissioning operations.

After the message is sent, the instance will be in the Pending:Wait or Terminating:Wait state, as appropriate. Once the instance enters this state, your application is given 60 minutes to do the work. If the work is going to take more than 60 minutes, your application can extend the time by issuing a “heartbeat” to Auto Scaling. If the time (original or extended) expires, the instance will come out of the wait state.

After the instance has been prepared or decommissioned, your application must tell Auto Scaling that the lifecycle action is complete, and that it can move forward. This will set the state of the instance to Pending:Proceed or Terminating:Proceed.

You can create and manage your lifecycle hooks from the AWS Command Line Interface (CLI) or from the Auto Scaling API. Here are the most important functions:

  1. PutLifecycleHook – Create or update a lifecycle hook for an Auto Scaling Group. Call this function to create a hook that acts when instances launch or terminate.
  2. CompleteLifecycleAction – Signify completion of a lifecycle action for a lifecycle hook. Call this function when your hook has successfully set up or decommissioned an instance.
  3. RecordLifecycleActionHeartbeat – Record a heartbeat for a lifecycle action. Call this function to extend the timeout for a lifecycle action.
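
Here's a hypothetical sketch of the corresponding AWS CLI calls; the hook name, group name, queue ARN, role ARN, and action token are placeholders:

$ aws autoscaling put-lifecycle-hook --lifecycle-hook-name launch-hook \
    --auto-scaling-group-name my-asg \
    --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
    --notification-target-arn arn:aws:sqs:us-east-1:123456789012:my-queue \
    --role-arn arn:aws:iam::123456789012:role/my-lifecycle-role \
    --heartbeat-timeout 3600

$ aws autoscaling record-lifecycle-action-heartbeat \
    --lifecycle-hook-name launch-hook --auto-scaling-group-name my-asg \
    --lifecycle-action-token <token-from-notification>

$ aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name launch-hook --auto-scaling-group-name my-asg \
    --lifecycle-action-token <token-from-notification> \
    --lifecycle-action-result CONTINUE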

Standby State
You can now move an instance from the InService state to the Standby state, and back again. When an instance is standing by, it is still managed by the Auto Scaling Group but it is removed from service until you set it back to the InService state. You can use this state to update, modify, or troubleshoot instances. You can check on the state of the instance after specific events, and you can set it aside in order to retrieve important logs or other data.

If there is an Elastic Load Balancer associated with the Auto Scaling Group, the transition to the standby state will deregister the instance from the Load Balancer. The transition will not take effect until traffic ceases; this may take some time if you enabled connection draining for the Load Balancer.
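
From the command line, moving an instance into and out of Standby is a pair of calls. A sketch, with a hypothetical group name and instance id:

$ aws autoscaling enter-standby --auto-scaling-group-name my-asg \
    --instance-ids i-1a2b3c4d --should-decrement-desired-capacity

$ aws autoscaling exit-standby --auto-scaling-group-name my-asg \
    --instance-ids i-1a2b3c4d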

DetachInstances
You can now remove an instance from an Auto Scaling Group and manage it independently. The instance can remain unattached, or you can attach it to another Auto Scaling Group if you’d like. When you call the DetachInstances function, you can also request a change in the desired capacity for the group.
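
The detach itself is a single call. Here's a sketch using the AWS CLI, with a hypothetical group name and instance id; the --should-decrement-desired-capacity option is the CLI's way of requesting the change in desired capacity mentioned above:

$ aws autoscaling detach-instances --auto-scaling-group-name my-asg \
    --instance-ids i-1a2b3c4d --should-decrement-desired-capacity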

You can use this new functionality in a couple of different ways. You can move instances from one Auto Scaling Group to another to effect an architectural change or update. You can experiment with a mix of different EC2 instance types, adding and removing instances in order to find the best fit for your application.

If you are new to the entire Auto Scaling concept, you can use this function to do some experimentation and to gain some operational experience in short order. Create a new Launch Configuration using the CreateLaunchConfiguration function and a new Auto Scaling Group using the CreateAutoScalingGroup function, supplying the Instance Id of an existing EC2 instance in both cases. Do your testing and then call DetachInstances to take the instance out of the Auto Scaling Group.

You can also use the new detach functionality to create an “instance factory” of sorts. Suppose your application assigns a fresh, fully-initialized EC2 instance to each user when they log in. Perhaps the application takes some time to initialize, but you don’t want your users to wait for this work to complete. You could create an Auto Scaling Group and set it up so that it always maintains several instances in reserve, based on the expected login rate. When a user logs in, you can allocate an instance, detach it from the Auto Scaling Group, and dedicate it to the user in short order. Auto Scaling will add fresh instances to the group in order to maintain the desired amount of reserve capacity.

Available Now
All three of these new features are available now and you can start using them today. They are accessible from the AWS Command Line Interface (CLI) and the Auto Scaling API.

Jeff;

Elastic Load Balancing Connection Timeout Management

When your web browser or your mobile device makes a TCP connection to an Elastic Load Balancer, the connection is used for the request and the response, and then remains open for a short amount of time for possible reuse. This time period is known as the idle timeout for the Load Balancer and is set to 60 seconds. Behind the scenes, Elastic Load Balancing also manages TCP connections to Amazon EC2 instances; these connections also have a 60 second idle timeout.

In most cases, a 60 second timeout is long enough to allow for the potential reuse that I mentioned earlier. However, in some circumstances, different idle timeout values are more appropriate. Some applications can benefit from a longer timeout because they create a connection and leave it open for polling or extended sessions. Other applications tend to have short, non-recurring requests to AWS and the open connection will hardly ever end up being reused.

In order to better support a wide variety of use cases, you can now set the idle timeout for each of your Elastic Load Balancers to any desired value between 1 and 3600 seconds (the default will remain at 60). You can set this value from the command line or through the AWS Management Console.

Here’s how to set it from the command line:


$ elb-modify-lb-attributes myTestELB --connection-settings "idletimeout=120" --headers
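
If you use the newer, unified AWS CLI, the equivalent call looks like this (assuming the same load balancer name and timeout):

$ aws elb modify-load-balancer-attributes --load-balancer-name myTestELB \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":120}}'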

And here is how to set it from the AWS Management Console:

This new feature is available now and you can start using it today! Read the documentation to learn more.

Jeff;

Store and Monitor OS & Application Log Files with Amazon CloudWatch

When you move from a static operating environment to a dynamically scaled, cloud-powered environment, you need to take a fresh look at your model for capturing, storing, and analyzing the log files produced by your operating system and your applications. Because instances come and go, storing log files locally for the long term is simply not appropriate. When running at scale, simply finding storage space for new log files and managing expiration of older ones can become a chore. Further, there's often actionable information buried within those files. Failures, even if they are one in a million or one in a billion, represent opportunities to increase the reliability of your system and to improve the customer experience.

Today we are introducing a powerful new log storage and monitoring feature for Amazon CloudWatch. You can now route your operating system, application, and custom log files to CloudWatch, where they will be stored in durable fashion for as long as you’d like. You can also configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics. You could, for example, monitor your web server’s log files for 404 errors to detect bad inbound links or 503 errors to detect a possible overload condition. You could monitor your Linux server log files to detect resource depletion issues such as a lack of swap space or file descriptors. You can even use the metrics to raise alarms or to initiate Auto Scaling activities.

Vocabulary Lesson
Before we dig any deeper, let’s agree on some basic terminology! Here are some new terms that you will need to understand in order to use CloudWatch to store and monitor your logs:

  • Log Event – A Log Event is an activity recorded by the application or resource being monitored. It contains a timestamp and raw message data in UTF-8 form.
  • Log Stream – A Log Stream is a sequence of Log Events from the same source (a particular application instance or resource).
  • Log Group – A Log Group is a group of Log Streams that share the same properties, policies, and access controls.
  • Metric Filters – The Metric Filters tell CloudWatch how to extract metric observations from ingested events and turn them into CloudWatch metrics.
  • Retention Policies – The Retention Policies determine how long events are retained. Policies are assigned to Log Groups and apply to all of the Log Streams in the group.
  • Log Agent – You can install CloudWatch Log Agents on your EC2 instances and direct them to store Log Events in CloudWatch. The Agent has been tested on the Amazon Linux AMIs and the Ubuntu AMIs. If you are running Microsoft Windows, you can configure the ec2config service on your instance to send systems logs to CloudWatch. To learn more about this option, read the documentation on Configuring a Windows Instance Using the EC2Config Service.

Getting Started With CloudWatch Logs
In order to learn more about CloudWatch Logs, I installed the CloudWatch Log Agent on the EC2 instance that I am using to write this blog post! I started by downloading the install script:


$ wget https://s3.amazonaws.com/aws-cloudwatch/downloads/awslogs-agent-setup-v1.0.py

Then I created an IAM user using the policy document provided in the documentation and saved the credentials:

I ran the installation script. The script downloaded, installed, and configured the AWS CLI for me (including a prompt for AWS credentials for my IAM user), and then walked me through the process of configuring the Log Agent to capture Log Events from the /var/log/messages and /var/log/secure files on the instance:


Path of log file to upload [/var/log/messages]: 
Destination Log Group name [/var/log/messages]: 

Choose Log Stream name:
  1. Use EC2 instance id.
  2. Use hostname.
  3. Custom.
Enter choice [1]: 

Choose Log Event timestamp format:
  1. %b %d %H:%M:%S    (Dec 31 23:59:59)
  2. %d/%b/%Y:%H:%M:%S (10/Oct/2000:13:55:36)
  3. %Y-%m-%d %H:%M:%S (2008-09-08 11:52:54)
  4. Custom
Enter choice [1]: 1

Choose initial position of upload:
  1. From start of file.
  2. From end of file.
Enter choice [1]: 1

The Log Groups were visible in the AWS Management Console a few minutes later:

Since I installed the Log Agent on a single EC2 instance, each Log Group contained a single Log Stream. As I specified when I installed the Log Agent, the instance id was used to name the stream:

The Log Stream for /var/log/secure was visible with another click:

I decided to track the “Invalid user” messages so that I could see how often spurious login attempts were made on my instance. I returned to the list of Log Groups, selected the stream, and clicked on Create Metric Filter. Then I created a filter that would look for the string “Invalid user” (the patterns are case-sensitive):

As you can see, the console allowed me to test potential filter patterns against actual log data. When I inspected the results, I realized that a single login attempt would generate several entries in the log file. I was fine with this, so I stepped ahead, named the filter, and mapped it to a CloudWatch namespace and metric:
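
The same filter can also be created from the command line. Here's a sketch; the filter, metric, and namespace names are ones that I made up for this example:

$ aws logs put-metric-filter --log-group-name /var/log/secure \
    --filter-name InvalidUser --filter-pattern "Invalid user" \
    --metric-transformations metricName=InvalidUserCount,metricNamespace=LogMetrics,metricValue=1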

I also created an alarm to send me an email heads-up if the number of invalid login attempts grows to a suspiciously high level:
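
Here's a hypothetical version of that alarm as a CLI call; the threshold, period, and SNS topic are placeholders and should reflect what "suspiciously high" means in your own environment:

$ aws cloudwatch put-metric-alarm --alarm-name invalid-login-attempts \
    --namespace LogMetrics --metric-name InvalidUserCount \
    --statistic Sum --period 300 --evaluation-periods 1 \
    --threshold 10 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alert-topic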

With the logging and the alarm in place, I fired off a volley of spurious login attempts from another EC2 instance and waited for the alarm to fire, as expected:

I also have control over the retention period for each Log Group. As you can see, logs can be retained forever (see my notes on Pricing and Availability to learn more about the cost associated with doing this):

Elastic Beanstalk and CloudWatch Logs
You can also generate CloudWatch Logs from your Elastic Beanstalk applications. To get you going with a running start, we have created a sample configuration file that you can copy to the .ebextensions directory at the root of your application.

Place CWLogsApache-us-east-1.zip in the folder, then build and deploy your application as normal. Click on the Monitoring tab in the Elastic Beanstalk Console, and then press the Edit button to locate the new resource and select it for monitoring and graphing:

Add the desired statistic, and Elastic Beanstalk will display the graph:

To learn more, read about Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.

Other Logging Options
You can push log data to CloudWatch from AWS OpsWorks, or through the CloudWatch APIs. You can also configure and manage logs using AWS CloudFormation.

In a new post on the AWS Application Management Blog, Using Amazon CloudWatch Logs with AWS OpsWorks, my colleague Chris Barclay shows you how to use Chef recipes to create a scalable, centralized logging solution with nothing more than a couple of simple recipes.

To learn more about configuring and using CloudWatch Logs and Metrics Filters through CloudFormation, take a look at the Amazon CloudWatch Logs Sample. Here’s an excerpt from the template:


"404MetricFilter": {
    "Type": "AWS::Logs::MetricFilter",
    "Properties": {
        "LogGroupName": {
            "Ref": "WebServerLogGroup"
        },
        "FilterPattern": "[ip, identity, user_id, timestamp, request, status_code = 404, size, ...]",
        "MetricTransformations": [
            {
                "MetricValue": "1",
                "MetricNamespace": "test/404s",
                "MetricName": "test404Count"
            }
        ]
    }
}

Your code can push a single Log Event to a Log Stream using the putLogEvents function. Here's a PHP snippet to get you started:


// $client is an Aws\CloudWatchLogs\CloudWatchLogsClient from the AWS SDK for PHP
$result = $client->putLogEvents(array(
    'logGroupName'  => 'AppLog',
    'logStreamName' => 'ThisInstance',
    'logEvents'     => array(
        array(
            'timestamp' => round(microtime(true) * 1000), // milliseconds since the epoch
            'message'   => 'Click!',
        )
    ),
    'sequenceToken' => 'string', // the sequence token returned by the previous call for this stream
));

Pricing and Availability
This new feature is available now in the US East (Northern Virginia) Region and you can start using it today.

Pricing is based on the volume of Log Events that you store and how long you choose to retain them. For more information, please take a look at the CloudWatch Pricing page. Log Events are stored in compressed fashion to reduce storage charges; there are 26 bytes of storage overhead per Log Event.

Jeff;

PeachDish – Login, Pay, Cook, and Eat With AWS

PeachDish is an AWS-powered dinner delivery service!

After you sign up, you receive a nicely packed box full of fresh ingredients and complete cooking directions for two generously-proportioned meals for two people. Each pair of meals is shipped in a box that measures exactly one cubic foot. The perishable ingredients are packed in an insulated container and chilled with an ice pack while in transit.

In order to write a full and accurate blog post, I subscribed to the service and opened up a brand new AWS Test Kitchen in my home. My wife Carmen agreed to help out with this post and was kind enough to model for the photos!

PeachDish Architecture
PeachDish makes use of a multitude of AWS and Amazon services. Here’s a sampling (diagram courtesy of PeachDish):

  • Amazon Route 53 – A Route 53 hosted zone manages the DNS records for the peachdish.com domain.
  • Amazon S3 – Application code and static objects are durably stored in S3.
  • Amazon CloudFront – Static content from S3 and dynamic content generated on EC2 instances is made available with low latency via CloudFront.
  • AWS Elastic Beanstalk – The application is deployed and scaled through Elastic Beanstalk. It manages an Auto Scaling group comprised of a collection of Amazon EC2 instances and also manages code deployment and rollback.
  • Amazon RDS – Hosts the MySQL database and read replicas. Several of the outside services that work with PeachDish drive a substantial amount of read traffic.
  • Amazon Payments – PeachDish uses the new Login and Pay feature of Amazon Payments to simplify and streamline the process of subscribing to the service.

The developers at PeachDish made good use of Elastic Beanstalk’s version management facility. In their own words:

We work in a different time zone than our tech team. They will generally deploy a new application version after midnight EST. If our customer service team finds a bug sometime in the morning, they can use the AWS Console to initiate a rollback very easily, with no effect on our customers.

We also use this feature when we switch the Login and Pay sandbox into production mode. As part of the process, we can do a temporary rollback while the new site gains entrance to the Login and Pay whitelist (usually a matter of a couple of seconds).

They also had a good story to tell about their discovery of read replicas for Amazon RDS:

We began to see some load issues when we ran our MySQL-powered reports. Because we are a startup and don’t have any database experts on staff, we were not sure how to address this issue. Fortunately, we logged in to the RDS console and saw the following message:

We read the RDS documentation to learn more about read replicas and realized that we could solve our load problem by moving the heavy queries to a read replica! We never imagined that we could create a read replica with just a couple of clicks, without having to spend several days building a test instance and working through all of the technical details. AWS was our technical expert. In this case, they encoded all of their technical knowledge behind a couple of simple clicks. This allowed us to solve the problem quickly so that we could focus on what we are best at.
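
For the record, a read replica can also be created with a single CLI call. Here's a sketch with hypothetical instance identifiers:

$ aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-read-replica \
    --source-db-instance-identifier mydb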

Login and Pay
In order for a subscription-based service to succeed, it must be easy for potential users to subscribe and pay for it. Ideally, they can do so with a couple of clicks, and they can do it using a payment system that is already familiar to them.

Login and Pay with Amazon Automatic Payments is ideal for this purpose. It provides a simple and seamless customer experience and allows the owner of the site to control and customize the overall site experience and presentation. Customers can log in with their existing Amazon credentials and initiate their subscription in minutes.

Here is the signup flow for PeachDish. Note that Login and Pay supplies the content and the widgets, and that I remain on the PeachDish site throughout the entire process:

When I click the Pay with Amazon button I get to choose a shipping address and a payment method. This content, along with the subscription summary, is supplied by Login and Pay:

Login and Pay integrates with your site using a set of widgets and APIs. The login feature makes use of OAuth 2.0. Once the customer has logged in, their name, email address, and zip code are provided to the client site to aid in the account creation process. The Login and Pay with Amazon Integration Guide contains the information that you will need to implement the payment and subscription features on your own site.

How Does it Taste?
With all of this infrastructure in place to deliver the fixings for a good meal to your kitchen (or at least to your doorstep), the final question is, how is the product?

We unpacked everything and laid it out on the counter (this is for two complete meals):

Carmen and I followed the directions with care and ended up with a meal that actually fed three hungry adults:

Our dinner tasted as good as it looked:

As you might be able to tell from our kitchen, we are somewhat fanatical about our food. We shop daily and grow lots of stuff in our backyard. With that said, we were very happy with the four PeachDish meals that we received, cooked, and devoured. The recipes were easy to follow, the ingredients were fresh and plentiful, and everything came together really nicely.

Jeff;

PS – Are you a startup company interested in integrating Login and Pay with Amazon? If so, you're in luck! There is a limited-time special offer that provides free processing on your first $10,000 or $100,000 transactions over 12 months. Visit the Amazon Toolbox Exclusive Offers page and mention the AWS Blog!