Category: Amazon EC2

Resource Groups and Tagging for AWS

For many years, AWS customers have used tags to organize their EC2 resources (instances, images, load balancers, security groups, and so forth), RDS resources (DB instances, option groups, and more), VPC resources (gateways, option sets, network ACLs, subnets, and the like), Route 53 health checks, and S3 buckets. Tags are used to label, collect, and organize resources and become increasingly important as you use AWS in larger and more sophisticated ways. For example, you can tag relevant resources and then take advantage of AWS Cost Allocation for Customer Bills.

Today we are making tags even more useful with the introduction of a pair of new features: Resource Groups and a Tag Editor. Resource Groups allow you to easily create, maintain, and view a collection of resources that share common tags. The new Tag Editor allows you to easily manage tags across services and Regions. You can search globally and edit tags in bulk, all with a couple of clicks.

Let’s take a closer look at both of these cool new features! Both of them can be accessed from the new AWS menu:

Tag Editor
Until today, when you decided to start making use of tags, you were faced with the task of stepping through your AWS resources on a service-by-service, region-by-region basis and applying tags as needed. The new Tag Editor centralizes and streamlines this process.

Let’s say I want to find and then tag all of my EC2 resources. The first step is to open up the Tag Editor and search for them:

The Tag Editor searches my account for the desired resource types across all of the selected Regions and then displays all of the matches:

I can then select all or some of the resources for editing. When I click on the Edit tags for selected button, I can see and edit existing tags and add new ones. I can also see existing System tags:

I can see which values are in use for a particular tag by simply hovering over the Multiple values indicator:

I can change multiple tags simultaneously (changes take effect when I click on Apply changes):
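Under the hood, a bulk tag edit is conceptually simple. Here is a minimal Python sketch (my own illustration, not the Tag Editor's actual implementation) of applying one set of tag changes across many resources at once; the resource IDs and tags are made up:

```python
def apply_tag_edits(resources, set_tags, delete_keys=()):
    """Apply one set of tag edits across many resources at once.

    resources: dict mapping resource ID -> dict of existing tags.
    set_tags: tags to add or overwrite on every resource.
    delete_keys: tag keys to remove from every resource.
    """
    for tags in resources.values():
        tags.update(set_tags)       # add or overwrite tags
        for key in delete_keys:
            tags.pop(key, None)     # remove the tag if present
    return resources

# Tag two EC2 instances with the same Project tag in one operation,
# and drop a stale Env tag at the same time.
fleet = {
    "i-1a2b3c4d": {"Name": "web-1", "Env": "test"},
    "i-5e6f7a8b": {"Name": "web-2"},
}
apply_tag_edits(fleet, {"Project": "Phoenix"}, delete_keys=["Env"])
```

The Tag Editor does the equivalent across services and Regions, with the console handling resource discovery for you.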

Resource Groups
A Resource Group is a collection of resources that shares one or more tags. It can span Regions and services and can be used to create what is, in effect, a custom console that organizes and consolidates the information you need on a per-project basis.

You can create a new Resource Group with a couple of clicks. I tagged a bunch of my AWS resources with Service and then added the EC2 instances, DB instances, and S3 buckets to a new Resource Group:
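In code terms, a Resource Group is little more than a tag-based filter over your resource inventory. Here is a toy Python sketch of the idea (the inventory and tag values below are invented for illustration):

```python
def resource_group(resources, required_tags):
    """Return the IDs of resources whose tags include all required tags."""
    return sorted(
        rid for rid, tags in resources.items()
        if all(tags.get(k) == v for k, v in required_tags.items())
    )

# A mixed inventory: an EC2 instance, a DB instance, and an S3 bucket.
inventory = {
    "i-0abc":        {"Service": "search", "Env": "prod"},
    "db-instance-1": {"Service": "search"},
    "my-bucket":     {"Service": "billing"},
}

# The "search" group spans EC2 and RDS but excludes the billing bucket.
search_group = resource_group(inventory, {"Service": "search"})
```

The real feature adds the cross-Region, cross-service console on top of this simple membership rule.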

My Resource Groups are available from within the AWS menu:

Selecting a group displays information about the resources in the group, including any alarm conditions (as appropriate):

This information can be further expanded:

Each identity within an AWS account can have its own set of Resource Groups. They can be shared between identities by clicking on the Share icon:

Down the Road
We are, as usual, very interested in your feedback on this feature and would love to hear from you! To get in touch, simply open up the Resource Groups Console and click on the Feedback button.

Available Now
Resource Groups and the Tag Editor are available now and you can start using them today!


EC2 Container Service In Action

We announced the Amazon EC2 Container Service at AWS re:Invent and invited you to join the preview. Since that time, we’ve seen a lot of interest and a correspondingly high signup rate for the preview. With the year winding down, I thought it would be fun to spend a morning putting the service through its paces. We have already approved all existing requests to join the preview; new requests are currently being approved within 24 hours.

As I noted in my earlier post, this new service will help you to build, run, and scale Docker-based applications. You’ll benefit from easy cluster management, high performance, flexible scheduling, extensibility, portability, and AWS integration while running in an AWS-powered environment that is secure and efficient.

Quick Container Review
Before I dive in, let’s take a minute to review some of the terminology and core concepts implemented by the Container Service.

  • Cluster – A logical grouping of Container Instances that is used to run Tasks.
  • Container Instance – An EC2 instance that runs the ECS Container Agent and that has been registered into a Cluster. The set of instances running within a Cluster create a pool of resources that can be used to run Tasks.
  • Task Definition – A description of a set of Containers. The information contained in a Task Definition defines one or more Containers. All of the Containers defined in a particular Task Definition are run on the same Container Instance.
  • Task – An instantiation of a Task Definition.
  • Container – A Docker container that was created as part of a Task.

The ECS Container Agent runs on Container Instances. It is responsible for starting Containers on behalf of ECS. The agent itself runs within a Docker container (available on Docker Hub) and communicates with the Docker daemon running on the Instance.

When talking about a cluster or container service, “scheduling” refers to the process of assigning tasks to instances. The Container Service provides you with three scheduling options:

  1. Automated – The RunTask function will start a Task (as specified by a Task Definition) on a Cluster using random placement.
  2. Manual – The StartTask function will start a Task (again, as specified by a Task Definition) on a specified Container Instance (or Instances).
  3. Custom – You can use the ListContainerInstances and DescribeContainerInstances functions to gather information about available resources within a Cluster, implement the “brain” of the scheduler (in other words, use the available information to choose a suitable Container Instance), and then call StartTask to start a task on the Instance. When you do this you are, in effect, creating your own implementation of RunTask.
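The Custom option can be sketched in a few lines of Python. This toy scheduler (my own simplification; the real DescribeContainerInstances response contains far more detail than the fields shown here) picks the instance with the most remaining CPU that can still fit the task:

```python
def pick_instance(instances, cpu_needed, memory_needed):
    """Choose the container instance with the most remaining CPU
    that can still fit the task's CPU and memory requirements.

    instances: list of dicts shaped like a simplified form of
    DescribeContainerInstances output, with 'remainingResources'.
    """
    def remaining(inst, name):
        for res in inst["remainingResources"]:
            if res["name"] == name:
                return res["integerValue"]
        return 0

    candidates = [
        inst for inst in instances
        if remaining(inst, "CPU") >= cpu_needed
        and remaining(inst, "MEMORY") >= memory_needed
    ]
    if not candidates:
        return None  # nothing fits; a real scheduler might grow the cluster
    return max(candidates, key=lambda inst: remaining(inst, "CPU"))

# Two instances: one with spare CPU and memory, one CPU-rich but memory-poor.
cluster = [
    {"containerInstanceArn": "arn:aws:ecs:...:container-instance/aaa",
     "remainingResources": [{"name": "CPU", "integerValue": 512},
                            {"name": "MEMORY", "integerValue": 1024}]},
    {"containerInstanceArn": "arn:aws:ecs:...:container-instance/bbb",
     "remainingResources": [{"name": "CPU", "integerValue": 900},
                            {"name": "MEMORY", "integerValue": 256}]},
]
best = pick_instance(cluster, cpu_needed=256, memory_needed=512)
```

Having chosen an instance this way, you would pass its ARN to StartTask, just as the third option describes.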

EC2 Container Service in Action
In order to gain some first-hand experience with ECS, I registered for the preview and then downloaded, installed, and configured a preview version of the AWS CLI. Then I created an IAM Role and a VPC and set about creating my cluster (ECS is currently available in US East (Northern Virginia) with support for other Regions expected in time). I ran the following command:

$ aws ecs create-cluster --cluster-name MyCluster --profile jbarr-cli

The command returned information about my new cluster as a block of JSON:

    {
        "cluster": {
            "clusterName": "MyCluster",
            "status": "ACTIVE",
            "clusterArn": "arn:aws:ecs:us-east-1:348414629041:cluster/MyCluster"
        }
    }

Then I launched a couple of EC2 instances into my VPC using an ECS-enabled AMI that had been shared with me as part of the preview process (this is a very lightweight version of the Amazon Linux AMI, optimized and tuned for ECS). I chose my new IAM Role (ecs) as part of the launch process:

I also edited the instance’s User Data to make the instance launch into my cluster:
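The agent reads its cluster name from a small configuration file on the instance (/etc/ecs/ecs.config), so a User Data fragment along these lines (using my cluster's name) does the trick:

```
#!/bin/bash
# Tell the ECS Container Agent to register into MyCluster
# instead of the default cluster.
echo ECS_CLUSTER=MyCluster >> /etc/ecs/ecs.config
```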

After the instances launched I was able to see that they were part of my cluster:

$ aws ecs list-container-instances --cluster MyCluster --profile jbarr-cli
    "containerInstanceArns": [

I can choose an instance and query it to find out more about the registered and available CPU and memory resources:

$ aws ecs describe-container-instances --cluster MyCluster \
  --container-instances arn:aws:ecs:us-east-1:348414629041:container-instance/4cf62484-da62-49a5-ad32-2015286a6d39 \
  --profile jbarr-cli

Here’s an excerpt from the returned data:

            "registeredResources": [
                {
                    "integerValue": 1024,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "CPU",
                    "doubleValue": 0.0
                },
                {
                    "integerValue": 3768,
                    "longValue": 0,
                    "type": "INTEGER",
                    "name": "MEMORY",
                    "doubleValue": 0.0
                }
            ]

Following the directions in the Container Service Developer Guide, I created a simple task definition and registered it:

$ aws ecs register-task-definition --family sleep360 \
  --container-definitions file://$HOME/tmp/task.json \
  --profile jbarr-cli
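The file referenced by --container-definitions is a short JSON document. Here's a sketch in the spirit of the Developer Guide's sleep360 example (the field values are illustrative):

```json
[
  {
    "name": "sleep",
    "image": "busybox",
    "cpu": 10,
    "memory": 10,
    "command": ["sleep", "360"],
    "essential": true
  }
]
```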

Then I ran 10 copies of the task:

$ aws ecs run-task --cluster MyCluster --task-definition sleep360:1 --count 10 --profile jbarr-cli

And I listed the running tasks:

$ aws ecs list-tasks --cluster MyCluster --profile jbarr-cli

This is what I saw:

    "taskArns": [

I spent some time describing the tasks and wrapped up by shutting down the instances. After going through all of this (and making a mistake or two along the way due to being so eager to get a cluster up and running), I’ll leave you with three simple reminders:

  1. Make sure that your VPC has external connectivity enabled.
  2. Make sure to use the proper, ECS-enabled AMI.
  3. Make sure to launch the AMI with the requisite IAM Role.

ECS Quickstart Template
We have created an ECS Quickstart Template for CloudFormation to help you to get up and running even more quickly. The template creates an IAM Role and an Instance Profile for the Role. The Role supplies the permissions that allow the ECS Agent to communicate with ECS. The template launches an instance using the Role and returns an SSH command that can be used to access the instance. You can launch the instance into an existing cluster, or you can use the name “default” to create (if necessary) a default cluster. The instance is always launched within your Default VPC.
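To give you a feel for what the template sets up, here is an abbreviated, illustrative CloudFormation fragment for the Role and Instance Profile (the actual Quickstart template is more complete and grants more specific permissions than the broad ecs:* used here for brevity):

```json
"EcsInstanceRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["ec2.amazonaws.com"]},
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "ecs-agent-access",
      "PolicyDocument": {
        "Statement": [{
          "Effect": "Allow",
          "Action": ["ecs:*"],
          "Resource": "*"
        }]
      }
    }]
  }
},
"EcsInstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Roles": [{"Ref": "EcsInstanceRole"}]
  }
}
```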

Contain Yourself
If you would like to get started with ECS, just register now and we’ll get you up and running as soon as possible.

To learn more about ECS, spend 30 minutes watching this session from re:Invent (one caveat: the video is already a bit dated; for example, Task Definitions are no longer versioned):

You can also register for our upcoming (January 14th, 2015) webinar, Amazon EC2 Container Service Deep Dive. In this webinar, my colleague Deepak Singh will talk about why we built EC2 Container Service, explain some of the core concepts, and show you how to use the service for your applications.

CoreOS is a new Linux distribution designed to support the needs of modern infrastructure stacks. The CoreOS AMI now supports ECS; you can read the Amazon ECS on CoreOS documentation to learn more.

As always, we are interested in your feedback. With ECS still in preview mode, now is the perfect time for you to let us know more about your needs. You can post your feedback to the ECS Forum. You can also create AWS Support cases if you are in need of assistance.


AWS OpsWorks Update – Support for Existing EC2 Instances and On-Premises Servers

My colleague Chris Barclay sent a guest post to introduce two powerful new features for AWS OpsWorks.


New Modes for OpsWorks
I have some good news for users who operate compute resources outside of AWS: you can now use AWS OpsWorks to deploy and operate applications on any server with an Internet connection, including virtual machines running in your own data centers. Previously, you could only deploy and operate applications on Amazon EC2 instances created by OpsWorks. Now, OpsWorks can also manage existing EC2 instances created outside of OpsWorks.

You may know that OpsWorks is a service that helps you automate tasks like code deployment, software configuration, operating system updates, database setup, and server scaling using Chef. OpsWorks gives you the flexibility to define your application architecture and resource configuration, and it handles the provisioning and management of resources for you. Click here to learn more about the benefits of OpsWorks.

Customers with on-premises servers no longer need to operate separate application management tools or pay up-front licensing costs but can instead use OpsWorks to manage applications that run on-premises, on AWS, or that span environments. OpsWorks can configure any software that is scriptable and includes integration with AWS services such as Amazon CloudWatch.

Use Cases & Benefits
OpsWorks can enhance the management processes for your existing EC2 instances or on-premises servers. For example:

  • With a single command, OpsWorks can update operating systems and software packages to the latest version across your entire fleet, making it easy to keep up with security updates.
  • Instead of manually running commands on each instance/server in your fleet, let OpsWorks run scripts or Chef recipes for you. You control who can run scripts and are able to view a history of each script that has been run.
  • Instead of using one user login per instance/server, you can manage operating system users and ssh/sudo access. This makes it easier to add and remove an individual user’s access to your instances.
  • Create alarms or scale instances/servers based on custom Amazon CloudWatch metrics for CPU, memory and load from one instance/server or aggregated across a collection of instances/servers.

Getting Started
Let’s walk through the process of registering existing on-premises or EC2 instances. Go to the OpsWorks Management Console and click Register Instances:

Select whether you want to register EC2 instances or on-premises servers. You can use both types, but the wizard operates with one class at a time.

Give your collection of instances a Name, select a Region, and optionally choose a VPC and IAM role. If you are registering EC2 instances, select them from the table before proceeding to the next step.

Install the AWS CLI on your desktop (if you have already installed an older version of the CLI, you may need to update it in order to use this feature).

Run the command displayed in the Run Register Command section using the CLI installed in the previous step. This uses the CLI installed on your desktop to install the OpsWorks agent onto the selected instances. You will need the instance’s ssh user name and private key in order to perform the installation. See the documentation if you want to run the CLI on the server you are registering. Once the registration process is complete, the instances will appear in the list as “registered.”

Click Done. You can now use OpsWorks to manage your instances! You can view and perform actions on your instances in the Instances view. Navigate to the Monitoring view to see the 13 included custom CloudWatch metrics for the instances you have registered.

You can learn more about using OpsWorks to manage on-premises and EC2 instances by taking a look at the examples in the Application Management Blog or the documentation.

Pricing and Availability
OpsWorks costs $0.02 per hour for each on-premises server on which you install the agent, and is available at no additional charge for EC2 instances. See the OpsWorks Pricing page to learn more about our free tier and other pricing information.

— Chris Barclay, Principal Product Manager

AWS Data Transfer Price Reduction

I am happy to announce that we are reducing the rates for several types of AWS data transfers, effective December 1, 2014, as follows:

  • Outbound Data Transfer – Pricing for data transfer from AWS to the Internet is now 6% to 43% lower, depending on the Region and the amount of data transferred per month.
  • Data Transfer to CloudFront – Data transfer from AWS to Amazon CloudFront is now free of charge.
  • Data Transfer from CloudFront – Pricing for data transfer out of CloudFront edge locations in the United States, Europe, Japan and Australia is now 4% to 29% lower, depending on the edge location and usage tier.

Price Reduction – Outbound Data Transfer
Here is a summary of the price reductions for outbound data transfer (See the EC2 pricing and S3 pricing pages for more information):

Price Tier | US Standard, US West (Oregon) & US West (Northern California) | EU (Ireland), EU (Frankfurt) | Asia Pacific (Singapore) | Asia Pacific (Tokyo) | Asia Pacific (Sydney)
First 10 TB/month | -25% | -25% | -37% | -30% | -26%
Next 40 TB/month | -6% | -6% | -43% | -15% | -21%
Next 100 TB/month | | | -37% | -5% | -13%
Next 350 TB/month | | | -33% | -6% | -14%

The prices for the first 10 TB/month take effect after the bandwidth provided as part of the AWS Free Tier has been consumed.
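To see what a tiered reduction like this means for a monthly bill, here is a quick Python sketch. The per-GB prices below are hypothetical, chosen only to illustrate the arithmetic; see the pricing pages for the real numbers.

```python
def tiered_cost(total_gb, tiers):
    """Compute outbound transfer cost across usage tiers.

    tiers: list of (tier_size_gb, price_per_gb) pairs, in order;
    usage fills each tier before spilling into the next.
    """
    cost, remaining = 0.0, total_gb
    for size_gb, price in tiers:
        used = min(remaining, size_gb)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Hypothetical old per-GB prices, then the announced cuts for one
# Region: -25% on the first 10 TB, -6% on the next 40 TB.
old = [(10_000, 0.12), (40_000, 0.09)]
new = [(10_000, 0.12 * 0.75), (40_000, 0.09 * 0.94)]

monthly_gb = 25_000  # 25 TB out to the Internet per month
savings = tiered_cost(monthly_gb, old) - tiered_cost(monthly_gb, new)
```

At these made-up prices, a 25 TB/month customer would save a few hundred dollars per month with no action required on their part.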

Price Reduction – Data Transfer from CloudFront
Here is a summary of the price reductions for outbound data transfer from CloudFront to different parts of the world (see the CloudFront pricing page for more information):

Price Tier | United States | Europe | Hong Kong, Philippines, South Korea, Singapore, Taiwan | Japan | Australia
First 10 TB/month | -29% | -29% | -26% | -26% | -26%
Next 40 TB/month | -4% | -4% | -4% | |

These prices take effect after the bandwidth provided as part of the AWS Free Tier has been consumed.

As I have noted in the past, we focus on driving down our costs over time. As we do this, we pass the savings along to you!


Simplifying the EC2 Reserved Instance Model

EC2's Reserved Instance model provides you with two benefits: capacity assurance and a lower effective hourly rate in exchange for an upfront payment. After combining customer feedback with an analysis of purchasing patterns that goes back to when we first launched Reserved Instances in 2009, we have decided to simplify the model and are introducing an important set of changes today.

The New Reserved Instance Model
There is now a single type of Reserved Instance and it has three payment options. All of the options continue to provide capacity assurance and discounts that are typically around 63% for a three-year term when compared to On-Demand prices.

There are three payment options so that you can decide how you would like to pay for your Reserved Instance throughout the term (in descending order of effective discount):

  • All Upfront – You pay for the entire Reserved Instance term (one or three years) with one upfront payment and get the best effective hourly price when compared to On-Demand.
  • Partial Upfront – You pay for a portion of the Reserved Instance upfront, and then pay for the remainder over the course of the one or three year term. This option balances the RI payments between upfront and hourly.
  • No Upfront – You pay nothing upfront but commit to pay for the Reserved Instance over the course of the Reserved Instance term, with discounts (typically about 30%) when compared to On-Demand. This option is offered with a one year term.
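All three options reduce to a single effective hourly rate once you amortize any upfront payment across the term. Here is a quick Python sketch with made-up prices (the instance prices and the 8,766 hours/year convention are assumptions for illustration, not published rates):

```python
HOURS_PER_YEAR = 8_766  # 365.25 days; an assumption for this example

def effective_hourly(upfront, hourly, term_years):
    """Blend an upfront payment and a recurring hourly rate into a
    single effective hourly price over the full term."""
    hours = HOURS_PER_YEAR * term_years
    return (upfront + hourly * hours) / hours

# Hypothetical numbers for one instance type (not real prices):
on_demand = 0.10
all_upfront = effective_hourly(upfront=1500, hourly=0.0, term_years=3)
no_upfront  = effective_hourly(upfront=0, hourly=0.07, term_years=1)

# Fractional discount of the All Upfront option vs. On-Demand.
discount = 1 - all_upfront / on_demand
```

With these invented numbers the All Upfront option works out to roughly a 43% discount, illustrating why the options are listed in descending order of effective discount.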

Learning More
If you have any questions about these changes, please take a look at the updated Reserved Instance FAQ. As always, AWS Support is also ready, willing, and able to assist!


Larger and Faster Elastic Block Store (EBS) Volumes

As Werner just announced from the stage at AWS re:Invent, we have some great news for users of Amazon Elastic Block Store (EBS). We are planning to support EBS volumes that are larger and faster than ever before! Here are the new specs:

  • General Purpose (SSD) – You will be able to create volumes that store up to 16 TB and provide up to 10,000 baseline IOPS (up from 1 TB and 3,000 baseline IOPS). Volumes of this type will continue to support bursting to even higher performance levels (see my post on New SSD-Backed Elastic Block Storage for more information).
  • Provisioned IOPS (SSD) – You will be able to create volumes that store up to 16 TB and provide up to 20,000 Provisioned IOPS (up from 1 TB and 4,000 Provisioned IOPS).

Newly created volumes will transfer data more than twice as fast, with a maximum throughput of 160 MBps for General Purpose (SSD) and 320 MBps for Provisioned IOPS (SSD).
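As described in my earlier post on SSD-backed EBS, General Purpose (SSD) volumes earn baseline IOPS in proportion to their size (3 IOPS per GiB, with a floor of 100 IOPS), so the new 10,000 IOPS ceiling pairs naturally with the new 16 TB maximum size. A quick sketch of that formula, updated for the new ceiling:

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a General Purpose (SSD) volume:
    3 IOPS per GiB, with a floor of 100 IOPS and the new
    ceiling of 10,000 IOPS."""
    return min(max(100, 3 * size_gib), 10_000)

one_tib = gp2_baseline_iops(1_024)      # a 1 TiB volume
sixteen_tib = gp2_baseline_iops(16_384) # a 16 TiB volume hits the cap
```

A 1 TiB volume gets a 3,072 IOPS baseline; any volume of roughly 3.3 TiB or larger reaches the new 10,000 IOPS ceiling.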

With more room to store data and the ability to get to it even more rapidly, you can now run demanding, large-scale workloads without having to stripe multiple volumes together or to do a complex dance when it comes time to create and coordinate snapshots. You can just create the volume and turn your attention to your data and your application.

Stay tuned for more information on availability!


New Compute-Optimized EC2 Instances

Our customers continue to increase the sophistication and intensity of the compute-bound workloads that they run on the Cloud. Applications such as top-end website hosting, online gaming, simulation, risk analysis, and rendering are voracious consumers of CPU cycles and can almost always benefit from the parallelism offered by today’s multicore processors.

The New C4 Instance Type
Today we are pre-announcing the latest generation of compute-optimized Amazon Elastic Compute Cloud (EC2) instances. The new C4 instances are based on the Intel Xeon E5-2666 v3 (code name Haswell) processor. This custom processor, designed specifically for EC2, runs at a base speed of 2.9 GHz, and can achieve clock speeds as high as 3.5 GHz with Turbo Boost. These instances are designed to deliver the highest level of processor performance on EC2. If you’ve got the workload, we’ve got the instance!

Here’s the lineup (these specs are preliminary and could change a bit before launch time):

Instance Name | vCPU Count | RAM | Network Performance
c4.large | 2 | 3.75 GiB | Moderate
c4.xlarge | 4 | 7.5 GiB | Moderate
c4.2xlarge | 8 | 15 GiB | High
c4.4xlarge | 16 | 30 GiB | High
c4.8xlarge | 36 | 60 GiB | 10 Gbps

These instances are a great match for the SSD-Backed Elastic Block Storage that we introduced earlier this year. EBS Optimization is enabled by default for all C4 instance sizes, and is available to you at no extra charge. C4 instances also allow you to achieve significantly higher packet per second (PPS) performance, lower network jitter, and lower network latency using Enhanced Networking.

Like most of our newer instance types, the C4 instances will use Hardware Virtualization (HVM) in order to get the best performance from the underlying CPU, and will run within a Virtual Private Cloud.

The c4.8xlarge instances give you the ability to fine-tune the processor’s performance and power management (which can affect maximum Turbo frequencies) using P-state and C-state control. They also give you 36 vCPUs for improved compute performance.

Stay tuned for pricing and additional technical information!


Amazon EC2 Container Service (ECS) – Container Management for the AWS Cloud

Earlier this year I wrote about container computing and enumerated some of the benefits that you get when you use it as the basis for a distributed application platform: consistency & fidelity, development efficiency, and operational efficiency. Because containers are lighter in weight and have less memory and computational overhead than virtual machines, they make it easy to support applications that consist of hundreds or thousands of small, isolated “moving parts.” A properly containerized application is easy to scale and maintain, and makes efficient use of available system resources.

Introducing Amazon EC2 Container Service
In order to help you to realize these benefits, we are announcing a preview of our new container management service, EC2 Container Service (or ECS for short). This service will make it easy for you to run any number of Docker containers across a managed cluster of Amazon Elastic Compute Cloud (EC2) instances using powerful APIs and other tools. You do not have to install cluster management software, purchase and maintain the cluster hardware, or match your hardware inventory to your software needs when you use ECS. You simply launch some instances in a cluster, define some tasks, and start them. ECS is built around a scalable, fault-tolerant, multi-tenant base that takes care of all of the details of cluster management on your behalf.

By the way, don’t let the word “cluster” scare you off! A cluster is simply a pool of compute, storage, and networking resources that serves as a host for one or more containerized applications. In fact, your cluster can even consist of a single t2.micro instance. In general, a single mid-sized EC2 instance has sufficient resources to be used productively as a starter cluster.

EC2 Container Service Benefits
Here’s how this service will help you to build, run, and scale Docker-based applications:

  • Easy Cluster Management – ECS sets up and manages clusters made up of Docker containers. It launches and terminates the containers and maintains complete information about the state of your cluster. It can scale to clusters that encompass tens of thousands of containers across multiple Availability Zones.
  • High Performance – You can use the containers as application building blocks. You can start, stop, and manage thousands of containers in seconds.
  • Flexible Scheduling – ECS includes a built-in scheduler that strives to spread your containers out across your cluster to balance availability and utilization. Because ECS provides you with access to complete state information, you can also build your own scheduler or adapt an existing open source scheduler to use the service’s APIs.
  • Extensible & Portable – ECS runs the same Docker daemon that you would run on-premises. You can easily move your on-premises workloads to the AWS cloud, and back.
  • Resource Efficiency – A containerized application can make very efficient use of resources. You can choose to run multiple, unrelated containers on the same EC2 instance in order to make good use of all available resources. You could, for example, decide to run a mix of short-term image processing jobs and long-running web services on the same instance.
  • AWS Integration – Your applications can make use of AWS features such as Elastic IP addresses, resource tags, and Virtual Private Cloud (VPC). The containers are, in effect, a new base-level building block in the same vein as EC2 and S3.
  • Secure – Your tasks run on EC2 instances within an Amazon Virtual Private Cloud. The tasks can take advantage of IAM roles, security groups, and other AWS security features. Containers run in a multi-tenant environment and can communicate with each other only across defined interfaces. The containers are launched on EC2 instances that you own and control.

Using EC2 Container Service
ECS was designed to be easy to set up and to use!

You can launch an ECS-enabled AMI and your instances will be automatically checked into your default cluster. If you want to launch into a different cluster you can specify it by modifying the configuration file in the image, or passing in User Data on launch. To ECS-enable a Linux AMI, you simply install the ECS Agent and the Docker daemon.

ECS will add the newly launched instance to its capacity pool and run containers on it as directed by the scheduler. In other words, you can add capacity to any of your clusters by simply launching additional EC2 instances in them!

The ECS Agent will be available in open source form under an Apache license. You can install it on any of your existing Linux AMIs and call registerContainerInstances to add them to your cluster.

Here are a few vocabulary items to help you to get familiar with the terminology used by ECS:

  • Cluster – A cluster is a pool of EC2 instances in a particular AWS Region, all managed by ECS. One cluster can contain multiple instance types and sizes, and can reside within one or more Availability Zones.
  • Scheduler – A scheduler is associated with each cluster. The scheduler is responsible for making good use of the resources in the cluster by assigning containers to instances in a way that respects any placement constraints and simultaneously drives as much parallelism as possible, while also aiming for high availability.
  • Container – A container is a packaged (or “Dockerized,” as the cool kids like to say) application component. Each EC2 instance in a cluster can serve as a host to one or more containers.
  • Task Definition – A JSON file that defines a Task as a set of containers. Fields in the file define the image for each container, convey memory and CPU requirements, and also specify the port mappings that are needed for the containers in the task to communicate with each other.
  • Task – A task is an instantiation of a Task Definition consisting of one or more containers, defined by the work that they do and their relationship to each other.
  • ECS-Enabled AMI – An Amazon Machine Image (AMI) that runs the ECS Agent and dockerd. We plan to ECS-enable the Amazon Linux AMI and are working with our partners to similarly enable their AMIs.

EC2 Container Service includes a set of APIs that are both simple and powerful. You can create, describe, and destroy clusters and you can register EC2 instances therein. You can create task definitions and initiate and manage tasks.

Here is the basic set of steps that you will follow in order to run your application on ECS. I am making the assumption that you have already Dockerized your application by breaking it down into fine-grained components, each described by a Dockerfile and each running nicely on your existing infrastructure. There are plenty of good resources online to help you with this process. Many popular application components have already been Dockerized and can be found on Docker Hub. You can use ECS with any public or private Docker repository that you can access. Ok, so here are the steps:

  1. Create a cluster, or decide to use the default one for your account in the target Region.
  2. Create your task definitions and register them with the cluster.
  3. Launch some EC2 instances and register them with the cluster.
  4. Start the desired number of copies of each task.
  5. Monitor the overall utilization of the cluster and the overall throughput of your application, and make adjustments as desired. For example, you can launch and then register additional EC2 instances in order to expand the cluster’s pool of available resources.

EC2 Container Service Pricing and Availability
The service launches today in preview form. If you are interested in signing up, click here to join the waiting list.

There is no extra charge for ECS. As usual, you pay only for the resources that you use.


Track AWS Resource Configurations With AWS Config

One of the coolest aspects of the Cloud is its dynamic nature. Resources can be created, attached, configured, used, detached, and destroyed in a matter of minutes. Some of these changes are triggered by a direct human action; others have their origins in AWS CloudFormation templates or take place in response to Auto Scaling triggers. The resources themselves, as well as their connections, settings, and other attributes, change over time.

With all of this change happening, organizations of all sizes face some new challenges when it comes to asset tracking, inventory management, change management, and governance in the Cloud. They need to know what was changed, when it happened, and how the change might affect other AWS resources. This need might arise due to an unexpected configuration change, a suspected system failure, a compliance audit, or a possible security incident. Regardless of the cause, having access to the right data enables a deep, data-driven forensic analysis.

Traditional configuration management tools were built in an era where resources and the relationships between them changed infrequently. These tools were costly, complex, and required some care and feeding.

Introducing AWS Config
We aim to address these challenges with AWS Config. This new AWS service captures the initial state of your AWS resources (EC2 instances and related items to start, with others planned) and the relationships between them, and then tracks creations, deletions, and property changes for analysis, visualization, and archiving.

You can enable AWS Config with two clicks! Once enabled, it discovers resources and records their current configurations and any changes to them. This configuration data can be viewed in timeline fashion in the AWS Management Console. AWS Config also delivers these configuration items (CIs) to you. Configuration changes are streamed to an Amazon Simple Notification Service (SNS) topic of your choice and are also snapshotted to an Amazon Simple Storage Service (S3) bucket (also of your choice) every 6 hours. You can also process this data using tools from our partners (see below) or on your own.

AWS Config understands and tracks the relationships between your AWS resources. It knows that an EBS volume can be mounted to an EC2 instance, and that the instance can be associated with (among other things) Security Groups, Elastic IP Addresses, VPCs, and Elastic Network Interfaces.

With AWS Config, you get full visibility into the state of your AWS resources. You can watch them change over time, and you can view the full history of configuration changes for a resource. You can see the connections between resources and determine how a change to one resource could potentially affect others. AWS Config gives you the information that you need in order to work productively in an environment that is subject to constant change!

You can discover all of your AWS resources and determine which resources are outside of policy for your organization. For example, you might want to track down all resources that are not within a production VPC. You might want to see which instances a particular Elastic IP address has been associated with over the course of the last two weeks. Or, you might need to know the state of a resource as of a particular date.
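The Elastic IP question, for example, comes down to filtering the configuration history once you have retrieved it. Here is a minimal Python sketch; the flat item layout (captureTime, publicIp, instanceId) is simplified for illustration and is not the service's exact schema:

```python
from datetime import datetime, timedelta

def instances_for_eip(config_items, eip, since):
    """Return the instance IDs that an Elastic IP was associated with
    at or after `since`, given already-retrieved configuration items.
    The flat item layout here is simplified for illustration."""
    matches = set()
    for item in config_items:
        if item["captureTime"] >= since and item.get("publicIp") == eip:
            matches.add(item["instanceId"])
    return sorted(matches)

now = datetime(2014, 11, 12)
history = [
    {"captureTime": now - timedelta(days=20), "publicIp": "203.0.113.10", "instanceId": "i-aaaa1111"},
    {"captureTime": now - timedelta(days=10), "publicIp": "203.0.113.10", "instanceId": "i-bbbb2222"},
    {"captureTime": now - timedelta(days=3), "publicIp": "203.0.113.10", "instanceId": "i-cccc3333"},
]

# Which instances held the address during the last two weeks?
recent = instances_for_eip(history, "203.0.113.10", now - timedelta(weeks=2))
```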

Using AWS Config
AWS Config is enabled on a per-account, per-Region basis. It is accessible from the AWS Management Console and the AWS Command Line Interface (CLI), and it also provides a basic lookup API.

I start by enabling AWS Config for my account (within a particular Region). I can create a new SNS topic and S3 bucket, use a topic and bucket of my own, or I can use a topic and a bucket that belongs to a different AWS account (with proper permission):

I need to provide AWS Config with access to my AWS resources. This is done using an IAM role:

Data will begin to appear in the bucket and change notifications will be sent to the SNS topic. Here’s what the bucket looks like:

Unless you are building your own tools for AWS Config, you will probably not spend any time looking at the bucket or the data (scroll down to Inside the AWS Config Data if you want to know more). Instead, you will use the Console or a third-party tool. The Console lets you select a resource and then view configuration changes on a timeline:

Partner Support
Members of the AWS Partner Network (APN) have been working with AWS Config in order to address a variety of customer use cases.

Launch partners for AWS Config include:

  1. 2nd Watch
  2. CloudCheckr
  3. CloudNexa
  4. Evident.IO
  5. Red Hat CloudForms
  6. RedSeal Networks
  7. Splunk

Here’s what they have to offer, in their own words and screen shots!

2nd Watch enterprise tools will allow users to visually see changes as they occur in their environment both in real-time and playback mode. The integration with AWS Config events also includes integration with New Relic application alerts, Amazon CloudWatch alarms and AWS CloudTrail events to simplify workload management. Customers have a visual tool to simplify event management and incident resolution.

AWS Config offers users the ability to create and maintain an audit history for their environment. The logs present an invaluable aid for security and compliance. The dynamic nature of the cloud, however, presents challenges for properly leveraging the logs. CloudCheckr’s compliance policy engine already converts AWS CloudWatch metrics and CloudTrail logs into actionable information. AWS Config represents a natural extension further into this area.

Cloudnexa integrates with AWS Config to get a snapshot of resources in the AWS account, and for audit of historical configuration changes. This capability makes it unnecessary for Cloudnexa to design, build and maintain software and infrastructure to get these features.

AWS Config allows Red Hat CloudForms customers to enforce policies and ensure compliance for workloads running in Amazon Web Services. This extends the same level of control that CloudForms customers already enjoyed for virtualization and private cloud workloads to the public cloud.

AWS Config enables customers to track and store the history of Amazon VPC configurations and configuration changes in Amazon S3. With AWS Config, RedSeal customers get even more information so they can strengthen the defenses on their AWS-based networks.

Splunk provides software and cloud services that enable you to collect, index and harness machine data generated by the applications, servers, networks, sensors and other systems that power your business. The Splunk App for AWS, integrated with AWS Config, enables you to gain real-time and historical visibility into the configuration of AWS resources and how these resources relate to one another. You can also use the app to correlate data from AWS Config and AWS CloudTrail in order to gain a comprehensive view into security and compliance in your AWS account.

Inside the AWS Config Data (Developers Only)
Let’s take an inside look at the data generated by AWS Config. Here is a small portion of the snapshot data associated with a single EC2 instance. As you can see it includes complete identifying information, lists the set of tags on the instance, and describes the relationships that the instance has with a security group and an EBS volume:

  {
    "configurationItemVersion": "1.0",
    ...
    "relationships": [
      { "name": "Is associated with SecurityGroup", ... },
      { "name": "Is attached to Volume", ... }
    ],
    ...
  }

AWS Config will send a notification to the given SNS topic each time it detects a change. The body of the notification contains detailed information about the change:



    {
      ...
      "tags": { },
      "relationships": [ ],
      "configuration": {
        ...
        "attachments": [ ],
        "tags": [ ],
        ...
      }
    }
AWS Config will also send an SNS notification each time it stores a new snapshot of the current configuration.
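Processing these notifications is straightforward once they arrive. The sketch below pulls the essentials out of a change notification in Python; the sample message body is hypothetical and mirrors the excerpt above, not the full schema:

```python
import json

def summarize_change(notification_body):
    """Extract the resource identity and relationship names from an
    AWS Config change notification. The field names follow the excerpt
    shown above; this is an illustrative sketch, not a full schema."""
    message = json.loads(notification_body)
    item = message["configurationItem"]
    return {
        "resourceType": item.get("resourceType"),
        "resourceId": item.get("resourceId"),
        "relationships": [r["name"] for r in item.get("relationships", [])],
    }

# A hypothetical notification body, shaped like the excerpt above.
sample = json.dumps({
    "messageType": "ConfigurationItemChangeNotification",
    "configurationItem": {
        "configurationItemVersion": "1.0",
        "resourceType": "AWS::EC2::Instance",
        "resourceId": "i-1a2b3c4d",
        "tags": {},
        "relationships": [
            {"name": "Is associated with SecurityGroup"},
            {"name": "Is attached to Volume"},
        ],
    },
})

summary = summarize_change(sample)
```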

AWS Config APIs
AWS Config provides two APIs for working with resource configuration information:

  • GetResourceConfigHistory – Look up configurations for a given resource within a given historical time range.
  • DeliverConfigSnapshot – Trigger the creation of a full snapshot of your resources for delivery to S3.
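A GetResourceConfigHistory request needs a resource identifier and a time range. Here is a sketch of how the parameters fit together; the parameter names are my reading of the API description, so verify them against the AWS Config documentation before relying on them:

```python
from datetime import datetime

def history_request(resource_type, resource_id, start, end):
    """Build the parameter set for a GetResourceConfigHistory call.
    Parameter names here are assumed from the API description."""
    return {
        "resourceType": resource_type,
        "resourceId": resource_id,
        "earlierTime": start,  # beginning of the historical range
        "laterTime": end,      # end of the historical range
    }

params = history_request(
    "AWS::EC2::Instance",
    "i-1a2b3c4d",
    datetime(2014, 11, 1),
    datetime(2014, 11, 12),
)
# An SDK client would then pass these parameters to the
# GetResourceConfigHistory operation.
```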

Pricing and Availability
AWS Config is available in limited preview form and you can start using it today in the US East (Northern Virginia) Region. We plan to make it available in all public AWS Regions.

With AWS Config, you are charged based on the number of resources and configuration changes recorded for supported resources in your AWS account (Configuration Items). There is no up-front commitment and you can stop recording Configuration Items at any time.

You will be charged $3.00 per 1000 Configuration Items recorded per month. Standard S3 rates apply for the storage of Configuration snapshots and Configuration history files. Standard rates also apply to any notifications delivered via SNS.

If you generate 10,000 Configuration Items per month, you can expect to pay less than $0.13 per month in S3 storage charges. The AWS Free Tier provides you with 1 million SNS notifications per month (you’ll get about 10,000 notifications if you have 10,000 Configuration Items).
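The Configuration Item charge is easy to estimate. A quick sketch (S3 and SNS charges are billed separately at the standard rates for those services):

```python
def config_item_charge(items_recorded, rate_per_thousand=3.00):
    """Monthly AWS Config charge for recorded Configuration Items,
    at $3.00 per 1,000 items. S3 storage and SNS notifications are
    billed separately at standard rates."""
    return items_recorded / 1000 * rate_per_thousand

charge = config_item_charge(10000)  # 10,000 items -> $30.00
```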


New AWS Tools for Code Management and Deployment

Today I would like to tell you about a trio of new AWS tools that are designed to help individual developers, teams of developers, and system administrators store, integrate, and deploy their code on the cloud. Here is the lineup:

  • AWS CodeDeploy – This service efficiently deploys your released code to a “fleet” of EC2 instances while taking care to leave as much of the fleet online as possible. It can accommodate fleets that range in size from one instance all the way up to tens of thousands of instances.
  • AWS CodeCommit – This is a managed revision control service that hosts Git repositories and works with all Git-based tools. You no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure.
  • AWS CodePipeline – This service will help you to model and automate your software release process. You can design a development workflow that fits your organization’s needs and your working style and use it to shepherd your code through the staging, testing, and release process. CodePipeline works with third-party tools but is also a complete, self-contained end-to-end solution.

We are launching AWS CodeDeploy today and you can start using it right away. I’ll share additional information on the launch plans for the other two tools as it becomes available. These tools were designed to work well independently and to provide even more functionality and value when used together.

Let’s take a closer look at each of these tools!

AWS CodeDeploy
CodeDeploy was designed to help you to deploy code at scale, with a focus on rapid development and rapid deployment in mission-critical situations where the cost of failure is high. As I mentioned earlier, it was designed to update an EC2 fleet without the need for any down time. CodeDeploy will automatically schedule updates across multiple Availability Zones in order to maintain high availability during the deployment process.

The fundamental unit of CodeDeploy work is a Deployment. It copies an Application revision (a collection of files) to a set of EC2 instances (a Deployment Group) and can also run designated scripts throughout the deployment process. YAML-formatted files are used to describe Applications and Deployment Groups. A Deployment Group identifies a set of EC2 instances by tag name, and can also reference an Auto Scaling Group.

Each instance must be running a copy of the CodeDeploy Agent. This is a small, open source (Apache 2.0 licensed) app that knows how to copy and validate files, set up permissions, and run scripts on Linux and on Windows. You can also configure it to run at startup on your custom AMIs, and you can even install it manually on running instances.
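The agent is driven by an application specification file that is bundled with each revision. Here's a rough sketch of what one might look like; the file name and exact keys below are my best understanding, so check the CodeDeploy documentation for the authoritative format:

```yaml
# appspec.yml -- illustrative only; consult the CodeDeploy docs
# for the authoritative schema.
version: 0.0
os: linux
files:
  - source: /html/index.html
    destination: /var/www/html
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
```

The files section tells the agent what to copy and where; the hooks section names the scripts to run at each stage of the deployment lifecycle.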

You can use CodeDeploy from the AWS Management Console, the Command-Line Interface, or through a set of APIs. For example, you can initiate an entire deployment with one API call. CodeDeploy can also be used in conjunction with your existing Chef recipes and Puppet scripts.

Let’s walk through the process of setting up and deploying an Application. The CodeDeploy Console includes a handy demo option that I’ll use to get started. While there are a lot of screens below, most of this is setup work that you’ll do once and benefit from for a long time!

The demo uses an AWS CloudFormation template to launch three EC2 instances, all tagged as CodeDeployDemo:

I begin by creating an Application:

Then I create a versioned Revision for deployment. My sample revision is stored in S3, but it could also come from CodeCommit or GitHub:

I need to tell CodeDeploy which IAM role to use when it interacts with other AWS services like EC2 or Auto Scaling (I can create a new one or use an existing one):

Now I need a Deployment Configuration. I can pick one of the defaults or I can create one from scratch. Here are the three default Deployment Configurations (these should be self-explanatory):

Here’s how I would create a custom Deployment Configuration:

Now I can review the settings and perform my first deployment:

The Application is deployed per my Deployment Configuration and the Console updates as the work proceeds:

Once I have taken care of the setup work, I can easily create more Deployments and deploy them with a couple of clicks:

AWS CodeCommit
Your application’s source code is a concrete representation of your intellectual property. It is also the most visible artifact of the hours that you spend slaving away at the keyboard! AWS CodeCommit is designed to keep it safe and sound. As I have already mentioned, it is a managed revision control service that hosts Git repositories. Your existing Git skills, tools (command line and IDE), and practices will continue to be applicable.

You (or your organization’s Cloud Administrator) can simply create a CodeCommit repo, assign permissions, and open it up to commits. CodeCommit will store code, binaries, and metadata in redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise code.

Because CodeCommit runs in the AWS Cloud, it will work really well in situations where your development team works from multiple locations or involves collaboration with vendors or other partners (no more punching holes in corporate firewalls). You don’t have to worry about running out of space (go ahead, check in those images and videos). CodeCommit encrypts your files at checkin time and uses IAM roles to control developer and administrative access.

Here’s a sneak peek at a preliminary version of the CodeCommit Console:

I’ll publish a more detailed blog post at launch time, so stay tuned.

AWS CodePipeline
Presumably, your release process is more complex and more robust than “run a smoke test and ship it if nothing explodes!” As the process becomes more complex, automation becomes more and more valuable.

AWS CodePipeline will help you to codify and automate your release process. It should make your entire process more robust and more efficient. You’ll spend more time on features and less time on infrastructure. You will be able to test each code change as you make it, with the assurance that it will have passed through whatever test gates you define before it is released to your customers.

You will be able to use the CodePipeline user interface to construct a graphical model of your release process using a combination of serial and parallel actions. The workflow can include time-based or manual approval gates between each stage. For example, you could choose to deploy new changes to staging servers only during weekday working hours in your own time zone.
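A time-based gate like that is easy to picture in code. This standalone Python sketch (not CodePipeline's actual configuration format, which has not been published yet) checks whether a proposed deployment time falls within weekday working hours:

```python
from datetime import datetime

def in_deploy_window(when, start_hour=9, end_hour=17):
    """Return True if `when` falls within weekday working hours.
    A gate like this would be modeled inside CodePipeline itself;
    this sketch just illustrates the rule described above."""
    return when.weekday() < 5 and start_hour <= when.hour < end_hour

# Wednesday 10:00 is inside the window; Saturday 10:00 is not.
wednesday = in_deploy_window(datetime(2014, 11, 12, 10, 0))
saturday = in_deploy_window(datetime(2014, 11, 15, 10, 0))
```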

CodePipeline watches your source code repo for changes and triggers the appropriate workflow. A release workflow could build the code in a production build tree, run test cases, and deploy tested code to a staging server. Upon final approval (a manual gate), the code can be promoted to production and widely deployed.

Here’s a sneak peek at the CodePipeline Console:

CodeDeploy is launching today and you can start using it now. Please stay tuned for more information on CodeCommit and CodePipeline!