AWS DevOps Blog

OpsWorks September 2016 Updates

by Daniel Huesch

Over the past few months, the AWS OpsWorks team has introduced several enhancements to existing features and added support for new ones. Let’s discuss some of these new capabilities.

·       Chef client 12.13.37 – Released a new AWS OpsWorks agent version for Chef 12 for Linux, enabling the latest enhancements from Chef. The OpsWorks console now shows the full history of enhancements to its agent software. Here’s an example of what the change log looks like:

·       Node.js 0.12.15 – Added support for a new version of Node.js in Chef 11. This version:

–        Fixes a bug in the read/write locks implementation for the Windows operating system.
–        Fixes a potential buffer overflow vulnerability.

·       Ruby 2.3.1 – The built-in Chef 11 Ruby layer now supports Ruby 2.3.1, which includes these Ruby enhancements:

–        Introduced a frozen string literal pragma.
–        Introduced a safe navigation operator (lonely operator).
–        Numerous performance improvements.

·       Larger EBS volumes – Following the recent announcement from Amazon EBS, you can now use OpsWorks to create provisioned IOPS volumes that store up to 16 TB and process up to 20,000 IOPS, with a maximum throughput of 320 MBps. You can also create general purpose volumes that store up to 16 TB and process up to 10,000 IOPS, with a maximum throughput of 160 MBps.

·       New Linux operating systems – OpsWorks continues to enhance its operating system support and now offers:

–        Amazon Linux 2016.03 (Amazon Linux 2016.09 support will be available soon)
–        Ubuntu 16.04
–        CentOS 7

·       Instance tenancy – You can provision dedicated instances through OpsWorks. Dedicated instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Your dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts.

·       Define root volumes – You can define the size of the root volume of your EBS-backed instances directly from the OpsWorks console. Choose from a variety of volume types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.

·       Instance page – The OpsWorks instance page now displays a summary bar that indicates the aggregated state of all the instances in a selected stack. Summary fields include total instance count, online instances, instances that are in the setting-up stage, instances that are in the shutting-down stage, stopped instances, and instances in an error state.

·       Service role regeneration – You can now use the OpsWorks console to recreate your IAM service role if it was deleted.

Recreate IAM service role

Confirmation of IAM service role creation

As always, we welcome your feedback about features you’re using in OpsWorks. Be sure to visit the OpsWorks user forums, and check out the documentation.



Secure AWS CodeCommit with Multi-Factor Authentication

by Steffen Grunwald | in How-to

This blog post shows you how to set up AWS CodeCommit if you want to enforce multi-factor authentication (MFA) for your repository users. One of the most common reasons for using MFA for your AWS CodeCommit repository is to secure sensitive data or prevent accidental pushes to the repository that could trigger a sensitive change process.

By using the MFA capabilities of AWS Identity and Access Management (IAM) you can add an extra layer of protection to sensitive code in your AWS CodeCommit repository. AWS Security Token Service (STS) and IAM allow you to stretch the period during which the authentication is valid from 15 minutes to 36 hours, depending on your needs. AWS CLI profile configuration and the AWS CodeCommit credential helper transparently use the MFA information as soon as it has been issued, so you can work with MFA with minimal impact to your daily development process.

Solution Overview

AWS CodeCommit currently provides two communication protocols and authentication methods:

  • SSH authentication uses keys configured in IAM user profiles.
  • HTTPS authentication uses IAM keys or temporary security credentials retrieved when assuming an IAM role.

It is possible to use SSH in a manner that incorporates multiple factors. An SSH private key can be considered something you have and its passphrase something you know. However, the passphrase cannot technically be enforced on the client side. Neither is it issued on an independent device.

That is why the solution described in this post uses the assumption of IAM roles to enforce MFA. STS can validate MFA information from devices that issue time-based one-time passwords (TOTPs).

A typical scenario involves the use of multiple AWS accounts (for example, Dev and Prod). One account is used for authentication and another contains the resource to be accessed (in this case, your AWS CodeCommit repository). You could also apply this solution to a single account.

This is what the workflow looks like:


  1. A user authenticates with IAM keys and a token from her MFA device and retrieves temporary credentials from STS. Temporary credentials consist of an access key ID, a secret access key, and a session token. The expiration of these keys can be configured with a duration of up to 36 hours.
  2. To access the resources in a different account, role delegation comes into play. The local Git repository is configured to use the temporary credentials to assume an IAM role that has access to the AWS CodeCommit repository. Here again, STS provides temporary credentials, but they are valid for a maximum of one hour.
  3. When Git is calling AWS CodeCommit, the credentials retrieved in step 2 are used to authenticate the requests. When the credentials expire, they are reissued with the credentials from step 1.

You could use permanent IAM keys to directly assume the role in step 2 without the temporary credentials from step 1. However, the two-step process reduces the frequency with which a developer must enter an MFA token, because it increases the lifetime of the temporary credentials.
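
Before wiring up profiles and scripts, it can help to see the two STS calls in isolation. Here is a minimal sketch in Python with boto3; the account IDs, MFA serial, token code, and session name are placeholders for illustration:

import boto3

# Step 1: trade permanent IAM keys plus an MFA token for session credentials.
# DurationSeconds may be up to 129600 (36 hours).
sts = boto3.client('sts')  # uses MyRepositoryUser's permanent IAM keys
session = sts.get_session_token(
    DurationSeconds=86400,
    SerialNumber='arn:aws:iam::<ACCOUNT_A_ID>:mfa/MyRepositoryUser',
    TokenCode='123456',  # the six-digit TOTP from the MFA device
)['Credentials']

# Step 2: use the MFA-backed session credentials to assume the repository role.
# The resulting role credentials are valid for at most one hour.
mfa_sts = boto3.client(
    'sts',
    aws_access_key_id=session['AccessKeyId'],
    aws_secret_access_key=session['SecretAccessKey'],
    aws_session_token=session['SessionToken'],
)
role = mfa_sts.assume_role(
    RoleArn='arn:aws:iam::<ACCOUNT_B_ID>:role/MyRepositoryContributorRole',
    RoleSessionName='codecommit-access',
)['Credentials']

The AWS CLI profiles configured later in this post perform exactly this chain; the sketch only makes the credential hand-off explicit.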

Account Setup Tasks

The tasks to set up the MFA scenario are as follows:

  1. Create a repository in AWS CodeCommit.
  2. Create a role that is used to access the repository.
  3. Create a group allowed to assume the role.
  4. Create a user with an MFA device who belongs to the group.

The following steps assume that you have set up the AWS CLI and configured it with the keys of users who have the required permissions to IAM and AWS CodeCommit in two accounts. Following the workflow, we will create the following admin users and AWS CLI profiles:

  • admin-account-a needs permissions to administer IAM (built-in policy IAMFullAccess)
  • admin-account-b needs permissions to administer IAM and AWS CodeCommit (built-in policies IAMFullAccess and AWSCodeCommitFullAccess)

At the time of this writing, AWS CodeCommit is available in us-east-1 only, so use that region for the region profile attribute for account B.

The following scripts work on Linux and Mac OS. For readability, long commands are broken across lines with backslashes. If you want to run these scripts on Microsoft Windows, you will need to adapt them or run them on an emulation layer (for example, Cygwin).

Replace placeholders like <XXXX> before issuing the commands.

Task 1: Create a repository in AWS CodeCommit

Create an AWS CodeCommit repository in Account B:

aws codecommit create-repository \
   --repository-name myRepository \
   --repository-description "My Repository" \
   --profile admin-account-b

Task 2: Create a role that is used to access the repository

  1. Create an IAM policy that grants access to the repository in Account B. Name it MyRepositoryContributorPolicy.

    Here is the MyRepositoryContributorPolicy.json policy document:

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": [

    Create the policy:

    aws iam create-policy 
        --policy-name MyRepositoryContributorPolicy 
        --policy-document file://./MyRepositoryContributorPolicy.json 
        --profile admin-account-b

  2. Create a MyRepositoryContributorRole role that has the MyRepositoryContributorPolicy attached in Account B.

    Here is the MyRepositoryContributorTrustPolicy.json trust policy document:

      "Version": "2012-10-17",
      "Statement": [
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<ACCOUNT_A_ID>:root"
          "Action": "sts:AssumeRole"

    Create the role:

    aws iam create-role \
    --role-name MyRepositoryContributorRole \
    --assume-role-policy-document file://./MyRepositoryContributorTrustPolicy.json \
    --profile admin-account-b

    Attach the MyRepositoryContributorPolicy:

    aws iam attach-role-policy \
    --role-name MyRepositoryContributorRole \
    --policy-arn arn:aws:iam::<ACCOUNT_B_ID>:policy/MyRepositoryContributorPolicy \
    --profile admin-account-b

Task 3: Create a group allowed to assume the role

  1. Create a MyRepositoryContributorAssumePolicy policy for users who are allowed to assume the role in Account A.

    Here is the MyRepositoryContributorAssumePolicy.json policy document:

        "Version": "2012-10-17",
        "Statement": [
                "Effect": "Allow",
                "Action": [
                "Resource": [
                "Condition": {
                    "NumericLessThan": {
                        "aws:MultiFactorAuthAge": "86400"

    The aws:MultiFactorAuthAge condition specifies how long, in seconds, temporary credentials that carry MFA information remain valid for assuming the role. After this period, the user can’t obtain new credentials by assuming the role. However, credentials already retrieved by role assumption may still be valid for up to one hour for calls to the repository.

    For this example, we set the value to 24 hours (86400 seconds).

    Create the policy:

    aws iam create-policy \
        --policy-name MyRepositoryContributorAssumePolicy \
        --policy-document file://./MyRepositoryContributorAssumePolicy.json \
        --profile admin-account-a

  2. Create the group for all users who need access to the repository:

    aws iam create-group \
        --group-name MyRepositoryContributorGroup \
        --profile admin-account-a

  3. Attach the policy to the group:

    aws iam attach-group-policy \
        --group-name MyRepositoryContributorGroup \
        --policy-arn arn:aws:iam::<ACCOUNT_A_ID>:policy/MyRepositoryContributorAssumePolicy \
        --profile admin-account-a

Task 4: Create a user with an MFA device who belongs to the group

  1. Create an IAM user in Account A:

    aws iam create-user \
        --user-name MyRepositoryUser \
        --profile admin-account-a

  2. Add the user to the IAM group:

    aws iam add-user-to-group \
        --group-name MyRepositoryContributorGroup \
        --user-name MyRepositoryUser \
        --profile admin-account-a

  3. Create a virtual MFA device for the user. You can use the AWS CLI, but in this case it is easier to create one in the AWS Management Console.

  4. Create IAM access keys for the user. Make note of the AccessKeyId and SecretAccessKey values in the output. They will be referenced as <ACCESS_KEY_ID> and <SECRET_ACCESS_KEY> later in this post.

    aws iam create-access-key \
       --user-name MyRepositoryUser \
       --profile admin-account-a

You’ve now completed the account setup. To create more users, repeat task 4. Now we can continue to the local setup of the contributor’s environment.

Initialize the Contributor’s Environment

Each contributor must perform the setup in order to have access to the repository.

Setup Tasks:

  1. Create a profile for the IAM user who fetches temporary credentials.
  2. Create a profile that is used to access the repository.
  3. Populate the role-assuming profile with temporary credentials.

Task 1: Create a profile for the IAM user who fetches temporary credentials

By default, the AWS CLI maintains two files in ~/.aws/ that contain per-profile settings. One is credentials, which stores sensitive authentication information (for example, secret access keys). The other is config, which defines all other settings, such as the region or the MFA device to use.

Add the IAM keys for MyRepositoryUser that you created in Account Setup task 4 to ~/.aws/credentials:

[FetchMfaCredentials]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
Add the following lines to ~/.aws/config:

[profile FetchMfaCredentials]
mfa_serial=arn:aws:iam::<ACCOUNT_A_ID>:mfa/MyRepositoryUser
get_session_token_duration_seconds=86400

get_session_token_duration_seconds is a custom attribute that is used later by a script. It must not exceed the value of aws:MultiFactorAuthAge that we used in the assume policy.
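
Because custom attributes like this are stored as plain keys in the profile section, any tool can read them, not just aws configure get. A minimal sketch in Python, assuming the default ~/.aws/config location:

import configparser
import os

# ~/.aws/config is an INI file; CLI profile sections are named "profile <name>".
config = configparser.ConfigParser()'~/.aws/config'))
duration = config.get('profile FetchMfaCredentials',
                      'get_session_token_duration_seconds')
print(duration)  # 86400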

Task 2: Create a profile that is used to access the repository

Add the following lines to ~/.aws/config:

[profile MyRepositoryContributor]
region=us-east-1
role_arn=arn:aws:iam::<ACCOUNT_B_ID>:role/MyRepositoryContributorRole
source_profile=MyRepositoryAssumer

When the MyRepositoryContributor profile is used, the MyRepositoryContributorRole is assumed with credentials of the MyRepositoryAssumer profile. You may have noticed that we have not put MyRepositoryAssumer in the credentials file yet. The following task shows how the file is populated.

Task 3: Populate the role-assuming profile with temporary credentials

  1. Create the following script in your home directory or any other location. The examples in this post assume you saved it as ~/ (an illustrative name; use any you like):

    #!/bin/bash
    # Parameter 1 is the name of the profile that is populated
    # with keys and tokens.
    # Parameter 2 is the name of the profile that calls the
    # session token service.
    # It must contain IAM keys and mfa_serial configuration.
    KEY_PROFILE="$1"
    RELOAD="true"
    # The STS response contains an expiration date/time.
    # This is checked to only set the keys if they are expired.
    EXPIRATION=$(aws configure get expiration --profile "$1")
    if [ -n "$EXPIRATION" ]; then
            # get current time and expiry time in seconds since 1-1-1970
            NOW=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
            # if tokens are set and have not expired yet
            if [[ "$EXPIRATION" > "$NOW" ]]; then
                    echo "Will not fetch new credentials. They expire at (UTC) $EXPIRATION"
                    RELOAD="false"
            fi
    fi
    if [ "$RELOAD" = "true" ]; then
            echo "Need to fetch new STS credentials"
            MFA_SERIAL=$(aws configure get mfa_serial --profile "$2")
            DURATION=$(aws configure get get_session_token_duration_seconds --profile "$2")
            read -p "Token for MFA Device ($MFA_SERIAL): " TOKEN_CODE
            read -r AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN EXPIRATION AWS_ACCESS_KEY_ID < <(aws sts get-session-token \
                    --profile "$2" \
                    --output text \
                    --query 'Credentials.*' \
                    --serial-number $MFA_SERIAL \
                    --duration-seconds $DURATION \
                    --token-code $TOKEN_CODE)
            aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile "$KEY_PROFILE"
            aws configure set aws_session_token "$AWS_SESSION_TOKEN" --profile "$KEY_PROFILE"
            aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile "$KEY_PROFILE"
            aws configure set expiration "$EXPIRATION" --profile "$1"
    fi

    This script uses the credentials of the profile named by the second parameter to request temporary credentials from STS. These are written to the profile named by the first parameter.

  2. Run the script once. You might need to set execution permission (for example, chmod 755) before you run it.

    ~/ MyRepositoryAssumer FetchMfaCredentials
    Need to fetch new STS credentials
    Token for MFA Device (arn:aws:iam::<ACCOUNT_A_ID>:mfa/MyRepositoryUser): XXXXXX

    This writes the information retrieved from STS to the ~/.aws/config and ~/.aws/credentials files.

  3. Clone the repository, configure Git to use temporary credentials, and create an alias to renew MFA credentials:

    git clone \
        --config 'credential.helper=!aws codecommit --profile MyRepositoryContributor credential-helper $@' \
        --config 'credential.UseHttpPath=true' \
        --config 'alias.mfa=!~/ MyRepositoryAssumer FetchMfaCredentials' \
        $(aws codecommit get-repository \
            --repository-name myRepository \
            --profile MyRepositoryContributor \
            --output text \
            --query repositoryMetadata.cloneUrlHttp)

    This clones the repository from AWS CodeCommit. You can issue subsequent Git calls as long as the temporary credentials retrieved in step 2 have not expired. As soon as they have expired, the credential helper returns an error, and Git prompts for a username and password:

    A client error (ExpiredToken) occurred when calling the AssumeRole operation:
    The security token included in the request is expired

    In this case, you should cancel the Git command (Ctrl-C) and trigger the renewal of the token by calling the alias in your repository:

    git mfa

We hope you find the steps for enforcing MFA for your repository users helpful. Feel free to leave your feedback in the comments.

Introducing Application Load Balancer – Unlocking and Optimizing Architectures

by George Huang

This is a guest blog post by Felix Candelario & Benjamin F., AWS Solutions Architects.

This blog post will focus on architectures you can unlock with the recently launched Application Load Balancer and compare them with the implementations that use what we now refer to as the Classic Load Balancer. An Application Load Balancer operates at the application layer and makes routing and load-balancing decisions on application traffic using HTTP and HTTPS.

There are several features to help you unlock new workloads:

  • Content-based routing

    • Allows you to define rules that route traffic to different target groups based on the path of a URL. The target group typically represents a service in a customer’s architecture.
  • Container support

    • Provides the ability to load-balance across multiple ports on the same Amazon EC2 instance. This functionality specifically targets the use of containers and is integrated into Amazon ECS.
  • Application monitoring

    • Allows you to monitor and associate health checks per target group.

Service Segmentation Using Subdomains

Our customers often need to break big, monolithic applications into smaller service-oriented architectures while hosting this functionality under the same domain name.

In the architecture shown here, a customer has decided to segment services such as processing orders, serving images, and processing registrations. Each function represents a discrete collection of instances. Each collection of instances hosts several applications that provide a service.

Using Classic Load Balancers, the customer has to deploy several load balancers, each of which fronts the instances of one service through its own subdomain.

With the introduction of content-based routing on the new application load balancers, customers can reduce the number of load balancers required to accomplish the segmentation.

Application Load Balancers introduce the concept of rules, targets, and target groups. Rules determine how to route requests. Each rule specifies a target group, a condition, and a priority. An action is taken when the conditions on a rule are matched. Targets are endpoints that can be registered as a member of a target group. Target groups are used to route requests to registered targets as part of the action for a rule. Each target group specifies a protocol and target port. You can define health checks per target group and you can route to multiple target groups from each Application Load Balancer.

A new architecture shown here accomplishes with a single load balancer what previously required three. Here we’ve configured a single Application Load Balancer with three rules.

Let’s walk through the first rule in depth. To configure the Application Load Balancer to route order-processing traffic, we must complete five tasks.

  1. Create the Application Load Balancer.
  2. Create a target group.
  3. Register targets with the target group.
  4. Create a listener with the default rule that forwards requests to the default target group.
  5. Create a rule that forwards requests to the previously created target group.

To create the Application Load Balancer, we must provide a name for it and a minimum of two subnets.

aws elbv2 create-load-balancer --name example-loadbalancer --subnets "subnet-9de127c4" "subnet-0b1afc20"

To create a target group, we must specify a name, protocol, port, and vpc-id. Based on the preceding figure, we execute the following command to create a target group for the instances that represent the order-processing functionality.

aws elbv2 create-target-group --name order-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

After the target group has been created, we can either add instances manually or through the use of an Auto Scaling group. To add an Auto Scaling group, we use the Auto Scaling group name and the generated target group ARN:

aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name order_autoscaling_group --target-group-arns "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1"

If we want to manually add instances, we would supply a list of instances and the generated target group ARN to register the instances associated with the order-processing functionality:

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1" --targets Id=i-01cb16f914ec4714c,Port=80

After the instances have been registered with the target group, we create a listener with a default rule that forwards requests to the first target group. For the sake of this example, we’ll assume that the orders target group is the default group:

aws elbv2 create-listener --load-balancer-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:loadbalancer/app/example-loadbalancer/6bfa6ad4a2dd7925" --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1"

Finally, we create a rule that forwards a request to the target group to which the order instances are registered when the condition of a path-pattern (in this case, '/orders/*') is met:

aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example-loadbalancer/6bfa6ad4a2dd7925/6f916335439e2735" --conditions Field=path-pattern,Values='/orders/*' --priority 20 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1"

We repeat this process (with the exception of creating the default listener) for the images and registration functionality.

With this new architecture, we can move away from segmenting functionality based on subdomains and rely on paths. In this way, we preserve the use of a single subdomain, www, throughout the entire user experience. This approach reduces the number of Elastic Load Balancing load balancers required, which results in cost savings. It also reduces the operational overhead required for monitoring and maintaining additional elements in the application architecture.

Important: The move from subdomain segmentation to path segmentation requires you to rewrite code to accommodate the new URLs.

Service Segmentation Using a Proxy Layer

A proxy layer pattern is used when customers want to use a single subdomain, such as www, while still segmenting functionality by grouping back-end servers. The following figure shows a common implementation of this pattern using the popular open source package NGINX.

In this implementation, the www subdomain is associated with a top-level external load balancer. This load balancer is configured so that traffic is distributed to a group of instances running NGINX. Each instance running NGINX is configured with rules that direct traffic to one of the three internal load balancers based on the path in the URL.

For example, when a user requests a URL whose path begins with /amazing, the external Elastic Load Balancing load balancer sends all traffic to the NGINX layer. All three of the NGINX installations are configured in the same way. When one of the NGINX instances receives the request, it parses the URL, matches a location for “/amazing”, and sends traffic to the internal load balancer fronting the group of servers that provide the Amazing Brand functionality.

It’s important to consider the impact of failed health checks. Should one of the NGINX instances fail health checks generated by the external load balancer, this load balancer will stop sending traffic to that newly marked unhealthy host. In this scenario, all of the discrete groups of functionality would be affected, making troubleshooting and maintenance more complex.

The following figure shows how customers can achieve segmentation while preserving a single subdomain without having to deploy a proxy layer.

In this implementation, both the proxy layer and the internal load balancers can be removed now that we can use the content-based routing associated with the new application load balancers. Using the previously demonstrated rules functionality, we can create three rules that point to different target groups based on different path conditions.

For this implementation, you’ll need to create the application load balancer, create a target group, register targets to the target group, create the listener, and create the rules.

1. Create the application load balancer.

aws elbv2 create-load-balancer --name example2-loadbalancer --subnets "subnet-fc02b18b" "subnet-63029106"

2. Create three target groups.

aws elbv2 create-target-group --name amazing-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

aws elbv2 create-target-group --name stellar-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

aws elbv2 create-target-group --name awesome-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

3. Register targets with each target group.

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c" --targets Id=i-072db711f70c36961,Port=80

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/stellar-instances/ef828b873624ba7a" --targets Id=i-08def6cbea7584481,Port=80

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/awesome-instances/116b2df4cd7fcc5c" --targets Id=i-0b9dba5b06321e6fe,Port=80

4. Create a listener with the default rule that forwards requests to the default target group.

aws elbv2 create-listener --load-balancer-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:loadbalancer/app/example2-loadbalancer/a685c68b17dfd091" --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c"

5. Create rules that forward requests for each path to the corresponding target group. You need to make sure that every priority is unique.

aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example2-loadbalancer/a685c68b17dfd091/546af7daf3bd913e" --conditions Field=path-pattern,Values='/amazingbrand/*' --priority 20 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c"

aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example2-loadbalancer/a685c68b17dfd091/546af7daf3bd913e" --conditions Field=path-pattern,Values='/stellarbrand/*' --priority 40 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/stellar-instances/ef828b873624ba7a"

aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example2-loadbalancer/a685c68b17dfd091/546af7daf3bd913e" --conditions Field=path-pattern,Values='/awesomebrand/*' --priority 60 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/awesome-instances/116b2df4cd7fcc5c"

This implementation not only saves you the costs associated with running instances that support a proxy layer and an additional layer of load balancers. It also increases robustness as a result of application monitoring. In the Classic Load Balancer implementation of a proxy pattern, the failure of a single instance hosting NGINX impacts all of the other discrete functionality represented by the grouping of instances. In the application load balancer implementation, health checks are now associated with a single target group only. Failures and performance are now segmented from each other.

Run the following command to verify the health of the registered targets in the Amazing Brands target group:

aws elbv2 describe-target-health --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c"

If the instances in this target group were marked as unhealthy, you would see the following output:


    "TargetHealthDescriptions": [


            "HealthCheckPort": "80",

            "Target": {

                "Id": "i-072db711f70c36961",

                "Port": 80


            "TargetHealth": {

                "State": "unhealthy",

                "Reason": "Target.Timeout",

                "Description": "Request timed out"





Service Segmentation Using Containers

Increasingly, customers are using containers as a way to package and isolate applications. Instead of grouping functionality by instances, customers are providing an even more granular collection of computing resources by using containers.

When you use Classic load balancers, you create a fixed relationship between the load balancer port and the container instance port. For example, it is possible to map the load balancer port 80 to the container instance port 3030 and the load balancer port 4040 to the container instance port 4040. However, it is not possible to map the load balancer port 80 to port 3030 on one container instance and port 4040 on another container instance.

The following figure illustrates this limitation. It also points out a pattern of using a proxy container to represent other containers operating on different ports. Logically, this implementation is similar to the proxy segmentation implementation described earlier.

Figure 5: Classic Load Balancer container-based segmentation

Enhanced container support is one of the major features of the Application Load Balancer. It makes it possible to load-balance across multiple ports on the same EC2 instance. The following figure shows how this capability removes the need to run containers that proxy access to other containers.

To integrate containers, you only need to register the targets in the target group, which the Amazon ECS scheduler handles automatically. The following command configures /cart as illustrated in the preceding figure.

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/cart-instances/ad4a2174e7cc314c" --targets Id=i-84ri3a2c6dcd16b9c,Port=90 Id=i-83fc3a2c6dcd16b9c,Port=90 Id=i-qy342a2c6dcd16b9c,Port=100

A/B Testing

A/B testing is a term used for randomized experiments of two separate website experiences to test and gather data that will be helpful in decision-making. To facilitate this type of testing, you need to redirect a percentage of traffic to the secondary stack.

By using Classic Load Balancers, you can conduct these experiments by grouping the different experiences under separate load balancers. By using Amazon Route 53, you can then leverage a group of weighted resource record sets that point to the CNAMEs provided by the Classic Load Balancer. By modifying the weight of a given record, you can then move a random sampling of customers to a different website experience represented by the instances behind the Classic Load Balancer.
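
As a sketch of that weighted-record approach in Python with boto3 (the hosted zone ID, domain, and load balancer DNS names are placeholders for illustration), shifting the weights gradually moves more users onto the B experience:

import boto3

r53 = boto3.client('route53')

def weighted_record(identifier, weight, dns_name):
    return {
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': '',
            'Type': 'CNAME',
            'SetIdentifier': identifier,  # distinguishes the weighted variants
            'Weight': weight,             # share of traffic, relative to the total
            'TTL': 60,
            'ResourceRecords': [{'Value': dns_name}],
        },
    }

r53.change_resource_record_sets(
    HostedZoneId='Z1EXAMPLE',  # placeholder hosted zone ID
    ChangeBatch={'Changes': [
        weighted_record('experience-a', 90, ''),
        weighted_record('experience-b', 10, ''),
    ]},
)

With these weights, roughly 10 percent of resolutions point to the B stack; changing 90/10 to 50/50 splits traffic evenly.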

The introduction of the Application Load Balancer optimizes A/B testing in a couple of ways. In the following figure, you can see the same grouping of instances that represent the two different website experiences (the A and the B experience shown in the preceding figure). The major differences here are one less load balancer, which reduces costs and configuration, and a new mechanism, rules, to control the switch from the A to the B experience. In this configuration, the logic for redirecting a percentage of traffic must be done at the application level, not the DNS level, by rewriting URLs that point to the B stack instead of the default A stack. The benefit of this approach is that specific users are targeted based on criteria the application is aware of (random users, geographies, users’ history or preferences). There is also no need to rely on DNS for redirecting traffic, so control over who is directed to stack B is much more fine-grained. This mechanism also allows a more immediate transition of users from the A to the B experience, because there is no waiting for cached DNS records to expire.


The launch of the application load balancer provides significant optimization in segmentation techniques and A/B testing. These two use cases represent only a subset, but they illustrate how you can leverage the new features associated with this launch. Feel free to leave your feedback in the comments.

Building a Microsoft BackOffice Server Solution on AWS with AWS CloudFormation

by Bill Jacobi | in Best practices, How-to, New stuff

Last month, AWS released the AWS Enterprise Accelerator: Microsoft Servers on the AWS Cloud along with a deployment guide and CloudFormation template. This blog post will explain how to deploy complex Windows workloads and how AWS CloudFormation solves the problems related to server dependencies.

This AWS Enterprise Accelerator solution deploys the four most requested Microsoft servers (SQL Server, Exchange Server, Lync Server, and SharePoint Server) in a highly available, multi-AZ architecture on AWS. It includes Active Directory Domain Services as the foundation. By following the steps in the solution, you can take advantage of the email, collaboration, communications, and directory features provided by these servers on the AWS IaaS platform.

There are a number of dependencies between the servers in this solution, including:

  • Active Directory
  • Internet access
  • Dependencies within server clusters, such as needing to create the first server instance before adding additional servers to the cluster.
  • Dependencies on AWS infrastructure, such as sharing a common VPC, NAT gateway, Internet gateway, DNS, routes, and so on.

The infrastructure and servers are built in three logical layers. The Master template orchestrates the stack builds with one stack per Microsoft server and manages inter-stack dependencies. Each of the CloudFormation stacks uses PowerShell to stand up the Microsoft servers at the OS level. Before it configures the OS, CloudFormation configures the AWS infrastructure required by each Windows server. Together, CloudFormation and PowerShell create a quick, repeatable deployment pattern for the servers. The solution supports 10,000 users. Its modularity at both the infrastructure and application level enables larger user counts.

MSServers Solution - 6 CloudFormation Stacks

Managing Stack Dependencies

To explain how we enabled the dependencies between the stacks: SQLStack depends on ADStack because SQL Server depends on Active Directory; similarly, SharePointStack depends on SQLStack, both as required by Microsoft. Lync depends on Exchange because both servers must extend the AD schema independently. In Master, these server dependencies are coded in CloudFormation as follows:

"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",

             "Properties": …
"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",
             "Properties": …
       "SharePointStack": {
            "Type": "AWS::CloudFormation::Stack",
            "DependsOn": "SQLStack",
            "Properties": …

The “DependsOn” statements in the stack definitions force the order of stack execution to match the diagram. Lower layers are executed and successfully completed before the upper layers. If you do not use “DependsOn”, CloudFormation executes your stacks in parallel. An example of parallel execution is what happens after ADStack returns SUCCESS. The two higher-level stacks, SQLStack and ExchangeStack, are executed in parallel at the next level (layer 2). SharePoint and Lync are executed in parallel at layer 3. The arrows in the diagram indicate stack dependencies.

Passing Parameters Between Stacks

To see how infrastructure parameters are passed between the stack layers, let’s use an example in which we want to pass the same VPCCIDR to all of the stacks in the solution. VPCCIDR is defined as a parameter in Master as follows:

            "AllowedPattern": "[a-zA-Z0-9]+\..+",
            "Default": "",
            "Description": "CIDR Block for the VPC",
            "Type": "String"

Master defines VPCCIDR and solicits user input for its value; the value is then passed to ADStack through an identically named and typed parameter in the called stack:

            "Description": "CIDR Block for the VPC",
            "Type": "String",
            "Default": "",
            "AllowedPattern": "[a-zA-Z0-9]+\..+"

After Master defines VPCCIDR, ADStack can use “Ref”: “VPCCIDR” in any resource (such as the security group, DomainController1SG) that needs the VPC CIDR range of the first domain controller. Instead of passing commonly-named parameters between stacks, another option is to pass outputs from one stack as inputs to the next. For example, if you want to pass VPCID between two stacks, you could accomplish this as follows. Create an output like VPCID in the first stack:

"Outputs" : {
               "VPCID" : {
                          "Value" : { "Ref" : "VPC" },
                          "Description" : "VPC ID"
               }, …

In the second stack, create a parameter with the same name and type:

"Parameters" : {
               "VPCID" : {
                          "Type" : "AWS::EC2::VPC::Id"
               }, …

When the first template calls the second template, VPCID is passed as an output of the first template to become an input (parameter) to the second.

Managing Dependencies Between Resources Inside a Stack

All of the dependencies so far have been between stacks. Another type of dependency is one between resources within a stack. In the Microsoft servers case, an example of an intra-stack dependency is the need to create the first domain controller, DC1, before creating the second domain controller, DC2.

DC1, like many cluster servers, must be fully created first so that it can replicate common state (domain objects) to DC2.  In the case of the Microsoft servers in this solution, all of the servers require that a single server (such as DC1 or Exch1) must be fully created to define the cluster or farm configuration used on subsequent servers.

Here’s another intra-stack dependency example: The Microsoft servers must fully configure the Microsoft software on the Amazon EC2 instances before those instances can be used. So there is a dependency on software completion within the stack after successful creation of the instance, before the rest of stack execution (such as deploying subsequent servers) can continue. These intra-stack dependencies like “software is fully installed” are managed through the use of wait conditions. Wait conditions are CloudFormation resources just like EC2 instances and allow the “DependsOn” attribute mentioned earlier to manage dependencies inside a stack. For example, to pause the creation of DC2 until DC1 is complete, we configured the following “DependsOn” attribute using a wait condition. See (1) in the following snippet:

"DomainController1": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "NATGateway1",
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "configSets": {
                        "config": [
                    }, …
             "Properties" : …
"DomainController2": {
             "Type": "AWS::EC2::Instance",
[1]          "DependsOn": "DomainController1WaitCondition",
             "Metadata": …,
             "Properties" : …

The WaitCondition (2) relies on a CloudFormation resource called a WaitConditionHandle (3), which receives a SUCCESS or FAILURE signal from the creation of the first domain controller:

"DomainController1WaitCondition": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "DomainController1",
            "Properties": {
                "Handle": {
[2]                    "Ref": "DomainController1WaitHandle"
                "Timeout": "3600"
     "DomainController1WaitHandle": {
[3]            "Type": "AWS::CloudFormation::WaitConditionHandle"

SUCCESS is signaled in (4) by cfn-signal.exe -e 0 during the “finalize” step of DC1, which enables CloudFormation to execute DC2 as an EC2 resource via the wait condition.

                "finalize": {
                       "commands": {
                           "a-signal-success": {
                               "command": {
                                   "Fn::Join": [
[4]                                            "cfn-signal.exe -e 0 "",
                                               "Ref": "DomainController1WaitHandle"


If the timeout had been reached in step (2), this would have automatically signaled a FAILURE and stopped stack execution of ADStack and the Master stack.

As we have seen in this blog post, you can create both nested stacks and nested dependencies and can pass parameters between stacks by passing standard parameters or by passing outputs. Inside a stack, you can configure resources that are dependent on other resources through the use of wait conditions and the cfn-signal infrastructure. The AWS Enterprise Accelerator solution uses both techniques to deploy multiple Microsoft servers in a single VPC for a Microsoft BackOffice solution on AWS.  

In a future blog post, we will illustrate how PowerShell can be used to bootstrap and configure Windows instances with downloaded cmdlets, all integrated into CloudFormation stacks.

AWS OpsWorks Endpoints Available in 11 Regions

by Daniel Huesch

AWS OpsWorks, a service that helps you configure and operate applications of all shapes and sizes using Chef automation, has just added support for the Asia Pacific (Seoul) Region and launched public endpoints in Frankfurt, Ireland, N. California, Oregon, São Paulo, Singapore, Sydney, and Tokyo.

Previously, customers had to manage OpsWorks stacks for these regions using our N. Virginia endpoint. Using an OpsWorks endpoint in the same region as your stack reduces API latencies, improves instance response times, and limits impact from cross-region dependency failures.

A full list of endpoints can be found in AWS Regions and Endpoints.

Introducing the AWS for DevOps Getting Started Guide

by Paul Cornell | in How-to

We are pleased to announce the AWS for DevOps Getting Started Guide is now available. As a companion to our DevOps and AWS website, this new resource teaches you, in a hands-on way, how to use services like AWS CodeCommit, AWS CodeDeploy, and AWS CodePipeline for continuous integration and continuous delivery.

Specifically, you will learn how to:

  1. Use AWS CloudFormation to give users access to required AWS services, resources, and actions.
  2. Create a source code repository in AWS CodeCommit and then use AWS CloudFormation to launch an Amazon EC2 instance that connects to the repository.
  3. Download the source code you will deploy and then push it into the repository.
  4. Use AWS CloudFormation to create the deployment target (an Amazon EC2 instance) and AWS resources that are compatible with AWS CodeDeploy, AWS Elastic Beanstalk, or AWS OpsWorks.
  5. Use AWS CloudFormation to create and run a pipeline in AWS CodePipeline to automate continuous delivery of the repository’s source code to the deployment target.
  6. Verify the deployment’s results on the deployment target.
  7. Make a change to the source code and then push it into the repository, triggering an automatic redeployment to the deployment target.
  8. Verify the deployed change on the deployment target.
  9. Use AWS CloudFormation to clean up the resources you created for this walkthrough.

You do not need to know anything about AWS to try this walkthrough. If you don’t have an AWS account, we’ll show you how to create one. And, if you’re new to AWS, our AWS Free Tier lets you experiment with few, if any, charges to your AWS account.

Perhaps your organization is considering a move to DevOps practices. Or maybe your organization is practicing DevOps now. In either case, if you’re not yet using AWS services, this resource can help you down this path. Please let us know what you think by using the Feedback button on any of the walkthrough’s pages.

Get started now!

Auto Scaling AWS OpsWorks Instances

by Daniel Huesch

This post will show you how to integrate Auto Scaling groups with AWS OpsWorks so you can leverage the native scaling capabilities of Amazon EC2 and the OpsWorks Chef configuration management solution.

Auto Scaling ensures you have the correct number of EC2 instances available to handle your application load.  You create collections of EC2 instances (called Auto Scaling groups), specify desired instance ranges for them, and create scaling policies that define when instances are provisioned or removed from the group.

AWS OpsWorks helps configure and manage your applications.  You create groups of EC2 instances (called stacks and layers) and associate to them configuration such as volumes to mount or Chef recipes to execute in response to lifecycle events (for example, startup/shutdown).  The service streamlines the instance provisioning and management process, making it easy to launch uniform fleets using Chef and EC2.

The following steps will show how you can use an Auto Scaling group to manage EC2 instances in an OpsWorks stack.

Integrating Auto Scaling with OpsWorks

This example will require you to create the following resources:

Auto Scaling group: This group is responsible for EC2 instance provisioning and release.

Launch configuration: A configuration template used by the Auto Scaling group to launch instances.

OpsWorks stack: Instances provisioned by the Auto Scaling group will be registered with this stack.

IAM instance profile: This profile grants permission to your instances to register with OpsWorks.

Lambda function: This function handles deregistration of instances from your OpsWorks stack.

SNS topic: This topic triggers your deregistration Lambda function after Auto Scaling terminates an instance.

Step 1: Create an IAM instance profile

When an EC2 instance starts, it must make an API call to register itself with OpsWorks.  By assigning an IAM instance profile to the instance, you can grant it permission to make OpsWorks calls.

Open the IAM console, choose Roles, and then choose Create New Role. Type a name for the role, and then choose Next Step. Choose the Amazon EC2 Role, and then select the check box next to the AWSOpsWorksInstanceRegistration policy. Finally, choose Next Step, and then choose Create Role. As the name suggests, the AWSOpsWorksInstanceRegistration policy only allows the API calls required to register an instance. Because the user data script in this demo makes two more OpsWorks calls (wait instance-registered and assign-instance), add the following inline policy to the new role.

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": [

Step 2: Create an OpsWorks stack

Open the AWS OpsWorks console. Choose the Add Stack button from the dashboard, and then choose Sample Stack. Make sure the Linux OS option is selected, and then choose Create Stack. After the stack has been created, choose Explore the sample stack. Choose the layer named Node.js App Server. You will need the IDs of this sample stack and layer in a later step. You can extract both from the URL of the layer page, which ends in this format: YOUR-OPSWORKS-STACK-ID/layers/YOUR-OPSWORKS-LAYER-ID.
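
If you prefer not to copy the IDs from the URL, a minimal boto3 sketch (assuming the sample stack created above and the us-east-1 endpoint used throughout this post) can list them:

import boto3

# Print the stack and layer IDs for every OpsWorks stack in the account.
opsworks = boto3.client('opsworks', region_name='us-east-1')
for stack in opsworks.describe_stacks()['Stacks']:
    for layer in opsworks.describe_layers(StackId=stack['StackId'])['Layers']:
        print(stack['Name'], stack['StackId'], layer['Name'], layer['LayerId'])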

Step 3: Create a Lambda function

This function is responsible for deregistering an instance from your OpsWorks stack.  It will be invoked whenever an EC2 instance in the Auto Scaling group is terminated.

Open the AWS Lambda console and choose the option to create a Lambda function.  If you are prompted to choose a blueprint, choose Skip.  You can give the function any name you like, but be sure to choose the Python 2.7 option from the Runtime drop-down list.

Next, paste the following code into the Lambda Function Code text entry box:

import json
import boto3

def lambda_handler(event, context):
    message = json.loads(event['Records'][0]['Sns']['Message'])
    if (message['Event'] == 'autoscaling:EC2_INSTANCE_TERMINATE'):
        ec2_instance_id = message['EC2InstanceId']
        ec2 = boto3.client('ec2')
        for tag in ec2.describe_instances(InstanceIds=[ec2_instance_id])['Reservations'][0]['Instances'][0]['Tags']:
            if (tag['Key'] == 'opsworks_stack_id'):
                opsworks_stack_id = tag['Value']
                opsworks = boto3.client('opsworks', 'us-east-1')
                for instance in opsworks.describe_instances(StackId=opsworks_stack_id)['Instances']:
                    if ('Ec2InstanceId' in instance):
                        if (instance['Ec2InstanceId'] == ec2_instance_id):
                            print("Deregistering OpsWorks instance " + instance['InstanceId'])
    return message

Then, from the Role drop-down list, choose Basic Execution Role. On the page that appears, expand View Policy Document, and then choose Edit.

Next, paste the following JSON into the policy text box:

  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Action": [
      "Resource": [
      "Effect": "Allow",
      "Action": [
       "Resource": "arn:aws:logs:*:*:*"

Choose Allow.  On the Lambda creation page, change the Timeout field to 0 minutes and 15 seconds, and choose Next.  Finally, choose Create Function.

Step 4: Create an SNS topic

The SNS topic you create in this step will be responsible for triggering an execution of the Lambda function you created in step 3.  It is the glue that ties Auto Scaling instance terminations to corresponding OpsWorks instance deregistrations.

Open the Amazon SNS console.  Choose Topics, and then choose Create New Topic.  Type topic and display names, and then choose Create Topic.  Select the check box next to the topic you just created, and from Actions, choose Subscribe to Topic.  From the Protocol drop-down list, choose AWS Lambda.  From the Endpoint drop-down list, choose the Lambda function you created in step 3.  Finally, choose Create Subscription.
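
The console creates the required invoke permission for you when you subscribe a Lambda function. If you script this wiring instead, you need both the permission and the subscription. Here is a hedged sketch with boto3; the topic and function ARNs are placeholders:

import boto3

topic_arn = 'arn:aws:sns:us-east-1:<ACCOUNT_ID>:opsworks-deregister'
function_arn = 'arn:aws:lambda:us-east-1:<ACCOUNT_ID>:function:deregister-opsworks-instance'

# Allow SNS to invoke the deregistration function.
boto3.client('lambda').add_permission(
    FunctionName=function_arn,
    StatementId='sns-invoke',
    Action='lambda:InvokeFunction',
    Principal='',
    SourceArn=topic_arn,
)

# Deliver topic notifications to the function.
boto3.client('sns').subscribe(
    TopicArn=topic_arn,
    Protocol='lambda',
    Endpoint=function_arn,
)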

Step 5: Create a launch configuration

This configuration contains two important settings: security group and user data. Because you’re deploying a Node.js app that will listen on port 80, you must use a security group that has this port open. Then there’s the user data script that’s executed when an instance starts. This is where we make the call to register the instance with OpsWorks.



Open the Amazon EC2 console and create a launch configuration. Use the latest release of Amazon Linux, which should be the first operating system in the list. On the details page, under IAM role, choose the instance profile you created in step 1. Expand the Advanced Details area and paste the following code in the User data field. Because this is a template, you will have to replace YOUR-OPSWORKS-STACK-ID and YOUR-OPSWORKS-LAYER-ID with the OpsWorks stack and layer IDs you copied in step 2.

#!/bin/bash
sed -i'' -e 's/.*requiretty.*//' /etc/sudoers
pip install --upgrade awscli
INSTANCE_ID=$(/usr/bin/aws opsworks register --use-instance-profile --infrastructure-class ec2 --region us-east-1 --stack-id YOUR-OPSWORKS-STACK-ID --override-hostname $(tr -cd 'a-z' < /dev/urandom |head -c8) --local 2>&1 |grep -o 'Instance ID: .*' |cut -d' ' -f3)
/usr/bin/aws opsworks wait instance-registered --region us-east-1 --instance-id $INSTANCE_ID
/usr/bin/aws opsworks assign-instance --region us-east-1 --instance-id $INSTANCE_ID --layer-ids YOUR-OPSWORKS-LAYER-ID

Step 6. Create an Auto Scaling group

On the last page of the Launch Configuration wizard, choose Create an Auto Scaling group using this launch configuration. In the notification settings, add a notification to your SNS topic for the terminate event. In the tag settings, add a tag with key opsworks_stack_id. Use the OpsWorks stack ID you entered in the User data field as the value. Make sure the Tag New Instances check box is selected.


Because the default desired size for your Auto Scaling group is 1, a single instance will be started in EC2 immediately.  You can confirm this through the EC2 console in a few seconds:

A few minutes later, the instance will appear in the OpsWorks console:

To confirm your Auto Scaling group instances will be deregistered from OpsWorks on termination, change the Desired value from 1 to 0.  The instance will disappear from the EC2 console. Within minutes, it will disappear from the OpsWorks console, too.

Congratulations! You’ve configured an Auto Scaling group to seamlessly integrate with AWS OpsWorks. Please let us know if this helps you scale instances in OpsWorks or if you have tips of your own.

How to Centrally Manage AWS Config Rules across Multiple AWS Accounts

by Chayan Biswas | in How-to

AWS Config Rules allow you to codify policies and best practices for your organization and evaluate configuration changes to AWS resources against these policies. If you manage multiple AWS accounts, you might want to centrally govern and define these policies for all of the AWS accounts in your organization. With appropriate authorization, you can create a Config rule in one account that uses an AWS Lambda function owned by another account. Such a setup allows you to maintain a single copy of the Lambda function. You do not have to duplicate source code across accounts.

In this post, I will show you how to create Config rules with appropriate cross-account Lambda function authorization. I’ll use a central account that I refer to as the admin-account to create a Lambda function. All of the other accounts then point to the Lambda function owned by the admin-account to create a Config rule. Let’s call one of these accounts the managed-account. This setup allows you to maintain tight control over the source code and eliminates the need to create a copy of Lambda functions in all of the accounts. You no longer have to deploy updates to the Lambda function in these individual accounts.

We will complete these steps for the setup:

  1. Create a Lambda function for a cross-account Config rule in the admin-account.
  2. Authorize Config Rules in the managed-account to invoke a Lambda function in the admin-account (a minimal sketch of this call follows the list).
  3. Create an IAM role in the managed-account to pass to the Lambda function.
  4. Add a policy and trust relationship to the IAM role in the managed-account.
  5. Pass the IAM role from the managed-account to the Lambda function.
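
Step 2 ultimately comes down to a single Lambda resource policy call made in the admin-account. As a minimal sketch (the function name and statement ID are placeholders), it grants AWS Config in the managed-account permission to invoke the central function:

import boto3

boto3.client('lambda', region_name='us-east-1').add_permission(
    FunctionName='cross-account-config-rule',  # placeholder function name
    StatementId='allow-managed-account',       # placeholder statement ID
    Action='lambda:InvokeFunction',
    Principal='',      # the AWS Config service principal
    SourceAccount='<MANAGED_ACCOUNT_ID>',      # limits access to that account
)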

Step 1: Create a Lambda Function for a Cross-Account Config Rule

Let’s first create a Lambda function in the admin-account. In this example, the Lambda function checks if log file validation is enabled for all of the AWS CloudTrail trails. Enabling log file validation helps you determine whether a log file was modified or deleted after CloudTrail delivered it. For more information about CloudTrail log file validation, see Validating CloudTrail Log File Integrity.

Note: This rule is an example only. You do not need to create this specific rule to set up cross-account Config rules. You can apply the concept illustrated here to any new or existing Config rule.

To get started, in the AWS Lambda console, choose the config-rule-change-triggered blueprint.


Next, modify the evaluateCompliance function and the handler invoked by Lambda. Leave the rest of the blueprint code as is.

function evaluateCompliance(configurationItem, ruleParameters) {
    checkDefined(configurationItem, 'configurationItem');
    checkDefined(configurationItem.configuration, 'configurationItem.configuration');
    checkDefined(ruleParameters, 'ruleParameters');
    // If the resource is not a CloudTrail trail, the rule doesn't apply.
    if ('AWS::CloudTrail::Trail' !== configurationItem.resourceType) {
        return 'NOT_APPLICABLE';
    // If log file validation is enabled, the trail is compliant.
    } else if (configurationItem.configuration.logFileValidationEnabled) {
        return 'COMPLIANT';
    } else {
        return 'NON_COMPLIANT';
    }
}
In this code snippet, we first ensure that the evaluation is being performed for a trail. Then we check whether the trail's logFileValidationEnabled property is set to true. If log file validation is enabled, the trail is marked compliant. Otherwise, the trail is marked noncompliant.

Because this Lambda function is created for reporting evaluation results in the managed-account, the Lambda function will need to be able to call the PutEvaluations Config API (and other APIs, if needed) on the managed-account. We’ll pass the ARN of an IAM role in the managed-account to this Lambda function as a rule parameter. We will need to add a few lines of code to the Lambda function’s handler in order to assume the IAM role passed on by the Config rule in the managed-account:

// The blueprint's require of the AWS SDK at the top of the file stays as is:
const aws = require('aws-sdk');

exports.handler = (event, context, callback) => {
    event = checkDefined(event, 'event');
    const invokingEvent = JSON.parse(event.invokingEvent);
    const ruleParameters = JSON.parse(event.ruleParameters);
    const configurationItem = checkDefined(invokingEvent.configurationItem, 'invokingEvent.configurationItem');
    let compliance = 'NOT_APPLICABLE';
    const putEvaluationsRequest = {};
    if (isApplicable(invokingEvent.configurationItem, event)) {
        // Invoke the compliance checking function.
        compliance = evaluateCompliance(invokingEvent.configurationItem, ruleParameters);
    }
    // Put together the request that reports the evaluation status.
    // Note that we're choosing to report this evaluation against the resource that was passed in.
    // You can choose to report this against any other resource type supported by Config.
    putEvaluationsRequest.Evaluations = [{
        ComplianceResourceType: configurationItem.resourceType,
        ComplianceResourceId: configurationItem.resourceId,
        ComplianceType: compliance,
        OrderingTimestamp: configurationItem.configurationItemCaptureTime
    }];
    putEvaluationsRequest.ResultToken = event.resultToken;
    // Assume the role passed from the managed-account.
    aws.config.credentials = new aws.TemporaryCredentials({RoleArn: ruleParameters.executionRole});
    const config = new aws.ConfigService({});
    // Invoke the Config API to report the result of the evaluation.
    config.putEvaluations(putEvaluationsRequest, callback);
};
In this code snippet, the ARN of the IAM role in the managed-account is passed to this Lambda function as a rule parameter called executionRole. The two lines just before the putEvaluations call assume that role in the managed-account. Finally, we select the appropriate execution role (in the admin-account) and save the function.

Make a note of the IAM role in the admin-account assigned to the Lambda function and the ARN of the Lambda function. We'll need to refer to these later. You can find the ARN of the Lambda function in the upper-right corner of the AWS Lambda console.
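If you prefer the CLI, both values can be retrieved in one call (using the function name from the earlier example):

$ aws lambda get-function \
  --function-name cloudtrailLogValidationEnabled \
  --query "Configuration.[FunctionArn, Role]"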

Step 2: Authorize Config Rules in Other Accounts to Invoke a Lambda Function in Your Account

Because the Lambda function we just created will be invoked by the managed-account, we need to add a resource policy that allows the managed-account to perform this action. Resource policies for Lambda functions can be applied only through the AWS CLI or SDKs.

Here’s a CLI command you can use to add the resource policy for the managed-account:

$ aws lambda add-permission \
  --function-name cloudtrailLogValidationEnabled \
  --region <region> \
  --statement-id <id> \
  --action "lambda:InvokeFunction" \
  --principal config.amazonaws.com \
  --source-account <managed-account>

This statement allows only the principal (AWS Config) for the specified source-account to perform the InvokeFunction action on AWS Lambda functions. If more than one account will invoke the Lambda function, each account must be authorized.
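To confirm that the permission was added, you can retrieve the function's resource policy:

$ aws lambda get-policy \
  --function-name cloudtrailLogValidationEnabled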

Step 3: Create an IAM Role to Pass to the Lambda Function

Next, we need to create an IAM role in the managed-account that can be assumed by the Lambda function. If you want to use an existing role, you can skip to step 4.

Sign in to the AWS IAM console of one of the managed-accounts. In the left navigation, choose Roles, and then choose Create New Role.

On the Set Role Name page, type a name for the role:

Because we are creating this role for cross-account access between the AWS accounts we own, on the Select Role Type page, select Role for Cross-Account Access:

After we choose this option, we must type the account number of the account to which we want to allow access. In our case, we will type the account number of the admin-account.

The wizard then offers to attach policies to the role; we'll skip that for now. Choose Next Step to review and create the role.
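If you'd rather script the role creation, here's a sketch; the role name matches the one used later in this post, and trust-policy.json is a local file containing the trust policy shown in step 4:

$ aws iam create-role \
  --role-name config-rule-admin \
  --assume-role-policy-document file://trust-policy.json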

Step 4: Add Policy and Trust Relationships to the IAM Role

From the IAM console of the managed-account, choose the IAM role that the Lambda function will assume, and then click it to modify the role:

We now see options to modify permissions and trust relationships. This IAM role must have, at minimum, permission to call the PutEvaluations Config API in the managed-account. You can attach an existing managed policy or create an inline policy to grant permission to the role:

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": [

This policy allows only the PutEvaluations action on the AWS Config service. You might want to extend the role's permissions to other actions, depending on the evaluation logic you implement in the Lambda function.

We also need to ensure that the trust relationship is set up correctly. If you followed the steps in this post to create the role, you will see the admin-account has already been added as a trusted entity. This trust policy allows any entity in the admin-account to assume the role.

You can edit the trust relationship to restrict permission only to the role in the admin-account:

  "Version": "2012-10-17",
  "Statement": [
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<admin-account>:role/lambda_config_role"
      "Action": "sts:AssumeRole"

Here, lambda_config_role is the role we assigned to the Lambda function we created in the admin-account.
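If you script this step instead, the same inline policy can be attached from the CLI; the policy name is illustrative, and put-evaluations-policy.json is a local file holding the inline policy shown above:

$ aws iam put-role-policy \
  --role-name config-rule-admin \
  --policy-name AllowConfigPutEvaluations \
  --policy-document file://put-evaluations-policy.json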

Step 5: Pass the IAM Role to the Lambda Function

The last step involves creating a custom rule in the managed-account. In the AWS Config console of the managed-account, follow the steps to create a custom Config rule. On the rule creation page, we will provide a name and description and paste the ARN of the Lambda function we created in the admin-account:

Because we want this rule to be triggered upon changes to CloudTrail trails, for Trigger type, select Configuration changes. For Scope of changes, select Resources. For Resources, add CloudTrail:Trail. Finally, add the executionRole rule parameter and paste the ARN of the IAM role: arn:aws:iam::<managed-account>:role/config-rule-admin.
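These console settings correspond to a single PutConfigRule API call; here's a hedged sketch, with an illustrative rule name:

$ aws configservice put-config-rule --config-rule file://rule.json

where rule.json contains:

{
  "ConfigRuleName": "cloudtrail-log-validation-enabled",
  "Scope": { "ComplianceResourceTypes": [ "AWS::CloudTrail::Trail" ] },
  "Source": {
    "Owner": "CUSTOM_LAMBDA",
    "SourceIdentifier": "arn:aws:lambda:<region>:<admin-account>:function:cloudtrailLogValidationEnabled",
    "SourceDetails": [ {
      "EventSource": "aws.config",
      "MessageType": "ConfigurationItemChangeNotification"
    } ]
  },
  "InputParameters": "{\"executionRole\": \"arn:aws:iam::<managed-account>:role/config-rule-admin\"}"
}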


Save your changes and then create the rule. After the rule is evaluated, inspect the results:

In this example, there are two CloudTrail trails, one of which is noncompliant. Upon further inspection, we find that the noncompliant trail does not have log file validation enabled:

After we enable log file validation, the rule will be evaluated again and the trail will be marked compliant.
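If you want to remediate and re-check from the command line, something like this should work; the trail name is a placeholder, and the rule name is the illustrative one from the sketch above:

$ aws cloudtrail update-trail \
  --name <trail-name> \
  --enable-log-file-validation

$ aws configservice start-config-rules-evaluation \
  --config-rule-names cloudtrail-log-validation-enabled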

If you manage multiple AWS accounts, you may want an easy way to create the Config rule and IAM role in all of the accounts in your organization. You can do this with the AWS CloudFormation template I have provided here. Before using the template, replace the admin-account placeholder with the account number of the AWS account you plan to use for centrally managing the Lambda function. Once the Config rule and IAM role are set up in all of the managed accounts, you can simply modify the Lambda function in the admin-account to add further checks.


In this blog post, I showed how you can create AWS Config Rules that use Lambda functions with cross-account authorization. This setup allows you to centrally manage the Config rules and associated Lambda functions and retain control over the source code. As an alternative to this approach, you can use a CloudFormation template to create and update Config rules and associated Lambda functions in the managed accounts. The cross-account authorization we set up for the Lambda function in this blog post can also be extended to perform actions beyond reporting evaluation results. To do this, you need to add permission for the relevant APIs in the managed accounts.

We welcome your feedback! Leave comments in the section below or contact us on the AWS Config forum.

AWS CodeDeploy Deployments with HashiCorp Consul

by George Huang | on | in How-to, New stuff | | Comments

Learn how to use AWS CodeDeploy and HashiCorp Consul together for your application deployments. 

AWS CodeDeploy automates code deployments to Amazon Elastic Compute Cloud (Amazon EC2) and on-premises servers. HashiCorp Consul is an open-source tool providing service discovery and orchestration for modern applications. 

Learn how to get started by visiting the guest post on the AWS Partner Network Blog. You can see a full list of CodeDeploy product integrations here.

Color-Code Your AWS OpsWorks Stacks for Better Instance and Resource Tracking

by Daniel Huesch | on | | Comments

AWS OpsWorks provides options for organizing your Amazon EC2 instances and other AWS resources. There are stacks to group related resources and isolate them from each other; layers to group instances with similar roles; and apps to organize software deployments. Each has a name to help you keep track of them.

Because it can be difficult to see if the instance you’re working on belongs to the right stack (for example, an integration or production stack) just by looking at the host name, OpsWorks provides a simple, user-defined attribute that you can use to color-code your stacks. For example, some customers use red for their production stacks. Others apply different colors to correspond to the regions in which the stacks are operating.

A stack color, though, is a visual indicator that helps only while you're working in the console. When you need to sign in to an instance (for auditing, for example, or to check log files or restart a process), that cue is gone, and it can be difficult to detect immediately that you have signed in to an instance on the wrong stack.

When you add a small, custom recipe to the setup lifecycle event, however, you can reuse the stack color for the shell prompt. Most modern terminal emulators support a 256-color mode. Changing the color of the prompt is simple.

The following code can be used to change the color of the shell prompt (the profile script and template file names shown are illustrative):


stack = search("aws_opsworks_stack").first
match = stack["color"].match(/rgb\((\d+), (\d+), (\d+)\)/)
# Map each 0-255 RGB component onto the 0-5 range of the xterm 256-color cube.
r, g, b = match[1..3].map { |i| (5 * i.to_f / 255).round }

# The file names below are illustrative; the original names were not preserved.
template "/etc/profile.d/prompt_color.sh" do
  source "prompt_color.sh.erb"
  variables(:color => 16 + b + g * 6 + 36 * r)
end


if [ -n "$PS1" ]; then
  PS1="\33[38;5;<%= @color %>m[\u@\h \W]\$\33[0m "
fi

You can use this with Chef 12, this custom cookbook, the latest Amazon Linux AMI, and Bash. You may have to adapt the cookbook for other operating systems and shells.

The stack color is not the only information you can include in the prompt. You can also add the stack and layer names of your instances, as in the sketch below.
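Here's a minimal sketch, assuming the aws_opsworks_instance and aws_opsworks_layer search indexes that OpsWorks exposes to Chef 12 recipes (file names, again, are illustrative):

stack = search("aws_opsworks_stack").first
# The instance currently running this recipe is flagged with "self".
instance = search("aws_opsworks_instance").find { |i| i["self"] }
layer = search("aws_opsworks_layer").find { |l| instance["layer_ids"].include?(l["layer_id"]) }

# Render a profile script that prefixes the prompt with "stack/layer".
template "/etc/profile.d/prompt_names.sh" do
  source "prompt_names.sh.erb"
  variables(:prefix => "#{stack['name']}/#{layer['name']}")
end

The corresponding template would prepend <%= @prefix %> to PS1, alongside the color escape shown earlier.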

We invite you to try color-coding your stacks. If you have questions or other feedback, let us know in the comments.