

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity


By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.
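To make the shape of the finished pipeline concrete, here is a rough sketch of its stage layout in CodePipeline’s JSON pipeline structure. The names, bucket, and role ARN are placeholders, and the custom TeamCity build action’s actionTypeId and configuration are illustrative assumptions; the CloudFormation templates in this post create and wire up the real resources:

{
    "pipeline": {
        "name": "TeamCityPipeline",
        "roleArn": "arn:aws:iam::<account-id>:role/<codepipeline-service-role>",
        "artifactStore": { "type": "S3", "location": "<artifact-bucket>" },
        "stages": [
            {
                "name": "Source",
                "actions": [{
                    "name": "SourceAction",
                    "actionTypeId": { "category": "Source", "owner": "AWS", "provider": "S3", "version": "1" },
                    "configuration": { "S3Bucket": "<source-bucket>", "S3ObjectKey": "app.zip" },
                    "outputArtifacts": [{ "name": "SourceOutput" }]
                }]
            },
            {
                "name": "Build",
                "actions": [{
                    "name": "TeamCityBuild",
                    "actionTypeId": { "category": "Build", "owner": "Custom", "provider": "TeamCity", "version": "1" },
                    "configuration": { "ActionID": "<from-your-TeamCity-plugin-setup>" },
                    "inputArtifacts": [{ "name": "SourceOutput" }],
                    "outputArtifacts": [{ "name": "BuildOutput" }]
                }]
            },
            {
                "name": "Deploy",
                "actions": [{
                    "name": "CodeDeployAction",
                    "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
                    "configuration": { "ApplicationName": "<application>", "DeploymentGroupName": "<deployment-group>" },
                    "inputArtifacts": [{ "name": "BuildOutput" }]
                }]
            }
        ],
        "version": 1
    }
}

The TeamCity plugin integrates as a custom action, which is why the build action’s owner is Custom rather than AWS; the starter kit later in this digest creates a similar custom action for Jenkins.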

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!


Building a Microsoft BackOffice Server Solution on AWS with AWS CloudFormation

by Bill Jacobi

Last month, AWS released the AWS Enterprise Accelerator: Microsoft Servers on the AWS Cloud along with a deployment guide and CloudFormation template. This blog post will explain how to deploy complex Windows workloads and how AWS CloudFormation solves the problems related to server dependencies.

This AWS Enterprise Accelerator solution deploys the four most requested Microsoft servers (SQL Server, Exchange Server, Lync Server, and SharePoint Server) in a highly available, multi-AZ architecture on AWS. It includes Active Directory Domain Services as the foundation. By following the steps in the solution, you can take advantage of the email, collaboration, communications, and directory features provided by these servers on the AWS IaaS platform.

There are a number of dependencies between the servers in this solution, including:

  • Active Directory
  • Internet access
  • Dependencies within server clusters, such as needing to create the first server instance before adding additional servers to the cluster.
  • Dependencies on AWS infrastructure, such as sharing a common VPC, NAT gateway, Internet gateway, DNS, routes, and so on.

The infrastructure and servers are built in three logical layers. The Master template orchestrates the stack builds with one stack per Microsoft server and manages inter-stack dependencies. Each of the CloudFormation stacks uses PowerShell to stand up the Microsoft servers at the OS level. Before it configures the OS, CloudFormation configures the AWS infrastructure required by each Windows server. Together, CloudFormation and PowerShell create a quick, repeatable deployment pattern for the servers. The solution supports 10,000 users. Its modularity at both the infrastructure and application level enables larger user counts.

MSServers Solution - 6 CloudFormation Stacks

Managing Stack Dependencies

To explain how we enabled the dependencies between the stacks: SQLStack depends on ADStack because SQL Server depends on Active Directory, and, similarly, SharePointStack depends on SQLStack, both as required by Microsoft. Lync is dependent on Exchange because both servers must extend the AD schema independently. In Master, these server dependencies are coded in CloudFormation as follows:

"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",

             "Properties": …
       }
and
"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",
             "Properties": …
       },
       "SharePointStack": {
            "Type": "AWS::CloudFormation::Stack",
            "DependsOn": "SQLStack",
            "Properties": …
       }

The “DependsOn” statements in the stack definitions force the order of stack execution to match the diagram. Lower layers are executed and successfully completed before the upper layers. If you do not use “DependsOn”, CloudFormation will execute your stacks in parallel. An example of parallel execution is what happens after ADStack returns SUCCESS. The two higher-level stacks, SQLStack and ExchangeStack, are executed in parallel at the next level (layer 2). SharePoint and Lync are executed in parallel at layer 3. The arrows in the diagram indicate stack dependencies.

Passing Parameters Between Stacks

If you have concerns about how to pass infrastructure parameters between the stack layers, let’s use an example in which we want to pass the same VPCCIDR to all of the stacks in the solution. VPCCIDR is defined as a parameter in Master as follows:

"VPCCIDR": {
            "AllowedPattern": "[a-zA-Z0-9]+\..+",
            "Default": "10.0.0.0/16",
            "Description": "CIDR Block for the VPC",
            "Type": "String"
           }

Because VPCCIDR is defined in Master, which solicits user input for this value, that value can then be passed to ADStack through an identically named and typed parameter in the stack being called:

"VPCCIDR": {
            "Description": "CIDR Block for the VPC",
            "Type": "String",
            "Default": "10.0.0.0/16",
            "AllowedPattern": "[a-zA-Z0-9]+\..+"
           }

After Master defines VPCCIDR, ADStack can use “Ref”: “VPCCIDR” in any resource (such as the security group, DomainController1SG) that needs the VPC CIDR range of the first domain controller. Instead of passing commonly named parameters between stacks, another option is to pass outputs from one stack as inputs to the next. For example, if you want to pass VPCID between two stacks, you could accomplish this as follows. Create an output like VPCID in the first stack:

"Outputs" : {
               "VPCID" : {
                          "Value" : { "Ref" : "VPC" },
                          "Description" : "VPC ID"
               }, …
}

In the second stack, create a parameter with the same name and type:

"Parameters" : {
               "VPCID" : {
                          "Type" : "AWS::EC2::VPC::Id"
               }, …
}

When the first template calls the second template, VPCID is passed as an output of the first template to become an input (parameter) to the second.
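To complete the picture, here is a sketch of how the Master template might wire the two stacks together; the stack names and TemplateURL values are illustrative. Fn::GetAtt reads the Outputs.VPCID attribute of the first nested stack and supplies it as the VPCID parameter of the second:

"FirstStack": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/<your-bucket>/first-stack.template"
    }
},
"SecondStack": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/<your-bucket>/second-stack.template",
        "Parameters": {
            "VPCID": { "Fn::GetAtt": [ "FirstStack", "Outputs.VPCID" ] }
        }
    }
}

Because of the Fn::GetAtt reference, CloudFormation creates FirstStack before SecondStack even without an explicit “DependsOn” attribute.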

Managing Dependencies Between Resources Inside a Stack

All of the dependencies so far have been between stacks. Another type of dependency is one between resources within a stack. In the Microsoft servers case, an example of an intra-stack dependency is the need to create the first domain controller, DC1, before creating the second domain controller, DC2.

DC1, like many cluster servers, must be fully created first so that it can replicate common state (domain objects) to DC2.  In the case of the Microsoft servers in this solution, all of the servers require that a single server (such as DC1 or Exch1) must be fully created to define the cluster or farm configuration used on subsequent servers.

Here’s another intra-stack dependency example: the Microsoft software must be fully configured on the Amazon EC2 instances before those instances can be used. So there is a dependency on software completion within the stack: after the instance is successfully created, the rest of stack execution (such as deploying subsequent servers) must wait until the software is ready. Intra-stack dependencies like “software is fully installed” are managed through wait conditions. Wait conditions are CloudFormation resources just like EC2 instances, and they allow the “DependsOn” attribute mentioned earlier to manage dependencies inside a stack. For example, to pause the creation of DC2 until DC1 is complete, we configured the following “DependsOn” attribute using a wait condition. See (1) in the following listing:

"DomainController1": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "NATGateway1",
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "configSets": {
                        "config": [
                            "setup",
                            "rename",
                            "installADDS",
                            "configureSites",
                            "installADCS",
                            "finalize"
                        ]
                    }, …
             },
             "Properties" : …
},
"DomainController2": {
             "Type": "AWS::EC2::Instance",
[1]          "DependsOn": "DomainController1WaitCondition",
             "Metadata": …,
             "Properties" : …
},

The WaitCondition (2) relies on a CloudFormation resource called a WaitConditionHandle (3), which receives a SUCCESS or FAILURE signal from the creation of the first domain controller:

"DomainController1WaitCondition": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "DomainController1",
            "Properties": {
                "Handle": {
[2]                    "Ref": "DomainController1WaitHandle"
                },
                "Timeout": "3600"
            }
     },
     "DomainController1WaitHandle": {
[3]            "Type": "AWS::CloudFormation::WaitConditionHandle"
     }

SUCCESS is signaled in (4) by cfn-signal.exe -e 0 during the “finalize” step of DC1, which enables CloudFormation to execute DC2 as an EC2 resource via the wait condition.

                "finalize": {
                       "commands": {
                           "a-signal-success": {
                               "command": {
                                   "Fn::Join": [
                                       "",
                                       [
[4]                                            "cfn-signal.exe -e 0 "",
                                           {
                                               "Ref": "DomainController1WaitHandle"

                                            },
                                           """
                                       ]
                                   ]
                               }
                           }
                       }
                   }
               }

If the timeout had been reached in step (2), this would have automatically signaled a FAILURE and stopped stack execution of ADStack and the Master stack.

As we have seen in this blog post, you can create both nested stacks and nested dependencies and can pass parameters between stacks by passing standard parameters or by passing outputs. Inside a stack, you can configure resources that are dependent on other resources through the use of wait conditions and the cfn-signal infrastructure. The AWS Enterprise Accelerator solution uses both techniques to deploy multiple Microsoft servers in a single VPC for a Microsoft BackOffice solution on AWS.  

In a future blog post, we will illustrate how PowerShell can be used to bootstrap and configure Windows instances with downloaded cmdlets, all integrated into CloudFormation stacks.

Explore Continuous Delivery in AWS with the Pipeline Starter Kit


By Chris Munns, David Nasi, Shankar Sivadasan, and Susan Ferrell

Continuous delivery, automating your software delivery process from code to build to deployment, is a powerful development technique and the ultimate goal for many development teams. AWS provides services, including AWS CodePipeline (a continuous delivery service) and AWS CodeDeploy (an automated application deployment service), to help you reach this goal. With AWS CodePipeline, any time a change to the code occurs, that change runs automatically through the delivery process you’ve defined. If you’ve ever wanted to try these services but didn’t want to set up the resources yourself, we’ve created a starter kit you can use. This starter kit sets up a complete pipeline that builds and deploys a sample application in just a few steps. The starter kit includes an AWS CloudFormation template to create the pipeline and all of its resources in the US East (N. Virginia) Region. Specifically, the CloudFormation template creates:

  • An Amazon Virtual Private Cloud (VPC), including all the necessary routing tables and routes, an Internet gateway, and network ACLs for EC2 instances to be launched into.
  • An Amazon EC2 instance that hosts a Jenkins server (also installed and configured for you).
  • Two AWS CodeDeploy applications, each of which contains a deployment group that deploys to a single Amazon EC2 instance.
  • All IAM service and instance roles required to run the resources.
  • A pipeline in AWS CodePipeline that builds the sample application and deploys it. This includes creating an Amazon S3 bucket to use as the artifact store for this pipeline.

What you’ll need:

  • An AWS account. (Sign up for one here if you don’t have one already.)
  • An Amazon EC2 key pair in the US East (N. Virginia) Region. (Learn how to create one here if you don’t have one.)
  • Administrator-level permissions in IAM, AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3. (Not sure how to set permissions in these services? See the sample policy in Troubleshooting Problems with the Starter Kit.)
  • Optionally, a GitHub account so you can fork the repository for the sample application. Alternatively, if you do not want to create a GitHub account, you can use the Amazon S3 bucket configured in the starter kit template, but you will not be able to edit the application or see your changes automatically run through the pipeline.

That’s it! The starter kit will create everything else for you.

Note: The resources created in the starter kit exceed what’s included in the AWS Free Tier, so use of the kit will result in charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

Let’s get started.

Decide how you want to source the provided sample application. AWS CodePipeline currently allows you to use either an Amazon S3 bucket or a GitHub repository as the source location for your application. The CloudFormation template allows you to choose either of these methods. If you choose to use a GitHub repository, you will have a little more setup work to do, but you will be able to easily test modifying the application and seeing the changes run automatically through the pipeline. If you choose to use the Amazon S3 bucket already configured as the source in the starter kit, setup is simpler, but you won’t be able to modify the application.

Follow the steps for your choice:

GitHub:

  1. Sign in to GitHub and fork the sample application repository at https://github.com/awslabs/aws-codedeploy-sample-tomcat.
  2. Navigate to https://github.com/settings/tokens and generate a token to use with the starter kit. The token requires the permissions needed to integrate with AWS CodePipeline: repo and admin:repo_hook. For more information, see the AWS CodePipeline User Guide. Make sure you copy the token after you create it.

Amazon S3:

  1. If you’re using the bucket configured in the starter kit, there’s nothing else for you to do but continue on to step 3. If you want to use your own bucket, see Troubleshooting Problems with the Starter Kit.

Launch the starter kit template directly in the AWS CloudFormation console. Make sure that you are in the US East (N. Virginia) Region.

Note: If you want to download the template to your own computer and then upload it directly to AWS CloudFormation, you can do so from this Amazon S3 bucket. Save the aws-codedeploy-codepipeline-starter-kit.template file to a location on your computer that’s easy to remember.

Choose Next.

On the Specify Details page, do the following:

  1. In Stack name, type a name for the stack. Choose something short and simple for easy reference.
  2. In AppName, you can leave the default as-is, or you can type a name of no more than 15 characters (for example, starterkit-demo). The name has the following restrictions:

    • The only allowed characters are lower-case letters, numbers, periods, and hyphens.
    • The name must be unique in your AWS account, so be sure to choose a new name each time you use the starter kit.
  3. In AppSourceType, choose S3 or GitHub, depending on your preference for a source location, and then do the following:

    • If you want to use the preconfigured Amazon S3 bucket as the source for your starter kit, leave all the default information as-is. (If you want to use your own Amazon S3 bucket, see Troubleshooting Problems with the Starter Kit.)
    • If you want to use a GitHub repo as the source for your starter kit, in Application Source – GitHub, type the name of your user account in GitHubUser. In GitHubToken, paste the token you created earlier. In GitHubRepoName, type the name of the forked repo. In GitHubBranchName, type the name of the branch (by default, master).
  4. In Key Name, choose the name of your Amazon EC2 key pair.
  5. In YourIP, type the IP address from which you will access the resources created by this starter kit. This is a recommended security best practice.

Choose Next.

(Optional) On the Options page, in Key, type Name. In Value, type a name that will help you easily identify the resources created for the starter kit. This name will be used to tag all of the resources created by the starter kit. Although this step is optional, it’s a good idea, particularly if you want to use or modify these resources later on. Choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. (It will.) Review the other settings, and then choose Create.

It will take several minutes for CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console.

When the stack has been created, you will see a CREATE_COMPLETE message in the Status column of the console and on the Overview tab.

Congratulations! You’ve created your first pipeline, complete with all required resources. The pipeline has four stages, each with a single action. The pipeline will start automatically as soon as it is created.

(If CloudFormation fails to create your resources and pipeline, it will roll back all resource creation automatically. The most common reason for failure is that you specified a stack name that is allowed in CloudFormation but not allowed in Amazon S3, and you chose Amazon S3 for your source location. For more information, see the Troubleshooting problems with the starter kit section at the end of this post.)

To view your pipeline, open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline. On the dashboard page, choose the name of your new pipeline (for example, StarterKitDemo-Pipeline). Your pipeline, which might or might not have started its first run, will appear on the view pipeline page.

You can watch the progress of your pipeline as it completes the action configured for each of its four stages (a source stage, a build stage, and two deployment stages).

The pipeline flows as follows:

  1. The source stage contains an action that retrieves the application from the source location (the Amazon S3 bucket created for you to store the app or the GitHub repo you specified).
  2. The build stage contains an action that builds the app in Jenkins, which is hosted on an Amazon EC2 instance.
  3. The first deploy stage contains an action that uses AWS CodeDeploy to deploy the app to a beta website on an Amazon EC2 instance.
  4. The second deploy stage contains an action that again uses AWS CodeDeploy to deploy the app, this time to a separate, production website on a different Amazon EC2 instance.

When each stage is complete, it turns from blue (in progress) to green (success).

You can view the details of any stage except the source stage by choosing the Details link for that stage. For example, choosing the Details link for the Jenkins build action in the build stage opens the status page for that Jenkins build:

Note: The first time the pipeline runs, the link to the build will point to Build #2. Build #1 is a failed build left over from the initial instance and Jenkins configuration process in AWS CloudFormation.

To view the details of the build, choose the link to the log file. To view the Maven project created in Jenkins to build the application, choose Back to Project.

While you’re in Jenkins, we strongly encourage you to consider securing it if you’re going to keep the resource for any length of time. From the Jenkins dashboard, choose Manage Jenkins, choose Setup Security, and choose the security options that are best for your organization. For more information about Jenkins security, see Standard Security Setup.

When Succeeded is displayed for the pipeline status, you can view the application you built and deployed:

  1. In the status area for the ProdDeploy action in the Prod stage, choose Details. The details of the deployment will appear in the AWS CodeDeploy console.
  2. In the Deployment Details section, in Instance ID, choose the instance ID of the successfully deployed instance.
  3. In the Amazon EC2 console, on the Description tab, in Public DNS, copy the address, and then paste it into the address bar of your web browser. The web page opens on the application you built:

Tip: You can also find the IP addresses of each instance in AWS CloudFormation on the Outputs tab of the stack.

Now that you have a pipeline, try experimenting with it. You can release a change, disable and enable transitions, edit the pipeline to add more actions or change the existing ones – whatever you want to do, you can do it. It’s yours to play with. You can make changes to the source in your GitHub repository (if you chose GitHub as your source location) and watch those pushed changes build and deploy automatically. You can also explore the links to the resources used by the pipeline, such as the application and deployment groups in AWS CodeDeploy and the Jenkins server.

What to Do Next

After you’ve finished exploring your pipeline and its associated resources, you can do one of two things:

  • Delete the stack in AWS CloudFormation, which deletes the pipeline, its resources, and the stack itself. This is the option to choose if you no longer want to use the pipeline or any of its resources. Cleaning up resources you’re no longer using is important; otherwise you will continue to be charged for them.

To delete the stack:

  1. Delete the Amazon S3 bucket used as the artifact store in AWS CodePipeline. Although this bucket was created as part of the CloudFormation stack, Amazon S3 does not allow CloudFormation to delete buckets that contain objects. To delete this bucket, open the Amazon S3 console, select the bucket whose name starts with demo and ends with the name you chose for your stack, and then delete it. For more information, see Delete or Empty a Bucket.
  2. Follow the steps in Delete the stack.
  • Change the pipeline and its resources to start building applications you actually care about. Maybe you’re not ready to get into the business of creating bespoke suits for dogs. (We understand that dogs can be difficult clients to dress well, and that not everyone wants to be paid in dog treats.) However, perhaps you do have an application or two that you would like to set up for continuous delivery with AWS CodePipeline. AWS CodePipeline integrates with other services you might already be using for your software development, as well as GitHub. You can edit the pipeline to remove the actions or stages and add new actions and stages that more accurately reflect the delivery process for your applications. You can even create your own custom actions, if you want to integrate your own solutions.


If you decide to keep the pipeline and some or all of its resources, consider securing the Jenkins server as described earlier in this post, and remember that the running resources will continue to accrue charges.

We hope you’ve enjoyed the starter kit and this blog post. If you have any feedback or questions, feel free to get in touch with us on the AWS CodePipeline forum.

Troubleshooting Problems with the Starter Kit

You can use the events on the Events tab of the CloudFormation stack to help you troubleshoot problems if the stack fails to complete creation or deletion.

Problem: The stack creation fails when trying to create the custom action in AWS CodePipeline.

Possible Solution: You or someone who shares your AWS account number might have used the starter kit once and chosen the same name for the application. Custom actions must have unique names within an AWS account. Another possibility is that you or someone else then deleted the resources, including the custom action. You cannot create a custom action using the name of a deleted custom action. In either case, delete the failed stack, and then try to create the stack again using a different application name.

Problem: The stack creation fails in AWS CloudFormation without any error messages.

Possible Solution: You’re probably missing one or more required permissions. Creating resources with the template in AWS CloudFormation requires the following policy or its equivalent permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:*",
                "codedeploy:*",
                "codepipeline:*",
                "ec2:*",
                "iam:AddRoleToInstanceProfile",
                "iam:CreateInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:GetRole",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}

Problem: Deleting the stack fails when trying to delete the Amazon S3 bucket created by the stack.

Possible solution:  One or more files or folders might be left in the bucket created by the stack. To delete this bucket, follow the instructions in Delete or Empty a Bucket, and then delete the stack in AWS CloudFormation.

Problem: I want to use my own Amazon S3 bucket as the source location for a pipeline, not the bucket pre-configured in the template.

Possible solution: Create your own bucket, following these steps:


  1. Download the sample application from GitHub at https://github.com/awslabs/aws-codedeploy-sample-tomcat and upload the suitsfordogs.zip application to an Amazon S3 bucket that was created in the US East (N. Virginia) Region.
  2. Sign into the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3.
  3. Choose your bucket from the list of buckets available, and on the Properties tab for the bucket, choose to add or edit the bucket policy.
  4. Make sure that your bucket has the following permissions set to Allow (a sample policy sketch appears after this list):

    • s3:PutObject
    • s3:List*
    • s3:Get*

    For more information, see Editing Bucket Permissions.

  5. When configuring details in CloudFormation, on the Specify Details page, in AppSourceType, choose S3, but then replace the information in Application Source – S3 with the details of your bucket and object.
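For reference, a bucket policy granting the permissions listed in step 4 might look like the following sketch; the bucket name and account ID are placeholders for your own values:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::<your-account-id>:root" },
            "Action": [
                "s3:PutObject",
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": [
                "arn:aws:s3:::<your-source-bucket>",
                "arn:aws:s3:::<your-source-bucket>/*"
            ]
        }
    ]
}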

Optimize AWS CloudFormation Templates

by Elliot Yamaguchi

The following post is by guest blogger Julien Lépine, Solutions Architect at AWS. He explains how to optimize templates so that AWS CloudFormation quickly deploys your environments.

______________________________________________________________________________________

Customers sometimes ask me if there’s a way to optimize large AWS CloudFormation templates, which can take several minutes to deploy a stack. Often stack creation is slow because one resource depends on the availability of another resource before it can be provisioned. Examples include:

  • A front-end web server that has a dependency on an application server
  • A service that waits for another remote service to be available

In this post, I describe how to speed up stack creation when resources have dependencies on other resources.

Note: I show how to launch Windows instances with Windows PowerShell, but you can apply the same concepts to Linux instances launched with shell scripts.

How CloudFormation Creates Stacks

When CloudFormation provisions two instances, it provisions them in no guaranteed order. Defining one resource before another in a template doesn’t guarantee that CloudFormation will provision that resource first. You need to explicitly tell CloudFormation the right order for instance provisioning.

To demonstrate how to do this, I’ll start with the following CloudFormation template:

{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Description": "This is a demonstration AWS CloudFormation template containing two instances",
    "Parameters": {
        "ImageId" : {
            "Description": "Identifier of the base Amazon Machine Image (AMI) for the instances in this sample (please use Microsoft Windows Server 2012 R2 Base)",
            "Type" : "AWS::EC2::Image::Id"
        },
        "InstanceType" : {
            "Description": "EC2 instance type to use for the instances in this sample",
            "Type" : "String"
        }
    },
    "Resources" : { 
        "Instance1": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": { "Ref" : "ImageId" },
                "InstanceType": { "Ref": "InstanceType" },
            }
        },

        "Instance2": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": { "Ref" : "ImageId" },
                "InstanceType": { "Ref": "InstanceType" },
            }
        }
    }
}

CloudFormation would likely create the stack in the following sequence:

This is fast, but if Instance2 is dependent on Instance1, you would ordinarily need to hard code or script the provisioning sequence to ensure that Instance1 is provisioned first.

Specifying Dependencies

When you need CloudFormation to wait to provision one resource until another one has been provisioned, you can use the DependsOn attribute.

    "Instance2": {
        "DependsOn": ["Instance1"]
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": { "Ref" : "ImageId" },
            "InstanceType": { "Ref": "InstanceType" }
        }
    }

You can also introduce references between elements by using either the { "Ref": "MyResource" } or the { "Fn::GetAtt" : [ "MyResource" , "MyAttribute" ] } functions. When you use one of these functions, CloudFormation behaves as if you’ve added a DependsOn attribute to the resource. In the following example, the identifier of Instance1 is used in a tag for Instance2.

    "Instance2": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": { "Ref" : "ImageId" },
            "InstanceType": { "Ref": "InstanceType" },
            "Tags": [ { "Key" : "Dependency", "Value" : { "Ref": "Instance1" } } ]
        }
    }

Both methods of specifying dependencies result in the same sequence:

Now, CloudFormation waits for Instance1 to be provisioned before provisioning Instance2. But I’m not guaranteed that services hosted on Instance1 will be available, so I will have to address that in the template.

Note that instances are provisioned quickly in CloudFormation. In fact, it happens in the time it takes to call the RunInstances Amazon Elastic Compute Cloud (EC2) API. But it takes longer for an instance to fully boot than it does to provision the instance.

Using Creation Policies to Wait for On-Instance Configurations

In addition to provisioning the instances in the right order, I want to ensure that a specific setup milestone has been achieved inside Instance1 before contacting it. To do this, I use a CreationPolicy attribute. A CreationPolicy is an attribute you can add to an instance to prevent it from being marked CREATE_COMPLETE until it has been fully initialized.

In addition to adding the CreationPolicy attribute, I want to ask Instance1 to notify CloudFormation after it’s done initializing. I can do this in the instance’s UserData section. On Windows instances, I can use this section to execute code in batch files or in Windows PowerShell in a process called bootstrapping.

I’ll execute a batch script, then tell CloudFormation that the creation process is done by sending a signal specifying that Instance1 is ready. Here’s the code with a CreationPolicy attribute and a UserData section that includes a script that invokes cfn-signal.exe:

    "Instance1": {
      "Type": "AWS::EC2::Instance",
      "CreationPolicy" : {
        "ResourceSignal" : {
          "Timeout": "PT15M",
          "Count"  : "1"
        }
      },
      "Properties": {
        "ImageId": { "Ref" : "ImageId" },
        "InstanceType": { "Ref": "InstanceType" },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [ "n", [
                "<script>",

                "REM ...Do any instance configuration steps deemed necessary...",

                { "Fn::Join": ["", [ "cfn-signal.exe -e 0 --stack "", { "Ref": "AWS::StackName" }, "" --resource "Instance1" --region "", { "Ref" : "AWS::Region" }, """ ] ] },
                "</script>"
            ] ]
          }
        }
      }
    }

I don’t need to change the definition of Instance2 because it’s already coded to wait for Instance1. I now know that Instance1 will be completely set up before Instance2 is provisioned. The sequence looks like this:

Optimizing the Process with Parallel Provisioning

It takes only a few seconds to provision an instance in CloudFormation, but it can take several minutes for an instance to boot and be ready because it must wait for the complete OS boot sequence, activation and the execution of the UserData scripts. As we saw in the figures, the time it takes to create the complete CloudFormation stack is about twice the boot and initialization time for a resource. Depending on the complexity of our processes, booting can take up to 10 minutes.

I can reduce waiting time by running instance creation in parallel and waiting only when necessary – before the application is configured. I can do this by splitting instance preparation into two steps: booting and initialization. Booting happens in parallel for both instances, but initialization for Instance2 starts only when Instance1 is completely ready.

This is the new sequence:

Because I’m doing some tasks in parallel, it takes much less time for Instance2 to become available.

The only problem is that CloudFormation has no built-in construct to enter a dependency in the middle of the booting process of another resource. Let’s devise a solution for this.

Using Wait Conditions

Creation policies also provide a notification mechanism. I can decouple notification for the creation of an instance from the notification that the instance is fully ready by using a wait condition.

    "Instance1WaitCondition" : {
        "Type" : "AWS::CloudFormation::WaitCondition",
        "DependsOn" : ["Instance1"],
        "CreationPolicy" : {
        "ResourceSignal" : {
                "Timeout": "PT15M",
                "Count"  : "1"
            }
        }
    }

Then I need to ask Instance1 to notify the wait condition after it’s done processing, instead of notifying itself. I’ll use the UserData section of the instance to do this.

    "Instance1": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Ref" : "ImageId" },
        "InstanceType": { "Ref": "InstanceType" },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [ "n", [
                "<script>",

                "REM ...Do any instance configuration steps deemed necessary...",

                { "Fn::Join": ["", [ "cfn-signal.exe -e 0 --stack "", { "Ref": "AWS::StackName" }, "" --resource "Instance1WaitCondition" --region "", { "Ref" : "AWS::Region" }, """ ] ] },
                "</script>"
            ] ]
          }
        }
      }
    }

Note that CreationPolicy is now defined inside Instance1WaitCondition, and the call to cfn-signal.exe notifies Instance1WaitCondition instead of Instance1.

We now have two resources that signal two different states of Instance1:

  • Instance1 is marked as created as soon as it is provisioned.
  • Instance1WaitCondition is marked as created only when Instance1 is fully initialized.

Let’s see how we can use this technique to optimize the booting process.

PowerShell to the Rescue

The DependsOn attribute is available only at the top level of resources, but I want Instance2 to wait for Instance1 only after Instance2 has booted. To allow that, I need a way to check the status of resources from within the instance’s initialization script so that I can see when resource creation for Instance1WaitCondition is complete. Let’s use Windows PowerShell to provide some automation.

To check resource status from within an instance’s initialization script, I’ll use AWS Tools for Windows PowerShell, a package that is installed by default on every Microsoft Windows Server image provided by Amazon Web Services. The package includes more than 1,100 cmdlets, giving us access to all of the APIs available on the AWS cloud.

The Get-CFNStackResources cmdlet allows me to see whether resource creation for Instance1WaitCondition is complete. This PowerShell script loops until a resource is created:

    $region = ""
    $stack = ""
    $resource = "Instance1WaitCondition"
    $output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)
    while (($output -eq $null) -or ($output.ResourceStatus -ne "CREATE_COMPLETE") -and ($output.ResourceStatus -ne "UPDATE_COMPLETE"))
    {
        Start-Sleep 10
        $output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)
    }

Securing Access to the Resources

When calling an AWS API, I need to be authenticated and authorized. I can do this by providing an access key and a secret key to each API call, but there’s a much better way. I can simply create an AWS Identity and Access Management (IAM) role for the instance. When an instance has an IAM role, code that runs on the instance (including our PowerShell code in UserData) is authorized to make calls to the AWS APIs that are granted in the role.

When creating this role in IAM, I specify only the required actions, and limit these actions to only the current CloudFormation stack.

    "DescribeRole": {
        "Type"      : "AWS::IAM::Role",
        "Properties": {
            "AssumeRolePolicyDocument": {
                "Version" : "2012-10-17",
                "Statement": [ 
                    { 
                        "Effect": "Allow",
                        "Principal": { "Service": [ "ec2.amazonaws.com" ] },
                        "Action": [ "sts:AssumeRole" ]
                    }
                ]
            },
            "Path": "/",
            "Policies": [
                {
                    "PolicyName"    : "DescribeStack",
                    "PolicyDocument": {
                        "Version"  : "2012-10-17",
                        "Statement": [
                            {
                                "Effect" : "Allow",
                                "Action" : ["cloudformation:DescribeStackResource", "cloudformation:DescribeStackResources"],
                                "Resource" : [ { "Ref" : "AWS::StackId" } ]
                            }
                        ]
                    }
                }
            ]
        }
    },
    "DescribeInstanceProfile": {
        "Type"      : "AWS::IAM::InstanceProfile",
        "Properties": {
            "Path" : "/",
            "Roles": [ { "Ref": "DescribeRole" } ]
        }
    }

Creating the Resources

The descriptions for Instance1WaitCondition and Instance1 are fine, but I need to update Instance2 to add the IAM role and include the PowerShell wait script. In the UserData section, I will add a scripted reference to Instance1WaitCondition. This "soft" reference doesn’t introduce any dependency in CloudFormation because it is just a simple string. In the UserData section, I will also add a GetAtt reference to Instance1 so that these instances will be provisioned quickly, one after another, without having to wait for the full instance to boot. I also need to secure my API calls by specifying the IAM role we have created as an IamInstanceProfile.

    "Instance2": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": { "Ref" : "ImageId" },
            "InstanceType": { "Ref": "InstanceType" },
            "IamInstanceProfile": { "Ref": "DescribeInstanceProfile" },
            "UserData": {
                "Fn::Base64": { 
                    "Fn::Join": [ "n", [
                        "",
                        "$resource = "Instance1WaitCondition"",
                        { "Fn::Join": ["", [ "$region = '", { "Ref" : "AWS::Region" }, "'" ] ] },
                        { "Fn::Join": ["", [ "$stack = '", { "Ref" : "AWS::StackId" }, "'" ] ] },

                        "#...Wait for instance 1 to be fully available...",

                        "$output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)",
                        "while (($output -eq $null) -or ($output.ResourceStatus -ne "CREATE_COMPLETE") -and ($output.ResourceStatus -ne "UPDATE_COMPLETE")) {",
                        "    Start-Sleep 10",
                        "    $output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)",
                        "}",

                        "#...Do any instance configuration steps you deem necessary...",

                        { "Fn::Join": ["", [ "$instance1Ip = '", { "Fn::GetAtt" : [ "Instance1" , "PrivateIp" ] }, "'" ] ] },

                        "#...You can use the private IP address from Instance1 in your configuration scripts...",

                        ""
                    ] ]
                }
            }
        }
    }

Now, CloudFormation provisions Instance2 just after Instance1, saving a lot of time because Instance2 boots while Instance1 is booting, but Instance2 then waits for Instance1 to be fully operational before finishing its configuration.

During new environment creation, when a stack contains numerous resources, some with cascading dependencies, this technique can save a lot of time. And when you really need to get an environment up and running quickly, for example, when you’re performing disaster recovery, that’s important.

More Optimization Options

If you want a more reliable way to execute multiple scripts on an instance in CloudFormation, check out the cfn-init helper script (driven by AWS::CloudFormation::Init metadata), which provides a flexible and powerful way to configure an instance when it’s started. To automate and simplify scripting your instances and reap the benefits of automatic domain joining for instances, see Amazon EC2 Simple Systems Manager (SSM). To operate your Windows instances in a full DevOps environment, consider using AWS OpsWorks.

AWS CodeDeploy: Deploying from a Development Account to a Production Account

by Bangxi Yu

AWS CodeDeploy helps users deploy software to a fleet of Amazon EC2 or on-premises instances. A software revision is typically deployed and tested through multiple stages (development, testing, staging, and so on) before it’s deployed to production. It’s also a common practice to use a separate AWS account for each stage. In this blog post, we will show you how to deploy a revision that is tested in one account to instances in another account.

Prerequisites

We assume you are already familiar with AWS CodeDeploy concepts and have completed the Basic Deployment Walkthrough. In addition, we assume you have a basic understanding of AWS Identity and Access Management (IAM) and have read the Cross-Account Access Using Roles topic.

Setup

Let’s assume you have development and production AWS accounts with the following details:

  • AWS account ID for development account: <development-account-id>
  • S3 bucket under development account: s3://my-demo-application/
  • IAM user for development account: development-user (This is the IAM user you use for AWS CodeDeploy deployments in your development account.)
  • AWS account ID for production account: <production-account-id>

You have already tested your revision in the development account and it’s available in s3://my-demo-application/. You want to deploy this revision to instances in your production account.

Step 1: Create the application and deployment group in the production account

You will need to create the application and deployment group in the production account. Keep in mind that deployment groups, and the Amazon EC2 instances to which they are configured to deploy, are strictly tied to the accounts under which they were created. Therefore, you cannot add an instance in the production account to a deployment group in the developer account. Also, make sure the EC2 instances in the production account have the AWS CodeDeploy agent installed and are launched with an IAM instance profile. You can follow the steps in this topic. For example, in this post, we will use the following settings:

  • Application name: CodeDeployDemo
  • Deployment group name: Prod
  • IAM instance profile: arn:aws:iam::role/CodeDeployDemo-EC2

Step 2: Create a role under the production account for cross-account deployment

Log in to the production account, and then go to the IAM console. Select "Roles" in the menu and click "Create New Role". You need to create a new role under the production account that gives cross-account permission to the development account. Call this role "CrossAccountRole".

Click "Next Step". Under "Select Role Type", choose "Role for Cross-Account Access", and then choose "Provide access between AWS accounts you own".

Click "Next Step". In the "Account ID" field, type the AWS account ID of the development account.

Attach the AmazonS3FullAccess and AWSCodeDeployDeployerAccess policies to this role, and follow the wizard to complete role creation. The role’s policy section should now list these two policies.
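For reference, the trust relationship that this wizard generates for CrossAccountRole should look roughly like the following, with your actual development account ID in place of the placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<development-account-id>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}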


Step 3: Give the IAM instance profile permission to the S3 bucket under the development account

Now log in to the development account. The AWS CodeDeploy agent relies on the IAM instance profile to access the S3 bucket. In this post, the development account contains the deployment revision. Update the bucket policy for the S3 bucket in the development account to give the production account’s IAM instance profile (arn:aws:iam::role/CodeDeployDemo-EC2) permission to retrieve objects from the bucket. You can follow the steps in granting cross-account bucket permission.

You can find the IAM instance profile by going to the EC2 console and checking the IAM role associated with your EC2 instances in the production account.

Here is what the policy looks like:

{
  "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "Example permissions",
        "Effect": "Allow",
        "Principal": {
        "AWS": "arn:aws:iam::role/CodeDeployDemo-EC2"
      },
     "Action": [
     "s3:List*",
     "s3:Get*"
     ],
    "Resource": "arn:aws:s3:::my-demo-application/*"
    }
  ]
}

Step 4: Give the IAM user under the development account access to the production role

In the development account IAM console, select the development-user IAM user and add the following policy.

{
  "Version": "2012-10-17",
    "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::<production-account-id>:role/CrossAccountRole"
  }
}

This policy gives development-user enough permission to assume the CrossAccountRole under the production account. You’ll find step-by-step instructions in the walkthrough Granting Users Access to the Role.

Step 5: Deploy to production account

That’s it. In four simple steps, you have set up everything you need to deploy from a development to a production account. To deploy:

Log in to your development account as development-user. For more information, see How Users Sign In to Your Account in the IAM documentation. In the upper-right, choose the user name. You will see the "Switch Role" link:

Alternatively, you can use the sign-in link, which you will find under "Summary" on the IAM role details page.

On the "Switch Role" page, provide the production account role information:

You only need to complete these steps once. The role will appear in the role history after you add it. For step-by-step instructions, see this blog post. After you switch to the production account role, open the AWS CodeDeploy console and deploy to the targeted application and deployment group.

Wait for the deployment to be completed successfully. The changes will be released to the production fleet.

Next steps

  • The CrossAccountRole we created in step 2 has access to any S3 bucket. You can scope the permissions down to just the required bucket (in this case, my-demo-application), as shown in the sketch below. Similarly, CrossAccountRole has access to deploy to any application and deployment group; you can scope this down to just the required application and deployment group. For more information, see AWS CodeDeploy User Access Permissions Reference. Also, instead of using the account root as the trusted entity, you can update the trust relationship to allow only a specific IAM user (for example, development-user under the development account) to assume this role.
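As one sketch of that scoping, a customer-managed policy like the following could replace AmazonS3FullAccess on CrossAccountRole; it limits the role to read-only access on the my-demo-application bucket used in this post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::my-demo-application",
        "arn:aws:s3:::my-demo-application/*"
      ]
    }
  ]
}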

We hope this blog post has been helpful. Are there other deployment workflow questions you would like us to answer? Let us know in the comments or in our user forum.

Setting Up the Jenkins Plugin for AWS CodeDeploy

by Shankar Sivadasan

The following is a guest post by Maitreya Ranganath, Solutions Architect.


In this post, we’ll show you how to use the Jenkins plugin to automatically deploy your builds with AWS CodeDeploy. We’ll walk through the steps for creating an AWS CodeCommit repository, installing Jenkins and the Jenkins plugin, adding files to the CodeCommit repository, and configuring the plugin to create a deployment when changes are committed to an AWS CodeCommit repository.

Create an AWS CodeCommit Repository

First, we will create an AWS CodeCommit repository to store our sample code files.

1. Sign in to the AWS Management Console and open the AWS CodeCommit console in the us-east-1 (N. Virginia) Region.  Choose Get Started or Create Repository.

2. For Repository Name, type a name for your repository (for example, DemoRepository). For Description, type Repository for Jenkins Code Deploy.

3. Choose the Create repository button.

4. Choose the repository you just created to view its details.

5. Choose the Clone URL button, and then choose HTTPS. Copy the displayed URL to your clipboard. You’ll need it later to configure Jenkins.


Now that you have created an AWS CodeCommit repository, we’ll create a Jenkins server and AWS CodeDeploy environment.

Create a Jenkins Server and AWS CodeDeploy Environment

In this step, we’ll launch a CloudFormation template that will create the following resources:

  • An Amazon S3 bucket that will be used to store deployment files.
  • JenkinsRole, an IAM role and instance profile for the Amazon EC2 instance that will run Jenkins. This role allows Jenkins on the EC2 instance to assume the CodeDeployRole and access repositories in CodeCommit.
  • CodeDeployRole, an IAM role assumed by the CodeDeploy Jenkins plugin. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • Jenkins server, an EC2 instance running Jenkins.
  • An Auto Scaling group of EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancing load balancer.

To create the CloudFormation stack, choose the link that corresponds to the AWS region where you want to work:

For the us-east-1 region:

or use the link below:

https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=JenkinsCodeDeploy&templateURL=https://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_Jenkins_Integration.json

For the us-west-2 region:

or use the link below:

https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=JenkinsCodeDeploy&templateURL=https://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_Jenkins_Integration.json

6. Choose Next and specify the following values:

  • For InstanceCount, accept the default of 3. (Three EC2 instances will be launched for CodeDeploy.)
  • For InstanceType, accept the default of t2.medium.
  • For KeyName, choose an existing EC2 key pair. You will use it to connect by using SSH to the Jenkins server. Ensure that you have access to the private key of this key pair.
  • For PublicSubnet1, choose a public subnet where the load balancer, Jenkins server, and CodeDeploy web servers will be launched.
  • For PublicSubnet2, choose a public subnet where the load balancers and CodeDeploy web servers will be launched.
  • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
  • For YourIPRange, type the CIDR block of the network from which you will connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, find it by going to https://www.whatismyip.com/, and then enter it followed by ‘/32’. If you do not have a static IP address (or aren’t sure whether you have one), you can enter ‘0.0.0.0/0’ in this field, but then any address can reach your Jenkins server.

7. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box, and then choose Create.

8. Wait for the CloudFormation stack status to change to CREATE_COMPLETE. This will take approximately 6-10 minutes.


9. Note the values displayed on the Outputs tab. You’ll need them later.

10. Point your browser to the ELBDNSName from the Outputs tab and verify that you can see the Sample Application page.

Secure Jenkins

Point your browser to the JenkinsServerDNSName (for example, ec2-54-163-4-211.compute-1.amazonaws.com) from the Outputs tab. You should be able to see the Jenkins home page:

The Jenkins installation is currently accessible through the Internet without any form of authentication. Before proceeding to the next step, let’s secure Jenkins. On the Jenkins home page, choose Manage Jenkins. Choose Configure Global Security, and then to enable Jenkins security, select the Enable security check box.

Under Security Realm, choose Jenkins’s own user database and select the Allow users to sign up check box. Under Authorization, choose Matrix-based security. Add a user (for example, admin) and give this user all privileges. Save your changes.

You will now be asked to provide a user name and password. Choose Create an account, enter the user name (for example, admin) and a strong password, and then complete the user details. You will then be able to sign in to Jenkins securely.

Create a Project and Configure the CodeDeploy Jenkins Plugin

Now we’ll create a project in Jenkins and configure the Jenkins plugin to poll for code updates from the AWS CodeCommit repository.

1. Sign in to Jenkins with the user name and password you created earlier.

2. Choose New Item, and then choose Freestyle project. Type a name for the project (for example, CodeDeployApp), and then choose OK.

3. On the project configuration page, under Source Code Management, choose Git. Paste the URL you noted when you created the AWS CodeCommit repository (step 5).

4. In Build Triggers, select the Poll SCM check box. In the Schedule text field, type H/2 * * * *. This tells Jenkins to poll CodeCommit every two minutes for updates. (This may be too frequent for production use, but it works well for testing because it returns results frequently.)

5. Under Post-build Actions, choose Add post-build actions, and then select the Deploy an application to AWS CodeDeploy check box.

6. Paste the values you noted on the Outputs tab when you created the CloudFormation stack (step 9):

  • For AWS CodeDeploy Application Name, paste the value of CodeDeployApplicationName.
  • For AWS CodeDeploy Deployment Group, paste the value of CodeDeployDeploymentGroup.
  • For AWS CodeDeploy Deployment Config, type CodeDeployDefault.OneAtATime.
  • For AWS Region, choose the region where you created the CodeDeploy environment.
  • For S3 Bucket, paste the value of S3BucketName.
  • Leave the other settings at their default (blank).

7. Choose Use temporary credentials, and then paste the value of JenkinsCodeDeployRoleArn that appeared in the CloudFormation output.

Note the External ID field displayed on this page. This is a unique random ID generated by the CodeDeploy Jenkins plugin. This ID can be used to add a condition to the IAM role to ensure that only the plugin can assume this role. To keep things simple, we will not use the External ID as a condition, but we strongly recommend you use it for added protection in a production scenario, especially when you are using cross-account IAM roles.
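For reference, a minimal sketch of what enforcing the External ID in the CodeDeployRole’s trust policy could look like (the account ID and External ID value below are placeholders, not values from this walkthrough):

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": { "sts:ExternalId": "EXTERNAL-ID-FROM-PLUGIN" }
        }
    }]
}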

 

8. Choose Test Connection.

 

9. Confirm the text “Connection test passed” appears, and then choose Save to save your settings.

Add Files to the CodeCommit Repository

Now, we’ll use the git command-line tool to clone the AWS CodeCommit repository and then add files to it. These steps show you how to use SSH to connect to the Jenkins server. If you are more comfortable with Git integrated in your IDE, follow the steps in the CodeCommit documentation to clone the repository and add files to it.

1. Use SSH to connect to the public DNS name of the EC2 instance for Jenkins (JenkinsServerDNSName from the Outputs tab) and sign in as the ec2-user. Run the following commands to configure git. Replace the values enclosed in quotes with your name and email address.

$ aws configure set region us-east-1
$ aws configure set output json
$ git config --global credential.helper '!aws codecommit credential-helper $@'
$ git config --global credential.useHttpPath true
$ git config --global user.name "YOUR NAME"
$ git config --global user.email "example@example.com"

2. Clone the repository you created in the previous step.

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepository
Cloning into 'DemoRepository'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.

3. Switch to the DemoRepository directory:

$ cd DemoRepository/

4. Now we’ll download the source for the Sample CodeDeploy application.

$ curl -OLs http://aws-codedeploy-us-east-1.s3.amazonaws.com/samples/latest/SampleApp_Linux.zip

5. Unzip the downloaded file:

$ unzip SampleApp_Linux.zip
Archive:  SampleApp_Linux.zip
extracting: scripts/install_dependencies  
extracting: scripts/start_server    
inflating: scripts/stop_server     
inflating: appspec.yml             
inflating: index.html              
inflating: LICENSE.txt 

6. Delete the ZIP file:

$ rm SampleApp_Linux.zip

7. Use a text editor to edit the index.html file:

$ vi index.html

8. Scroll down to the body tag and add a line of text such as "This version was deployed by Jenkins". (You will look for this text later to confirm that the Jenkins-triggered deployment succeeded.)
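For example, the edited body might look like the following (the surrounding markup is illustrative; only the added line matters):

<body>
  ...
  <p>This version was deployed by Jenkins</p>
</body>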

9. Save the file and close the editor.

10. Add the files to git and commit them with a comment:

$ git add appspec.yml index.html LICENSE.txt scripts/*
$ git commit -m "Initial versions of files"

11. Now push these updates to CodeCommit:

$ git push

12. If your updates have been successfully pushed to CodeCommit, you should see something like the following:

Counting objects: 9, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (9/9), 5.05 KiB | 0 bytes/s, done.
Total 9 (delta 0), reused 0 (delta 0)
remote: 
To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepository
 * [new branch]      master -> master

13. On the Jenkins dashboard, choose the CodeDeployApp project.

14. Choose Git Polling Log to see the results of polling git for updates. There may be a few failed polls from earlier when the repository was empty.

15. Within two minutes of pushing updates, a new build with a Build ID (for example, #2 or #3) should appear in the build history.

16. Choose the most recent build. On the Build details page, choose Console Output to view output from the build.

17. At the bottom of the output, check that the status of the build is SUCCESS.

18. In the CodeDeploy console, choose AWS CodeDeploy, and then choose Deployments.

 

19. Confirm that there are two deployments: the initial deployment created by CloudFormation and a recent deployment of the latest code from AWS CodeCommit. Confirm that the status of the recent deployment is Succeeded.

 

20. Point your browser to the ELBDNSName from the Outputs tab of CloudFormation. Confirm that the text “This version was deployed by Jenkins” appears on the page.

Congratulations, you have now successfully set up the CodeDeploy Jenkins plugin and used it to automatically deploy a revision to CodeDeploy when code updates are pushed to AWS CodeCommit.

You can experiment by committing more changes to the code and then pushing them to deploy the updates automatically.

Cleaning Up

In this section, we’ll delete the resources we’ve created so that you will not be charged for them going forward.

1. Sign in to the Amazon S3 console and choose the S3 bucket you created earlier. The bucket name will start with “jenkinscodedeploy-codedeploybucket.” Choose all files in the bucket, and from Actions, choose Delete.

2. Choose OK to confirm the deletion.

3. In the CloudFormation console, choose the stack named “JenkinsCodeDeploy,” and from Actions, choose Delete Stack. Refresh the Events tab of the stack until the stack disappears from the stack list.

AWS CloudFormation Security Best Practices

by George Huang | on | in Best practices, How-to | | Comments

The following is a guest post by Hubert Cheung, Solutions Architect.

AWS CloudFormation makes it easy for developers and systems administrators to create and manage a collection of related AWS resources by provisioning and updating them in an orderly and predictable way. Many of our customers use CloudFormation to control all of the resources in their AWS environments so that they can succinctly capture changes, perform version control, and manage costs in their infrastructure, among other activities.

Customers often ask us how to control permissions for CloudFormation stacks. In this post, we share some of the best security practices for CloudFormation, which include using AWS Identity and Access Management (IAM) policies, CloudFormation-specific IAM conditions, and CloudFormation stack policies. Because most CloudFormation deployments are executed from the AWS command line interface (CLI) and SDK, we focus on using the AWS CLI and SDK to show you how to implement the best practices.

Limiting Access to CloudFormation Stacks with IAM

With IAM, you can securely control access to AWS services and resources by using policies and users or roles. CloudFormation leverages IAM to provide fine-grained access control.

As a best practice, we recommend that you limit service and resource access through IAM policies by applying the principle of least privilege. The simplest way to do this is to limit specific API calls to CloudFormation. For example, you may not want specific IAM users or roles to update or delete CloudFormation stacks. The following sample policy allows access to all CloudFormation APIs, but denies access to UpdateStack and DeleteStack on your production stack:

{
    "Version":"2012-10-17",
    "Statement":[{
        "Effect":"Allow",
        "Action":[        
            "cloudformation:*"
        ],
        "Resource":"*"
    },
    {
        "Effect":"Deny",
        "Action":[        
            "cloudformation:UpdateStack",
            "cloudformation:DeleteStack"
        ],
        "Resource":"arn:aws:cloudformation:us-east-1:123456789012:stack/MyProductionStack/*"
    }]
}

We know that IAM policies often need to allow the creation of particular resources, but you may not want them to be created as part of CloudFormation. This is where CloudFormation’s support for IAM conditions comes in.

IAM Conditions for CloudFormation

There are three CloudFormation-specific IAM conditions that you can add to your IAM policies:

  • cloudformation:TemplateURL
  • cloudformation:ResourceTypes
  • cloudformation:StackPolicyURL

With these three conditions, you can ensure that API calls for stack actions, such as create or update, use a specific template or are limited to specific resources, and that your stacks use a stack policy, which prevents stack resources from unintentionally being updated or deleted during stack updates.

Condition: TemplateURL

The first condition, cloudformation:TemplateURL, lets you specify where the CloudFormation template for a stack action, such as create or update, resides and enforce that it be used. In an IAM policy, it would look like this:

{
    "Version":"2012-10-17",
    "Statement":[{
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "cloudformation:TemplateURL": [
                    "https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template"
                ]
            }
        }
    },
    {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "Null": {
                "cloudformation:TemplateURL": "true"
            }
        }
    }]
}

The first statement ensures that for all CreateStack or UpdateStack API calls, users must use the specified template. The second ensures that all CreateStack or UpdateStack API calls must include the TemplateURL parameter. From the CLI, your calls need to include the --template-url parameter:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template

Condition: ResourceTypes

CloudFormation also allows you to control the types of resources that are created or updated in templates with an IAM policy. The CloudFormation API accepts a ResourceTypes parameter. In your API call, you specify which types of resources can be created or updated. However, to use the new ResourceTypes parameter, you need to modify your IAM policies to enforce the use of this particular parameter by adding in conditions like this:

{
    "Version":"2012-10-17",
    "Statement":[{
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "ForAllValues:StringLike": {
                "cloudformation:ResourceTypes": [
                    "AWS::IAM::*"
                ]
            }
        }
    },
    {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "Null": {
                "cloudformation:ResourceTypes": "true"
            }
        }
    }]
}

From the CLI, your calls need to include the --resource-types parameter. A call to create a stack looks like this:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --resource-types="[AWS::IAM::Group, AWS::IAM::User]"

Depending on the shell, the resource types might need to be enclosed in quotation marks as follows; otherwise, you’ll get a "No JSON object could be decoded" error:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --resource-types='["AWS::IAM::Group", "AWS::IAM::User"]'

The ResourceTypes condition ensures that your CLI or API calls can create or update only the resource types your IAM policy permits. In the first example, our IAM policy would have blocked the API call because the command included AWS::IAM resource types. If our template included only AWS::EC2::Instance resources, the CLI command would look like this and would succeed:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --resource-types='["AWS::EC2::Instance"]'

The third condition is the StackPolicyURL condition. Before we explain how that works, we need to provide some additional context about stack policies.

Stack Policies

Often, the worst disruptions are caused by unintentional changes to resources. To help in mitigating this risk, CloudFormation provides stack policies, which prevent stack resources from unintentionally being updated or deleted during stack updates. When used in conjunction with IAM, stack policies provide a second layer of defense against both unintentional and malicious changes to your stack resources.

The CloudFormation stack policy is a JSON document that defines what can be updated as part of a stack update operation. To set or update the policy, your IAM users or roles must first have the ability to call the cloudformation:SetStackPolicy action.
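For example, a minimal IAM statement granting that permission on the production stack from the earlier example might look like this (the account ID and stack name are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "cloudformation:SetStackPolicy",
        "Resource": "arn:aws:cloudformation:us-east-1:123456789012:stack/MyProductionStack/*"
    }]
}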

You apply the stack policy directly to the stack. Note that a stack policy is not an IAM policy. By default, setting a stack policy protects all stack resources: updates are denied unless you specify an explicit Allow. This means that if you want to restrict updates to only a few resources, you must explicitly allow all other updates by including an Allow on the resource "*" and a Deny for the specific resources.

For example, stack policies are often used to protect a production database because it contains live data. Depending on the property that’s changing, an update could replace the entire database. In the following example, the stack policy explicitly denies attempts to update your production database:

{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "LogicalResourceId/ProductionDB_logical_ID"
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    }
  ]
}

You can generalize your stack policy to include all RDS DB instances or any given ResourceType. To achieve this, you use conditions. However, note that because we used a wildcard in our example, the condition must use the "StringLike" condition and not "StringEquals":

{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*",
      "Condition" : {
        "StringLike" : {
          "ResourceType" : ["AWS::RDS::DBInstance", "AWS::AutoScaling::*"]
        }
      }
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    }
  ]
}

For more information about stack policies, see Prevent Updates to Stack Resources.

Finally, let’s ensure that all of your stacks have an appropriate pre-defined stack policy. To address this, we return to IAM policies.

Condition: StackPolicyURL

From within your IAM policy, you can ensure that every CloudFormation stack has a stack policy associated with it upon creation with the StackPolicyURL condition:

{
    "Version":"2012-10-17",
    "Statement":[
    {
            "Effect": "Deny",
            "Action": [
                "cloudformation:SetStackPolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringNotEquals": {
                    "cloudformation:StackPolicyUrl": [
                        "https://s3.amazonaws.com/samplebucket/sampleallowpolicy.json"
                    ]
                }
            }
        },    
       {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "ForAnyValue:StringNotEquals": {
                "cloudformation:StackPolicyUrl": [
                    "https://s3.amazonaws.com/samplebucket/sampledenypolicy.json"
                ]
            }
        }
    },
    {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack",
            "cloudformation:SetStackPolicy"
        ],
        "Resource": "*",
        "Condition": {
            "Null": {
                "cloudformation:StackPolicyUrl": "true"
            }
        }
    }]
}

This policy ensures that there must be a specific stack policy URL any time SetStackPolicy is called. In this case, the URL is https://s3.amazonaws.com/samplebucket/sampleallowpolicy.json. Similarly, for any create and update stack operation, this policy ensures that the StackPolicyURL is set to the sampledenypolicy.json document in S3 and that a StackPolicyURL is always specified. From the CLI, a create-stack command would look like this:

aws cloudformation create-stack --stack-name cloudformation-demo --parameters ParameterKey=Password,ParameterValue=CloudFormationDemo --capabilities CAPABILITY_IAM --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --stack-policy-url https://s3-us-east-1.amazonaws.com/samplebucket/sampledenypolicy.json

Note that if you specify a new stack policy on a stack update, CloudFormation applies the existing stack policy during that update and applies the new policy only to subsequent updates. For example, if your current policy is set to deny all updates, you must first run a SetStackPolicy command to change the stack policy to one that allows updates, and then run the update command against the stack. To update the stack we just created, you can run this:

aws cloudformation set-stack-policy --stack-name cloudformation-demo --stack-policy-url https://s3-us-east-1.amazonaws.com/samplebucket/sampleallowpolicy.json

Then you can run the update:

aws cloudformation update-stack --stack-name cloudformation-demo --parameters ParameterKey=Password,ParameterValue=NewPassword --capabilities CAPABILITY_IAM --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --stack-policy-url https://s3-us-west-2.amazonaws.com/awshubfiles/sampledenypolicy.json

The IAM policy that we used ensures that a specific stack policy is applied to the stack any time a stack is updated or created.

Conclusion

CloudFormation provides a repeatable way to create and manage related AWS resources. By using a combination of IAM policies, users, and roles, CloudFormation-specific IAM conditions, and stack policies, you can ensure that your CloudFormation stacks are used as intended and minimize accidental resource updates or deletions.

You can learn more about this topic and other CloudFormation best practices in the recording of our re:Invent 2015 session, (DVO304) AWS CloudFormation Best Practices, and in our documentation.

AWS CloudFormation at AWS re:Invent 2015: Breakout Session Recap, Videos, and Slides

by George Huang | on | in Best practices | | Comments

The AWS CloudFormation team and others presented and shared many updates and best practices during several 2015 AWS re:Invent sessions in October. We wanted to take the opportunity to show you where our presentation slides and videos are located as well as highlight a few product updates and best practices that we shared at this year’s re:Invent.

  • DVO304 – AWS CloudFormation Best Practices: slides and video
  • ARC307 – Infrastructure as Code: slides and video
  • DVO303 – Scaling Infrastructure Operations with AWS: slides and video
  • ARC401 – Cloud First: New Architecture for New Infrastructure: slides and video
  • DVO310 – Benefit from DevOps When Moving to AWS for Windows: slides and video
  • DVO401 – Deep Dive into Blue/Green Deployments on AWS: slides and video
  • SEC312 – Reliable Design and Deployment of Security and Compliance: slides and video

AWS CloudFormation Designer

We introduced CloudFormation Designer in early October. During our re:Invent session DVO304 (AWS CloudFormation Best Practices), we introduced CloudFormation Designer and then did a live demo and walkthrough of its key features and use cases.

AWS CloudFormation Designer is a new visual tool that allows you to visually edit your CloudFormation templates as a diagram. It provides a drag-and-drop interface for adding resources to templates, and CloudFormation Designer automatically modifies the underlying JSON when you add or remove resources. You can also use the integrated text editor to view or specify template details, such as resource property values and input parameters.

To learn more about this feature:

  • Watch the CloudFormation Designer portion of our re:Invent talk to see a demo
  • View slides 3-13 to learn more about CloudFormation Designer from our re:Invent talk

Updated resource support in CloudFormation

In the same session, we also talked about the five new resources that CloudFormation can provision, which we introduced in October. To stay up to date on CloudFormation resource support, see the list of all currently supported AWS resources in the documentation.

Other topics covered in our “AWS CloudFormation Best Practices” breakout session

  • Using Cost Explorer to budget and estimate a stack’s cost
  • Collecting audit logs using the CloudTrail integration with CloudFormation
  • CloudFormation advanced language features
  • How to extend CloudFormation to resources that are not yet supported by CloudFormation
  • Security and user-access best practices
  • Best practices for writing CloudFormation templates when sharing templates with teams or users that have different environments or are using different AWS regions

Please reach us at the AWS CloudFormation forum if you have more feedback or questions. 

Under the Hood: AWS CodeDeploy and Auto Scaling Integration

by Jonathan Turpie | on | in Best practices | | Comments


AWS CodeDeploy is a service that automates application deployments to your fleet of servers. Auto Scaling is a service that lets you dynamically scale your fleet based on load. Although these services are standalone, you can use them together for hands-free deployments! Whenever new Amazon EC2 instances are launched as part of an Auto Scaling group, CodeDeploy can automatically deploy your latest application revision to the new instances.

This blog post will cover how this integration works and conclude with a discussion of best practices. We assume you are familiar with CodeDeploy concepts and have completed the CodeDeploy walkthrough.

Configuring CodeDeploy with Auto Scaling

Configuring CodeDeploy with Auto Scaling is easy. Just go to the AWS CodeDeploy console and specify the Auto Scaling group name in your Deployment Group configuration.

In addition, you need to:

  1. Install the CodeDeploy agent on the Auto Scaling instances. You can either bake the agent into the base AMI or use user data to install the agent during launch (see the sketch after this list).
  2. Make sure the service role used by CodeDeploy to interact with Auto Scaling has the correct permissions. You can use the AWSCodeDeployRole managed policy. For more information, see Create a Service Role for CodeDeploy.
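If you take the user data route, a minimal sketch for an Amazon Linux instance in us-east-1 might look like the following (adjust the regional S3 bucket name for other regions):

#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto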

For a step-by-step tutorial, see Using AWS CodeDeploy to Deploy an Application to an Auto Scaling Group.

Auto Scaling Lifecycle Hook

The communication between Auto Scaling and CodeDeploy during a scale-out event is based on Auto Scaling lifecycle hooks. If the hooks are not set up correctly, the deployment will fail. We recommend that you do not try to manually set up or modify these hooks because CodeDeploy can do this for you. Auto Scaling lifecycle hooks tell Auto Scaling to send a notification when an instance is about to change to certain Auto Scaling lifecycle states. CodeDeploy listens only for notifications about instances that have launched and are about to be put in the InService state. This state occurs after the EC2 instance has finished booting, but before it is put behind any Elastic Load Balancing load balancers you have configured. Auto Scaling waits for a successful response from CodeDeploy before it continues working on the instance.

Hooks are part of the configuration of your Auto Scaling group. You can use the describe-lifecycle-hooks CLI command to see a list of hooks installed on your Auto Scaling group. When you create or modify a deployment group to contain an Auto Scaling group, CodeDeploy does the following:

  1. Uses the CodeDeploy service role passed in for use with the deployment group to gain permissions to the Auto Scaling group.
  2. Installs a lifecycle hook in the Auto Scaling group for instance launches that sends notifications to a queue owned by CodeDeploy.
  3. Adds a record of the installed hook to the deployment group.
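To inspect the hook that CodeDeploy installed, you can run describe-lifecycle-hooks against your group (the group name here is a placeholder):

aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name MyAutoScalingGroup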

When you remove an Auto Scaling group from a deployment group or delete a deployment group, CodeDeploy does the following:

  1. Uses the service role for the deployment group to gain access to the Auto Scaling group.
  2. Gets the recorded hook from the deployment group and removes it from the Auto Scaling hook.
  3. If the deployment group is being modified (not deleted), deletes the record of the hook from the deployment group.

If there are problems creating hooks, CodeDeploy will try to roll back the changes. If there are problems removing hooks, CodeDeploy will return the unsuccessful hook removals in the API response and continue.

Under the Hood

Here’s the sequence of events that occurs during an Auto Scaling scale-out event:

  1. Auto Scaling asks EC2 for a new instance.
  2. EC2 spins up a new instance with the configuration provided by Auto Scaling.
  3. Auto Scaling sees the new instance, puts it into Pending:Wait status, and sends the notification to CodeDeploy.
  4. CodeDeploy receives the instance launch notification from Auto Scaling.
  5. CodeDeploy validates the configuration of the instance and the deployment group.

    1. If the notification looks correct, but the deployment group no longer contains the Auto Scaling group (or we can determine that the deployment group was previously deleted), CodeDeploy will not deploy anything and will tell Auto Scaling to CONTINUE with the instance launch. Auto Scaling will still respect any other constraints on the instance launch; this step does not force Auto Scaling to continue if something else is wrong.
    2. If CodeDeploy can’t process the message (for example, if the stored service role doesn’t grant appropriate permissions), then CodeDeploy will let the hook time out. The default timeout for CodeDeploy is 10 minutes.
  6. CodeDeploy creates a new deployment for the instance to deploy the target revision of the deployment group. (The target revision is the last successfully deployed revision to the deployment group. It is maintained by CodeDeploy.) You will need to deploy to your deployment group at least once for CodeDeploy to identify the target revision. You can use the get-deployment-group CLI command (see the example after this list) or the CodeDeploy console to get the target revision for a deployment group.

    1. While the deployment is running, it sends heartbeats to Auto Scaling to let it know that the instance is still being worked on.
    2. If something goes wrong with the deployment, CodeDeploy will immediately tell Auto Scaling to ABANDON the instance launch. Auto Scaling terminates the instance and starts the process over again with a new instance.
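Here is a sketch of that get-deployment-group call (the application and deployment group names are placeholders); the target revision appears in the targetRevision field of the response:

aws deploy get-deployment-group --application-name MyApplication --deployment-group-name MyDeploymentGroup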

 

Best Practices

Now that we know how the CodeDeploy and Auto Scaling integration works, let’s go over some best practices when using the two services together:

  • Setting up or modifying Auto Scaling lifecycle hooks – We recommend that you do not try to set up or modify the Auto Scaling hooks manually because configuration errors could break the CodeDeploy integration.
  • Beware of failed deployments – When a deployment to a new instance fails, CodeDeploy will mark the instance for termination. Auto Scaling will terminate the instance, spin up a new instance, and notify CodeDeploy to start a deployment. This is great for transient errors. The downside is that if there is an issue with your target revision (for example, an error in a deployment script), this cycle of launching and terminating instances can go into a loop. We recommend that you closely monitor deployments and set up Auto Scaling notifications to keep track of EC2 instances launched and terminated by Auto Scaling (see the CLI sketch after this list).
  • Troubleshooting Auto Scaling deployments – Troubleshooting deployments involving Auto Scaling groups can be challenging. If you have a failed deployment, we recommend that you disassociate the Auto Scaling group from the deployment group to prevent Auto Scaling from continuously launching and terminating EC2 instances. Next, add a tagged EC2 instance launched with the same base AMI to your deployment group, deploy the target revision to that EC2 instance, and use that to troubleshoot your scripts. When you are confident, associate the deployment group with the Auto Scaling group, deploy the golden revision to your Auto Scaling group, scale up a new EC2 instance (by adjusting Min, Max, and Desired values), and verify that the deployment is successful.
  • Ordering execution of launch scripts – The CodeDeploy agent looks for and executes deployments as soon as it starts. There is no ordering between the deployment execution and launch scripts such as user data, cfn-init, etc. We recommend you install the host agent as part of (and maybe as the last step in) the launch scripts so that you can be sure the deployment won’t be executed until the instance has installed dependencies that are not part of your CodeDeploy deployment. If you prefer baking the agent into the base AMI, we recommend that you keep the agent service in a stopped state and use the launch scripts to start the agent service.
  • Associating multiple deployment groups with the same Auto Scaling group – In general, you should avoid associating multiple deployment groups with the same Auto Scaling group. When Auto Scaling scales up an instance with multiple hooks associated with multiple deployment groups, it sends notifications for all of the hooks at the same time. As a result, multiple CodeDeploy deployments are created. There are several drawbacks to this. These deployments are executed in parallel, so you won’t be able to depend on any ordering between them. If any of the deployments fail, Auto Scaling will immediately terminate the instance. The other deployments that were running will start to fail when the instance shuts down, but they may take an hour to time out.  The host agent processes only one deployment command at a time, so you have two more limitations to consider. First, it’s possible for one of the deployments to be starved for time and fail. This might happen, for example, if the steps in your deployment take more than five minutes to complete. Second, there is no preemption between deployments, so there is no way to enforce step ordering between one deployment and another. We therefore recommend that you minimize the number of deployment groups associated with an Auto Scaling group and consolidate the deployments into a single deployment.
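As mentioned in the failed-deployments item above, here is a sketch of enabling launch and terminate notifications on a group (the group name and topic ARN are placeholders):

aws autoscaling put-notification-configuration --auto-scaling-group-name MyAutoScalingGroup --topic-arn arn:aws:sns:us-east-1:123456789012:asg-events --notification-types autoscaling:EC2_INSTANCE_LAUNCH autoscaling:EC2_INSTANCE_TERMINATE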

We hope this deep dive into the Auto Scaling integration with CodeDeploy gives you the insight you need to use it effectively. Are there other CodeDeploy features or scenarios whose inner details you’d like to understand better? Let us know in the comments.

Faster Auto Scaling in AWS CloudFormation Stacks with Lambda-backed Custom Resources

by Tom Maddox | on | in Best practices, How-to, Web app | | Comments
Many organizations use AWS CloudFormation (CloudFormation) stacks to facilitate blue/green deployments, routinely launching replacement AWS resources with updated packages for code releases, security patching, and change management. To facilitate blue/green deployments with CloudFormation, you typically pass code version identifiers (e.g., a commit hash) to new application stacks as template parameters. Application servers in an Auto Scaling group reference the parameters to fetch and install the correct versions of code.
 
Fetching code every time your application scales can impede bringing new application servers online. Organizations often compensate for reduced scaling agility by setting lower server utilization targets, which has a knock-on effect on cost, or by creating pre-built custom Amazon Machine Images (AMIs) for use in the deployment pipeline. Custom AMIs with pre-installed code can be referenced with new instance launches as part of an Auto Scaling group launch configuration. These application servers are ready faster than if code had to be fetched in the traditional way. However, hosting this type of application deployment pipeline often requires additional servers and adds the overhead of managing the AMIs.
 
In this post, we’ll look at how you can use CloudFormation custom resources with AWS Lambda (Lambda) to create and manage AMIs during stack creation and termination.
 
The following diagram shows how you can use a Lambda function that creates an AMI and returns a success code and the resulting AMI ID.
 
Visualization of AMIManager Custom Resource creation process
 
To orchestrate this process, you bootstrap a reference instance with a user data script, use wait conditions to trigger an AMI capture, and finally create an Auto Scaling group launch configuration that references the newly created AMI. The reference instance that is used to capture the AMI can then be terminated, or it can be repurposed for administrative access or for performing scheduled tasks. Here’s how this looks in a CloudFormation template:
 
"Resources": {
  "WaitHandlePendingAMI" : {
    "Type" : "AWS::CloudFormation::WaitConditionHandle"
  },
  "WaitConditionPendingAMI" : {
    "Type" : "AWS::CloudFormation::WaitCondition",
    "Properties" : {
      "Handle"  : { "Ref" : "WaitHandlePendingAMI" },
      "Timeout" : "7200"
    }
  },

  "WaitHandleAMIComplete" : {
    "Type" : "AWS::CloudFormation::WaitConditionHandle"
  },
  "WaitConditionAMIComplete" : {
    "Type" : "AWS::CloudFormation::WaitCondition",
    "Properties" : {
      "Handle"  : { "Ref" : "WaitHandleAMIComplete" },
      "Timeout" : "7200"
    }
  },

  "AdminServer" : {
    "Type" : "AWS::EC2::Instance",
    "Properties" : {
      ...
      "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
        "#!/bin/bashn",
        "yum update -yn",
        "",
        "echo -e "n### Fetching and Installing Code..."n",
        "export CODE_VERSION="", {"Ref": "CodeVersionIdentifier"}, ""n",
        "# Insert application deployment code here!n",
        "",
        "echo -e "n### Signal for AMI capture"n",
        "history -cn",
        "/opt/aws/bin/cfn-signal -e 0 -i waitingforami '", { "Ref" : "WaitHandlePendingAMI" }, "' n",
        "",
        "echo -e "n### Waiting for AMI to be available"n",
        "aws ec2 wait image-available",
        "    --filters Name=tag:cloudformation:amimanager:stack-name,Values=", { "Ref" : "AWS::StackName" },
        "    --region ", {"Ref": "AWS::Region"}
        "",
        "/opt/aws/bin/cfn-signal -e $0 -i waitedforami '", { "Ref" : "WaitHandleAMIComplete" }, "' n"
        "",
        "# Continue with re-purposing or shutting down instance...n"
      ] ] } }
    }
  },

  "AMI": {
    "Type": "Custom::AMI",
    "DependsOn" : "WaitConditionPendingAMI",
    "Properties": {
      "ServiceToken": "arn:aws:lambda:REGION:ACCOUNTID:function:AMIManager",
      "StackName": { "Ref" : "AWS::StackName" },
      "Region" : { "Ref" : "AWS::Region" },
      "InstanceId" : { "Ref" : "AdminServer" }
    }
  },

  "AutoScalingGroup" : {
    "Type" : "AWS::AutoScaling::AutoScalingGroup",
    "Properties" : {
      ...
      "LaunchConfigurationName" : { "Ref" : "LaunchConfiguration" }
    }
  },

  "LaunchConfiguration": {
    "Type": "AWS::AutoScaling::LaunchConfiguration",
    "DependsOn" : "WaitConditionAMIComplete",
    "Properties": {
      ...
      "ImageId": { "Fn::GetAtt" : [ "AMI", "ImageId" ] }
    }
  }
}

With this approach, you don’t have to run and maintain additional servers for creating custom AMIs, and the AMIs can be deleted when the stack terminates. The following figure shows that as CloudFormation deletes the stacks, it also deletes the AMIs when the Delete signal is sent to the Lambda-backed custom resource.

Visualization of AMIManager Custom Resource deletion process

Let’s look at the Lambda function that facilitates AMI creation and deletion:

/**
* A Lambda function that takes an AWS CloudFormation stack name and instance id
* and returns the AMI ID.
**/

exports.handler = function (event, context) {

    console.log("REQUEST RECEIVED:n", JSON.stringify(event));

    var stackName = event.ResourceProperties.StackName;
    var instanceId = event.ResourceProperties.InstanceId;
    var instanceRegion = event.ResourceProperties.Region;

    var responseStatus = "FAILED";
    var responseData = {};


    var AWS = require("aws-sdk");
    var ec2 = new AWS.EC2({region: instanceRegion});

    if (event.RequestType == "Delete") {
        console.log("REQUEST TYPE:", "delete");
        if (stackName && instanceRegion) {
            var params = {
                Filters: [
                    {
                        Name: 'tag:cloudformation:amimanager:stack-name',
                        Values: [ stackName ]
                    },
                    {
                        Name: 'tag:cloudformation:amimanager:stack-id',
                        Values: [ event.StackId ]
                    },
                    {
                        Name: 'tag:cloudformation:amimanager:logical-id',
                        Values: [ event.LogicalResourceId ]
                    }
                ]
            };
            ec2.describeImages(params, function (err, data) {
                if (err) {
                    responseData = {Error: "DescribeImages call failed"};
                    console.log(responseData.Error + ":\n", err);
                    sendResponse(event, context, responseStatus, responseData);
                } else if (data.Images.length === 0) {
                    sendResponse(event, context, "SUCCESS", {Info: "Nothing to delete"});
                } else {
                    var imageId = data.Images[0].ImageId;
                    console.log("DELETING:", data.Images[0]);
                    ec2.deregisterImage({ImageId: imageId}, function (err, data) {
                        if (err) {
                            responseData = {Error: "DeregisterImage call failed"};
                            console.log(responseData.Error + ":\n", err);
                        } else {
                            responseStatus = "SUCCESS";
                            responseData.ImageId = imageId;
                        }
                        sendResponse(event, context, responseStatus, responseData);
                    });
                }
            });
        } else {
            responseData = {Error: "StackName or InstanceRegion not specified"};
            console.log(responseData.Error);
            sendResponse(event, context, responseStatus, responseData);
        }
        return;
    }

    console.log("REQUEST TYPE:", "create");
    if (stackName && instanceId && instanceRegion) {
        ec2.createImage(
            {
                InstanceId: instanceId,
                Name: stackName + '-' + instanceId,
                NoReboot: true
            }, function (err, data) {
                if (err) {
                    responseData = {Error: "CreateImage call failed"};
                    console.log(responseData.Error + ":\n", err);
                    sendResponse(event, context, responseStatus, responseData);
                } else {
                    var imageId = data.ImageId;
                    console.log('SUCCESS: ', "ImageId - " + imageId);

                    var params = {
                        Resources: [imageId],
                        Tags: [
                            {
                                Key: 'cloudformation:amimanager:stack-name',
                                Value: stackName
                            },
                            {
                                Key: 'cloudformation:amimanager:stack-id',
                                Value: event.StackId
                            },
                            {
                                Key: 'cloudformation:amimanager:logical-id',
                                Value: event.LogicalResourceId
                            }
                        ]
                    };
                    ec2.createTags(params, function (err, data) {
                        if (err) {
                            responseData = {Error: "Create tags call failed"};
                            console.log(responseData.Error + ":\n", err);
                        } else {
                            responseStatus = "SUCCESS";
                            responseData.ImageId = imageId;
                        }
                        sendResponse(event, context, responseStatus, responseData);
                    });
                }
            }
        );
    } else {
        responseData = {Error: "StackName, InstanceId or InstanceRegion not specified"};
        console.log(responseData.Error);
        sendResponse(event, context, responseStatus, responseData);
    }
};

//Sends response to the Amazon S3 pre-signed URL
function sendResponse(event, context, responseStatus, responseData) {
   var responseBody = JSON.stringify({
        Status: responseStatus,
        Reason: "See the details in CloudWatch Log Stream: " + context.logStreamName,
        PhysicalResourceId: context.logStreamName,
        StackId: event.StackId,
        RequestId: event.RequestId,
        LogicalResourceId: event.LogicalResourceId,
        Data: responseData
    });

    console.log("RESPONSE BODY:n", responseBody);

    var https = require("https");
    var url = require("url");

    var parsedUrl = url.parse(event.ResponseURL);
    var options = {
        hostname: parsedUrl.hostname,
        port: 443,
        path: parsedUrl.path,
        method: "PUT",
        headers: {
            "content-type": "",
            "content-length": responseBody.length
        }
    };

    var request = https.request(options, function (response) {
        console.log("STATUS: " + response.statusCode);
        console.log("HEADERS: " + JSON.stringify(response.headers));
        // Tell AWS Lambda that the function execution is done
        context.done();
    });

    request.on("error", function (error) {
        console.log("sendResponse Error:n", error);
        // Tell AWS Lambda that the function execution is done
        context.done();
    });

    // Write data to request body
    request.write(responseBody);
    request.end();
}

This Lambda function calls the Amazon EC2 DescribeImages, DeregisterImage, CreateImage, and CreateTags APIs, and logs data to Amazon CloudWatch Logs (CloudWatch Logs) for monitoring and debugging. To support this, we recommend that you create the following AWS Identity and Access Management (IAM) policy for the function’s IAM execution role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:CreateImage",
        "ec2:DeregisterImage",
        "ec2:DescribeImages",
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "logs:*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

During testing, the Lambda function didn’t exceed the minimum Lambda memory allocation of 128 MB. Typically, create operations took 4.5 seconds, and delete operations took 25 seconds. At Lambda’s current pricing of $0.00001667 per GB-second, each stack’s launch and terminate cycle incurs custom AMI creation costs of just $0.000988. This is much less expensive than managing an independent code release application. Within the AWS Free Tier, using Lambda as described allows you to perform more than 9,000 custom AMI create and delete operations each month for free!