Category: How-to


Extending AWS CodeBuild with Custom Build Environments

by John Pignata | in How-to

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Java, Ruby, Python, Go, Node.js, Android, and Docker. It can be extended through the use of custom build environments to support many more.

Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon EC2 Container Registry (ECR), and reference it in the project configuration. When building your application, CodeBuild will retrieve the Docker image from the container registry specified in the project configuration and use the environment to compile your source code, run your tests, and package your application.
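For example, if a PHP build image is defined in a local Dockerfile, publishing it to ECR follows the standard Docker workflow. The repository name, region, and account ID below are placeholders, and the get-login command reflects the ECR CLI at the time of writing:

# Authenticate Docker to ECR, create a repository, then build, tag, and push the image
$(aws ecr get-login --region us-east-1)
aws ecr create-repository --repository-name php-build-env
docker build -t php-build-env .
docker tag php-build-env:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/php-build-env:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/php-build-env:latest

The pushed image URI is the value you reference as the environment image in the CodeBuild project configuration.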

In this post, we’ll create a build environment for PHP applications and walk through the steps to configure CodeBuild to use this environment.


Run Umbraco CMS with Flexible Load Balancing on AWS

by Ihab Shaaban | in How-to

In version 7.3, Umbraco CMS, the popular open source CMS, introduced the flexible load balancing feature, which makes the setup of load-balanced applications a lot easier. In this blog post, we’ll follow the guidelines in the Umbraco documentation to set up a load-balanced Umbraco application on AWS. We’ll let AWS Elastic Beanstalk manage the deployments, load balancing, auto scaling, and health monitoring for us.

Application Architecture

When you use the flexible load balancing feature, any updates to Umbraco content are stored in a queue in the master database. Each server in the load-balanced environment automatically downloads, processes, and caches the updates from the queue, so no matter which server Elastic Load Balancing selects to handle the request, the user always receives the same content. Umbraco administration doesn’t work correctly if it is accessed from a load-balanced server. For this reason, we’ll set up a non-load-balanced environment to be accessed only by administrators and editors.


Introducing Git Credentials: A Simple Way to Connect to AWS CodeCommit Repositories Using a Static User Name and Password

by Ankur Agarwal | in How-to, New stuff

Today, AWS is introducing a simplified way to authenticate to your AWS CodeCommit repositories over HTTPS.

With Git credentials, you can generate a static user name and password in the Identity and Access Management (IAM) console that you can use to access AWS CodeCommit repositories from the command line, Git CLI, or any Git tool that supports HTTPS authentication.

Because these are static credentials, they can be cached using the password management tools included in your local operating system or stored in a credential management utility. This allows you to get started with AWS CodeCommit within minutes. You don’t need to download the AWS CLI or configure your Git client to connect to your AWS CodeCommit repository on HTTPS. You can also use the user name and password to connect to the AWS CodeCommit repository from third-party tools that support user name and password authentication, including popular Git GUI clients (such as TowerUI) and IDEs (such as Eclipse, IntelliJ, and Visual Studio).

So, why did we add this feature? Until today, users who wanted to use HTTPS connections were required to configure the AWS credential helper to authenticate their AWS CodeCommit operations. Customers told us our credential helper sometimes interfered with password management tools such as Keychain Access and Windows Vault, which caused authentication failures. Also, many Git GUI tools and IDEs require a static user name and password to connect with remote Git repositories and do not support the credential helper.

In this blog post, I’ll walk you through the steps for creating an AWS CodeCommit repository, generating Git credentials, and setting up CLI access to AWS CodeCommit repositories.


Git Credentials Walkthrough
Let’s say Dave wants to create a repository on AWS CodeCommit and set up local access from his computer.

Prerequisite: If Dave had previously configured his local computer to use the credential helper for AWS CodeCommit, he must edit his .gitconfig file to remove the credential helper information from the file. Additionally, if his local computer is running macOS, he might need to clear any cached credentials from Keychain Access.
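A minimal way to do this from the command line, assuming the helper was configured globally, is:

# Remove any AWS CodeCommit credential helper entries from the global Git configuration
git config --global --unset-all credential.helper
git config --global --unset-all credential.UseHttpPath

If the helper was configured per repository instead, run the same commands with --local inside that repository.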

With Git credentials, Dave can now create a repository and start using AWS CodeCommit in four simple steps.

Step 1: Make sure the IAM user has the required permissions
Dave must have the following managed policies attached to his IAM user (or their equivalent permissions) before he can set up access to AWS CodeCommit using Git credentials.

  • AWSCodeCommitPowerUser (or an appropriate CodeCommit managed policy)
  • IAMSelfManageServiceSpecificCredentials
  • IAMReadOnlyAccess

Step 2: Create an AWS CodeCommit repository
Next, Dave signs in to the AWS CodeCommit console and creates a repository, if he doesn’t have one already. He can choose any repository in his AWS account to which he has access. The instructions to create Git credentials are shown in the help panel. (Choose the Connect button if the instructions are not displayed.) When Dave clicks the IAM user link, the IAM console opens and he can generate the credentials.


Step 3: Create HTTPS Git credentials in the IAM console
On the IAM user page, Dave selects the Security Credentials tab and clicks Generate in the HTTPS Git credentials for AWS CodeCommit section. This creates and displays the user name and password. Dave can then download the credentials.


Note: This is the only time the password is available to view or download.

 

Step 4: Clone the repository on the local machine
On the AWS CodeCommit console page for the repository, Dave chooses Clone URL, and then copies the HTTPS link for cloning the repository.


At the command line or terminal, Dave uses the copied link to clone the repository:

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/TestRepo_Dave

When prompted for user name and password, Dave provides the Git credentials (user name and password) he generated in step 3.

Dave is now ready to start pushing his code to the new repository.

Git credentials can be made active or inactive based on your requirements. You can also reset the password if you would like to use the existing user name with a new password.

Next Steps

  1. You can optionally cache your credentials using Git’s credential caching (see the example after this list).
  2. Want to invite a collaborator to work on your AWS CodeCommit repository? Simply create a new IAM user in your AWS account, create Git credentials for that user, and securely share the repository URL and Git credentials with the person you want to collaborate with on the repository.
  3. Connect to any third-party client that supports connecting to remote Git repositories using Git credentials (a stored user name and password). Virtually all tools and IDEs allow you to connect with static credentials. We’ve tested these:
    • Visual Studio (using the default Git plugin)
    • Eclipse IDE (using the default Git plugin)
    • Git Tower UI
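
For item 1 above, a minimal example of Git’s built-in credential cache (the timeout value here is arbitrary) is:

# Cache the Git credentials in memory for 15 minutes
git config --global credential.helper 'cache --timeout=900'

On macOS and Windows, the osxkeychain and wincred helpers bundled with most Git distributions work as well.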

For more information, see the AWS CodeCommit documentation.

We are excited to provide this new way of connecting to AWS CodeCommit. We look forward to hearing from you about the many different tools and IDEs you will be able to use with your AWS CodeCommit repositories.

Integrating Git with AWS CodePipeline

by Jay McConnell and Karthik Thirugnanasambandam | in How-to


AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. The service currently supports GitHub, AWS CodeCommit, and Amazon S3 as source providers. This blog post will cover how to integrate AWS CodePipeline with GitHub Enterprise, Bitbucket, GitLab, or any other Git server that supports the webhooks functionality available in most Git software.

Note: The steps outlined in this guide can also be used with AWS CodeBuild. AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. Once the “Test a commit” step is completed, the output zip file can be used as an S3 input for a build project. Be sure to include a Build Specification file in the root of your repository (a minimal example follows).
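For reference, a minimal build specification can be created in the repository root like this. This is a sketch only; the buildspec version shown assumes a current CodeBuild release, and the build command is a placeholder for your project’s actual tooling:

# Create a minimal buildspec.yml in the root of the repository
cat > buildspec.yml <<'EOF'
version: 0.2
phases:
  build:
    commands:
      - echo "replace with your build commands"
artifacts:
  files:
    - '**/*'
EOF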

Architecture overview

Webhooks notify a remote service by issuing an HTTP POST when a commit is pushed to the repository. AWS Lambda receives the HTTP POST through Amazon API Gateway, and then downloads a copy of the repository. It places a zipped copy of the repository into a versioned S3 bucket. AWS CodePipeline can then use the zip file in S3 as a source; the pipeline will be triggered whenever the Git repository is updated.
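To make the flow concrete, the webhook call your Git provider issues is an ordinary HTTP POST. The following is a hypothetical example of simulating one by hand; the endpoint URL, header, and payload fields are illustrative stand-ins for what your provider actually sends:

# Hypothetical webhook POST to the API Gateway endpoint created by the stack
curl -X POST 'https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/gitpull' \
  -H 'Content-Type: application/json' \
  -H 'X-Gitlab-Token: <webhook-secret>' \
  -d '{"repository": {"git_ssh_url": "git@git.example.com:team/myrepo.git"}}'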

Architectural overview

There are two methods you can use to get the contents of a repository. Each method exposes Lambda functions that have different security and scalability properties.

  • Zip download uses the Git provider’s HTTP API to download an already-zipped copy of the current state of the repository.
    • No need for external libraries.
    • Smaller Lambda function code.
    • Large repo size limit (500 MB).
  • Git pull uses SSH to pull from the repository. The repository contents are then zipped and uploaded to S3.
    • Efficient for repositories with a high volume of commits, because each time the API is triggered, it downloads only the changed files.
    • Suitable for any Git server that supports hooks and SSH; does not depend on personal access tokens or OAuth2.
    • More extensible because it uses a standard Git library.

Build the required AWS resources

For your convenience, there is an AWS CloudFormation template that includes the AWS infrastructure and configuration required to build out this integration. To launch the CloudFormation stack setup wizard, click the link for your desired region. (The following AWS regions support all of the services required for this integration.)

For a list of services available in AWS regions, see the AWS Region Table.

The stack setup wizard will prompt you to enter several parameters. Many of these values must be obtained from your Git service.

OutputBucketName: The name of the bucket where your zipped code will be uploaded. CloudFormation will create a bucket with this name. For this reason, you cannot use the name of an existing S3 bucket.

Note: By default, there is no lifecycle policy on this bucket, so previous versions of your code will be retained indefinitely.  If you want to control the retention period of previous versions, see Lifecycle Configuration for a Bucket with Versioning in the Amazon S3 User Guide.

AllowedIps: Used only with the git pull method described earlier. A comma-separated list of IP CIDR blocks used for Git provider source IP authentication. The Bitbucket Cloud IP ranges are provided as defaults.

ApiSecret: Used only with the git pull method described earlier. This parameter is used for webhook secrets in GitHub Enterprise and GitLab. If a secret is matched, IP range authentication is bypassed. The secret cannot contain commas (,), backslashes (\), or quotation marks (").

GitToken: Used only with the zip download method described earlier. This is a personal access token generated by GitHub Enterprise or GitLab.

OauthKey/OauthSecret: Used only with the zip download method described earlier. This is an OAuth2 key and secret provided by Bitbucket.

At least one parameter for your chosen method and provider must be set.

The process for setting up webhook secrets and API tokens differs between vendors and product versions. Consult your Git provider’s documentation for details.

After you have entered values for these parameters, you can complete the steps in the wizard and start the stack creation. If your desired values change over time, you can use CloudFormation’s update stack functionality to modify your parameters.

After the CloudFormation stack creation is complete, make a note of the GitPullWebHookApi, ZipDownloadWebHookApi, OutputBucketName, and PublicSSHKey outputs. You will need these in the following steps.

Configure the source repository

Depending on the method (git pull or zip download) you would like to use, in your Git provider’s interface, set the destination URL of your webhook to either the GitPullWebHookApi or ZipDownloadWebHookApi. If you create a secret at this point, be sure to update the ApiSecret parameter in your CloudFormation stack.

If you are using the git pull method, the Git repo is downloaded over SSH. For this reason, the PublicSSHKey output must be imported into Git as a deployment key.

Test a commit

After you have set up webhooks on your repository, run the git push command to create a folder structure and zip file in the S3 bucket listed in your CloudFormation output as OutputBucketName. If the zip file is not created, check the Amazon CloudWatch Logs for the Lambda function and your Git provider’s webhook delivery history for troubleshooting help.
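For example, from a local clone of the repository (the bucket name comes from your stack outputs):

# Push an empty commit to fire the webhook, then check that a zip appeared in S3
git commit --allow-empty -m "Trigger pipeline integration test"
git push origin master
aws s3 ls s3://<OutputBucketName>/ --recursive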

Set up AWS CodePipeline

The final step is to create a pipeline in AWS CodePipeline using the zip file as an S3 source. For information about creating a pipeline, see the Simple Pipeline Walkthrough in the AWS CodePipeline User Guide. After your pipeline is set up, commits to your repository will trigger an update to the zip file in S3, which, in turn, triggers a pipeline execution.

We hope this blog post will help you integrate your Git server. Feel free to leave suggestions or approaches on integration in the comments.

Deploying a Spring Boot Application on AWS Using AWS Elastic Beanstalk

by Juan Villa | in How-to

In this blog post, I will show you how to deploy a sample Spring Boot application using AWS Elastic Beanstalk and how to customize the Spring Boot configuration through the use of environment variables.

Spring Boot is often described as a quick and easy way of building production-grade Spring Framework-based applications. To accomplish this, Spring Boot comes prepackaged with auto configuration modules for most libraries typically used with the Spring Framework. This is often referred to as “convention over configuration.”

AWS Elastic Beanstalk offers a similar approach to application deployment. It provides convention over configuration while still giving you the ability to dig under the hood to make adjustments, as needed. This makes Elastic Beanstalk a perfect match for Spring Boot.

The sample application used in this blog post is the gs-accessing-data-rest sample project provided as part of the Accessing JPA Data with REST topic in the Spring Getting Started Guide. The repository is located in GitHub at https://github.com/spring-guides/gs-accessing-data-rest.
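As a small preview of the environment-variable approach, and assuming the application runs on the Elastic Beanstalk Java SE platform (which by default proxies requests to port 5000), one way to align Spring Boot’s port is to set SERVER_PORT as an environment property. The environment name below is a placeholder:

# Set the SERVER_PORT environment property on an existing Elastic Beanstalk environment
aws elasticbeanstalk update-environment \
    --environment-name my-springboot-env \
    --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=SERVER_PORT,Value=5000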


Building a Cross-Region/Cross-Account Code Deployment Solution on AWS

by BK Chaurasiya | in How-to

Many of our customers have expressed a desire to build an end-to-end release automation workflow solution that can deploy changes across multiple regions or different AWS accounts.

In this post, I will show you how you can easily build an automated cross-region code deployment solution using AWS CodePipeline (a continuous delivery service), AWS CodeDeploy (an automated application deployment service), and AWS Lambda (a serverless compute service). In the Taking This Further section, I will also show you how to extend what you’ve learned so that you can create a cross-account deployment solution.

We will use AWS CodeDeploy and AWS CodePipeline to create a multi-pipeline solution running in two regions (Region A and Region B). Any update to the source code in Region A will trigger validation and deployment of the changes in the Region A pipeline. Successful processing of the source code through all of its AWS CodePipeline stages will invoke a Lambda function as a custom action, which copies the source code into an S3 bucket in Region B. After the source code is copied into this bucket, it triggers a similar chain of processing through the AWS CodePipeline stages in Region B. See the following diagram.


This architecture follows best practices for multi-region deployments, sequentially deploying code into one region at a time upon successful testing and validation. It lets you put controls in place to stop the deployment if a problem is identified with a release, which prevents a bad version from being propagated to your subsequent environments.

This post is based on the Simple Pipeline Walkthrough in the AWS CodePipeline User Guide. I have provided an AWS CloudFormation template that automates the steps for you.

 

Prerequisites

You will need an AWS account with administrator permissions. If you don’t have an account, you can sign up for one here. You will also need sample application source code that you can download here.

We will use the CloudFormation template provided in this post to create the following resources:

  • Amazon S3 buckets to host the source code for the sample application. You can use a GitHub repository if you prefer, but you will need to change the CloudFormation template.
  • AWS CodeDeploy to deploy the sample application.
  • AWS CodePipeline with predefined stages for this setup.
  • AWS Lambda as a custom action in AWS CodePipeline. It invokes a function to copy the source code into another region or account. If you are deploying to multiple accounts, cross-account S3 bucket permissions are required.

Note: The resources created by the CloudFormation template may result in charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

Let’s Get Started

Choose your source and destination regions for a continuous delivery of your source code. In this post, we are deploying the source code to two regions: first to Region A (Oregon) and then to Region B (N. Virginia/US Standard). You can choose to extend the setup to three or more regions if your business needs require it.

Step 1: Create Amazon S3 buckets for hosting your application source code in your source and destination regions. Make sure versioning is enabled on these buckets. For more information, see these steps in the AWS CodePipeline User Guide.

For example:

xrdeployment-sourcecode-us-west-2-<AccountID>           (Source code bucket in Region A – Oregon)

xrdeployment-sourcecode-us-east-1-<AccountID>           (Source code bucket in Region B – N. Virginia/US Standard)

Note: The source code bucket in Region B is also the destination bucket in Region A. Versioning on the bucket ensures that AWS CodePipeline is executed automatically when source code is changed.
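As a CLI sketch, the Region A bucket can be created and versioned as follows; repeat with us-east-1 values for the Region B bucket:

# Create the source code bucket in Region A (Oregon) and enable versioning
aws s3api create-bucket \
    --bucket xrdeployment-sourcecode-us-west-2-<AccountID> \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2
aws s3api put-bucket-versioning \
    --bucket xrdeployment-sourcecode-us-west-2-<AccountID> \
    --versioning-configuration Status=Enabled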

Configuration Setup in Source Region A

Be sure you are in the US West (Oregon) region. You can use the drop-down menu to switch regions.

 

Step 2: In the AWS CloudFormation console, choose launch-stack to launch the CloudFormation template. All of the steps in the Simple Pipeline Walkthrough are automated when you use this template.

This template creates a custom Lambda function and AWS CodePipeline and AWS CodeDeploy resources to deploy a sample application. You can customize any of these components according to your requirements later.

On the Specify Details page, do the following:

  1. In Stack name, type a name for the stack (for example, XRDepDemoStackA).
  2. In AppName, you can leave the default, or you can type a name of up to 40 characters. Use only lowercase letters, numbers, periods, and hyphens.
  3. In InstanceCount and InstanceType, leave the default values. You might want to change them when you extend this setup for your use case.
  4. In S3SourceCodeBucket, specify the name of the S3 bucket where source code is placed (xrdeployment-sourcecode-us-west-2-<AccountID>). See step 1.
  5. In S3SourceCodeObject, specify the name of the source code zip file. The sample source code, xrcodedeploy_linux.zip, is provided for you.
  6. Choose a destination region from the drop-down list. For the steps in this blog post, choose us-east-1.
  7. In DestinationBucket, type the name of the bucket in the destination region where the source code will be copied  (xrdeployment-sourcecode-us-east-1-<AccountID>). See step 1.
  8. In KeyPairName, choose the name of an Amazon EC2 key pair. This enables remote login to your instances. You cannot sign in to your instance without generating a key pair and downloading a private key file. For information about generating a key pair, see these steps.
  9. In SSHLocation, type the IP address from which you will access the resources created in this stack. This is a recommended security best practice.
  10. In TagValue, type a value that identifies the deployment stage in the target deployment (for example, Alpha).
  11. Choose Next.

 


(Optional) On the Options page, in Key, type Name. In Value, type a name that will help you easily identify the resources created in this stack. This name will be used to tag all of the resources created by the template. These tags are helpful if you want to use or modify these resources later on. Choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. (It will.) Review the other settings, and then choose Create.


It will take several minutes for CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console.

When the stack has been created, you will see a CREATE_COMPLETE message in the Status column on the Overview tab.


Configuration Setup in Destination Region B

Step 3: We now need to create AWS resources in Region B. Use the drop-down menu to switch to US East (N. Virginia).

In the AWS CloudFormation console, choose launch-stack to launch the CloudFormation template.

On the Specify Details page, do the following:

  1. In Stack name, type a name for the stack (for example, XRDepDemoStackB).
  2. In AppName, you can leave the default, or you can type a name of up to 40 characters. Use only lowercase letters, numbers, periods, and hyphens.
  3. In InstanceCount and InstanceType, leave the default values. You might want to change them when you extend this setup for your use case.
  4. In S3SourceCodeBucket, specify the name of the S3 bucket where the source code is placed (xrdeployment-sourcecode-us-east-1-<AccountID>). This is the same as the DestinationBucket in step 2.
  5. In S3SourceCodeObject, specify the name of the source code zip file. The sample source code (xrcodedeploy_linux.zip) is provided for you.
  6. From the DestinationRegion drop-down list, choose none.
  7. In DestinationBucket, type none. This is our final destination region for this setup.
  8. In the KeyPairName, choose the name of the EC2 key pair.
  9. In SSHLocation, type the IP address from which you will access the resources created in this stack.
  10. In TagValue, type a value that identifies the deployment stage in the target deployment (for example, Beta).

Repeat the steps in the Configuration Setup in Source Region A until the CloudFormation stack has been created. You will see a CREATE_COMPLETE message in the Status column of the console.

So What Just Happened?

We have created an EC2 instance in both regions. These instances are running a sample web application. We have also configured AWS CodeDeploy deployment groups and created a pipeline where source changes propagate to AWS CodeDeploy groups in both regions. AWS CodeDeploy deploys a web page to each of the Amazon EC2 instances in the deployment groups. See the diagram at the beginning of this post.

The pipelines in both regions will start automatically as they are created. You can view your pipelines in the AWS CodePipeline console. You’ll find a link to AWS CodePipeline in the Outputs section of your CloudFormation stack.


Note: Your pipeline will fail during its first automatic execution because we haven’t placed source code into the S3SourceCodeBucket in the source region (Region A).

Step 4: Download the sample source code file, xrcodedeploy_linux.zip, from this link and place it in the source code S3 bucket for Region A. This will kick off AWS CodePipeline.
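For example, from the CLI:

# Upload the sample source code to the Region A source bucket to start the pipeline
aws s3 cp xrcodedeploy_linux.zip \
    s3://xrdeployment-sourcecode-us-west-2-<AccountID>/xrcodedeploy_linux.zip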

Step 5: Watch the progress of your pipeline in the source region (Region A) as it completes the actions configured for each of its stages and invokes a custom Lambda action that copies the source code into Region B. Then watch the progress of your pipeline in Region B (final destination region) after the pipeline succeeds in the source region (Region A). The pipeline in the destination region (Region B) should kick off automatically as soon as AWS CodePipeline in the source region (Region A) completes execution.

When each stage is complete, it turns from blue (in progress) to green (success).


Congratulations! You just created a cross-region deployment solution using AWS CodePipeline, AWS CodeDeploy, and AWS Lambda. You can place a new version of source code in your S3 bucket and watch it progress through AWS CodePipeline in all the regions.

Step 6: Verify your deployment. When Succeeded is displayed for the pipeline status in the final destination region, view the deployed application:

  1. In the status area for the Beta stage in the final destination region, choose Details. The details of the deployment will appear in the AWS CodeDeploy console. You can also pick any other stage in other regions.
  2. In the Deployment Details section, in Instance ID, choose the instance ID of any of the successfully deployed instances.
  3. In the Amazon EC2 console, on the Description tab, in Public DNS, copy the address, and then paste it into the address bar of your web browser. The web page shows the sample web application that was built for you. (You can also check from the command line, as shown after this list.)
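
The same check from the command line (using the Public DNS value copied in step 3) looks like this:

# Fetch the deployed sample page from the instance
curl -s http://<PublicDNS>/ | head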


Taking This Further

  1. Using the CloudFormation template provided to you in this post, you can extend the setup to three regions.
  2. So far we have deployed code in two regions within one AWS account. There may be a case where your environments exist in different AWS accounts. For example, assume a scenario in which:
  • You have your development environment running in Region A in AWS Account A.
  • You have your QA environment running in Region B in AWS Account B.
  • You have a staging or production environment running in Region C in AWS Account C.

 


You will need to configure cross-account permissions on your destination S3 bucket and delegate these permissions to a role that Lambda assumes in the source account. Without these permissions, the Lambda function in AWS CodePipeline will not be able to copy the source code into the destination S3 bucket. (See the lambdaS3CopyRole in the CloudFormation template provided with this post.)

Create the following bucket policy on the destination bucket in Account B:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<Account A ID>:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<destination bucket>/*",
                "arn:aws:s3:::<destination bucket>"
            ]
        }
    ]
}
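
Assuming the policy above is saved locally as crossaccount-bucket-policy.json (a file name chosen here for illustration), it can be applied with a CLI profile that administers Account B:

# Attach the cross-account policy to the destination bucket in Account B
aws s3api put-bucket-policy \
    --bucket <destination bucket> \
    --policy file://crossaccount-bucket-policy.json \
    --profile <account-b-admin-profile>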

 

Repeat this step as you extend the setup to additional accounts.

Create your CloudFormation stacks in Account B and Account C (follow steps 2 and 3 in these accounts, respectively) and your pipeline will execute sequentially.

You can use another code repository solution, such as AWS CodeCommit or GitHub, as your source and target repositories.

Wrapping Up

After you’ve finished exploring your pipeline and its associated resources, you can do the following:

  • Extend the setup. Add more stages to your pipeline in AWS CodePipeline.
  • Delete the stack in AWS CloudFormation, which deletes the pipeline, its resources, and the stack itself.

This is the option to choose if you no longer want to use the pipeline or any of its resources. Cleaning up resources you’re no longer using is important because you don’t want to continue to be charged.

To delete the CloudFormation stack:

  1. Delete the Amazon S3 buckets used as the artifact stores in AWS CodePipeline in the source and destination regions. Although these buckets were created as part of the CloudFormation stacks, Amazon S3 does not allow CloudFormation to delete buckets that contain objects. To delete them, open the Amazon S3 console, select the buckets you created in this setup, and then delete them. For more information, see Delete or Empty a Bucket.
  2. Follow the steps to delete a stack in the AWS CloudFormation User Guide.

 

I would like to thank my colleagues Raul Frias, Asif Khan and Frank Li for their contributions to this post.

From ELK Stack to EKK: Aggregating and Analyzing Apache Logs with Amazon Elasticsearch Service, Amazon Kinesis, and Kibana

by Pubali Sen | in How-to

By Pubali Sen, Shankar Ramachandran

Log aggregation is critical to your operational infrastructure. A reliable, secure, and scalable log aggregation solution makes all the difference during a crunch-time debugging session.

In this post, we explore an alternative to the popular log aggregation solution, the ELK stack (Elasticsearch, Logstash, and Kibana): the EKK stack (Amazon Elasticsearch Service, Amazon Kinesis, and Kibana). The EKK solution eliminates the undifferentiated heavy lifting of deploying, managing, and scaling your log aggregation solution. With the EKK stack, you can focus on analyzing logs and debugging your application, instead of managing and scaling the system that aggregates the logs.

In this blog post, we describe how to use an EKK stack to monitor Apache logs. Let’s look at the components of the EKK solution.

Amazon Elasticsearch Service is a popular search and analytics engine that provides real-time application monitoring and log and clickstream analytics. For this post, you will store and index Apache logs in Amazon ES. As a managed service, Amazon ES is easy to deploy, operate, and scale in the AWS Cloud. Using a managed service also eliminates administrative overhead, like patch management, failure detection, node replacement, backing up, and monitoring. Because Amazon ES includes built-in integration with Kibana, it eliminates installing and configuring that platform. This simplifies your process further. For more information about Amazon ES, see the Amazon Elasticsearch Service detail page.

Amazon Kinesis Agent is an easy-to-install standalone Java software application that collects and sends data. The agent continuously monitors the Apache log file and ships new data to the delivery stream. This agent is also responsible for file rotation, checkpointing, retrying upon failures, and delivering the log data reliably and in a timely manner. For more information, see Writing to Amazon Kinesis Firehose Using Amazon Kinesis Agent or Amazon Kinesis Agent in GitHub.

Amazon Kinesis Firehose provides the easiest way to load streaming data into AWS. In this post, Firehose helps you capture and automatically load the streaming log data to Amazon ES and back it up in Amazon Simple Storage Service (Amazon S3). For more information, see the Amazon Kinesis Firehose detail page.

You’ll provision an EKK stack by using an AWS CloudFormation template. The template provisions an Apache web server and sends the Apache access logs to an Amazon ES cluster using Amazon Kinesis Agent and Firehose. You’ll back up the logs to an S3 bucket. To see the logs, you’ll leverage the Amazon ES Kibana endpoint.

By using the template, you can quickly complete the following tasks:

  • Provision an Amazon ES cluster.
  • Provision an Amazon Elastic Compute Cloud (Amazon EC2) instance.
  • Install Apache HTTP Server version 2.4.23.
  • Install the Amazon Kinesis Agent on the web server.
  • Provision an Elastic Load Balancing load balancer.
  • Create the Amazon ES index and the associated log mappings.
  • Create an Amazon Kinesis Firehose delivery stream.
  • Create all AWS Identity and Access Management (IAM) roles and policies. For example, the Firehose delivery stream backs up the Apache logs to an S3 bucket. This requires that the Firehose delivery stream be associated with a role that gives it permission to upload the logs to the correct S3 bucket.
  • Configure Amazon CloudWatch Logs log streams and log groups for the Firehose delivery stream. This helps you to troubleshoot when the log events don’t reach their destination.

EKK Stack Architecture
The following architecture diagram shows how an EKK stack works.


Prerequisites
To build the EKK stack, you must have the following:

  • An Amazon EC2 key pair in the US West (Oregon) Region. If you don’t have one, create one.
  • An S3 bucket in the US West (Oregon) Region. If you don’t have one, create one.
  • A default VPC in the US West (Oregon) Region. If you have deleted the default VPC, request one.
  • Administrator-level permissions in IAM to enable Amazon ES and Amazon S3 to receive the log data from the EC2 instance through Firehose.

Getting Started
Begin by launching the AWS CloudFormation template to create the stack.

1.     In the AWS CloudFormation console, choose launch-stack to launch the AWS CloudFormation template. Make sure that you are in the US West (Oregon) region.

Note: If you want to download the template to your computer and then upload it to AWS CloudFormation, you can do so from this Amazon S3 bucket. Save the template to a location on your computer that’s easy to remember.

2.     Choose Next.

3.     On the Specify Details page, provide the following:


a)    Stack Name: A name for your stack.

b)    InstanceType: Select the instance family for the EC2 instance hosting the web server.

c)     KeyName: Select the Amazon EC2 key pair in the US West (Oregon) Region.

d)    SSHLocation: The IP address range that can be used to connect to the EC2 instance by using SSH. Accept the default, 0.0.0.0/0.

e)    WebserverPort: The TCP/IP port of the web server. Accept the default, 80.

4.     Choose Next.

5.     On the Options page, optionally specify tags for your AWS CloudFormation template, and then choose Next.


6.     On the Review page, review your template details. Select the Acknowledgement checkbox, and then choose Create to create the stack.

It takes about 10-15 minutes to create the entire stack.

Configure the Amazon Kinesis Agent
After AWS CloudFormation has created the stack, configure the Amazon Kinesis Agent.

1.     In the AWS CloudFormation console, choose the Resources tab to find the Firehose delivery stream name. You need this to configure the agent. Record this value because you will need it in step 4.


2.     On the Outputs tab, find and record the public IP address of the web server. You need it to connect to the web server using SSH to configure the agent. For instructions on how to connect to an EC2 instance using SSH, see Connecting to Your Linux Instance Using SSH.


3. On the web server’s command line, run the following command:

sudo vi /etc/aws-kinesis/agent.json

This command opens the configuration file, agent.json, as follows.

{ "cloudwatch.emitMetrics": true, "firehose.endpoint": "firehose.us-west-2.amazonaws.com", "awsAccessKeyId": "", "awsSecretAccessKey": "", "flows": [ { "filePattern": "/var/log/httpd/access_log", "deliveryStream": "", "dataProcessingOptions": [ { "optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG" } ] } ] } 

4.     For the deliveryStream key, type the value of the KinesisFirehoseDeliveryName that you retrieved from the stack’s Resources tab. After you type the value, save and close the agent.json file.

5.     Run the following command on the CLI:

sudo service aws-kinesis-agent restart

6.     On the AWS CloudFormation console, choose the Resources tab and note the name of the Amazon ES cluster corresponding to the logical ID ESDomain.

7.     Go to AWS Management Console, and choose Amazon Elasticsearch Service. Under My Domains, you can see the Amazon ES domain that the AWS CloudFormation template created.

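As a quick sanity check after these steps, you can tail the agent’s log on the web server. The path below is the agent’s default log location, so adjust it if your installation differs:

# Watch the Kinesis Agent log to confirm records are being parsed and sent to Firehose
tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log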

Configure Kibana and View Your Apache Logs
Amazon ES provides a default installation of Kibana with every Amazon ES domain. You can find the Kibana endpoint on your domain dashboard in the Amazon ES console.

1.     In the Amazon ES console, choose the Kibana endpoint.

2.     In Kibana, for Index name or pattern, type logmonitor. logmonitor is the name of the AWS ES index that you created for the web server access logs. The health checks from Amazon Elastic Load Balancing generate access logs on the web server, which flow through the EKK pipeline to Kibana for discovery and visualization.

3.     In Time-field name, select datetime.


4.     On the Kibana console, choose the Discover tab to see the Apache logs.


Use Kibana to visualize the log data by creating bar charts, line and scatter plots, histograms, pie charts, etc.


Pie chart of IP addresses accessing the web server in the last 30 days


Bar chart of IP addresses accessing the web server in the last 5 minutes

You can graph information about HTTP responses, bytes, or IP addresses to gain meaningful insights from the Apache logs. Kibana also lets you build dashboards by combining these graphs.

Monitor Your Log Aggregator

To monitor the Firehose delivery stream, navigate to the Firehose console. Choose the stream, and then choose the Monitoring tab to see the Amazon CloudWatch metrics for the stream.


When log delivery fails, the Amazon S3 and Amazon ES logs help you troubleshoot. For example, the following screenshot shows logs when delivery to an Amazon ES destination fails because the date mapping on the index was not in line with the ingest log.


Conclusion
In this post, we showed how to ship Apache logs to Kibana by using Amazon Kinesis Agent, Amazon ES, and Firehose. It’s worth pointing out that Firehose automatically scales up or down based on the rate at which your application generates logs. To learn more about scaling Amazon ES clusters, see the Amazon Elasticsearch Service Developer Guide.

Managed services like Amazon ES and Amazon Kinesis Firehose simplify provisioning and managing a log aggregation system. The ability to run SQL queries against your streaming log data using Amazon Kinesis Analytics further strengthens the case for using an EKK stack. The AWS CloudFormation template used in this post is available to extend and build your own EKK stack.

 

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

by Balaji Iyer | in Best practices, How-to, Partners, Web app

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!


Secure AWS CodeCommit with Multi-Factor Authentication

by Steffen Grunwald | in How-to

This blog post shows you how to set up AWS CodeCommit if you want to enforce multi-factor authentication (MFA) for your repository users. One of the most common reasons for using MFA for your AWS CodeCommit repository is to secure sensitive data or prevent accidental pushes to the repository that could trigger a sensitive change process.

By using the MFA capabilities of AWS Identity and Access Management (IAM) you can add an extra layer of protection to sensitive code in your AWS CodeCommit repository. AWS Security Token Service (STS) and IAM allow you to stretch the period during which the authentication is valid from 15 minutes to 36 hours, depending on your needs. AWS CLI profile configuration and the AWS CodeCommit credential helper transparently use the MFA information as soon as it has been issued, so you can work with MFA with minimal impact to your daily development process.

Solution Overview

AWS CodeCommit currently provides two communication protocols and authentication methods:

  • SSH authentication uses keys configured in IAM user profiles.
  • HTTPS authentication uses IAM keys or temporary security credentials retrieved when assuming an IAM role.

It is possible to use SSH in a manner that incorporates multiple factors. An SSH private key can be considered something you have and its passphrase something you know. However, the passphrase cannot technically be enforced on the client side. Neither is it issued on an independent device.

That is why the solution described in this post uses the assumption of IAM roles to enforce MFA. STS can validate MFA information from devices that issue time-based one-time passwords (TOTPs).

A typical scenario involves the use of multiple AWS accounts (for example, Dev and Prod). One account is used for authentication and another contains the resource to be accessed (in this case, your AWS CodeCommit repository). You could also apply this solution to a single account.

This is what the workflow looks like:


  1. A user authenticates with IAM keys and a token from her MFA device and retrieves temporary credentials from STS. Temporary credentials consist of an access key ID, a secret access key, and a session token. The expiration of these keys can be configured with a duration of up to 36 hours.
  2. To access the resources in a different account, role delegation comes into play. The local Git repository is configured to use the temporary credentials to assume an IAM role that has access to the AWS CodeCommit repository. Here again, STS provides temporary credentials, but they are valid for a maximum of one hour.
  3. When Git is calling AWS CodeCommit, the credentials retrieved in step 2 are used to authenticate the requests. When the credentials expire, they are reissued with the credentials from step 1.

You could use permanent IAM keys to directly assume the role in step 2, without the temporary credentials from step 1. However, the two-step process reduces the frequency with which a developer needs to enter an MFA token by increasing the lifetime of the temporary credentials.

Account Setup Tasks

The tasks to set up the MFA scenario are as follows:

  1. Create a repository in AWS CodeCommit.
  2. Create a role that is used to access the repository.
  3. Create a group allowed to assume the role.
  4. Create a user with an MFA device who belongs to the group.

The following steps assume that you have set up the AWS CLI and configured it with the keys of users who have the required permissions to IAM and AWS CodeCommit in two accounts. Following the workflow, we will create the following admin users and AWS CLI profiles:

  • admin-account-a needs permissions to administer IAM (built-in policy IAMFullAccess)
  • admin-account-b needs permissions to administer IAM and AWS CodeCommit (built-in policies IAMFullAccess and AWSCodeCommitFullAccess)

At the time of this writing, AWS CodeCommit is available in us-east-1 only, so use that region for the region profile attribute for account B.

The following scripts work on Linux and macOS. For readability, line breaks are separated by backslashes. If you want to run these scripts on Microsoft Windows, you will need to adapt them or run them on an emulation layer (for example, Cygwin).

Replace placeholders like <XXXX> before issuing the commands.

Task 1: Create a repository in AWS CodeCommit

Create an AWS CodeCommit repository in Account B:

aws codecommit create-repository \
   --repository-name myRepository \
   --repository-description "My Repository" \
   --profile admin-account-b

Task 2: Create a role that is used to access the repository

  1. Create an IAM policy that grants access to the repository in Account B. Name it MyRepositoryContributorPolicy.

    Here is the MyRepositoryContributorPolicy.json policy document:

    {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:CreateBranch",
                "codecommit:GetBlob",
                "codecommit:GetBranch",
                "codecommit:GetObjectIdentifier",
                "codecommit:GetRepository",
                "codecommit:GetTree",
                "codecommit:GitPull",
                "codecommit:GitPush",
                "codecommit:ListBranches"
            ],
            "Resource": [
                "arn:aws:codecommit:<ACCOUNT_B_REGION>:<ACCOUNT_B_ID>:myRepository"
            ]
        }
    ]
    }

    Create the policy:

    aws iam create-policy \
        --policy-name MyRepositoryContributorPolicy \
        --policy-document file://./MyRepositoryContributorPolicy.json \
        --profile admin-account-b

  2. Create a MyRepositoryContributorRole role that has the MyRepositoryContributorPolicy attached in Account B.

    Here is the MyRepositoryContributorTrustPolicy.json trust policy document:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<ACCOUNT_A_ID>:root"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }

    Create the role:

    aws iam create-role \
    --role-name MyRepositoryContributorRole \
    --assume-role-policy-document file://./MyRepositoryContributorTrustPolicy.json \
    --profile admin-account-b

    Attach the MyRepositoryContributorPolicy:

    aws iam attach-role-policy \
    --role-name MyRepositoryContributorRole \
    --policy-arn arn:aws:iam::<ACCOUNT_B_ID>:policy/MyRepositoryContributorPolicy \
    --profile admin-account-b

Task 3: Create a group allowed to assume the role

  1. Create a MyRepositoryContributorAssumePolicy policy for users who are allowed to assume the role in Account A.

    Here is the MyRepositoryContributorAssumePolicy.json policy document:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "sts:AssumeRole"
                ],
                "Resource": [
                    "arn:aws:iam::<ACCOUNT_B_ID>:role/MyRepositoryContributorRole"
                ],
                "Condition": {
                    "NumericLessThan": {
                        "aws:MultiFactorAuthAge": "86400"
                    }
                }
            }
        ]
    }

    The aws:MultiFactorAuthAge attribute is used to specify the validity, in seconds, after the temporary credentials with MFA information have been issued. After this period, the user can’t issue new temporary credentials by assuming a role. However, the old credentials retrieved by role assumption may still be valid for one hour to make calls to the repository.

    For this example, we set the value to 24 hours (86400 seconds).

    Create the policy:

    aws iam create-policy \
        --policy-name MyRepositoryContributorAssumePolicy \
        --policy-document file://./MyRepositoryContributorAssumePolicy.json \
        --profile admin-account-a

  2. Create the group for all users who need access to the repository:

    aws iam create-group \
        --group-name MyRepositoryContributorGroup \
        --profile admin-account-a

  3. Attach the policy to the group:

    aws iam attach-group-policy \
        --group-name MyRepositoryContributorGroup \
        --policy-arn arn:aws:iam::<ACCOUNT_A_ID>:policy/MyRepositoryContributorAssumePolicy \
        --profile admin-account-a

Task 4: Create a user with an MFA device who belongs to the group

  1. Create an IAM user in Account A:

    aws iam create-user \
        --user-name MyRepositoryUser \
        --profile admin-account-a

  2. Add the user to the IAM group:

    aws iam add-user-to-group \
        --group-name MyRepositoryContributorGroup \
        --user-name MyRepositoryUser \
        --profile admin-account-a

  3. Create a virtual MFA device for the user. You can use the AWS CLI, but in this case it is easier to create one in the AWS Management Console.

  4. Create IAM access keys for the user. Make note of the output of AccessKeyId and SecretAccessKey. They will be referenced as <ACCESS_KEY_ID> and <SECRET_ACCESS> later in this post.

    aws iam create-access-key \
       --user-name MyRepositoryUser \
       --profile admin-account-a

You’ve now completed the account setup. To create more users, repeat task 4. Now we can continue to the local setup of the contributor’s environment.

Initialize the Contributor’s Environment

Each contributor must perform the setup in order to have access to the repository.

Setup Tasks:

  1. Create a profile for the IAM user who fetches temporary credentials.
  2. Create a profile that is used to access the repository.
  3. Populate the role assuming profile with temporary credentials.

Task 1: Create a profile for the IAM user who fetches temporary credentials

By default, the AWS CLI maintains two files in ~/.aws/ that contain per-profile settings. One is credentials, which stores sensitive information for means of authentication (for example, the secret access keys). The other is config, which defines all other settings, such as the region or the MFA device to use.

Add the IAM keys for MyRepositoryUser that you created in Account Setup task 4 to ~/.aws/credentials:

[FetchMfaCredentials]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS>

Add the following lines to ~/.aws/config:

[profile FetchMfaCredentials]
mfa_serial=arn:aws:iam::<ACCOUNT_A_ID>:mfa/MyRepositoryUser
get_session_token_duration_seconds=86400

get_session_token_duration_seconds is a custom attribute that is used later by a script. It must not exceed the value of aws:MultiFactorAuthAge that we used in the assume policy.

Task 2: Create a profile that is used to access the repository

Add the following lines to ~/.aws/config:

[profile MyRepositoryContributor]
region=<ACCOUNT_B_REGION>
role_arn=arn:aws:iam::<ACCOUNT_B_ID>:role/MyRepositoryContributorRole
source_profile=MyRepositoryAssumer

When the MyRepositoryContributor profile is used, the MyRepositoryContributorRole is assumed with credentials of the MyRepositoryAssumer profile. You may have noticed that we have not put MyRepositoryAssumer in the credentials file yet. The following task shows how the file is populated.
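Once Task 3 below has populated the MyRepositoryAssumer credentials, one quick way to confirm that the profile chain and role assumption work is to describe the repository with the contributor profile (an optional check, not part of the original setup):

# Should return the repository metadata if role assumption succeeds
aws codecommit get-repository \
    --repository-name myRepository \
    --profile MyRepositoryContributor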

Task 3: Populate the role assuming profile with temporary credentials

  1. Create the populateSessionTokenProfile.sh script in your home directory or any other location:

    #!/bin/bash
    
    # Parameter 1 is the name of the profile that is populated
    # with keys and tokens.
    KEY_PROFILE="$1"
    
    # Parameter 2 is the name of the profile that calls the
    # session token service.
    # It must contain IAM keys and mfa_serial configuration
    
    # The STS response contains an expiration date/ time.
    # This is checked to only set the keys if they are expired.
    EXPIRATION=$(aws configure get expiration --profile "$1")
    
    RELOAD="true"
    if [ -n "$EXPIRATION" ];
    then
            # get current time and expiry time in seconds since 1-1-1970
            NOW=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
    
            # if tokens are set and have not expired yet
            if [[ "$EXPIRATION" > "$NOW" ]];
            then
                    echo "Will not fetch new credentials. They expire at (UTC) $EXPIRATION"
                    RELOAD="false"
            fi
    fi
    
    if [ "$RELOAD" = "true" ];
    then
            echo "Need to fetch new STS credentials"
            MFA_SERIAL=$(aws configure get mfa_serial --profile "$2")
            DURATION=$(aws configure get get_session_token_duration_seconds --profile "$2")
            read -p "Token for MFA Device ($MFA_SERIAL): " TOKEN_CODE
            read -r AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN EXPIRATION AWS_ACCESS_KEY_ID < <(aws sts get-session-token \
                    --profile "$2" \
                    --output text \
                    --query 'Credentials.*' \
                    --serial-number $MFA_SERIAL \
                    --duration-seconds $DURATION \
                    --token-code $TOKEN_CODE)
    
            aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile "$KEY_PROFILE"
            aws configure set aws_session_token "$AWS_SESSION_TOKEN" --profile "$KEY_PROFILE"
            aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile "$KEY_PROFILE"
            aws configure set expiration "$EXPIRATION" --profile "$1"
    fi

    This script takes the credentials from the profile from the second parameter to request temporary credentials. These will be written to the profile specified in the first parameter.

  2. Run the script once. You might need to set execution permission (for example, chmod 755) before you run it.

    ~/populateSessionTokenProfile.sh MyRepositoryAssumer FetchMfaCredentials
    Need to fetch new STS credentials
    Token for MFA Device (arn:aws:iam::<ACCOUNT_A_ID>:mfa/MyRepositoryUser): XXXXXX

    This populates information retrieved from STS to the ~/.aws/config and ~/.aws/credentials file.

  3. Clone the repository, configure Git to use temporary credentials, and create an alias to renew MFA credentials:

    git clone --config 'credential.helper=!aws codecommit \
        --profile MyRepositoryContributor \
        credential-helper $@' \
        --config 'credential.UseHttpPath=true' \
        --config 'alias.mfa=!~/populateSessionTokenProfile.sh \
        MyRepositoryAssumer FetchMfaCredentials' \
        $(aws codecommit get-repository \
        --repository-name myRepository \
        --profile MyRepositoryContributor \
        --output text \
        --query repositoryMetadata.cloneUrlHttp)

    This clones the repository from AWS CodeCommit. You can issue subsequent Git calls as long as the temporary credentials retrieved in step 2 have not expired. As soon as they have expired, the credential helper returns an error, followed by prompts for a user name and password:

    A client error (ExpiredToken) occurred when calling the AssumeRole operation:
    The security token included in the request is expired

    In this case, you should cancel the Git command (Ctrl-C) and trigger the renewal of the token by calling the alias in your repository:

    git mfa
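
    If you are not sure whether the cached credentials have already expired, you can read back the expiration attribute that the script stores for the MyRepositoryAssumer profile:

    aws configure get expiration --profile MyRepositoryAssumer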

We hope you find the steps for enforcing MFA for your repository users helpful. Feel free to leave your feedback in the comments.

Building a Microsoft BackOffice Server Solution on AWS with AWS CloudFormation

by Bill Jacobi | on | in Best practices, How-to, New stuff | | Comments

Last month, AWS released the AWS Enterprise Accelerator: Microsoft Servers on the AWS Cloud along with a deployment guide and CloudFormation template. This blog post will explain how to deploy complex Windows workloads and how AWS CloudFormation solves the problems related to server dependencies.

This AWS Enterprise Accelerator solution deploys the four most requested Microsoft servers (SQL Server, Exchange Server, Lync Server, and SharePoint Server) in a highly available, multi-AZ architecture on AWS. It includes Active Directory Domain Services as the foundation. By following the steps in the solution, you can take advantage of the email, collaboration, communications, and directory features provided by these servers on the AWS IaaS platform.

There are a number of dependencies between the servers in this solution, including:

  • Active Directory
  • Internet access
  • Dependencies within server clusters, such as the need to create the first server instance before adding more servers to the cluster
  • Dependencies on AWS infrastructure, such as a shared VPC, NAT gateway, Internet gateway, DNS, routes, and so on

The infrastructure and servers are built in three logical layers. The Master template orchestrates the stack builds with one stack per Microsoft server and manages inter-stack dependencies. Each of the CloudFormation stacks uses PowerShell to stand up the Microsoft servers at the OS level. Before it configures the OS, CloudFormation configures the AWS infrastructure required by each Windows server. Together, CloudFormation and PowerShell create a quick, repeatable deployment pattern for the servers. The solution supports 10,000 users. Its modularity at both the infrastructure and application level enables larger user counts.

MSServers Solution - 6 CloudFormation Stacks

Managing Stack Dependencies

To explain how we enabled the dependencies between the stacks: SQLStack depends on ADStack because SQL Server requires Active Directory; similarly, SharePointStack depends on SQLStack, both as required by Microsoft. LyncStack depends on ExchangeStack because each server must extend the AD schema on its own, and those schema extensions should not run at the same time. In Master, these server dependencies are coded in CloudFormation as follows:

"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",

             "Properties": …
       }
and
"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",
             "Properties": …
       },
       "SharePointStack": {
            "Type": "AWS::CloudFormation::Stack",
            "DependsOn": "SQLStack",
            "Properties": …
       }

The “DependsOn” statements in the stack definitions force the order of stack execution to match the diagram: lower layers must complete successfully before the upper layers are executed. If you do not use “DependsOn”, CloudFormation executes your stacks in parallel. An example of parallel execution is what happens after ADStack returns SUCCESS: the two higher-level stacks, SQLStack and ExchangeStack, are executed in parallel at the next level (layer 2), and SharePointStack and LyncStack are executed in parallel at layer 3. The arrows in the diagram indicate stack dependencies.
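
For completeness, the Exchange and Lync branch of the diagram follows the same pattern. The following sketch uses the same elided style as the snippets above; the logical name LyncStack is assumed here to mirror the naming of the other stacks:

"ExchangeStack": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": "ADStack",
      "Properties": …
},
"LyncStack": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": "ExchangeStack",
      "Properties": …
}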

Passing Parameters Between Stacks

If you're wondering how to pass infrastructure parameters between the stack layers, let's use an example in which we want to pass the same VPCCIDR to all of the stacks in the solution. VPCCIDR is defined as a parameter in Master as follows:

"VPCCIDR": {
            "AllowedPattern": "[a-zA-Z0-9]+\..+",
            "Default": "10.0.0.0/16",
            "Description": "CIDR Block for the VPC",
            "Type": "String"
           }

Because VPCCIDR is defined in Master, where the user supplies its value, that value can then be passed to ADStack through an identically named and typed parameter declared in the stack being called:

"VPCCIDR": {
            "Description": "CIDR Block for the VPC",
            "Type": "String",
            "Default": "10.0.0.0/16",
            "AllowedPattern": "[a-zA-Z0-9]+\..+"
           }
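
For the value to reach ADStack, the nested-stack resource in Master must also forward it in the Parameters property of its AWS::CloudFormation::Stack resource. The following is a minimal sketch; the TemplateURL is a hypothetical placeholder, not the solution's actual template location:

"ADStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/<TEMPLATE_BUCKET>/templates/ad.template",
            "Parameters": {
                  "VPCCIDR": { "Ref": "VPCCIDR" }
            }
      }
}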

After Master defines VPCCIDR, ADStack can use “Ref”: “VPCCIDR” in any resource that needs the VPC CIDR range, such as DomainController1SG, the security group of the first domain controller. Instead of passing commonly named parameters between stacks, another option is to pass outputs from one stack as inputs to the next. For example, if you want to pass VPCID between two stacks, you could accomplish this as follows. Create an output like VPCID in the first stack:

"Outputs" : {
               "VPCID" : {
                          "Value" : { "Ref" : "VPC" },
                          "Description" : "VPC ID"
               }, …
}

In the second stack, create a parameter with the same name and type:

"Parameters" : {
               "VPCID" : {
                          "Type" : "AWS::EC2::VPC::Id"
               }, …
}

When the first template calls the second template, VPCID is passed as an output of the first template to become an input (parameter) to the second.
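
In the calling template, that wiring is typically expressed with Fn::GetAtt on the first stack's Outputs. The following is a minimal sketch; the logical names FirstStack and SecondStack and the TemplateURL are illustrative placeholders:

"SecondStack": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": "FirstStack",
      "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/<TEMPLATE_BUCKET>/templates/second.template",
            "Parameters": {
                  "VPCID": { "Fn::GetAtt": [ "FirstStack", "Outputs.VPCID" ] }
            }
      }
}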

Managing Dependencies Between Resources Inside a Stack

All of the dependencies so far have been between stacks. Another type of dependency is one between resources within a stack. In the Microsoft servers case, an example of an intra-stack dependency is the need to create the first domain controller, DC1, before creating the second domain controller, DC2.

DC1, like many cluster servers, must be fully created first so that it can replicate common state (domain objects) to DC2.  In the case of the Microsoft servers in this solution, all of the servers require that a single server (such as DC1 or Exch1) must be fully created to define the cluster or farm configuration used on subsequent servers.

Here’s another intra-stack dependency example: the Microsoft software must be fully configured on the Amazon EC2 instances before those instances can be used. So there is a dependency on software completion within the stack, after successful creation of the instance, before the rest of stack execution (such as deploying subsequent servers) can continue. Intra-stack dependencies like “software is fully installed” are managed through the use of wait conditions. Wait conditions are CloudFormation resources, just like EC2 instances, and they allow the “DependsOn” attribute mentioned earlier to manage dependencies inside a stack. For example, to pause the creation of DC2 until DC1 is complete, we configured the following “DependsOn” attribute using a wait condition. See (1) in the following snippet:

"DomainController1": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "NATGateway1",
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "configSets": {
                        "config": [
                            "setup",
                            "rename",
                            "installADDS",
                            "configureSites",
                            "installADCS",
                            "finalize"
                        ]
                    }, …
             },
             "Properties" : …
},
"DomainController2": {
             "Type": "AWS::EC2::Instance",
[1]          "DependsOn": "DomainController1WaitCondition",
             "Metadata": …,
             "Properties" : …
},

The WaitCondition (2) relies on a CloudFormation resource called a WaitConditionHandle (3), which receives a SUCCESS or FAILURE signal from the creation of the first domain controller:

"DomainController1WaitCondition": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "DomainController1",
            "Properties": {
                "Handle": {
[2]                    "Ref": "DomainController1WaitHandle"
                },
                "Timeout": "3600"
            }
     },
     "DomainController1WaitHandle": {
[3]            "Type": "AWS::CloudFormation::WaitConditionHandle"
     }

SUCCESS is signaled in (4) by cfn-signal.exe -e 0 (that is, --exit-code 0) during the “finalize” step of DC1, which enables CloudFormation to create DC2 as an EC2 resource via the wait condition.

                "finalize": {
                       "commands": {
                           "a-signal-success": {
                               "command": {
                                   "Fn::Join": [
                                       "",
                                       [
[4]                                         "cfn-signal.exe -e 0 \"",
                                            {
                                                "Ref": "DomainController1WaitHandle"
                                            },
                                            "\""
                                       ]
                                   ]
                               }
                           }
                       }
                   }
               }

If the timeout had been reached in step (2), this would have automatically signaled a FAILURE and stopped stack execution of ADStack and the Master stack.
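
The instance can also signal failure explicitly instead of waiting for the timeout, for example when one of its configuration steps fails. A minimal sketch, where the handle URL is a placeholder for the resolved WaitConditionHandle:

cfn-signal.exe -e 1 "<WAIT_CONDITION_HANDLE_URL>"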

As we have seen in this blog post, you can create nested stacks with nested dependencies, and you can pass values between stacks either as identically named parameters or as outputs from one stack that become inputs to the next. Inside a stack, you can make resources dependent on other resources through the use of wait conditions and the cfn-signal infrastructure. The AWS Enterprise Accelerator solution uses both techniques to deploy multiple Microsoft servers in a single VPC for a Microsoft BackOffice solution on AWS.

In a future blog post, we will illustrate how PowerShell can be used to bootstrap and configure Windows instances with downloaded cmdlets, all integrated into CloudFormation stacks.