Category: Best practices


Ensuring Security of Your Code in a Cross-Region/Cross-Account Deployment Solution

There are multiple ways you can protect your data while it is in transit and at rest. You can protect your data in transit by using SSL or by using client-side encryption. AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create, control, rotate, and use your encryption keys. AWS KMS allows you to create custom keys. You can then share these keys with AWS Identity and Access Management (IAM) users and roles in your AWS account or in an AWS account owned by someone else.

In my previous post, I described a solution for building a cross-region/cross-account code deployment solution on AWS. In this post, I describe a few options for protecting your source code as it travels between regions and between AWS accounts.

To recap, you deployed the infrastructure as shown in the following diagram.

  • You had your development environment running in Region A in AWS Account A.
  • You had your QA environment running in Region B in AWS Account B.
  • You had a staging or production environment running in Region C in AWS Account C.

An update to the source code in Region A triggered validation and deployment of the source code changes in the pipeline in Region A. When the source code was successfully processed through all of its AWS CodePipeline stages, a Lambda function was invoked, which copied the source code into an S3 bucket in Region B. After the source code was copied into this bucket, it triggered a similar chain of processes through the different AWS CodePipeline stages in Region B.

 

Ensuring Security for Your Source Code

You might choose to encrypt the source code .zip file when it is uploaded to the S3 bucket in Account A, Region A, by using Amazon S3 server-side encryption:

1. Using the Amazon S3 service master key

Refer back to the Lambda function that the CloudFormation stack created for you in the previous post. In the AWS Lambda console, the function name should be <stackname>-CopytoDest-XXXXXXX.

 

 

Use the following parameter for the copyObject function: ServerSideEncryption: 'AES256'

Note: The set-up already uses this option by default.

The copyObject function decrypts the .zip file and copies the object into Account B.
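
As a rough illustration, here is a minimal boto3 sketch of such a copy call using the Amazon S3 service master key (SSE-S3). The bucket and key names are placeholders, not the values created by the CloudFormation stack.

import boto3

s3 = boto3.client('s3')

# Copy the source code .zip into the destination bucket and ask Amazon S3 to
# encrypt the new object with the S3-managed master key (SSE-S3).
s3.copy_object(
    CopySource={'Bucket': 'source-bucket-account-a', 'Key': 'source-code.zip'},  # placeholders
    Bucket='destination-bucket-region-b',
    Key='source-code.zip',
    ServerSideEncryption='AES256'
)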

 

2. Using an AWS KMS master key

Because KMS keys are constrained to a region, copying the object (the source code .zip file) into a different account and region requires cross-account access to the KMS key. This access must be in place before Amazon S3 can use that key for encryption and decryption.

Use the following parameters for the copyObject function: ServerSideEncryption: 'aws:kms' and SSEKMSKeyId: '<key-id>'

To enable cross-account access to the KMS key and use it in the Lambda function:

a. Create a KMS key in the source account (Account A), region B – for example, XRDepTestKey

Note: This key must be created in region B, because the source code will be copied into an S3 bucket that exists in region B and the KMS key must be accessible in that region.

b. To enable the Lambda function to use this KMS key, add lambdaS3CopyRole as a user for this key. The Lambda function and associated role and policies are defined in the CloudFormation template.

c. Note the ARN of the key that you generated.

d. Provide the external account (Account B) permission to use this key. For more information, see Sharing custom encryption keys securely between accounts.

arn:aws:iam::<Account B ID>:root
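
For illustration only, here is a minimal boto3 sketch of adding that principal to the key policy. The statement is modeled on the policy format described in the linked documentation, not copied from it, and the region, key ID, and account ID are placeholders.

import boto3
import json

kms = boto3.client('kms', region_name='<regionB>')   # placeholder region

key_id = '<KMS Key in Region B ID>'                  # placeholder key ID
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName='default')['Policy'])

# Allow the root of Account B to use the key. Account B can then delegate this
# permission to its own IAM roles, such as CodePipelineTrustRole.
policy['Statement'].append({
    'Sid': 'AllowUseOfTheKeyByAccountB',
    'Effect': 'Allow',
    'Principal': {'AWS': 'arn:aws:iam::<Account B ID>:root'},
    'Action': ['kms:Encrypt', 'kms:Decrypt', 'kms:ReEncrypt*',
               'kms:GenerateDataKey*', 'kms:DescribeKey'],
    'Resource': '*'
})

kms.put_key_policy(KeyId=key_id, PolicyName='default', Policy=json.dumps(policy))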

e. In Account B, delegate the permission to use this key to the role that AWS CodePipeline is using. In the CloudFormation template, you can see that CodePipelineTrustRole is used. Attach the following policy to the role. Ensure that you update the region and Account ID accordingly.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:<regionB>:<AccountA ID>:key/<KMS Key in Region B ID>"
            ]
        },
        {
            "Sid": "AllowAttachmentOfPersistentResources",
            "Effect": "Allow",
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": [
                "arn:aws:kms:<regionB>:<AccountA ID>:key/<KMS Key in Region B ID>"
            ],
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": true
                }
          }
        }
    ]
}
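
If you prefer to script this step, a minimal boto3 sketch of attaching the policy above to the role in Account B follows. It assumes the policy document has been saved locally as policy.json, and the inline policy name is illustrative.

import boto3

iam = boto3.client('iam')   # run with credentials for Account B

with open('policy.json') as f:      # the policy shown above, saved locally
    policy_document = f.read()

# Attach the KMS usage policy as an inline policy on the role that
# AWS CodePipeline assumes in Account B.
iam.put_role_policy(
    RoleName='CodePipelineTrustRole',        # role name from the CloudFormation template
    PolicyName='AllowUseOfKmsKeyInRegionB',  # illustrative policy name
    PolicyDocument=policy_document
)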

f. Update the Lambda function, CopytoDest, to use the following in the parameter definition.

 

ServerSideEncryption: 'aws:kms',
SSEKMSKeyId: '<key-id>'
//ServerSideEncryption: 'AES256'
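
Expressed as a boto3 call, the updated copy mirrors the earlier sketch, with only the encryption parameters changed; the bucket names and key ID remain placeholders.

import boto3

s3 = boto3.client('s3')

# Copy the source code .zip and encrypt the destination object with the
# customer-managed KMS key created in region B.
s3.copy_object(
    CopySource={'Bucket': 'source-bucket-account-a', 'Key': 'source-code.zip'},  # placeholders
    Bucket='destination-bucket-region-b',
    Key='source-code.zip',
    ServerSideEncryption='aws:kms',
    SSEKMSKeyId='<KMS Key in Region B ID>'
)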

And there you go! You have enabled secure delivery of your source code into your cross-region/cross-account deployment solution.

About the Author


BK Chaurasiya is a Solutions Architect with Amazon Web Services. He provides technical guidance, design advice, and thought leadership to some of the largest and most successful AWS customers and partners.

Building a Continuous Delivery Pipeline for AWS Service Catalog (Sync AWS Service Catalog with Version Control)

AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multitier application architectures. You can use AWS Service Catalog to centrally manage commonly deployed IT services. It also helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

However, as the number of Service Catalog portfolios and products increases across an organization, centralized management and scaling can become a challenge. In this blog post, I walk you through a solution that simplifies management of AWS Service Catalog portfolios and related products. This solution also enables portfolio sharing with other accounts, portfolio tagging, and granting access to users. Finally, the solution delivers updates to the products by using continuous delivery in AWS CodePipeline. This enables you to maintain them in version control, thereby adopting “Infrastructure as Code” practices.

Solution overview

  1. Authors (developers, operations, architects, and so on) create the AWS CloudFormation templates based on the needs of their organizations. These templates are reusable artifacts that can be shared among various teams within the organization. You can name these templates product-A.yaml or product-B.yaml. For example, if the template creates an Amazon VPC based on organization needs, as described in the Amazon VPC Architecture Quick Start, you can save it as product-vpc.yaml.

The authors also define a mapping.yaml file, which includes the list of products that you want to include in the portfolio and related metadata. The mapping.yaml file is the core configuration component of this solution. This file defines your portfolio and its associated permissions and products. This configuration file determines how your portfolio will look in AWS Service Catalog, after the solution deploys it. A sample mapping.yaml is described here. Configuration properties of this mapping.yaml are explained here.

 

  2. Product template files and the mappings are committed to version control. In this example, we use AWS CodeCommit. The folder structure on the file system looks like the following:
    • portfolio-infrastructure (folder name)
      – product-a.yaml
      – product-b.yaml
      – product-c.yaml
      – mapping.yaml
    • portfolio-example (folder name)
      – product-c.yaml
      – product-d.yaml
      – mapping.yaml

    The name of the folder must start with portfolio- because the AWS Lambda function iterates through all folders whose names start with portfolio-, and syncs them with AWS Service Catalog.

    Checking in any code in the repository triggers an AWS CodePipeline orchestration and invokes the Lambda function.

  3. The Lambda function downloads the code from version control and iterates through all folders with names that start with portfolio-. The function gets a list of all existing portfolios in AWS Service Catalog. Then it checks whether the display name of the portfolio matches the “name” property in the mapping.yaml under each folder. If the name doesn’t match, a new portfolio is created. If the name matches, the description and owner fields are updated and synced with what is in the file. There must be only one mapping.yaml file in each folder with a name starting with portfolio-.
  4. and 5. The Lambda function iterates through the list of products in the mapping.yaml file. If the name of the product matches any of the products already associated with the portfolio, a new version of the product is created and is associated with the portfolio. If the name of the product doesn’t match, a new product is created. The CloudFormation template file (as specified in the template property for that product in the mapping file) is uploaded to Amazon S3 with a unique ID. A new version of the product is created and is pointed to the unique S3 path.
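
To make the sync logic concrete, here is a heavily simplified Python sketch of what such a Lambda function might do for a single portfolio- folder. It is not the actual function from the repository: the mapping.yaml keys (name, description, owner, products, template), the artifact bucket name, and the helper function are assumptions based on the properties described above, and error handling, pagination, and permission grants are omitted.

import uuid
import boto3
import yaml

servicecatalog = boto3.client('servicecatalog')

def upload_template_to_s3(folder_path, template_file):
    # Upload the product's CloudFormation template under a unique key and
    # return a URL that AWS Service Catalog can load it from.
    key = str(uuid.uuid4()) + '/' + template_file
    boto3.client('s3').upload_file(folder_path + '/' + template_file,
                                   'my-artifact-bucket', key)   # bucket name is a placeholder
    return 'https://s3.amazonaws.com/my-artifact-bucket/' + key

def sync_portfolio(folder_path):
    # Read the mapping.yaml that defines the portfolio and its products.
    with open(folder_path + '/mapping.yaml') as f:
        mapping = yaml.safe_load(f)

    # Create the portfolio if its display name is not found; otherwise sync
    # the description and owner with what is in the file.
    portfolios = servicecatalog.list_portfolios()['PortfolioDetails']
    match = next((p for p in portfolios if p['DisplayName'] == mapping['name']), None)
    if match is None:
        portfolio_id = servicecatalog.create_portfolio(
            DisplayName=mapping['name'],
            Description=mapping['description'],
            ProviderName=mapping['owner'])['PortfolioDetail']['Id']
    else:
        portfolio_id = match['Id']
        servicecatalog.update_portfolio(
            Id=portfolio_id,
            Description=mapping['description'],
            ProviderName=mapping['owner'])

    # Products already associated with this portfolio, keyed by name.
    existing = servicecatalog.search_products_as_admin(PortfolioId=portfolio_id)
    product_ids = {d['ProductViewSummary']['Name']: d['ProductViewSummary']['ProductId']
                   for d in existing['ProductViewDetails']}

    for product in mapping['products']:
        template_url = upload_template_to_s3(folder_path, product['template'])
        if product['name'] in product_ids:
            # Known product: add a new version that points to the uploaded template.
            servicecatalog.create_provisioning_artifact(
                ProductId=product_ids[product['name']],
                Parameters={'Name': 'new-version',
                            'Type': 'CLOUD_FORMATION_TEMPLATE',
                            'Info': {'LoadTemplateFromURL': template_url}})
        else:
            # New product: create it and associate it with the portfolio.
            created = servicecatalog.create_product(
                Name=product['name'],
                Owner=mapping['owner'],
                ProductType='CLOUD_FORMATION_TEMPLATE',
                ProvisioningArtifactParameters={
                    'Name': 'v1',
                    'Type': 'CLOUD_FORMATION_TEMPLATE',
                    'Info': {'LoadTemplateFromURL': template_url}})
            servicecatalog.associate_product_with_portfolio(
                ProductId=created['ProductViewDetail']['ProductViewSummary']['ProductId'],
                PortfolioId=portfolio_id)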

Try it out!

Get started using this solution, which is available in this AWSLabs GitHub repository.

  1. Clone the repository. It contains the AWS CloudFormation templates that we use in this walkthrough.
git clone https://github.com/awslabs/aws-pipeline-to-service-catalog.git
cd aws-pipeline-to-service-catalog
  2. Examine mapping.yaml under the portfolio-infrastructure folder. Replace the account number with the number of the account with which you want to share the portfolio. To share the portfolio with multiple other accounts, you can append more account numbers to the list. These account numbers must be valid AWS accounts, and must not include the account number in which this solution is being created. Optionally, edit this file and provide the values you want for the name, description, and owner properties. You can also choose to leave these values as they are, which creates a portfolio with the name, description, and owners described in the file.
  3. Optional – If you don’t have the AWS Command Line Interface (AWS CLI) installed, install it as described here. To prepare your access keys or assumed role to make calls to AWS, configure the AWS CLI as described here.
  4. Create a pipeline. This orchestrates continuous integration with the AWS CodeCommit repository created by the stack, and continuously syncs AWS Service Catalog with the code.
aws cloudformation deploy --template-file pipeline-to-service-catalog.yaml \
--stack-name service-catalog-sync-pipeline --capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides RepositoryName=blogs-pipeline-to-service-catalog

This creates the following resources.

  1. An AWS CodeCommit repository to push the code to. You can get the repository URL from the outputs of the stack that we just created. Connect, commit, and push code to this repository as described here.
  2. An S3 bucket, which holds the built artifacts (CloudFormation templates) and the Lambda function code.
  3. The AWS IAM roles and policies, with the least privileges needed for this solution to work.
  4. An AWS CodeBuild project, which builds the Lambda function. This Python-based Lambda function contains the logic explained earlier.
  5. A pipeline with the following four stages:
      • Stage-1: Checks out the source from the AWS CodeCommit repository created by the stack.
      • Stage-2: Builds the Lambda function using AWS CodeBuild, which has the logic to sync the AWS Service Catalog products and portfolios with code.
      • Stage-3: Deploys the Lambda function using CloudFormation.
      • Stage-4: Invokes the Lambda function. Once this stage completes successfully, you see an AWS Service Catalog portfolio and two products created, as shown below.

 

Optional next steps!

You can deploy the Lambda function as we explained in this post to sync AWS Service Catalog products, portfolios, and permissions across multiple accounts that you own with version control. You can create a secure cross-account continuous delivery pipeline, as explained here. To do this:

  1. Delete all the resources created earlier.
aws cloudformation delete-stack --stack-name service-catalog-sync-pipeline
  2. Follow the steps in this blog post. The sample Lambda function, described here, is the same as what I explained in this post.

Conclusion

You can use AWS Lambda to make API calls to AWS Service Catalog to keep portfolios and products in sync with a mapping file. The code includes the CloudFormation templates and the mapping file and folder structure, which resembles the portfolios in AWS Service Catalog. When checked in to an AWS CodeCommit repository, it invokes the Lambda function, orchestrated by AWS CodePipeline.

Database Continuous Integration and Automated Release Management Workflow with AWS and Datical DB

Just as a herd can move only as fast as its slowest member, companies must increase the speed of all parts of their release process, especially the database change process, which is often manual. One bad database change can bring down an app or compromise data security.

We need to make database code deployment as fast and easy as application release automation, while eliminating risks that cause application downtime and data security vulnerabilities. Let’s take a page from the application development playbook and bring a continuous deployment approach to the database.

By creating a continuous deployment database, you can:

  • Discover mistakes more quickly.
  • Deliver updates faster and more frequently.
  • Help developers write better code.
  • Automate the database release management process.

The database deployment package can be promoted automatically with application code changes. With database continuous deployment, application development teams can deliver smaller, less risky deployments, making it possible to respond more quickly to business or customer needs.

In our previous post, Building End-to-End Continuous Delivery and Deployment Pipelines in AWS, we walked through steps for implementing a continuous deployment and automated delivery pipeline for your application.

In this post, we walk through steps for building a continuous deployment workflow for databases using AWS CodePipeline (a fully managed continuous delivery service) and Datical DB (a database release automation application). We use AWS CodeCommit for source code control and Amazon RDS for database hosting to demonstrate end-to-end database change management — from check-in to final deployment.

As part of this example, we will show how a database change that does not meet standards is rejected automatically and actionable feedback is provided to the developer. Just like a code unit test, Datical DB evaluates changes and enforces your organization’s standards. In the sample use case, database table indexes of more than three columns are disallowed. In some cases, this type of index can slow performance.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CodePipeline, AWS CodeCommit, Amazon RDS, Amazon EC2, and Amazon S3.

From Datical DB, you’ll need access to software.datical.com portal, your license key, a database, and JDBC drivers. You can request a free trial of Datical here.

Overview

Here are the steps:

  1. Install and configure Datical DB.
  2. Create an RDS database instance running the Oracle database engine.
  3. Configure Datical DB to manage database changes across your software development life cycle (SDLC).
  4. Set up database version control using AWS CodeCommit.
  5. Set up a continuous integration server to stage database changes.
  6. Integrate the continuous integration server with Datical DB.
  7. Set up automated release management for your database through AWS CodePipeline.
  8. Enforce security governance and standards with the Datical DB Rules Engine.

1. Install and configure Datical DB

Navigate to https://software.datical.com and sign in with your credentials. From the left navigation menu, expand the Common folder, and then open the Datical_DB_Folder. Choose the latest version of the application by reviewing the date suffix in the name of the folder. Download the installer for your platform — Windows (32-bit or 64-bit) or Linux (32-bit or 64-bit).

Verify the JDK Version

In a terminal window, run the following command to ensure you’re running JDK version 1.7.x or later.

# java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) Client VM (build 24.75-b04, mixed mode, sharing)

The Datical DB installer contains a graphical (GUI) and command line (CLI) interface that can be installed on Windows and Linux operating systems.

Install Datical DB (GUI)

  1. Double-click on the installer
  2. Follow the prompts to install the application.
  3. When prompted, type the path to a valid license.

Install JDBC drivers

  1. Open the Datical DB application.
  2. From the menu, choose Help, and then choose Install New Software.
  3. From the Work with drop-down list, choose Database Drivers – http://update.datical.com/drivers/updates.
  4. Follow the prompts to install the drivers.

Install Datical DB (CLI)

Datical DB (CLI only) can be installed on a headless Linux system. Select the correct 32-bit or 64-bit Linux installer for your system.

  1. Run the installer as root and install it to /usr/local/DaticalDB.
    sudo java -jar ../installers/<Datical Installer>.jar -console
  2. Follow the prompts to install the application.
  3. When prompted, type the path to a valid license.

Install JDBC drivers

  1. Copy JDBC drivers to /usr/local/DaticalDB/jdbc_drivers.
    sudo mkdir /usr/local/DaticalDB/jdbc_drivers
    # copy the JDBC drivers downloaded from software.datical.com to /usr/local/DaticalDB/jdbc_drivers
  2. Copy the license file to /usr/local/DaticalDB/repl.
    sudo cp <license_filename> /usr/local/DaticalDB/repl
    sudo chmod 777 /usr/local/DaticalDB/repl/<license_filename>

2. Create an RDS instance running the Oracle database engine

Datical DB supports database engines like Oracle, MySQL, Microsoft SQL Server, PostgreSQL, and IBM DB2. The example in this post uses a DB instance running Oracle. To create a DB instance running Oracle, follow these steps.
Make sure that you can access the Oracle port (1521) from the location where you will be using Datical DB. Just like SQL*Plus or other database management tools, Datical DB must be able to connect to the Oracle port, so when you configure the security group for your RDS instance, make sure you allow access to port 1521 from your location.
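
If you prefer to script these two steps, the following boto3 sketch shows the general shape. The security group ID, CIDR range, instance identifiers, Oracle edition, and instance class are placeholders and assumptions, not values prescribed by Datical DB or this walkthrough.

import boto3

ec2 = boto3.client('ec2')
rds = boto3.client('rds')

# Open the Oracle listener port (1521) to the network you work from.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',      # placeholder security group
    IpProtocol='tcp',
    FromPort=1521,
    ToPort=1521,
    CidrIp='203.0.113.0/24'              # placeholder CIDR for your location
)

# Create a small DB instance running the Oracle database engine.
rds.create_db_instance(
    DBInstanceIdentifier='datical-demo',     # placeholder identifier
    DBInstanceClass='db.t3.medium',          # placeholder instance class
    Engine='oracle-se2',                     # assumed edition; pick the one you are licensed for
    LicenseModel='license-included',
    MasterUsername='admin',
    MasterUserPassword='change-me-please',   # placeholder; use a strong secret
    AllocatedStorage=20,
    VpcSecurityGroupIds=['sg-0123456789abcdef0']
)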

3. Manage database changes across the SDLC

This one-time process is required to ensure databases are in sync so that you can manage database changes across the SDLC:

  1. Create a Datical DB deployment plan with connections to the databases to be managed.
  2. Baseline the first database (DEV/CI). This must be the original or best configured database – your reference database.
  3. For each additional database (TEST and PROD):
    a. Compare databases to ensure the application schemas are in sync.
    b. Resolve any differences.
    c. Perform a change log sync to get each one set up for Datical DB management.

Datical DB creates an initial model change log from one of the databases. It also creates in each database a metadata table named DATABASECHANGELOG that will be used to track the state. Now the databases look like this:

Datical DB Model

Note: In the preceding figure, the Datical DB model and metadata table are a representation of the actual model.

Create a deployment plan

    1. In Datical DB, right-click Deployment Plans, and choose New.
    2. On the New Deployment Plan page, type a name for your project (for example, AWS-Sample-Project), and then choose Next.
    3. Select Oracle 11g Instant Client, type a name for the database (for example, DevDatabase), and then choose Next.
    4. On the following page, provide the database connection information.
      a. For Hostname, enter the RDS endpoint.
      b. Select SID, and then type ORCL.
      c. Type the user name and password used to connect to the RDS instance running Oracle.
      d. Before you choose Finish, choose the Test Connection button.

When Datical DB creates the project, it also creates a baseline snapshot that captures the current state of the database schema. Datical DB stores the snapshot in Datical change sets for future forecasting and modification.

Create a database change set

A change set describes the change/refactoring to apply to the database.
From the AWS-Sample-Project project in the left pane, right-click Change Log, select New, and then select Change Set. Choose the type of change to make, and then choose Next. In this example, we’re creating a table. For Table Name, type a name. Choose Add Column, and then provide information to add one or more columns to the new table. Follow the prompts, and then choose Finish.

Add Columns

The new change set will be added at the end of your current change log. You can tag change sets with a sprint label. Depending on the environment, changes can be deployed based on individual labels or by the higher-level grouping construct.
Datical DB also provides an option to load SQL scripts into a database, where the change sets are labeled and captured as objects. This makes them ready for deployment in other environments.

Best practices for continuous delivery

Change sets are stored in an XML file inside the Datical DB project. The file, changelog.xml, is stored inside the Changelog folder. (In the Datical DB UI, it is called Change Log.)

Just like any other files stored in your source code repository, the Datical DB change log can be branched and merged to support agile software development, where individual work spaces are isolated until changes are merged into the parent branch.

To implement this best practice, your Datical DB project should be checked into the same location as your application source code. That way, branches and merges will be applied to your Datical DB project automatically. Use unique change set IDs to avoid collisions with other scrum teams.

4. Set up database version control using AWS CodeCommit

To create a new CodeCommit repository, follow these steps.

Note: On some versions of Windows and Linux, you might see a pop-up dialog box asking for your user name and password. This is the built-in credential management system, but it is not compatible with the credential helper for AWS CodeCommit. Choose Cancel.

Commit the contents located in the Datical working directory (for example, ~/datical/AWS-Sample-Project) to the AWS CodeCommit repository.

5. Set up a continuous integration server to stage database changes

In this example, Jenkins is the continuous integration server. To create a Jenkins server, follow these steps. Be sure your instance security group allows port 8080 access.

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

For more information about installing Jenkins, see the Jenkins wiki.

After setup, connect to your Jenkins server, and create a job.

  1. Install the following Jenkins plugins:
    • AWS CodeCommit plugin
    • DaticalDB4Jenkins plugin
    • Hudson Post Build Task plugin
    • HTML Publisher plugin
  2. To configure Jenkins for AWS CodeCommit, follow these steps.
  3. To configure Jenkins with Datical DB, navigate to Jenkins, choose Manage Jenkins, and then choose Configure System. In the Datical DB section, provide the requested directory information.

For example:

Add a build step:

Go to your newly created Jenkins project and choose Configure. On the Build tab, under Build, choose Add build step, and choose Datical DB.

In Project Dir, enter the Datical DB project directory (in this example, /var/lib/jenkins/workspace/demo/MyProject). You can use Jenkins environment variables such as $WORKSPACE. The first build action is Check Drivers. This allows you to verify that Datical DB and Jenkins are configured correctly.

Choose Save. Choose Build Now to test the configuration.

After you’ve verified the drivers are installed, add forecast and deploy steps.

Add forecast and deploy steps:


Choose Save. Then choose Build Now to test the configuration.

6. Configure the continuous integration server to publish Datical DB reports

In this step, we will configure Jenkins to publish Datical DB forecast and HTML reports. In your Jenkins project, select Delete workspace before build starts.

Add post-build steps

1. Archive the Datical DB reports, logs, and snapshots

Archive
To expose Datical DB reports in Jenkins, you must create a post-build task step to copy the forecast and deployment HTML reports to a location easily published, and then publish the HTML reports.

2. Copy the forecast and deploy HTML reports

mkdir /var/lib/jenkins/workspace/Demo/MyProject/report
cp -rv /var/lib/jenkins/workspace/Demo/MyProject/Reports/*/*/*/forecast*/* /var/lib/jenkins/workspace/Demo/MyProject/report 2>/dev/null
cp -rv /var/lib/jenkins/workspace/Demo/MyProject/Reports/*/*/*/deploy*/deployReport.html /var/lib/jenkins/workspace/Demo/MyProject/report 2>/dev/null

Post build task

 

3. Publish HTML reports

Use the information in the following screen shot. Depending on the location where you configured Jenkins to build, your details might be different.

Note: Datical DB HTML reports use CSS, so update the JENKINS_JAVA_OPTIONS in your config file as follows:

Edit /etc/sysconfig/jenkins and set JENKINS_JAVA_OPTIONS to:

JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dhudson.model.DirectoryBrowserSupport.CSP= "

7. Enable automated release management for your database through AWS CodePipeline

To create an automated release process for your databases using AWS CodePipeline, follow these instructions.

  1. Sign in to the AWS Management Console and open the AWS CodePipeline console at http://console.aws.amazon.com/codepipeline.
  2. On the introductory page, choose Get started. If you see the All pipelines page, choose Create pipeline.
  3. In Step 1: Name, in Pipeline name, type DatabasePipeline, and then choose Next step.
  4. In Step 2: Source, in Source provider, choose AWS CodeCommit. In Repository name, choose the name of the AWS CodeCommit repository you created earlier. In Branch name, choose the name of the branch that contains your latest code update. Choose Next step.

  5. In Step 3: Build, choose Jenkins.

To complete the deployment workflow, follow steps 6 through 9 in the Create a Simple Pipeline Tutorial.

8. Enforce database standards and compliance with the Datical DB Rules Engine

The Datical DB Dynamic Rules Engine automatically inspects the Virtual Database Model to make sure that proposed database code changes are safe and in compliance with your organization’s database standards. The Rules Engine also makes it easy to identify complex changes that warrant further review and empowers DBAs to efficiently focus only on the changes that require their attention. It also provides application developers with a self-service validation capability that uses the same automated build process established for the application. The consistent evaluation provided by the Dynamic Rules Engine removes uncertainty about what is acceptable and empowers application developers to write safe, compliant changes every time.

Earlier, you created a Datical DB project with no changes. To demonstrate rules, you will now create changes that violate a rule.

First, create a table with four columns. Then try to create an index on the table that comprises all four columns. For some databases, having more than three columns in an index can cause performance issues. For this reason, create a rule that will prevent the creation of an index on more than three columns, long before the change is proposed for production. Like a unit test that will fail the build, the Datical DB Rules Engine fails the build at the forecast step and provides feedback to the development team about the rule and the change to fix.

Create a Datical DB rule

To create a Datical DB rule, open the Datical DB UI and navigate to your project. Expand the Rules folder. In this example, you will create a rule in the Forecast folder.

Right-click the Forecast folder, and then select Create Rules File. In the dialog box, type a unique file name for your rule. Use a .drl extension.


In the editor window that opens, type the following:

package com.datical.hammer.core.forecast
import java.util.Collection;
import java.util.List;
import java.util.Arrays;
import java.util.ArrayList;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.collections.ListUtils;
import com.datical.db.project.Project;
import com.datical.hammer.core.rules.Response;
import com.datical.hammer.core.rules.Response.ResponseType;

// ************************************* Models *************************************

// Database Models

import com.datical.dbsim.model.DbModel;
import com.datical.dbsim.model.Schema;
import com.datical.dbsim.model.Table;
import com.datical.dbsim.model.Index;
import com.datical.dbsim.model.Column;
import org.liquibase.xml.ns.dbchangelog.CreateIndexType;
import org.liquibase.xml.ns.dbchangelog.ColumnType;


/* @return false if validation fails; true otherwise */

function boolean validate(List columns)
{

// FAIL If more than 3 columns are included in new index
if (columns.size() > 3)
return false;
else
return true;

}

rule "Index Too Many Columns Error"
salience 1
when
$createIndex : CreateIndexType($indexName: indexName, $columns: columns, $tableName: tableName, $schemaName: schemaName)
eval(!validate($columns))
then
String errorMessage = "The new index [" + $indexName + "] contains more than 3 columns.";
insert(new Response(ResponseType.FAIL, errorMessage, drools.getRule().getName()));
end

Save the new rule file, and then right-click the Forecast folder, and select Check Rules. You should see “Rule Validation returned no errors.”

Now check your rule into source code control and request a new build. The build will fail, which is expected. Go back to Datical DB, and change the index to comprise only three columns. After your check-in, you will see a successful deployment to your RDS instance.

The following forecast report shows the Datical DB rule violation:

To implement database continuous delivery into your existing continuous delivery process, consider creating a separate project for your database changes and running the Datical DB forecast functionality at the same time unit tests are run on your code. This will catch database changes that violate standards before deployment.

Summary

In this post, you learned how to build a modern database continuous integration and automated release management workflow on AWS. You also saw how Datical DB can be seamlessly integrated with AWS services to enable database release automation, while eliminating risks that cause application downtime and data security vulnerabilities. This fully automated delivery mechanism for databases can accelerate every organization’s ability to deploy software rapidly and reliably while improving productivity, performance, compliance, and auditability, and increasing data security. These methodologies simplify process-related overhead and make it possible for organizations to serve their customers efficiently and compete more effectively in the market.

I hope you enjoyed this post. If you have any feedback, please leave a comment below.


About the Authors

 

Balaji Iyer

Balaji Iyer is an Enterprise Consultant for the Professional Services Team at Amazon Web Services. In this role, he has helped several Fortune 500 customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly scalable distributed systems, serverless architectures, large scale migrations, operational security, and leading strategic AWS initiatives. Before he joined Amazon, Balaji spent more than a decade building operating systems, big data analytics solutions, mobile services, and web applications. In his spare time, he enjoys experiencing the great outdoors and spending time with his family.

Robert Reeves is a Co-Founder & Chief Technology Officer at Datical. In this role, he advocates for Datical’s customers and provides technical architecture leadership. Prior to cofounding Datical, Robert was a Director at the Austin Technology Incubator. At ATI, he provided real-world entrepreneurial expertise to ATI member companies to aid in market validation, product development, and fundraising efforts. Robert cofounded Phurnace Software in 2005. He invented and created the flagship product, Phurnace Deliver, which provides middleware infrastructure management to multiple Fortune 500 companies. As Chief Technology Officer for Phurnace, he led technical evangelism efforts, product vision, and large account technical sales efforts. After BMC Software acquired Phurnace in 2009, Robert served as Chief Architect and led worldwide technology evangelism.


Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!


Building a Microsoft BackOffice Server Solution on AWS with AWS CloudFormation

Last month, AWS released the AWS Enterprise Accelerator: Microsoft Servers on the AWS Cloud along with a deployment guide and CloudFormation template. This blog post will explain how to deploy complex Windows workloads and how AWS CloudFormation solves the problems related to server dependencies.

This AWS Enterprise Accelerator solution deploys the four most requested Microsoft servers (SQL Server, Exchange Server, Lync Server, and SharePoint Server) in a highly available, multi-AZ architecture on AWS. It includes Active Directory Domain Services as the foundation. By following the steps in the solution, you can take advantage of the email, collaboration, communications, and directory features provided by these servers on the AWS IaaS platform.

There are a number of dependencies between the servers in this solution, including:

  • Active Directory
  • Internet access
  • Dependencies within server clusters, such as needing to create the first server instance before adding additional servers to the cluster.
  • Dependencies on AWS infrastructure, such as sharing a common VPC, NAT gateway, Internet gateway, DNS, routes, and so on.

The infrastructure and servers are built in three logical layers. The Master template orchestrates the stack builds with one stack per Microsoft server and manages inter-stack dependencies. Each of the CloudFormation stacks uses PowerShell to stand up the Microsoft servers at the OS level. Before it configures the OS, CloudFormation configures the AWS infrastructure required by each Windows server. Together, CloudFormation and PowerShell create a quick, repeatable deployment pattern for the servers. The solution supports 10,000 users. Its modularity at both the infrastructure and application level enables larger user counts.

MSServers Solution - 6 CloudFormation Stacks

Managing Stack Dependencies

To explain how we enabled the dependencies between the stacks: SQLStack is dependent on ADStack because SQL Server is dependent on Active Directory, and, similarly, SharePointStack is dependent on SQLStack, both as required by Microsoft. Lync is dependent on Exchange because both servers must extend the AD schema independently. In Master, these server dependencies are coded in CloudFormation as follows:

"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",

             "Properties": …
       }
and
"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",
             "Properties": …
       },
       "SharePointStack": {
            "Type": "AWS::CloudFormation::Stack",
            "DependsOn": "SQLStack",
            "Properties": …
       }

The “DependsOn” statements in the stack definitions force the order of stack execution to match the diagram. Lower layers are executed and successfully completed before the upper layers. If you do not use “DependsOn”, CloudFormation will execute your stacks in parallel. An example of parallel execution is what happens after ADStack returns SUCCESS. The two higher-level stacks, SQLStack and ExchangeStack, are executed in parallel at the next level (layer 2). SharePoint and Lync are executed in parallel at layer 3. The arrows in the diagram indicate stack dependencies.

Passing Parameters Between Stacks

If you have concerns about how to pass infrastructure parameters between the stack layers, let’s use an example in which we want to pass the same VPCCIDR to all of the stacks in the solution. VPCCIDR is defined as a parameter in Master as follows:

"VPCCIDR": {
            "AllowedPattern": "[a-zA-Z0-9]+\..+",
            "Default": "10.0.0.0/16",
            "Description": "CIDR Block for the VPC",
            "Type": "String"
           }

Because VPCCIDR is defined in Master and its value is solicited from user input, the value can then be passed to ADStack through an identically named and typed parameter shared between Master and the stack being called.

"VPCCIDR": {
            "Description": "CIDR Block for the VPC",
            "Type": "String",
            "Default": "10.0.0.0/16",
            "AllowedPattern": "[a-zA-Z0-9]+\..+"
           }

After Master defines VPCCIDR, ADStack can use “Ref”: “VPCCIDR” in any resource (such as the security group, DomainController1SG) that needs the VPC CIDR range of the first domain controller. Instead of passing commonly-named parameters between stacks, another option is to pass outputs from one stack as inputs to the next. For example, if you want to pass VPCID between two stacks, you could accomplish this as follows. Create an output like VPCID in the first stack:

"Outputs" : {
               "VPCID" : {
                          "Value" : {"Ref" : "VPC"},
                          "Description" : "VPC ID"
               }, …
}

In the second stack, create a parameter with the same name and type:

"Parameters" : {
               "VPCID" : {
                          "Type" : "AWS::EC2::VPC::Id"
               }, …
}

When the first template calls the second template, VPCID is passed as an output of the first template to become an input (parameter) to the second.

Managing Dependencies Between Resources Inside a Stack

All of the dependencies so far have been between stacks. Another type of dependency is one between resources within a stack. In the Microsoft servers case, an example of an intra-stack dependency is the need to create the first domain controller, DC1, before creating the second domain controller, DC2.

DC1, like many cluster servers, must be fully created first so that it can replicate common state (domain objects) to DC2.  In the case of the Microsoft servers in this solution, all of the servers require that a single server (such as DC1 or Exch1) must be fully created to define the cluster or farm configuration used on subsequent servers.

Here’s another intra-stack dependency example: The Microsoft servers must fully configure the Microsoft software on the Amazon EC2 instances before those instances can be used. So there is a dependency on software completion within the stack after successful creation of the instance, before the rest of stack execution (such as deploying subsequent servers) can continue. These intra-stack dependencies like “software is fully installed” are managed through the use of wait conditions. Wait conditions are CloudFormation resources just like EC2 instances and allow the “DependsOn” attribute mentioned earlier to manage dependencies inside a stack. For example, to pause the creation of DC2 until DC1 is complete, we configured the following “DependsOn” attribute using a wait condition. See (1) in the following diagram:

"DomainController1": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "NATGateway1",
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "configSets": {
                        "config": [
                            "setup",
                            "rename",
                            "installADDS",
                            "configureSites",
                            "installADCS",
                            "finalize"
                        ]
                    }, …
             },
             "Properties" : …
},
"DomainController2": {
             "Type": "AWS::EC2::Instance",
[1]          "DependsOn": "DomainController1WaitCondition",
             "Metadata": …,
             "Properties" : …
},

The WaitCondition (2) relies on a CloudFormation resource called a WaitConditionHandle (3), which receives a SUCCESS or FAILURE signal from the creation of the first domain controller:

"DomainController1WaitCondition": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "DomainController1",
            "Properties": {
                "Handle": {
[2]                    "Ref": "DomainController1WaitHandle"
                },
                "Timeout": "3600"
            }
     },
     "DomainController1WaitHandle": {
[3]            "Type": "AWS::CloudFormation::WaitConditionHandle"
     }

SUCCESS is signaled in (4) by cfn-signal.exe -e 0 during the “finalize” step of DC1, which enables CloudFormation to execute DC2 as an EC2 resource via the wait condition.

                "finalize": {
                       "commands": {
                           "a-signal-success": {
                               "command": {
                                   "Fn::Join": [
                                       "",
                                       [
[4]                                            "cfn-signal.exe -e 0 \"",
                                           {
                                               "Ref": "DomainController1WaitHandle"

                                            },
                                           """
                                       ]
                                   ]
                               }
                           }
                       }
                   }
               }

If the timeout had been reached in step (2), this would have automatically signaled a FAILURE and stopped stack execution of ADStack and the Master stack.

As we have seen in this blog post, you can create both nested stacks and nested dependencies and can pass parameters between stacks by passing standard parameters or by passing outputs. Inside a stack, you can configure resources that are dependent on other resources through the use of wait conditions and the cfn-signal infrastructure. The AWS Enterprise Accelerator solution uses both techniques to deploy multiple Microsoft servers in a single VPC for a Microsoft BackOffice solution on AWS.  

In a future blog post, we will illustrate how PowerShell can be used to bootstrap and configure Windows instances with downloaded cmdlets, all integrated into CloudFormation stacks.

Explore Continuous Delivery in AWS with the Pipeline Starter Kit

By Chris Munns, David Nasi, Shankar Sivadasan, and Susan Ferrell

Continuous delivery, automating your software delivery process from code to build to deployment, is a powerful development technique and the ultimate goal for many development teams. AWS provides services, including AWS CodePipeline (a continuous delivery service) and AWS CodeDeploy (an automated application deployment service) to help you reach this goal. With AWS CodePipeline, any time a change to the code occurs, that change runs automatically through the delivery process you’ve defined. If you’ve ever wanted to try these services, but not wanted to set up the resources, we’ve created a starter kit you can use. This starter kit sets up a complete pipeline that builds and deploys a sample application in just a few steps. The starter kit includes an AWS CloudFormation template to create the pipeline and all of its resources in the US East (N. Virginia) Region. Specifically, the CloudFormation template creates:

  • An Amazon Virtual Private Cloud (VPC), including all the necessary routing tables and routes, an Internet gateway, and network ACLs for EC2 instances to be launched into.
  • An Amazon EC2 instance that hosts a Jenkins server (also installed and configured for you).
  • Two AWS CodeDeploy applications, each of which contains a deployment group that deploys to a single Amazon EC2 instance.
  • All IAM service and instance roles required to run the resources.
  • A pipeline in AWS CodePipeline that builds the sample application and deploys it. This includes creating an Amazon S3 bucket to use as the artifact store for this pipeline.

What you’ll need:

  • An AWS account. (Sign up for one here if you don’t have one already.)
  • An Amazon EC2 key pair in the US East (N. Virginia) Region. (Learn how to create one here if you don’t have one.)
  • Administrator-level permissions in IAM, AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3. (Not sure how to set permissions in these services? See the sample policy in Troubleshooting Problems with the Starter Kit.)
  • Optionally, a GitHub account so you can fork the repository for the sample application. Alternatively, if you do not want to create a GitHub account, you can use the Amazon S3 bucket configured in the starter kit template, but you will not be able to edit the application or see your changes automatically run through the pipeline.

That’s it! The starter kit will create everything else for you.

Note: The resources created in the starter kit exceed what’s included in the AWS Free Tier so the use of the kit will result in charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

Let’s get started.

Decide how you want to source the provided sample application. AWS CodePipeline currently allows you use either an Amazon S3 bucket or a GitHub repository as the source location for your application. The CloudFormation template allows you to choose either of these methods. If you choose to use a GitHub repository, you will have a little more set up work to do, but you will be able to easily test modifying the application and seeing the changes run automatically through the pipeline. If you choose to use the Amazon S3 bucket already configured as the source in the startup kit, set up is simpler, but you won’t be able to modify the application.

Follow the steps for your choice:

GitHub:

  1. Sign in to GitHub and fork the sample application repository at https://github.com/awslabs/aws-codedeploy-sample-tomcat.
  2. Navigate to https://github.com/settings/tokens and generate a token to use with the starter kit. The token requires the permissions needed to integrate with AWS CodePipeline: repo and admin:repo_hook. For more information, see the AWS CodePipeline User Guide. Make sure you copy the token after you create it.

Amazon S3:

  1. If you’re using the bucket configured in the starter kit, there’s nothing else for you to do but continue on to step 3. If you want to use your own bucket, see Troubleshooting Problems with the Starter Kit.

Choose the launch button to open the starter kit template directly in the AWS CloudFormation console. Make sure that you are in the US East (N. Virginia) region.

Note: If you want to download the template to your own computer and then upload it directly to AWS CloudFormation, you can do so from this Amazon S3 bucket. Save the aws-codedeploy-codepipeline-starter-kit.template file to a location on your computer that’s easy to remember.

Choose Next.

On the Specify Details page, do the following:

  1. In Stack name, type a name for the stack. Choose something short and simple for easy reference.
  2. In AppName, you can leave the default as-is, or you can type a name of no more than 15 characters (for example, starterkit-demo). The name has the following restrictions:

    • The only allowed characters are lower-case letters, numbers, periods, and hyphens.
    • The name must be unique in your AWS account, so be sure to choose a new name each time you use the starter kit.
  3. In AppSourceType, choose S3 or GitHub, depending on your preference for a source location, and then do the following:

    • If you want to use the preconfigured Amazon S3 bucket as the source for your starter kit, leave all the default information as-is. (If you want to use your own Amazon S3 bucket, see Troubleshooting Problems with the Starter Kit.)
    • If you want to use a GitHub repo as the source for your starter kit, in Application Source – GitHub, type the name of your user account in GitHubUser. In GitHubToken, paste the token you created earlier. In GitHubRepoName, type the name of the forked repo. In GitHubBranchName, type the name of the branch (by default, master).
  4. In Key Name, choose the name of your Amazon EC2 key pair.
  5. In YourIP, type the IP address from which you will access the resources created by this starter kit. This is a recommended security best practice.

Choose Next.

(Optional) On the Options page, in Key, type Name. In Value, type a name that will help you easily identify the resources created for the starter kit. This name will be used to tag all of the resources created by the starter kit. Although this step is optional, it’s a good idea, particularly if you want to use or modify these resources later on. Choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. (It will.) Review the other settings, and then choose Create.
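
If you would rather create the stack programmatically than click through the console, the equivalent boto3 call looks roughly like the following sketch. The template URL is a placeholder, and the parameter keys and values are inferred from the console labels above, so check them against the template before using this.

import boto3

cloudformation = boto3.client('cloudformation', region_name='us-east-1')

# Create the starter kit stack with the same values entered in the console walkthrough.
cloudformation.create_stack(
    StackName='starterkit-demo-stack',
    TemplateURL='https://s3.amazonaws.com/<starter-kit-bucket>/aws-codedeploy-codepipeline-starter-kit.template',  # placeholder
    Parameters=[
        {'ParameterKey': 'AppName', 'ParameterValue': 'starterkit-demo'},
        {'ParameterKey': 'AppSourceType', 'ParameterValue': 'S3'},
        {'ParameterKey': 'KeyName', 'ParameterValue': '<your-ec2-key-pair>'},
        {'ParameterKey': 'YourIP', 'ParameterValue': '203.0.113.10'},  # placeholder IP address
    ],
    Capabilities=['CAPABILITY_IAM'],   # acknowledges that the template creates IAM resources
    Tags=[{'Key': 'Name', 'Value': 'starterkit-demo'}]
)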

It will take several minutes for CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console.

When the stack has been created, you will see a CREATE_COMPLETE message in the Status column of the console and on the Overview tab.

Congratulations! You’ve created your first pipeline, complete with all required resources. The pipeline has four stages, each with a single action. The pipeline will start automatically as soon as it is created.

(If CloudFormation fails to create your resources and pipeline, it will roll back all resource creation automatically. The most common reason for failure is that you specified a stack name that is allowed in CloudFormation but not allowed in Amazon S3, and you chose Amazon S3 for your source location. For more information, see the Troubleshooting problems with the starter kit section at the end of this post.)

To view your pipeline, open the AWS CodePipeline console at http://console.aws.amazon.com/codepipeline. On the dashboard page, choose the name of your new pipeline (for example, StarterKitDemo-Pipeline). Your pipeline, which might or might not have started its first run, will appear on the view pipeline page.

You can watch the progress of your pipeline as it completes the action configured for each of its four stages (a source stage, a build stage, and two deployment stages).

The pipeline flows as follows:

  1. The source stage contains an action that retrieves the application from the source location (the Amazon S3 bucket created for you to store the app or the GitHub repo you specified).
  2. The build stage contains an action that builds the app in Jenkins, which is hosted on an Amazon EC2 instance.
  3. The first deploy stage contains an action that uses AWS CodeDeploy to deploy the app to a beta website on an Amazon EC2 instance.
  4. The second deploy stage contains an action that again uses AWS CodeDeploy to deploy the app, this time to a separate, production website on a different Amazon EC2 instance.

When each stage is complete, it turns from blue (in progress) to green (success).

You can view the details of any stage except the source stage by choosing the Details link for that stage. For example, choosing the Details link for the Jenkins build action in the build stage opens the status page for that Jenkins build:

Note: The first time the pipeline runs, the link to the build will point to Build #2. Build #1 is a failed build left over from the initial instance and Jenkins configuration process in AWS CloudFormation.

To view the details of the build, choose the link to the log file. To view the Maven project created in Jenkins to build the application, choose Back to Project.

While you’re in Jenkins, we strongly encourage you to consider securing it if you’re going to keep the resource for any length of time. From the Jenkins dashboard, choose Manage Jenkins, choose Setup Security, and choose the security options that are best for your organization. For more information about Jenkins security, see Standard Security Setup.

When Succeeded is displayed for the pipeline status, you can view the application you built and deployed:

  1. In the status area for the ProdDeploy action in the Prod stage, choose Details. The details of the deployment will appear in the AWS CodeDeploy console.
  2. In the Deployment Details section, in Instance ID, choose the instance ID of the successfully deployed instance.
  3. In the Amazon EC2 console, on the Description tab, in Public DNS, copy the address, and then paste it into the address bar of your web browser. The web page opens on the application you built:

Tip: You can also find the IP addresses of each instance in AWS CloudFormation on the Outputs tab of the stack.

Now that you have a pipeline, try experimenting with it. You can release a change, disable and enable transitions, edit the pipeline to add more actions or change the existing ones – whatever you want to do, you can do it. It’s yours to play with. You can make changes to the source in your GitHub repository (if you chose GitHub as your source location) and watch those pushed changes build and deploy automatically. You can also explore the links to the resources used by the pipeline, such as the application and deployment groups in AWS CodeDeploy and the Jenkins server.

What to Do Next

After you’ve finished exploring your pipeline and its associated resources, you can do one of two things:

  • Delete the stack in AWS CloudFormation, which deletes the pipeline, its resources, and the stack itself. This is the option to choose if you no longer want to use the pipeline or any of its resources. Cleaning up resources you're no longer using is important, because you don't want to continue being charged for them. (A scripted clean-up sketch follows this list.)

To delete the stack:

  1. Delete the Amazon S3 bucket used as the artifact store in AWS CodePipeline. Although this bucket was created as part of the CloudFormation stack, Amazon S3 does not allow CloudFormation to delete buckets that contain objects. To delete this bucket, open the Amazon S3 console, select the bucket whose name starts with demo and ends with the name you chose for your stack, and then delete it. For more information, see Delete or Empty a Bucket.
  2. Follow the steps in Delete the stack.
  • Change the pipeline and its resources to start building applications you actually care about. Maybe you’re not ready to get into the business of creating bespoke suits for dogs. (We understand that dogs can be difficult clients to dress well, and that not everyone wants to be paid in dog treats.) However, perhaps you do have an application or two that you would like to set up for continuous delivery with AWS CodePipeline. AWS CodePipeline integrates with other services you might already be using for your software development, as well as GitHub. You can edit the pipeline to remove the actions or stages and add new actions and stages that more accurately reflect the delivery process for your applications. You can even create your own custom actions, if you want to integrate your own solutions.
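If you prefer to script the first option (deleting the stack) rather than use the console, here is a minimal sketch with the AWS SDK for Python (Boto3). The stack and bucket names are placeholders; as noted above, the artifact bucket name starts with demo and ends with your stack name.

import boto3

STACK_NAME = 'MyStarterKitStack'            # placeholder: your stack name
ARTIFACT_BUCKET = 'demo-MyStarterKitStack'  # placeholder: the artifact bucket created by the stack

# Empty the artifact bucket first; CloudFormation cannot delete a non-empty bucket.
s3 = boto3.resource('s3')
s3.Bucket(ARTIFACT_BUCKET).objects.all().delete()

# Delete the stack and wait for the deletion to finish.
cloudformation = boto3.client('cloudformation')
cloudformation.delete_stack(StackName=STACK_NAME)
cloudformation.get_waiter('stack_delete_complete').wait(StackName=STACK_NAME)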


If you decide to keep the pipeline and some or all of its resources, remember to secure the Jenkins server as described earlier, and keep in mind that you will continue to be charged for the resources you keep.

We hope you’ve enjoyed the starter kit and this blog post. If you have any feedback or questions, feel free to get in touch with us on the AWS CodePipeline forum.

Troubleshooting Problems with the Starter Kit

You can use the events on the Events tab of the CloudFormation stack to help you troubleshoot problems if the stack fails to complete creation or deletion.

Problem: The stack creation fails when trying to create the custom action in AWS CodePipeline.

Possible Solution: You or someone who shares your AWS account number might have used the starter kit once and chosen the same name for the application. Custom actions must have unique names within an AWS account. Another possibility is that you or someone else then deleted the resources, including the custom action. You cannot create a custom action using the name of a deleted custom action. In either case, delete the failed stack, and then try to create the stack again using a different application name.

Problem: The stack creation fails in AWS CloudFormation without any error messages.

Possible Solution: You’re probably missing one or more required permissions. Creating resources with the template in AWS CloudFormation requires the following policy or its equivalent permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:*",
                "codedeploy:*",
                "codepipeline:*",
                "ec2:*",
                "iam:AddRoleToInstanceProfile",
                "iam:CreateInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:GetRole",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}


Problem: Deleting the stack fails when trying to delete the Amazon S3 bucket created by the stack.

Possible solution:  One or more files or folders might be left in the bucket created by the stack. To delete this bucket, follow the instructions in Delete or Empty a Bucket, and then delete the stack in AWS CloudFormation.

Problem: I want to use my own Amazon S3 bucket as the source location for a pipeline, not the bucket pre-configured in the template.

Possible solution: Create your own bucket, following these steps:


  1. Download the sample application from GitHub at https://github.com/awslabs/aws-codedeploy-sample-tomcat and upload the suitsfordogs.zip application to an Amazon S3 bucket that was created in the US East (N. Virginia) Region.
  2. Sign into the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3.
  3. Choose your bucket from the list of buckets available, and on the Properties tab for the bucket, choose to add or edit the bucket policy.
  4. Make sure that your bucket has the following permissions set to Allow:

    • s3:PutObject
    • s3:List*
    • s3:Get*

    For more information, see Editing Bucket Permissions.

  5. When configuring details in CloudFormation, on the Specify Details page, in AppSourceType, choose S3, but then replace the information in Application Source – S3 with the details of your bucket and object.
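As a rough sketch of the same setup with the AWS SDK for Python (Boto3), the following uploads the sample application and applies a bucket policy granting the permissions listed above. The bucket name and the local policy file are placeholders, not values from the starter kit.

import boto3

BUCKET = 'my-pipeline-source-bucket'  # placeholder; must be created in us-east-1 per step 1

s3 = boto3.client('s3', region_name='us-east-1')

# Upload the sample application downloaded from GitHub.
s3.upload_file('suitsfordogs.zip', BUCKET, 'suitsfordogs.zip')

# Apply a bucket policy (written by you) that allows s3:PutObject, s3:List*, and s3:Get*.
with open('bucket-policy.json') as f:
    s3.put_bucket_policy(Bucket=BUCKET, Policy=f.read())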

Optimize AWS CloudFormation Templates

The following post is by guest blogger Julien Lépine, Solutions Architect at AWS. He explains how to optimize templates so that AWS CloudFormation quickly deploys your environments.

______________________________________________________________________________________

Customers sometimes ask me if there’s a way to optimize large AWS CloudFormation templates, which can take several minutes to deploy a stack. Often stack creation is slow because one resource depends on the availability of another resource before it can be provisioned. Examples include:

  • A front-end web server that has a dependency on an application server
  • A service that waits for another remote service to be available

In this post, I describe how to speed up stack creation when resources have dependencies on other resources.

Note: I show how to launch Windows instances with Windows PowerShell, but you can apply the same concepts to Linux instances launched with shell scripts.

How CloudFormation Creates Stacks

When CloudFormation provisions two instances, it provisions them in parallel, in no guaranteed order. Defining one resource before another in a template doesn't guarantee that CloudFormation will provision that resource first. You need to explicitly tell CloudFormation the right order for instance provisioning.

To demonstrate how to do this, I’ll start with the following CloudFormation template:

{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Description": "This is a demonstration AWS CloudFormation template containing two instances",
    "Parameters": {
        "ImageId" : {
            "Description": "Identifier of the base Amazon Machine Image (AMI) for the instances in this sample (please use Microsoft Windows Server 2012 R2 Base)",
            "Type" : "AWS::EC2::Image::Id"
        },
        "InstanceType" : {
            "Description": "EC2 instance type to use for the instances in this sample",
            "Type" : "String"
        }
    },
    "Resources" : { 
        "Instance1": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": { "Ref" : "ImageId" },
                "InstanceType": { "Ref": "InstanceType" },
            }
        },

        "Instance2": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": { "Ref" : "ImageId" },
                "InstanceType": { "Ref": "InstanceType" },
            }
        }
    }
}

With no dependency information, CloudFormation would likely provision Instance1 and Instance2 at the same time.

This is fast, but if Instance2 is dependent on Instance1, you would ordinarily need to hard code or script the provisioning sequence to ensure that Instance1 is provisioned first.

Specifying Dependencies

When you need CloudFormation to wait to provision one resource until another one has been provisioned, you can use the DependsOn attribute.

    "Instance2": {
        "DependsOn": ["Instance1"]
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": { "Ref" : "ImageId" },
            "InstanceType": { "Ref": "InstanceType" }
        }
    }

You can also introduce references between elements by using either the { "Ref": "MyResource" } or the { "Fn::GetAtt" : [ "MyResource" , "MyAttribute" ] } functions. When you use one of these functions, CloudFormation behaves as if you’ve added a DependsOn attribute to the resource. In the following example, the identifier of Instance1 is used in a tag for Instance2.

    "Instance2": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": { "Ref" : "ImageId" },
            "InstanceType": { "Ref": "InstanceType" },
            "Tags": [ { "Key" : "Dependency", "Value" : { "Ref": "Instance1" } } ]
        }
    }

Both methods of specifying dependencies result in the same sequence: CloudFormation waits for Instance1 to be provisioned before provisioning Instance2. But I'm not guaranteed that services hosted on Instance1 will be available, so I will have to address that in the template.

Note that instances are provisioned quickly in CloudFormation. In fact, provisioning takes only as long as the call to the Amazon Elastic Compute Cloud (Amazon EC2) RunInstances API. It takes much longer for an instance to fully boot than it does to provision it.

Using Creation Policies to Wait for On-Instance Configurations

In addition to provisioning the instances in the right order, I want to ensure that a specific setup milestone has been achieved inside Instance1 before contacting it. To do this, I use a CreationPolicy attribute. A CreationPolicy is an attribute you can add to an instance to prevent it from being marked CREATE_COMPLETE until it has been fully initialized.

In addition to adding the CreationPolicy attribute, I want to ask Instance1 to notify CloudFormation after it’s done initializing. I can do this in the instance’s UserData section. On Windows instances, I can use this section to execute code in batch files or in Windows PowerShell in a process called bootstrapping.

I’ll execute a batch script, then tell CloudFormation that the creation process is done by sending a signal specifying that Instance1 is ready. Here’s the code with a CreationPolicy attribute and a UserData section that includes a script that invokes cfn-signal.exe:

    "Instance1": {
      "Type": "AWS::EC2::Instance",
      "CreationPolicy" : {
        "ResourceSignal" : {
          "Timeout": "PT15M",
          "Count"  : "1"
        }
      },
      "Properties": {
        "ImageId": { "Ref" : "ImageId" },
        "InstanceType": { "Ref": "InstanceType" },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [ "n", [
                "<script>",

                "REM ...Do any instance configuration steps deemed necessary...",

                { "Fn::Join": ["", [ "cfn-signal.exe -e 0 --stack "", { "Ref": "AWS::StackName" }, "" --resource "Instance1" --region "", { "Ref" : "AWS::Region" }, """ ] ] },
                "</script>"
            ] ]
          }
        }
      }
    }

I don’t need to change the definition of Instance2 because it’s already coded to wait for Instance1. I now know that Instance1 will be completely set up before Instance2 is provisioned: CloudFormation provisions Instance1, waits for its signal, and only then provisions Instance2.

Optimizing the Process with Parallel Provisioning

It takes only a few seconds to provision an instance in CloudFormation, but it can take several minutes for an instance to boot and be ready because it must wait for the complete OS boot sequence, activation and the execution of the UserData scripts. As we saw in the figures, the time it takes to create the complete CloudFormation stack is about twice the boot and initialization time for a resource. Depending on the complexity of our processes, booting can take up to 10 minutes.

I can reduce waiting time by running instance creation in parallel and waiting only when necessary – before the application is configured. I can do this by splitting instance preparation into two steps: booting and initialization. Booting happens in parallel for both instances, but initialization for Instance2 starts only when Instance1 is completely ready.

In this new sequence, because some tasks run in parallel, it takes much less time for Instance2 to become available.

The only problem is that CloudFormation has no built-in construct for taking a dependency on a point in the middle of another resource’s boot process. Let’s devise a solution for this.

Using Wait Conditions

Creation policies also provide a notification mechanism. I can decouple notification for the creation of an instance from the notification that the instance is fully ready by using a wait condition.

    "Instance1WaitCondition" : {
        "Type" : "AWS::CloudFormation::WaitCondition",
        "DependsOn" : ["Instance1"],
        "CreationPolicy" : {
        "ResourceSignal" : {
                "Timeout": "PT15M",
                "Count"  : "1"
            }
        }
    }

Then I need to ask Instance1 to notify the wait condition after it’s done processing, instead of notifying itself. I’ll use the UserData section of the instance to do this.

    "Instance1": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Ref" : "ImageId" },
        "InstanceType": { "Ref": "InstanceType" },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [ "n", [
                "<script>",

                "REM ...Do any instance configuration steps deemed necessary...",

                { "Fn::Join": ["", [ "cfn-signal.exe -e 0 --stack "", { "Ref": "AWS::StackName" }, "" --resource "Instance1WaitCondition" --region "", { "Ref" : "AWS::Region" }, """ ] ] },
                "</script>"
            ] ]
          }
        }
      }
    }

Note that CreationPolicy is now defined inside Instance1WaitCondition, and the call to cfn-signal.exe notifies Instance1WaitCondition instead of Instance1.

We now have two resources that signal two different states of Instance1:

  • Instance1 is marked as created as soon as it is provisioned.
  • Instance1WaitCondition is marked as created only when Instance1 is fully initialized.

Let’s see how we can use this technique to optimize the booting process.

PowerShell to the Rescue

The DependsOn attribute is available only at the top level of resources, but I want Instance2 to wait for Instance1 after Instance2 has booted. To allow that, I need a way to check the status of resources from within the instance’s initialization script so that I can see when resource creation for Instance1WaitCondition is complete. Let’s use Windows PowerShell to provide some automation.

To check resource status from within an instance’s initialization script, I’ll use AWS Tools for Windows PowerShell, a package that is installed by default on every Microsoft Windows Server image provided by Amazon Web Services. The package includes more than 1,100 cmdlets, giving us access to all of the APIs available on the AWS cloud.

The Get-CFNStackResources cmdlet allows me to see whether resource creation for Instance1WaitCondition is complete. This PowerShell script loops until a resource is created:

    $region = ""
    $stack = ""
    $resource = "Instance1WaitCondition"
    $output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)
    while (($output -eq $null) -or ($output.ResourceStatus -ne "CREATE_COMPLETE") -and ($output.ResourceStatus -ne "UPDATE_COMPLETE"))
    {
        Start-Sleep 10
        $output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)
    }

Securing Access to the Resources

When calling an AWS API, I need to be authenticated and authorized. I can do this by providing an access key and a secret key to each API call, but there’s a much better way. I can simply create an AWS Identity and Access Management (IAM) role for the instance. When an instance has an IAM role, code that runs on the instance (including our PowerShell code in UserData) is authorized to make calls to the AWS APIs that are granted in the role.

When creating this role in IAM, I specify only the required actions, and limit these actions to only the current CloudFormation stack.

    "DescribeRole": {
        "Type"      : "AWS::IAM::Role",
        "Properties": {
            "AssumeRolePolicyDocument": {
                "Version" : "2012-10-17",
                "Statement": [ 
                    { 
                        "Effect": "Allow",
                        "Principal": { "Service": [ "ec2.amazonaws.com" ] },
                        "Action": [ "sts:AssumeRole" ]
                    }
                ]
            },
            "Path": "/",
            "Policies": [
                {
                    "PolicyName"    : "DescribeStack",
                    "PolicyDocument": {
                        "Version"  : "2012-10-17",
                        "Statement": [
                            {
                                "Effect" : "Allow",
                                "Action" : ["cloudformation:DescribeStackResource", "cloudformation:DescribeStackResources"],
                                "Resource" : [ { "Ref" : "AWS::StackId" } ]
                            }
                        ]
                    }
                }
            ]
        }
    },
    "DescribeInstanceProfile": {
        "Type"      : "AWS::IAM::InstanceProfile",
        "Properties": {
            "Path" : "/",
            "Roles": [ { "Ref": "DescribeRole" } ]
        }
    }

Creating the Resources

The definitions of Instance1WaitCondition and Instance1 are fine, but I need to update Instance2 to add the IAM role and include the PowerShell wait script. In the UserData section, I add a scripted reference to Instance1WaitCondition. This "soft" reference doesn’t introduce any dependency in CloudFormation because it’s just a plain string. In the UserData section, I also add a GetAtt reference to Instance1 so that these instances are provisioned quickly, one after the other, without waiting for the first instance to fully boot. Finally, I secure my API calls by specifying the instance profile created earlier as the IamInstanceProfile.

    "Instance2": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": { "Ref" : "ImageId" },
            "InstanceType": { "Ref": "InstanceType" },
            "IamInstanceProfile": { "Ref": "DescribeInstanceProfile" },
            "UserData": {
                "Fn::Base64": { 
                    "Fn::Join": [ "n", [
                        "",
                        "$resource = "Instance1WaitCondition"",
                        { "Fn::Join": ["", [ "$region = '", { "Ref" : "AWS::Region" }, "'" ] ] },
                        { "Fn::Join": ["", [ "$stack = '", { "Ref" : "AWS::StackId" }, "'" ] ] },

                        "#...Wait for instance 1 to be fully available...",

                        "$output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)",
                        "while (($output -eq $null) -or ($output.ResourceStatus -ne "CREATE_COMPLETE") -and ($output.ResourceStatus -ne "UPDATE_COMPLETE")) {",
                        "    Start-Sleep 10",
                        "    $output = (Get-CFNStackResources -StackName $stack -LogicalResourceId $resource -Region $region)",
                        "}",

                        "#...Do any instance configuration steps you deem necessary...",

                        { "Fn::Join": ["", [ "$instance1Ip = '", { "Fn::GetAtt" : [ "Instance1" , "PrivateIp" ] }, "'" ] ] },

                        "#...You can use the private IP address from Instance1 in your configuration scripts...",

                        ""
                    ] ]
                }
            }
        }
    }

Now, CloudFormation provisions Instance2 just after Instance1, saving a lot of time because Instance2 boots while Instance1 is booting, but Instance2 then waits for Instance1 to be fully operational before finishing its configuration.

During new environment creation, when a stack contains numerous resources, some with cascading dependencies, this technique can save a lot of time. And when you really need to get an environment up and running quickly, for example, when you’re performing disaster recovery, that’s important.

More Optimization Options

If you want a more reliable way to execute multiple scripts on an instance in CloudFormation, check out the cfn-init helper script and the AWS::CloudFormation::Init metadata type, which provide a flexible and powerful way to configure an instance when it’s started. To automate and simplify scripting your instances and reap the benefits of automatic domain joining for instances, see Amazon EC2 Simple Systems Manager (SSM). To operate your Windows instances in a full DevOps environment, consider using AWS OpsWorks.

AWS CodeDeploy: Deploying from a Development Account to a Production Account

AWS CodeDeploy helps users deploy software to a fleet of Amazon EC2 or on-premises instances. A software revision is typically deployed and tested through multiple stages (development, testing, staging, and so on) before it’s deployed to production. It’s also a common practice to use a separate AWS account for each stage. In this blog post, we will show you how to deploy a revision that is tested in one account to instances in another account.

Prerequisites

We assume you are already familiar with AWS CodeDeploy concepts and have completed the Basic Deployment Walkthrough. In addition, we assume you have a basic understanding of AWS Identity and Access Management (IAM) and have read the Cross-Account Access Using Roles topic.

Setup

Let’s assume you have development and production AWS accounts with the following details:

  • AWS account ID for development account: <development-account-id>
  • S3 bucket under development account: s3://my-demo-application/
  • IAM user for development account: development-user (This is the IAM user you use for AWS CodeDeploy deployments in your development account.)
  • AWS account ID for production account: <production-account-id>

You have already tested your revision in the development account and it’s available in s3://my-demo-application/. You want to deploy this revision to instances in your production account.

Step 1: Create the application and deployment group in the production account

You will need to create the application and deployment group in the production account. Keep in mind that deployment groups, and the Amazon EC2 instances to which they are configured to deploy, are strictly tied to the accounts under which they were created. Therefore, you cannot add an instance in the production account to a deployment group in the developer account. Also, make sure the EC2 instances in the production account have the AWS CodeDeploy agent installed and are launched with an IAM instance profile. You can follow the steps in this topic. For example, in this post, we will use the following settings:

  • Application name: CodeDeployDemo
  • Deployment group name: Prod
  • IAM instance profile: arn:aws:iam::role/CodeDeployDemo-EC2

Step 2: Create a role under the production account for cross-account deployment

Log in to the production account and go to the IAM console. Choose Roles in the navigation pane, and then choose Create New Role. You will create a new role under the production account that grants cross-account permission to the development account. Name this role CrossAccountRole.

Choose Next Step. Under Select Role Type, choose Role for Cross-Account Access, and then choose Provide access between AWS accounts you own.

Choose Next Step. In the Account ID field, type the AWS account ID of the development account.

Attach the AmazonS3FullAccess and AWSCodeDeployDeployerAccess policies to this role, and then follow the wizard to complete role creation. When you are done, both managed policies should appear in the role's policy list.
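If you'd rather create the role from a script than the console, here is a minimal sketch with the AWS SDK for Python (Boto3), run with credentials for the production account. The trust policy mirrors the console steps above; <development-account-id> is the placeholder used throughout this post.

import json

import boto3

iam = boto3.client('iam')

# Trust policy that lets the development account assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<development-account-id>:root"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(RoleName='CrossAccountRole',
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach the same managed policies used in the console steps.
for policy_arn in ('arn:aws:iam::aws:policy/AmazonS3FullAccess',
                   'arn:aws:iam::aws:policy/AWSCodeDeployDeployerAccess'):
    iam.attach_role_policy(RoleName='CrossAccountRole', PolicyArn=policy_arn)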


Step 3: Give the IAM instance profile permission to the S3 bucket under the development account

Now log in to the development account. The AWS CodeDeploy agent relies on the IAM instance profile to access the S3 bucket. In this post, the development account contains the deployment revision, so update the bucket policy for the S3 bucket in the development account to give the production account's IAM instance profile (arn:aws:iam::role/CodeDeployDemo-EC2) permission to retrieve objects from the bucket. You can follow the steps in granting cross-account bucket permission.

You can find the IAM instance profile by going to the EC2 console and checking the IAM role associated with your EC2 instances in the production account.

Here is what the policy looks like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::role/CodeDeployDemo-EC2"
            },
            "Action": [
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": "arn:aws:s3:::my-demo-application/*"
        }
    ]
}

Step 4: Give the IAM user under the development account access to the production role

In the development account IAM console, select the development-user IAM user and add the following policy.

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::<production-account-id>:role/CrossAccountRole"
    }
}

This policy gives the development-user enough permission to assume the CrossAccountRole under the production account. You’ll find step by step instructions in the walkthrough Granting Users Access to the Role.

Step 5: Deploy to production account

That’s it. In four simple steps, you have set up everything you need to deploy from a development to a production account. To deploy:

Log in to your development account as development-user. For more information, see How Users Sign In to Your Account in the IAM documentation. In the upper-right, choose the user name. You will see the "Switch Role" link:

Alternatively, you can use the sign-in link, which you will find under "Summary" on the IAM role details page.

On the "Switch Role" page, provide the production account role information:

You only need to complete these steps once. The role will appear in the role history after you add it. For step-by-step instructions, see this blog post. After you switch to the production account role, open the AWS CodeDeploy console and deploy to the targeted application and deployment group.

Wait for the deployment to be completed successfully. The changes will be released to the production fleet.
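The same deployment can also be triggered from a script. The sketch below, using the AWS SDK for Python (Boto3) with the development-user credentials, assumes CrossAccountRole and then creates a deployment. The revision object key is a placeholder; the other names match the values used in this post.

import boto3

# Assume the cross-account role from the development account.
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::<production-account-id>:role/CrossAccountRole',
    RoleSessionName='cross-account-deploy')['Credentials']

# Use the temporary credentials to call CodeDeploy in the production account.
codedeploy = boto3.client(
    'codedeploy',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'])

deployment = codedeploy.create_deployment(
    applicationName='CodeDeployDemo',
    deploymentGroupName='Prod',
    revision={
        'revisionType': 'S3',
        's3Location': {
            'bucket': 'my-demo-application',
            'key': 'my-revision.zip',   # placeholder object key for your tested revision
            'bundleType': 'zip'
        }
    })
print(deployment['deploymentId'])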

Next steps

  • The CrossAccountRole we created in step 2 has access to any S3 bucket. You can scope the permissions down to just the required bucket (in this case, my-demo-application). Similarly, CrossAccountRole can deploy to any application and deployment group; you can scope this down to just the required application and deployment group. For more information, see AWS CodeDeploy User Access Permissions Reference. Also, instead of using the account root as the trusted entity, you can update the trust relationship to allow only a specific IAM user (for example, development-user in the development account) to assume this role.

We hope this blog post has been helpful. Are there other deployment workflow questions you would like us to answer? Let us know in the comments or in our user forum.

Setting Up the Jenkins Plugin for AWS CodeDeploy

The following is a guest post by Maitreya Ranganath, Solutions Architect.


In this post, we’ll show you how to use the Jenkins plugin to automatically deploy your builds with AWS CodeDeploy. We’ll walk through the steps for creating an AWS CodeCommit repository, installing Jenkins and the Jenkins plugin, adding files to the CodeCommit repository, and configuring the plugin to create a deployment when changes are committed to an AWS CodeCommit repository.

Create an AWS CodeCommit Repository

First, we will create an AWS CodeCommit repository to store our sample code files.

1. Sign in to the AWS Management Console and open the AWS CodeCommit console in the us-east-1 (N. Virginia) Region.  Choose Get Started or Create Repository.

2. For Repository Name, type a name for your repository (for example, DemoRepository). For Description, type Repository for Jenkins Code Deploy.

3. Choose the Create repository button.

4. Choose the repository you just created to view its details.

5. Choose the Clone URL button, and then choose HTTPS. Copy the displayed URL to your clipboard. You'll need it later to configure Jenkins.
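If you prefer to create the repository from code instead of the console, here is an equivalent sketch using the AWS SDK for Python (Boto3); the name and description match the values above.

import boto3

codecommit = boto3.client('codecommit', region_name='us-east-1')
repo = codecommit.create_repository(
    repositoryName='DemoRepository',
    repositoryDescription='Repository for Jenkins Code Deploy')
print(repo['repositoryMetadata']['cloneUrlHttp'])  # the HTTPS clone URL used later to configure Jenkins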


Now that you have created an AWS CodeCommit repository, we’ll create a Jenkins server and AWS CodeDeploy environment.

Create a Jenkins Server and AWS CodeDeploy Environment

In this step, we’ll launch a CloudFormation template that will create the following resources:

  • An Amazon S3 bucket that will be used to store deployment files.
  • JenkinsRole, an IAM role and instance profile for the Amazon EC2 instance that will run Jenkins. This role allows Jenkins on the EC2 instance to assume the CodeDeployRole and access repositories in CodeCommit.
  • CodeDeployRole, an IAM role assumed by the CodeDeploy Jenkins plugin. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • Jenkins server, an EC2 instance running Jenkins.
  • An Auto Scaling group of EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancing load balancer.

To create the CloudFormation stack, open the link that corresponds to the AWS region where you want to work.

For the us-east-1 region, use the following link:

https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=JenkinsCodeDeploy&templateURL=https://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_Jenkins_Integration.json

For the us-west-2 region, use the following link:

https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=JenkinsCodeDeploy&templateURL=https://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_Jenkins_Integration.json

6. Choose Next and specify the following values:

  • For InstanceCount, accept the default of 3. (Three EC2 instances will be launched for CodeDeploy.)
  • For InstanceType, accept the default of t2.medium.
  • For KeyName, choose an existing EC2 key pair. You will use it to connect by using SSH to the Jenkins server. Ensure that you have access to the private key of this key pair.
  • For PublicSubnet1, choose a public subnet where the load balancer, Jenkins server, and CodeDeploy web servers will be launched.
  • For PublicSubnet2, choose a public subnet where the load balancers and CodeDeploy web servers will be launched.
  • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
  • For YourIPRange, type the CIDR block of the network from which you will connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, find it by going to https://www.whatismyip.com/, and then enter that address followed by '/32'. If you do not have a static IP address (or aren't sure whether you have one), you can enter '0.0.0.0/0' in this field, but be aware that any address will then be able to reach your Jenkins server.

7. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box, and then choose Create.

8. Wait for the CloudFormation stack status to change to CREATE_COMPLETE. This will take approximately 6-10 minutes.


9. Note the values displayed on the Outputs tab. You’ll need them later.

10. Point your browser to the ELBDNSName from the Outputs tab and verify that you can see the Sample Application page.

Secure Jenkins

Point your browser to the JenkinsServerDNSName (for example, ec2-54-163-4-211.compute-1.amazonaws.com) from the Outputs tab. You should be able to see the Jenkins home page:

The Jenkins installation is currently accessible through the Internet without any form of authentication. Before proceeding to the next step, let’s secure Jenkins. On the Jenkins home page, choose Manage Jenkins. Choose Configure Global Security, and then to enable Jenkins security, select the Enable security check box.

Under Security Realm, choose Jenkins’s own user database and select the Allow users to sign up check box. Under Authorization, choose Matrix-based security. Add a user (for example, admin) and give this user all privileges. Save your changes.

Now you will be asked to provide a user name and password for the user. Choose Create an account, provide the user name (for example, admin), a strong password, and then complete the user details. Now you will be able to sign in securely to Jenkins.

Create a Project and Configure the CodeDeploy Jenkins Plugin

Now we’ll create a project in Jenkins and configure the Jenkins plugin to poll for code updates from the AWS CodeCommit repository.

1. Sign in to Jenkins with the user name and password you created earlier.

2. Choose New Item, and then choose Freestyle project. Type a name for the project (for example, CodeDeployApp), and then choose OK.

3. On the project configuration page, under Source Code Management, choose Git. Paste the URL you noted when you created the AWS CodeCommit repository (step 5).

4. In Build Triggers, select the Poll SCM check box. In the Schedule text field, type H/2 * * * *. This tells Jenkins to poll CodeCommit every two minutes for updates. (This may be too frequent for production use, but it works well for testing because it returns results frequently.)

5. Under Post-build Actions, choose Add post-build actions, and then select the Deploy an application to AWS CodeDeploy check box.

6. Paste the values you noted on the Outputs tab when you created the CloudFormation stack (step 9):

  • For AWS CodeDeploy Application Name, paste the value of CodeDeployApplicationName.
  • For AWS CodeDeploy Deployment Group, paste the value of CodeDeployDeploymentGroup.
  • For AWS CodeDeploy Deployment Config, type CodeDeployDefault.OneAtATime.
  • For AWS Region, choose the region where you created the CodeDeploy environment.
  • For S3 Bucket, paste the value of S3BucketName.
  • Leave the other settings at their default (blank).

7. Choose Use temporary credentials, and then paste the value of JenkinsCodeDeployRoleArn that appeared in the CloudFormation output.

Note the External ID field displayed on this page. This is a unique random ID generated by the CodeDeploy Jenkins plugin. This ID can be used to add a condition to the IAM role to ensure that only the plugin can assume this role. To keep things simple, we will not use the External ID as a condition, but we strongly recommend you use it for added protection in a production scenario, especially when you are using cross-account IAM roles.
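For reference, here is a hedged sketch of what adding that condition could look like with the AWS SDK for Python (Boto3). The account ID, role names, and external ID are placeholders for the values from your CloudFormation stack and the Jenkins plugin page; treat all of them as assumptions to adapt.

import json

import boto3

iam = boto3.client('iam')

# Hypothetical trust policy: only the Jenkins instance role supplying the plugin's
# External ID can assume the CodeDeploy role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<account-id>:role/<jenkins-instance-role>"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "<external-id-from-plugin>"}}
    }]
}

iam.update_assume_role_policy(RoleName='<codedeploy-role-name>',
                              PolicyDocument=json.dumps(trust_policy))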


8. Choose Test Connection.


9. Confirm the text “Connection test passed” appears, and then choose Save to save your settings.

Add Files to the CodeCommit Repository

Now, we’ll use the git command-line tool to clone the AWS CodeCommit repository and then add files to it. These steps show you how to use SSH to connect to the Jenkins server. If you are more comfortable with Git integrated in your IDE, follow the steps in the CodeCommit documentation to clone the repository and add files to it.

1. Use SSH to connect to the public DNS name of the EC2 instance for Jenkins (JenkinsServerDNSName from the Outputs tab) and sign in as the ec2-user. Run the following commands to configure git. Replace the values enclosed in quotes with your name and email address.

$ aws configure set region us-east-1
$ aws configure set output json
$ git config --global credential.helper '!aws codecommit credential-helper $@'
$ git config --global credential.useHttpPath true
$ git config --global user.name "YOUR NAME"
$ git config --global user.email "example@example.com"

2. Clone the repository you created in the previous step.

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepository
Cloning into 'DemoRepository'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.

3. Switch to the DemoRepository directory:

$ cd DemoRepository/

4. Now we’ll download the source for the Sample CodeDeploy application.

$ curl -OLs http://aws-codedeploy-us-east-1.s3.amazonaws.com/samples/latest/SampleApp_Linux.zip

5. Unzip the downloaded file:

$ unzip SampleApp_Linux.zip
Archive:  SampleApp_Linux.zip
extracting: scripts/install_dependencies  
extracting: scripts/start_server    
inflating: scripts/stop_server     
inflating: appspec.yml             
inflating: index.html              
inflating: LICENSE.txt 

6. Delete the ZIP file:

$ rm SampleApp_Linux.zip

7. Use a text editor to edit the index.html file:

$ vi index.html

8. Scroll down to the body tag and add a line of text that you will recognize later (for example, "This version was deployed by Jenkins"):

9. Save the file and close the editor.

10. Add the files to git and commit them with a comment:

$ git add appspec.yml index.html LICENSE.txt scripts/*
$ git commit -m "Initial versions of files"

11. Now push these updates to CodeCommit:

$ git push

12. If your updates have been successfully pushed to CodeCommit, you should see something like the following:

Counting objects: 9, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (9/9), 5.05 KiB | 0 bytes/s, done.
Total 9 (delta 0), reused 0 (delta 0)
remote: 
To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepository
 * [new branch]      master -> master

13. On the Jenkins dashboard, choose the CodeDeployApp project.

14. Choose Git Polling Log to see the results of polling git for updates. There may be a few failed polls from earlier when the repository was empty.

15. Within two minutes of pushing updates, a new build with a Build ID (for example, #2 or #3) should appear in the build history.

16. Choose the most recent build. On the Build details page, choose Console Output to view output from the build.

17. At the bottom of the output, check that the status of the build is SUCCESS.

18. In the CodeDeploy console, choose AWS CodeDeploy, and then choose Deployments.


19. Confirm that there are two deployments: the initial deployment created by CloudFormation and a recent deployment of the latest code from AWS CodeCommit. Confirm that the status of the recent deployment is Succeeded.


20. Point your browser to the ELBDNSName from the Outputs tab of CloudFormation. Confirm that the text “This version was deployed by Jenkins” appears on the page.

Congratulations, you have now successfully set up the CodeDeploy Jenkins plugin and used it to automatically deploy a revision to CodeDeploy when code updates are pushed to AWS CodeCommit.

You can experiment by committing more changes to the code and then pushing them to deploy the updates automatically.

Cleaning Up

In this section, we’ll delete the resources we’ve created so that you will not be charged for them going forward.

1. Sign in to the Amazon S3 console and choose the S3 bucket you created earlier. The bucket name will start with “jenkinscodedeploy-codedeploybucket.” Choose all files in the bucket, and from Actions, choose Delete.

2. Choose OK to confirm the deletion.

3. In the CloudFormation console, choose the stack named “JenkinsCodeDeploy,” and from Actions, choose Delete Stack. Refresh the Events tab of the stack until the stack disappears from the stack list.

AWS CloudFormation Security Best Practices

The following is a guest post by Hubert Cheung, Solutions Architect.

AWS CloudFormation makes it easy for developers and systems administrators to create and manage a collection of related AWS resources by provisioning and updating them in an orderly and predictable way. Many of our customers use CloudFormation to control all of the resources in their AWS environments so that they can succinctly capture changes, perform version control, and manage costs in their infrastructure, among other activities.

Customers often ask us how to control permissions for CloudFormation stacks. In this post, we share some of the best security practices for CloudFormation, which include using AWS Identity and Access Management (IAM) policies, CloudFormation-specific IAM conditions, and CloudFormation stack policies. Because most CloudFormation deployments are executed from the AWS command line interface (CLI) and SDK, we focus on using the AWS CLI and SDK to show you how to implement the best practices.

Limiting Access to CloudFormation Stacks with IAM

With IAM, you can securely control access to AWS services and resources by using policies and users or roles. CloudFormation leverages IAM to provide fine-grained access control.

As a best practice, we recommend that you limit service and resource access through IAM policies by applying the principle of least privilege. The simplest way to do this is to limit specific API calls to CloudFormation. For example, you may not want specific IAM users or roles to update or delete CloudFormation stacks. The following sample policy allows access to all CloudFormation APIs, but denies UpdateStack and DeleteStack on your production stack:

{
    "Version":"2012-10-17",
    "Statement":[{
        "Effect":"Allow",
        "Action":[        
            "cloudformation:*"
        ],
        "Resource":"*"
    },
    {
        "Effect":"Deny",
        "Action":[        
            "cloudformation:UpdateStack",
            "cloudformation:DeleteStack"
        ],
        "Resource":"arn:aws:cloudformation:us-east-1:123456789012:stack/MyProductionStack/*"
    }]
}

We know that IAM policies often need to allow the creation of particular resources, but you may not want them to be created as part of CloudFormation. This is where CloudFormation’s support for IAM conditions comes in.

IAM Conditions for CloudFormation

There are three CloudFormation-specific IAM conditions that you can add to your IAM policies:

  • cloudformation:TemplateURL
  • cloudformation:ResourceTypes
  • cloudformation:StackPolicyURL

With these three conditions, you can ensure that API calls for stack actions, such as create or update, use a specific template or are limited to specific resources, and that your stacks use a stack policy, which prevents stack resources from unintentionally being updated or deleted during stack updates.

Condition: TemplateURL

The first condition, cloudformation:TemplateURL, lets you specify where the CloudFormation template for a stack action, such as create or update, resides and enforce that it be used. In an IAM policy, it would look like this:

{
    "Version":"2012-10-17",
    "Statement":[{
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "cloudformation:TemplateURL": [
                    "https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template"
                ]
            }
        }
    },
    {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "Null": {
                "cloudformation:TemplateURL": "true"
            }
        }
    }]
}

The first statement ensures that for all CreateStack or UpdateStack API calls, users must use the specified template. The second ensures that all CreateStack or UpdateStack API calls must include the TemplateURL parameter. From the CLI, your calls need to include the --template-url parameter:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template

Condition: ResourceTypes

CloudFormation also allows you to control the types of resources that are created or updated in templates with an IAM policy. The CloudFormation API accepts a ResourceTypes parameter. In your API call, you specify which types of resources can be created or updated. However, to use the new ResourceTypes parameter, you need to modify your IAM policies to enforce the use of this particular parameter by adding in conditions like this:

{
    "Version":"2012-10-17",
    "Statement":[{
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "ForAllValues:StringLike": {
                "cloudformation:ResourceTypes": [
                    "AWS::IAM::*"
                ]
            }
        }
    },
    {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "Null": {
                "cloudformation:ResourceTypes": "true"
            }
        }
    }]
}

From the CLI, your calls need to include a --resource-types parameter. A call to create or update your stack will look like this:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --resource-types="[AWS::IAM::Group, AWS::IAM::User]"

Depending on the shell, the command might need to be enclosed in quotation marks as follows; otherwise, you'll get a "No JSON object could be decoded" error:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --resource-types='["AWS::IAM::Group", "AWS::IAM::User"]'

The ResourceTypes conditions ensure that CloudFormation creates or updates the right resource types and templates with your CLI or API calls. In the first example, our IAM policy would have blocked the API calls because the example included AWS::IAM resources. If our template included only AWS::EC2::Instance resources, the CLI command would look like this and would succeed:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --resource-types='["AWS::EC2::Instance"]'

The third condition is the StackPolicyURL condition. Before we explain how that works, we need to provide some additional context about stack policies.

Stack Policies

Often, the worst disruptions are caused by unintentional changes to resources. To help in mitigating this risk, CloudFormation provides stack policies, which prevent stack resources from unintentionally being updated or deleted during stack updates. When used in conjunction with IAM, stack policies provide a second layer of defense against both unintentional and malicious changes to your stack resources.

The CloudFormation stack policy is a JSON document that defines what can be updated as part of a stack update operation. To set or update the policy, your IAM users or roles must first have the ability to call the cloudformation:SetStackPolicy action.

You apply the stack policy directly to the stack. Note that this is not an IAM policy. By default, setting a stack policy protects all stack resources with a Deny to deny any updates unless you specify an explicit Allow. This means that if you want to restrict only a few resources, you must explicitly allow all updates by including an Allow on the resource "*" and a Deny for specific resources. 

For example, stack policies are often used to protect a production database because it contains data that will go live. Depending on the field that’s changing, there are times when the entire database could be replaced during an update. In the following example, the stack policy explicitly denies attempts to update your production database:

{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "LogicalResourceId/ProductionDB_logical_ID"
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    }
  ]
}
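To apply a stack policy like the one above from code rather than the console or CLI, a minimal sketch with the AWS SDK for Python (Boto3) might look like this; the stack name and local file name are assumptions.

import boto3

cloudformation = boto3.client('cloudformation')

# Apply the stack policy shown above (saved locally as stack-policy.json) to an
# existing stack. Callers need the cloudformation:SetStackPolicy permission.
with open('stack-policy.json') as f:
    cloudformation.set_stack_policy(StackName='MyProductionStack',
                                    StackPolicyBody=f.read())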

You can generalize your stack policy to include all RDS DB instances or any given ResourceType. To achieve this, you use conditions. However, note that because we used a wildcard in our example, the condition must use the "StringLike" condition and not "StringEquals":

{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*",
      "Condition" : {
        "StringLike" : {
          "ResourceType" : ["AWS::RDS::DBInstance", "AWS::AutoScaling::*"]
        }
      }
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    }
  ]
}

For more information about stack policies, see Prevent Updates to Stack Resources.

Finally, let’s ensure that all of your stacks have an appropriate pre-defined stack policy. To address this, we return to  IAM policies.

Condition: StackPolicyURL

From within your IAM policy, you can ensure that every CloudFormation stack has a stack policy associated with it upon creation with the StackPolicyURL condition:

{
    "Version":"2012-10-17",
    "Statement":[
    {
            "Effect": "Deny",
            "Action": [
                "cloudformation:SetStackPolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringNotEquals": {
                    "cloudformation:StackPolicyUrl": [
                        "https://s3.amazonaws.com/samplebucket/sampleallowpolicy.json"
                    ]
                }
            }
        },    
       {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack"
        ],
        "Resource": "*",
        "Condition": {
            "ForAnyValue:StringNotEquals": {
                "cloudformation:StackPolicyUrl": [
                    "https://s3.amazonaws.com/samplebucket/sampledenypolicy.json"
                ]
            }
        }
    },
    {
        "Effect": "Deny",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack",
            "cloudformation:SetStackPolicy"
        ],
        "Resource": "*",
        "Condition": {
            "Null": {
                "cloudformation:StackPolicyUrl": "true"
            }
        }
    }]
}

This policy ensures that there must be a specific stack policy URL any time SetStackPolicy is called. In this case, the URL is https://s3.amazonaws.com/samplebucket/sampleallowpolicy.json. Similarly, for any create and update stack operation, this policy ensures that the StackPolicyURL is set to the sampledenypolicy.json document in S3 and that a StackPolicyURL is always specified. From the CLI, a create-stack command would look like this:

aws cloudformation create-stack --stack-name cloudformation-demo --parameters ParameterKey=Password,ParameterValue=CloudFormationDemo --capabilities CAPABILITY_IAM --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --stack-policy-url https://s3-us-east-1.amazonaws.com/samplebucket/sampledenypolicy.json

Note that if you specify a new stack policy on a stack update, CloudFormation uses the existing stack policy: it uses the new policy only for subsequent updates. For example, if your current policy is set to deny all updates, you must run a SetStackPolicy command to change the stack policy to the one that allows updates. Then you can run an update command against the stack. To update the stack we just created, you can run this:

aws cloudformation set-stack-policy --stack-name cloudformation-demo --stack-policy-url https://s3-us-east-1.amazonaws.com/samplebucket/sampleallowpolicy.json

Then you can run the update:

aws cloudformation update-stack --stack-name cloudformation-demo --parameters ParameterKey=Password,ParameterValue=NewPassword --capabilities CAPABILITY_IAM --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/IAM_Users_Groups_and_Policies.template --stack-policy-url https://s3-us-west-2.amazonaws.com/awshubfiles/sampledenypolicy.json

The IAM policy that we used ensures that a specific stack policy is applied to the stack any time a stack is updated or created.

Conclusion

CloudFormation provides a repeatable way to create and manage related AWS resources. By using a combination of IAM policies, users, and roles, CloudFormation-specific IAM conditions, and stack policies, you can ensure that your CloudFormation stacks are used as intended and minimize accidental resource updates or deletions.

You can learn more about this topic and other CloudFormation best practices in the recording of our re:Invent 2015 session, (DVO304) AWS CloudFormation Best Practices, and in our documentation.