AWS Open Source Blog

Using AWS CodePipeline and open source tools for at-scale infrastructure deployment

AWS offers a rich set of developer tools to host code, build, and deploy your application and/or infrastructure to AWS. These include AWS CodePipeline, for continuous integration and continuous deployment orchestration; AWS CodeCommit, a fully-managed source control service; AWS CodeBuild, a fully-managed continuous integration service that compiles source code, runs tests, and produces software packages; and AWS CodeDeploy, a fully-managed deployment service that automates software deployments to a variety of compute services. These services can be used together to build powerful CI/CD pipelines on AWS.

In addition, AWS developer tools allow you to leverage open source software (OSS) to build CI/CD pipelines on AWS, so you can continue to use the tools you are familiar with. For instance, you can use GitHub as the source code provider for AWS CodePipeline or run CodeBuild jobs orchestrated by Jenkins. In fact, a variety of OSS and third-party tools can be easily integrated with AWS CodeBuild, allowing you to extend the capabilities of your pipeline. As an extra benefit, you are not required to manage build servers since CodeBuild is a fully-managed service.

In this blog post, we will show you how to build a serverless infrastructure deployment pipeline (i.e., no need for you to manage build servers) on AWS using AWS developer tools in conjunction with popular open source tools such as CFN-Nag, CFN-Python-Lint, and Stacker. The pipeline will run automated validation checks against CloudFormation templates and deploy the corresponding CloudFormation stacks if the templates are valid.

The companion source code for this blog post can be found in the GitHub repo aws-codepipeline-at-scale-infrastructure-blog.

Open source tools

First, let’s briefly review the open source tools that will be later used to build the infrastructure pipeline discussed in this post.

CFN-Nag

CFN-Nag is a popular open source tool developed by Stelligent and provided to the open source community to help pinpoint security problems early on in an AWS CloudFormation template. CFN-Nag looks for patterns in AWS CloudFormation templates that may indicate insecure infrastructure, for example:

  • IAM rules that are too permissive (wildcards)
  • Security group rules that are too permissive (wildcards)
  • Access logs that aren’t enabled
  • Encryption that isn’t enabled

To use CFN-Nag from your workstation, you’ll need Ruby v2.5 or later installed. Assuming you have met this prerequisite, simply run:

gem install cfn-nag

If you are using macOS, you can use Homebrew to install it:

brew install ruby brew-gem
brew gem install cfn-nag

To run CFN-Nag from the command line:

cfn_nag_scan --input-path <path to cloudformation json>
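As an illustration, a hypothetical template fragment like the one below would be flagged by CFN-Nag because of the wildcard action and resource in the IAM policy (the resource and policy names are made up for this example):

```yaml
# Hypothetical fragment: CFN-Nag flags the wildcards in this policy
Resources:
  OverlyPermissiveRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: too-permissive
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action: "*"    # wildcard action -- flagged
                Resource: "*"  # wildcard resource -- flagged
```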

For further information please check the CFN-Nag GitHub repository.

CFN-Python-Lint

CFN-Python-Lint was released by AWS to the open source community as a CloudFormation linter. It validates AWS CloudFormation templates (YAML and JSON) against the AWS CloudFormation resource specification and performs additional checks, such as ensuring valid values for resource properties and adherence to best practices. For instance, if an input parameter is defined in a template but never referenced, the linter will raise a warning.

To install CFN-Python-Lint on your workstation, you can use pip:

pip install cfn-lint

If pip is not available, you can install from source using the Python command line:

python setup.py clean --all
python setup.py install

If you are using macOS, you can also install CFN-Python-Lint with Homebrew:

brew install cfn-lint

To invoke CFN-Python-Lint from the command line:

 cfn-lint <path to yaml template> 

This will run standard linting of your template.
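For example, running cfn-lint against a hypothetical template like the one below produces a warning, because the parameter is declared but never referenced:

```yaml
# Hypothetical template: cfn-lint warns that UnusedParam is never used
Parameters:
  UnusedParam:
    Type: String
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket
```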

For further information please see the CFN-Python-lint GitHub repository.

Stacker

Stacker is an open source tool and library created by the Remind engineering team and released to the open source community. It is used to orchestrate and manage CloudFormation stacks across one or more accounts. Stacker can manage stack dependencies automatically and allows the output parameters of a given stack to be passed as input to other stacks (even stacks deployed in different accounts!), making sure the stacks are created in the right order. It can also parallelize the deployment of independent stacks, reducing deployment time significantly.

To install Stacker on your workstation, you can use pip:

pip install stacker

To invoke the command line tool, run this command after stacker is installed:

stacker build <path to cloudformation yaml or json template>

Stacker will either launch or update your AWS CloudFormation stacks based on their configurations. Stacker can detect if the templates or their parameters have changed. If no changes are detected to a given stack, Stacker will not update that particular stack.

For further information see the Stacker GitHub repository or Stacker Documentation, Release 1.7.0.

Overview of the infrastructure pipeline

Overview of the infrastructure pipeline.

 

The figure above shows the infrastructure deployment pipeline we will build in this blog post. It uses AWS services such as AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild in conjunction with open source software such as CFN-Nag, CFN-Python-Lint, and Stacker.

The pipeline works as follows: a developer creates CloudFormation templates to provision infrastructure resources on AWS (e.g., a VPC with subnets, route tables, security groups, etc.). When ready, the developer pushes the templates to an AWS CodeCommit repository, which then triggers the infrastructure pipeline. The OSS tools CFN-Nag and CFN-Python-Lint run in parallel within a CodeBuild environment to validate the templates. CFN-Nag checks the templates against well-known security vulnerabilities, while CFN-Python-Lint enforces best practices and verifies compliance with the CloudFormation specification. If the templates do not pass the verification tests, the pipeline stops in a failure state to prevent deployment issues. If the templates pass verification, Stacker creates or updates a CloudFormation stack for each template. If Stacker fails, the pipeline stops in a failure state to indicate a deployment issue. Otherwise, the stacks are created or updated to provision the desired AWS resources.

As you might have noticed, the infrastructure pipeline described is generic and can be used to deploy arbitrary resources on AWS across a number of AWS accounts, while enforcing verification rules against the submitted templates. For simplicity, we use Stacker to deploy to a single AWS account in this post, but the tool really shines in the context of large cross-account deployments.

What will be deployed using the pipeline?

In this post, we will use the infrastructure deployment pipeline to deploy three simple stacks as follows:

Stack #1: creates a private S3 bucket.
Stack #2: creates an EC2 IAM role that allows putting objects in the S3 bucket created in stack #1.
Stack #3: creates an EC2 instance and EC2 profile that assumes the IAM role created in stack #2 and uploads a simple file to the S3 bucket created in stack #1. Stack #3 depends on stacks #1 and #2.

Below is a snippet of the Stacker configuration file we use to create the three stacks we just mentioned:

#-------------------------------------------------------------------------#
# Variable definitions
#-------------------------------------------------------------------------#
namespace: cposs
stacker_execution_profile: &stacker_execution_profile stacker_execution
stacker_bucket: ""  # not using S3 buckets to store CloudFormation templates

# any unique S3 bucket - doesn't matter much for demo purposes
s3_bucket_name: &s3_bucket_name cposssamples3bucket182755552031
# cheapest EC2 instance for testing purposes
ec2_instance_type: &ec2_instance_type t2.micro

#-------------------------------------------------------------------------#
# Stack Definitions (https://stacker.readthedocs.io/en/latest/config.html)
#-------------------------------------------------------------------------#
stacks:
  - name: sample-s3-bucket
    profile: *stacker_execution_profile
    template_path: templates/s3-bucket-template.yaml
    variables:
      S3BucketName: *s3_bucket_name

  - name: sample-ec2-iam-role
    profile: *stacker_execution_profile
    template_path: templates/ec2-role-template.yaml
    variables:
      S3BucketName: ${output sample-s3-bucket::S3BucketName}

  - name: sample-ec2-instance
    profile: *stacker_execution_profile
    template_path: templates/ec2-template.yaml
    variables:
      InstanceType: *ec2_instance_type
      S3BucketName: ${output sample-s3-bucket::S3BucketName}
      EC2IamRoleName: ${output sample-ec2-iam-role::EC2RoleName}

At the top of the file we define variables. The namespace variable is reserved for Stacker and is used to prefix stack names. The other variables define YAML anchors that can be referenced in other parts of the file. For instance, the variable ec2_instance_type creates an anchor ec2_instance_type that refers to the value “t2.micro”. This anchor value is later assigned to the input variable InstanceType of stack sample-ec2-instance (see InstanceType: *ec2_instance_type in the snippet of the Stacker configuration file above).

The stacks section defines our three stacks. For each stack, we indicate the name of the stack, the AWS profile (credentials) that Stacker should use to deploy the stack, the local path to the CloudFormation template, and the stack’s input parameters. Notice that we use ${output ...} to refer to stack output parameters and pass those as input parameters to other stacks. For instance, the S3BucketName output parameter from stack sample-s3-bucket is used as an input parameter by the other two stacks.
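The idea behind the ${output ...} lookup can be sketched in a few lines of plain Python. This is a simplified illustration, not Stacker's actual implementation; the function and sample values are hypothetical:

```python
import re

# Simplified sketch of Stacker's ${output stack::OutputName} lookup:
# each reference is replaced by the named output of an already-deployed stack.
OUTPUT_RE = re.compile(r"\$\{output ([\w-]+)::(\w+)\}")

def resolve_outputs(value, stack_outputs):
    """Replace ${output stack::Output} references in `value` using a dict of
    {stack_name: {output_name: value}} gathered from CloudFormation."""
    def repl(match):
        stack, output = match.groups()
        return stack_outputs[stack][output]
    return OUTPUT_RE.sub(repl, value)

# Hypothetical outputs from an already-created stack
outputs = {"sample-s3-bucket": {"S3BucketName": "cposssamples3bucket182755552031"}}
print(resolve_outputs("${output sample-s3-bucket::S3BucketName}", outputs))
```

Because references are resolved from the outputs of stacks that already exist, Stacker can derive the dependency order between stacks automatically.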

Prerequisites

To follow the steps to provision the deployment pipeline illustrated in the diagram above, make sure you have the following prerequisites completed: an AWS account with sufficient permissions to create the resources described in this post, a local Git client, and AWS credentials configured on your workstation.

Steps

We start by provisioning the infrastructure pipeline that we introduced earlier.

1) Get the provided source code
Clone the Git repository containing the companion source code for this post:

$ git clone https://github.com/aws-samples/aws-codepipeline-at-scale-infrastructure-blog.git

You should then see the following file structure:

  • codepipeline/: CloudFormation templates to deploy the CI/CD pipeline (AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and the OSS tools CFN-Nag, CFN-Python-Lint, and Stacker)
  • stacker/: configuration files required by Stacker to perform CloudFormation stack deployments
  • stacker/buildspec.yaml: CodeBuild buildspec that will install and invoke Stacker for CFN provisioning
  • stacker/stacker-config.yaml: Stacker config file containing stack descriptions and input parameters
  • stacker/stacker-profiles: AWS profiles file Stacker will use to deploy the various stacks
  • templates/*: the sample templates for the stacks mentioned above that will be validated and deployed by the pipeline

2) Create a local git repo for the pipeline artifacts

Now that you have the source code, you’ll create another local Git repository and copy only the following folders into it: stacker/ and templates/. This is because, for simplicity, we provided the code that provisions the pipeline and the code that is pushed through the pipeline in the same repo.

Here’s what you should do, assuming you have cloned the provided Git repository into the folder ~/sample-oss-pipeline/: make a directory named sample-pipeline-artifacts/ and copy the stacker/ and templates/ folders into it. The contents of this directory will be pushed to the repository that we will create shortly in AWS CodeCommit.

cd ~
mkdir sample-pipeline-artifacts/
cd sample-pipeline-artifacts/
git init
cp -r ~/sample-oss-pipeline/stacker ~/sample-oss-pipeline/templates .
git add --all
git commit -m "sample artifacts for OSS infrastructure pipeline"

After running these commands you should have the stacker/ and templates/ folders in your local repository.

3) Edit the Stacker-Profile file

Next, in the text editor of your choice, open the file stacker/stacker-profiles.

This file specifies the AWS profiles available in the CodeBuild environment running Stacker. Each profile refers to a specific IAM Role. The stacker_master profile is used to invoke Stacker from within the AWS CodeBuild environment. Stacker assumes that IAM role to access resources in the account where the pipeline is deployed. In order to perform stack deployments, Stacker needs one or more profiles. Since we’re deploying to a single target account (which in our example is the same account where the pipeline is deployed), we define a single profile named stacker_execution. If we had multiple target accounts, we could define multiple profiles (e.g., stacker_execution_accountA, stacker_execution_accountB, etc.)
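As a rough sketch (the file in the repo is authoritative), the profiles file follows the standard AWS shared config format, with each profile pointing at the IAM role to assume; the account ID below is the same placeholder you will replace in the next step:

```ini
[profile stacker_master]
role_arn = arn:aws:iam::123456789012:role/cposs-StackerMasterRole
credential_source = EcsContainer

[profile stacker_execution]
role_arn = arn:aws:iam::123456789012:role/cposs-StackerExecutionRole
credential_source = EcsContainer
```

Here credential_source = EcsContainer tells the AWS SDK to bootstrap from the CodeBuild container's own credentials before assuming each role.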

You’ll need to modify the AWS account id 123456789012 specified in stacker/stacker-profiles and replace that value with your own AWS account ID. This gives Stacker the target account for deployment. After you update your account ID, save the edited file and commit it:

git commit -am "Updated target AWS account id in stacker-profiles"

4) Edit the Stacker configuration file

In the text editor of your choice, open the file stacker/stacker-config.yaml. As discussed, this file contains parameters used by Stacker to create AWS resources for you. Update the parameter s3_bucket_name with a globally unique name for a not-yet-existing S3 bucket; this simple private bucket will be created by Stacker as part of the CloudFormation stack deployment. Also note that the parameter ec2_instance_type indicates the EC2 instance type to deploy; by default, a “t2.micro” instance will be provisioned. Upon deployment, the EC2 instance will create and upload a simple file to the S3 bucket you specified.

5) Provision the pipeline

Now it’s time to deploy the infrastructure pipeline using the sample code provided. Here we create two CloudFormation pipeline stacks as follows:

Pipeline stack #1: The first stack provisions the infrastructure pipeline we introduced earlier. The stack will provision a CodeCommit repo, a CodePipeline pipeline, and three CodeBuild projects. The first CodeBuild project has a built-in buildspec configuration that will install and use CFN-Nag to check for security vulnerabilities in CloudFormation templates. The second CodeBuild project has a built-in buildspec configuration that will install and use CFN-Python-Lint to lint CloudFormation templates. The third CodeBuild project does not have a built-in buildspec configuration. Instead, a buildspec file will be pushed to the CodeCommit repo to instruct CodeBuild how it should install and run Stacker to provision and orchestrate CloudFormation stacks.

The buildspec file can either be built into the CloudFormation template or explicitly passed by developers. The advantage of building the buildspec file into the CloudFormation template is that it can help ensure that common standards are followed by teams who deploy code through a pipeline. (If you build the buildspec file into the CloudFormation template, the buildspec file cannot be edited or modified by developers.)

Provision the pipeline by following the steps listed below:

  • Log in to your AWS Account
  • Navigate to the CloudFormation console.
  • Create a stack named cposs-codepipeline-stack that represents the infrastructure pipeline

In the CloudFormation Stacks page, click the Create stack button:

Create Stack Image.

Step 1: On the following page (Create stack), complete the following sections:

  • Prerequisite – Prepare template: leave the default radio button selected, Template is ready.
  • Specify template – select the radio button, Upload a template file.
  • Select the button Choose file and select the file from your private repository, codepipeline/codepipeline-template.yaml.
  • Click Next.

AWS Cloudformation Create Stack Page.

Step 2: On the page Specify stack details, complete the following sections:

  • Stack name: enter the name cposs-codepipeline-stack.
  • Parameters: leave the defaults or change as required (e.g., you can pick a name for your CodeCommit repository or accept the default “samplerepo”). Make sure to complete these two mandatory fields:

    • CodePipelineArtifactsS3BucketName: Specify a unique S3 bucket name that will be used by CodePipeline to store your build artifacts. We suggest you pick a name that starts with the prefix “cposs”, e.g., cposs-pipeline-artifacts-XXXXXX where X is a random number you choose or your AWS account id.
    • TargetAccount: Copy and paste your AWS account number in this field.
  • Click the Next button.

Step 3: On the page Configure stack options, leave the defaults in place and select Next.

Step 4: On the page Review, take a moment to review the inputs:

  • Make sure that you have the unique S3 bucket name and your correct account number entered or this project will not work.
  • Once you are satisfied that all details have been correctly entered, select the check box “I acknowledge that AWS CloudFormation might create IAM resources with custom names.” This is required because the template will create IAM roles for the various CodeBuild projects as well as the CodePipeline pipeline.
  • Select Create stack. It will take a few minutes for your first stack to be created. You should see something like this:

Once your stack has been created successfully, you will see output similar to:

Before you create the second stack, navigate over to the CodeCommit Console. Select Repositories and you should see a sample repository created, like this (the name of the repository will be different if you have changed the default value):

Sample repo shown in the AWS CodeCommit Repository

Next, navigate to the IAM console and select Roles. You’ll be able to see the five roles that your stack created:

IAM roles created are shown here.

Let’s look more closely at each of the roles and make sure that we understand the purpose of each role:

  • cposs-CodeBuildCFNLintRole: This is an IAM service role created for the CFN-Python-Lint CodeBuild environment
  • cposs-CodeBuildCFNNagRole: This is an IAM service role created for the CFN-Nag CodeBuild environment
  • cposs-CodeBuildDeployerRole: This is an IAM service role created for the Stacker CodeBuild environment
  • cposs-CodePipelineRole: This is an IAM service role for CodePipeline that allows the pipeline to perform tasks such as reading/writing artifacts from/to the artifacts S3 bucket and triggering the various CodeBuild environments
  • cposs-StackerMasterRole: This is an IAM role used to launch Stacker, which allows Stacker to assume the cposs-StackerExecutionRole (to be created next) to deploy AWS resources via CloudFormation on the various target accounts. For simplicity, in our example we are using a single account, i.e., the same account where the pipeline resides.

The file stacker/buildspec.yaml contains the AWS CodeBuild buildspec that installs and invokes Stacker for CloudFormation stack provisioning. As mentioned before, there are two main ways in which you can specify your CodeBuild buildspec configuration. The first is to specify a buildspec inside the CloudFormation template (built-in buildspec), as we did for CFN-Nag and CFN-Python-Lint. The code snippet below shows the CodeBuild project created via CloudFormation for the CFN-Python-Lint CodePipeline action. Notice the BuildSpec property below and how we used it to install (see pip install cfn-lint) and invoke the CFN-Python-Lint tool from within our CodeBuild Python 3.6.5 environment.

  CFNLintCodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
        Name: !Sub ${Namespace}-cfn-lint-code-build-project
        Description: CodeBuild Project to validate CloudFormation templates using cfn-python-lint
        Artifacts:
          Type: CODEPIPELINE
        Environment:
            Type: LINUX_CONTAINER
            ComputeType: BUILD_GENERAL1_SMALL
            Image: aws/codebuild/python:3.6.5
            EnvironmentVariables:
              - Name: CFNTemplatesPath
                Value: !Ref CFNTemplatesPath
        ServiceRole:
          !GetAtt CFNNagCodeBuildServiceRole.Arn
        Source:
            Type: CODEPIPELINE
            BuildSpec: |
              version: 0.2
              phases:
                install:
                  commands:
                    - pip install --upgrade pip
                    - env && ls -l && python --version
                    - pip install cfn-lint
                    - cfn-lint ${CFNTemplatesPath}*.yaml

Using the BuildSpec property in CloudFormation means that developers pushing code to the pipeline cannot modify the behavior of the CodeBuild environment and how we use CFN-Python-Lint to lint Cloudformation templates.

Another way to configure a CodeBuild environment is to allow developers to create a custom buildspec.yml file and push it to the CodeCommit repository. This gives them more flexibility and control over the CodeBuild execution environment so they can, for instance, install and run software as needed. In cases where different teams own the pipeline and the pipeline artifacts, the pipeline-builder team should plan carefully what level of customization to offer developers vs. which build steps are immutable.

Below is the custom buildspec.yaml file we use for the CodeBuild project that runs Stacker. Notice that we define some environment variables and run a few commands to configure AWS credentials, install Stacker (pip install stacker==1.7.0), and run it (stacker build).

version: 0.2

env:
  variables:
    stacker_master_profile_name: "stacker_master"
    stacker_profiles_file: "stacker-profiles"
    stacker_orchestration_file: "stacker-config.yaml"

phases:

  pre_build:
    commands:
      - pip install --upgrade pip
      - pip install stacker==1.7.0
      - env && ls -lha && python --version

  build:
    commands:
      - export AWS_CONFIG_FILE="${CODEBUILD_SRC_DIR}/${StackerConfigPath}/${stacker_profiles_file}"
      - echo "AWS_CONFIG_FILE=${AWS_CONFIG_FILE}"
      - stacker build "${CODEBUILD_SRC_DIR}/${StackerConfigPath}/${stacker_orchestration_file}" --profile $stacker_master_profile_name --recreate-failed

Pipeline stack #2: The second stack we’re going to create provisions the IAM role required by Stacker to deploy AWS resources in a given target AWS account. We use separate templates for the pipeline and the Stacker execution role because the latter can then be deployed to multiple target accounts if we want the pipeline to provision stacks in those accounts.

Since we’re using a single AWS account in this post, we will create the Stacker execution role in the same account as the pipeline. The IAM policies in the role must set permissions to deploy all types of AWS resources referenced in the template. In our case, Stacker needs permissions to create an S3 bucket, an EC2 instance, IAM roles, etc. Please check the template for further details.
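The essential piece of that template is the trust relationship: the execution role must trust the account where the Stacker master role lives so that Stacker can assume it. A hedged sketch of what such a role might look like, using the parameter names from the next step (the actual file codepipeline/stacker-execution-role-template.yaml is the source of truth):

```yaml
# Sketch of a Stacker execution role trusting the master account
StackerExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub ${Namespace}-StackerExecutionRole
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${StackMasterAccountID}:root
          Action: sts:AssumeRole
    # Attach policies broad enough to create every resource type the
    # pipeline deploys (S3, EC2, IAM, CloudFormation, ...)
```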

  • Create your second stack, named cposs-stacker-execution-role-stack, using the file codepipeline/stacker-execution-role-template.yaml. This stack creates an IAM role in your account so that Stacker can deploy resources into the same account where the pipeline lives. As stated above, while we are using a single account for this project, this approach also works across multiple accounts.

On the CloudFormation Stacks page, select Create stack.

  • Step 1: On the Create stack page, complete the following sections:
    • Prerequisite – Prepare template: leave the default radio button selected, Template is ready.
    • Specify template – select the radio button, Upload a template file.
    • Select the button Choose file and select the file from your private repo, codepipeline/stacker-execution-role-template.yaml.
    • Click Next.
  • Step 2: On the page Specify stack details, complete the following sections:
    • Stack name: enter the name cposs-stacker-execution-role-stack
    • Parameters: leave the defaults. However, make sure to complete these two fields:

      • Namespace: leave the default value, cposs, as is.
      • StackMasterAccountID: Copy and paste your AWS account number in this field.
    • Select Next
  • Step 3: On the Configure stack options page, leave the defaults in place and select Next
  • Step 4: On the Review page, take a moment to review the inputs.
    • Make sure that you have your correct account number entered or this project will not work.
    • At the bottom of the page, click the tick box “I acknowledge that AWS CloudFormation might create IAM resources with custom names.”
    • Click Create Stack. It will take a minute or two for your stack to be created. You should see something like this:

screenshot showing stack 2 being created

When your stack has been successfully built, the console screen will look like this:

screenshot showing stack 2 Successful Completion

Using the AWS CodePipeline infrastructure pipeline

Now that the pipeline is ready and we have configured the pipeline artifacts, it’s time to use the pipeline to deploy our stacks.

Prerequisites

In order to push code to CodeCommit, which in turn triggers the pipeline, you’ll need to make sure that you have:

  • A local Git client installed.
  • AWS credentials properly configured to allow your Git client to authenticate and push code to the CodeCommit repo.

Once you have confirmed that the above prerequisites are met, follow these steps:

  • From within your AWS Account, navigate to the CodeCommit console and look for the repo named samplerepo.
  • Under the heading Clone URL, click the blue text HTTPS. This action will copy the HTTPS address to your clipboard.

Screenshot of the AWS CodeCommit repository

At this point, you should have a local git repository with the stacker/ and templates/ folders (created earlier). Add a Git remote to your CodeCommit repository like this:

git remote add cc <HTTPS URL for the samplerepo>

The cc here is short for CodeCommit. Make sure your local Git repo does not have any pending changes to be committed (git status).

Push your code to the CodeCommit repository’s master branch: git push cc master:master.

After pushing your code, navigate to the AWS CodePipeline console to verify that the pipeline was triggered successfully. Once in the CodePipeline console, select Pipelines, then your pipeline cposs-cicd-pipeline. If all goes well, your templates will be validated and deployed, and the pipeline should look like the screenshot below:

 

Pipeline Successful Trigger Screenshot

So what have we deployed exactly? Well, the three stacks we mentioned previously: an S3 bucket, an IAM role, and an EC2 instance (plus other resources, e.g., a security group and an EC2 profile) that assumes the IAM role and writes a file to the S3 bucket. You should have a file named ‘hello.txt’ in the S3 bucket you specified in stacker/stacker-config.yaml.

Have a look now at your sample repo in the AWS CodeCommit Console, to review what you pushed through your serverless pipeline:

CodeCommit console showing the files pushed through your serverless pipeline.

Earlier in this post we introduced the file structure and explained the purpose of the files that are pushed through the pipeline. Let’s briefly recap:

  1. buildspec.yaml: This is the AWS CodeBuild buildspec. As discussed, its purpose is to install and invoke Stacker for CloudFormation resource provisioning
  2. stacker-config.yaml: This is the stacker config file containing stack descriptions and input parameters
  3. stacker-profiles: This file contains the AWS profiles that Stacker uses to assume roles during deployment

Testing pipeline failure scenarios

Let’s make a change to one of the templates under templates/ to cause the pipeline template verification stage to fail. For example, specify an invalid type for the S3BucketName input parameter in the template s3-bucket-template.yaml, like this:

Original template:

Parameters:

  S3BucketName:
    Type: String
    Description: Unique name to assign to S3 bucket

Incorrect template:

Parameters:

  S3BucketName:
    Type: InvalidType
    Description: Unique name to assign to S3 bucket

Commit and push your code to the CodeCommit repository again and wait for the pipeline to fail as CFN-Python-Lint checks your template against CloudFormation specs, as in the screenshot below:

 

Failed Pipeline Screenshot

For details on why the pipeline failed, click on the Details link in the CodeBuild action for CFN-Python-Lint. You should see a screen similar to the one below. On line 117 it says “Parameter S3BucketName has invalid type InvalidType.”

cli troubleshooting on the failed pipeline use case.

That’s exactly what we wanted: CFN-Python-Lint is helping us make sure that our templates are valid before letting the pipeline deploy them.

Cleanup

To avoid incurring future charges to your AWS accounts, delete the resources created in your AWS account for this project. You can simply destroy the two pipeline stacks created earlier: cposs-codepipeline-stack and cposs-stacker-execution-role-stack. Remember that you’ll have to empty the S3 bucket created by the cposs-codepipeline-stack stack, otherwise CloudFormation won’t be able to delete the corresponding stack.
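If you prefer the command line, the cleanup can be sketched with the AWS CLI. The bucket and stack names below assume the defaults used in this post; adjust them to the names you chose:

```shell
# Empty the CodePipeline artifacts bucket so CloudFormation can delete it
aws s3 rm s3://cposs-pipeline-artifacts-XXXXXX --recursive

# Delete the two pipeline stacks (delete the Stacker-created sample
# stacks the same way, after emptying the sample S3 bucket)
aws cloudformation delete-stack --stack-name cposs-codepipeline-stack
aws cloudformation delete-stack --stack-name cposs-stacker-execution-role-stack
```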

Conclusion

In this post, we showed you how to leverage popular open source tools in conjunction with AWS Code services such as CodePipeline, CodeCommit, and CodeBuild to build, validate, and deploy arbitrary infrastructure stacks. We used CFN-Nag and CFN-Python-Lint to validate the templates, and Stacker to perform the deployment of the CloudFormation stacks. We also explained the details of the artifacts that were pushed to the AWS CodeCommit repository.

More about open source and CI/CD pipelines

For additional examples of how you can integrate AWS services with open source tools to build CI/CD pipelines, please see other posts in the AWS Open Source Blog, including Building a Deployment Pipeline with Spinnaker on Kubernetes, Git Push to Deploy your App on EKS, and Integrating Phabricator with AWS CodePipeline via AWS CodeCommit.

Marcilio Mendonca

Marcilio Mendonca is a Sr. Solutions Developer in the Solution Prototyping Team at Amazon Web Services. Over the past years, he has been helping AWS customers design, build, and deploy modern applications on AWS leveraging VMs, containers, and serverless architectures. Prior to joining AWS, Marcilio was a Software Development Engineer with Amazon. He also holds a PhD in Computer Science. You can find him on Twitter at @marciliomendonc.

Charles Gibson

Charles Gibson is a Cloud Application Architect at Amazon Web Services. He helps enterprise-scale customers around the globe migrate on-premises workloads to the AWS Cloud. Charles also advises customers on modernizing their applications so that they can fully realize the benefits of operating their workloads in the cloud. Charles enjoys exploring the back roads of Texas and cooking Northern Italian food for friends and family. You can find him on Twitter at @charles_gib.