

Building a Secure Cross-Account Continuous Delivery Pipeline

by Anuj Sharma | in How-to |

Most organizations create multiple AWS accounts because they provide the highest level of resource and security isolation. In this blog post, I will discuss how to use cross-account AWS Identity and Access Management (IAM) access to orchestrate continuous integration and continuous deployment.

Do I need multiple accounts?

If you answer “yes” to any of the following questions you should consider creating more AWS accounts:

  • Does your business require administrative isolation between workloads? Administrative isolation by account is the most straightforward way to grant independent administrative groups different levels of administrative control over AWS resources based on workload, development lifecycle, business unit (BU), or data sensitivity.
  • Does your business require limited visibility and discoverability of workloads? Accounts provide a natural boundary for visibility and discoverability. Workloads cannot be accessed or viewed unless an administrator of the account enables access to users managed in another account.
  • Does your business require isolation to minimize blast radius? Separate accounts help define boundaries and provide natural blast-radius isolation to limit the impact of a critical event such as a security breach, an unavailable AWS Region or Availability Zone, account suspensions, and so on.
  • Does your business require a particular workload to operate within AWS service limits without impacting the limits of another workload? You can use AWS account service limits to impose restrictions on a business unit, development team, or project. For example, if you create an AWS account for a project group, you can limit the number of Amazon Elastic Compute Cloud (Amazon EC2) or high performance computing (HPC) instances that can be launched by the account.
  • Does your business require strong isolation of recovery or auditing data? If regulatory requirements require you to control access and visibility to auditing data, you can isolate the data in an account separate from the one where you run your workloads (for example, by writing AWS CloudTrail logs to a different account).
  • Do your workloads depend on specific instance reservations to support high availability (HA) or disaster recovery (DR) capacity requirements? Reserved Instances (RIs) ensure reserved capacity for services such as Amazon EC2 and Amazon Relational Database Service (Amazon RDS) at the individual account level.

Use case

The identities in this use case are set up as follows:

  • DevAccount

Developers check the code into an AWS CodeCommit repository. It stores all the repositories as a single source of truth for application code. Developers have full control over this account. This account is usually used as a sandbox for developers.

  • ToolsAccount

A central location for all the tools related to the organization, including continuous delivery/deployment services such as AWS CodePipeline and AWS CodeBuild. Developers have limited/read-only access in this account. The Operations team has more control.

  • TestAccount

Applications using the CI/CD orchestration are deployed in this account for test purposes. Developers and the Operations team have limited/read-only access in this account.

  • ProdAccount

Applications that have been tested in the TestAccount are deployed to production in this account through the same CI/CD orchestration. Developers and the Operations team have limited/read-only access in this account.

Solution

In this solution, we will check in sample code for an AWS Lambda function in the Dev account. This will trigger the pipeline (created in AWS CodePipeline) and run the build using AWS CodeBuild in the Tools account. The pipeline will then deploy the Lambda function to the Test and Prod accounts.

 

Setup

  1. Clone this repository. It contains the AWS CloudFormation templates that we will use in this walkthrough.
git clone https://github.com/awslabs/aws-refarch-cross-account-pipeline.git
  2. Follow the instructions in the repository README to push the sample AWS Lambda application to an AWS CodeCommit repository in the Dev account.
  3. Install the AWS Command Line Interface as described here. To prepare your access keys or assume-role to make calls to AWS, configure the AWS CLI as described here.

Walkthrough

Note: Follow the steps in the order they’re written. Otherwise, the resources might not be created correctly.

  1. In the Tools account, deploy this CloudFormation template. It will create a customer master key (CMK) in AWS Key Management Service (AWS KMS), grant the Dev, Test, and Prod accounts access to use the key, and create an Amazon S3 bucket to hold artifacts from AWS CodePipeline.
aws cloudformation deploy --stack-name pre-reqs \
--template-file ToolsAcct/pre-reqs.yaml --parameter-overrides \
DevAccount=ENTER_DEV_ACCT TestAccount=ENTER_TEST_ACCT \
ProductionAccount=ENTER_PROD_ACCT

In the output section of the CloudFormation console, make a note of the Amazon Resource Name (ARN) of the CMK and the S3 bucket name. You will need them in the next step.

  2. In the Dev account, which hosts the AWS CodeCommit repository, deploy this CloudFormation template. It creates the IAM roles that will later be assumed by the pipeline running in the Tools account. (An illustrative trust-policy sketch appears after this walkthrough.) Enter the AWS account number for the Tools account and the CMK ARN.
aws cloudformation deploy --stack-name toolsacct-codepipeline-role \
--template-file DevAccount/toolsacct-codepipeline-codecommit.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides ToolsAccount=ENTER_TOOLS_ACCT CMKARN=FROM_1st_Step
  3. In the Test and Prod accounts where you will deploy the Lambda code, execute this CloudFormation template. This template creates IAM roles that will later be assumed by the pipeline to create, deploy, and update the sample AWS Lambda function through CloudFormation.
aws cloudformation deploy --stack-name toolsacct-codepipeline-cloudformation-role \
--template-file TestAccount/toolsacct-codepipeline-cloudformation-deployer.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides ToolsAccount=ENTER_TOOLS_ACCT CMKARN=FROM_1st_STEP  \
S3Bucket=FROM_1st_STEP
  4. In the Tools account, which hosts AWS CodePipeline, execute this CloudFormation template. This creates the pipeline, but does not yet add the cross-account permissions for the Dev, Test, and Prod accounts.
aws cloudformation deploy --stack-name sample-lambda-pipeline \
--template-file ToolsAcct/code-pipeline.yaml \
--parameter-overrides DevAccount=ENTER_DEV_ACCT TestAccount=ENTER_TEST_ACCT \
ProductionAccount=ENTER_PROD_ACCT CMKARN=FROM_1st_STEP \
S3Bucket=FROM_1st_STEP --capabilities CAPABILITY_NAMED_IAM
  5. In the Tools account, execute this CloudFormation template, which grants access to the role created in step 4. This role will be assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket. This is the same template that was used in step 1, but with different parameters.
aws cloudformation deploy --stack-name pre-reqs \
--template-file ToolsAcct/pre-reqs.yaml \
--parameter-overrides CodeBuildCondition=true
  6. In the Tools account, execute this CloudFormation template, which will do the following:
    1. Add the IAM role created in step 2. This role is used by AWS CodePipeline in the Tools account for checking out code from the AWS CodeCommit repository in the Dev account.
    2. Add the IAM role created in step 3. This role is used by AWS CodePipeline in the Tools account for deploying the code package to the Test and Prod accounts.
aws cloudformation deploy --stack-name sample-lambda-pipeline \
--template-file ToolsAcct/code-pipeline.yaml \
--parameter-overrides CrossAccountCondition=true \
--capabilities CAPABILITY_NAMED_IAM
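
Steps 2 and 3 work because each role created in the Dev, Test, and Prod accounts trusts the Tools account. The following boto3 sketch shows what creating such a cross-account trust might look like; it is illustrative only, and the account ID and role name are placeholders rather than the names used by the templates.

import json
import boto3

iam = boto3.client('iam')

TOOLS_ACCOUNT_ID = '111111111111'  # placeholder Tools account ID

# Trust policy: only principals in the Tools account may assume this role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::" + TOOLS_ACCOUNT_ID + ":root"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(
    RoleName='CrossAccountPipelineRole',  # illustrative name only
    AssumeRolePolicyDocument=json.dumps(assume_role_policy)
)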

What did we just do?

  1. The pipeline created in step 4 and updated in step 6 checks out code from the AWS CodeCommit repository. It uses the IAM role created in step 2. The IAM role created in step 4 has permissions to assume the role created in step 2. This role will be assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket, as described in step 5.
  2. The IAM role created in step 2 has permission to check out code. See here.
  3. The IAM role created in step 2 also has permission to upload the checked-out code to the S3 bucket created in step 1. It uses the KMS keys created in step 1 for server-side encryption.
  4. Upon successfully checking out the code, AWS CodePipeline triggers AWS CodeBuild. The AWS CodeBuild project created in step 4 is configured to use the CMK created in step 1 for cryptography operations. See here. The AWS CodeBuild role is also created in step 4. In step 5, that role is granted access to use the CMK for cryptography operations.
  5. AWS CodeBuild uses pip to install any libraries for the sample Lambda function. It also executes the aws cloudformation package command to create a Lambda function deployment package, uploads the package to the specified S3 bucket, and adds a reference to the uploaded package to the CloudFormation template. See here.
  6. Using the role created in step 3, AWS CodePipeline executes the transformed CloudFormation template (received as an output from AWS CodeBuild) in the Test account. The AWS CodePipeline role created in step 4 has permissions to assume the IAM role created in step 3, as described in step 5.
  7. The IAM role assumed by AWS CodePipeline passes a separate deployment role to AWS CloudFormation, which assumes it to create and update the Lambda function using the code that was built and uploaded by AWS CodeBuild. The sketch that follows illustrates the cross-account role assumption these steps rely on.
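
At runtime, the core mechanism in these steps is cross-account role assumption through AWS STS. As a minimal, illustrative boto3 sketch (the role ARN and session name are placeholders, not values from the templates), a principal in the Tools account can assume a role in another account and use the temporary credentials like this:

import boto3

sts = boto3.client('sts')

# Placeholder ARN for a role in the Dev/Test/Prod account that trusts the Tools account.
resp = sts.assume_role(
    RoleArn='arn:aws:iam::222222222222:role/CrossAccountPipelineRole',
    RoleSessionName='tools-account-pipeline'
)
creds = resp['Credentials']

# Use the temporary credentials to call services in the target account,
# for example to read the CodeCommit repositories in the Dev account.
codecommit = boto3.client(
    'codecommit',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
print(codecommit.list_repositories())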

This is what the pipeline looks like using the sample code:

Conclusion

Creating multiple AWS accounts provides the highest degree of isolation and is appropriate for a number of use cases. However, keeping a centralized account to orchestrate continuous delivery and deployment using AWS CodePipeline and AWS CodeBuild eliminates the need to duplicate the delivery pipeline. You can secure the pipeline through the use of cross account IAM roles and the encryption of artifacts using AWS KMS. For more information, see Providing Access to an IAM User in Another AWS Account That You Own in the IAM User Guide.


Use AWS CloudFormation to Automate the Creation of an S3 Bucket with Cross-Region Replication Enabled

by Rajakumar Sampathkumar | in How-to |

At the request of many of our customers, in this blog post, we will discuss how to use AWS CloudFormation to create an S3 bucket with cross-region replication enabled. We’ve included a CloudFormation template with this post that uses an AWS Lambda-backed custom resource to create source and destination buckets.

What is S3 cross-region replication?

Cross-region replication is a bucket-level feature that enables automatic, asynchronous copying of objects across buckets in different AWS regions. You can create two buckets in two different regions and use the ReplicationConfiguration property to replicate the objects from one bucket to the other. For example, you can have a bucket in us-east-1 and replicate the bucket objects to a bucket in us-west-2.

For more information, see What Is and Is Not Replicated in Cross-Region Replication.
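
Outside of CloudFormation, the same replication configuration can be applied directly through the S3 API. The following boto3 sketch assumes versioning is already enabled on both buckets and that the replication IAM role already exists; the bucket names and role ARN are placeholders.

import boto3

s3 = boto3.client('s3', region_name='us-east-1')

# Versioning must be enabled on both the source and destination buckets.
s3.put_bucket_replication(
    Bucket='my-source-bucket',  # placeholder source bucket in us-east-1
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::111111111111:role/s3-crr-role',  # placeholder role ARN
        'Rules': [{
            'ID': 'replicate-everything',
            'Prefix': '',               # replicate all objects
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::my-destination-bucket'}
        }]
    }
)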

Challenge

When you enable cross-region replication, the replicated objects will be stored in only one destination (an S3 bucket). The destination bucket must already exist and it must be in an AWS region different from your source bucket.

Using CloudFormation alone, you cannot create the destination bucket in a region different from the region in which you are creating your stack, so you need another way to create the destination bucket. The solution described below shows one approach.

Solution overview

The CloudFormation template provided with this post uses an AWS Lambda-backed custom resource to create an S3 destination bucket in one region and a source S3 bucket in the same region as the CloudFormation endpoint.

Note: In this scenario, CloudFormation is not aware of the destination bucket created by AWS Lambda. For this reason, CloudFormation will not delete this resource when the stack is deleted.
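
As a rough sketch of how such a Lambda-backed custom resource might create the destination bucket in another region (this is not the template's actual function; the property names are assumptions), the handler below creates a versioned bucket in the region passed through the resource properties and reports back to CloudFormation using the cfnresponse helper module available to inline Lambda code.

import boto3
import cfnresponse  # available to Lambda functions defined inline in CloudFormation

def handler(event, context):
    try:
        props = event['ResourceProperties']
        bucket = props['DestinationBucketName']   # illustrative property name
        region = props['DestinationRegion']       # illustrative property name

        if event['RequestType'] == 'Create':
            s3 = boto3.client('s3', region_name=region)
            # us-east-1 does not accept a LocationConstraint.
            if region == 'us-east-1':
                s3.create_bucket(Bucket=bucket)
            else:
                s3.create_bucket(
                    Bucket=bucket,
                    CreateBucketConfiguration={'LocationConstraint': region})
            # Replication requires versioning on the destination bucket.
            s3.put_bucket_versioning(
                Bucket=bucket,
                VersioningConfiguration={'Status': 'Enabled'})

        # As noted above, the bucket is not tracked by CloudFormation,
        # so Delete and Update requests are acknowledged without action here.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {'BucketName': bucket})
    except Exception as e:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})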


Performing Blue/Green Deployments with AWS CodeDeploy and Auto Scaling Groups

by Jeff Levine | in How-to |

Jeff Levine is a Solutions Architect for Amazon Web Services.

Amazon Web Services offers services that enable organizations to leverage the power of the cloud for their development and deployment needs. AWS CodeDeploy makes it possible to automate the deployment of code to either Amazon EC2 or on-premises instances. AWS CodeDeploy now supports blue/green deployments. In this blog post, I will discuss the benefits of blue/green deployments and show you how to perform one.


Using AWS Lambda and Amazon DynamoDB in an Automated Approach to Managing AWS CloudFormation Template Parameters and Mappings

by Anuj Sharma | in How-to |

AWS CloudFormation gives you an easy way to codify the creation and management of related AWS resources. The optional Mappings and Parameters sections of CloudFormation templates help you organize and parameterize your templates so you can quickly customize your stack. As organizations adopt Infrastructure as Code best practices, the number of mappings and parameters can quickly grow. In this blog post, we’ll discuss how AWS Lambda and Amazon DynamoDB can be used to simplify updates, reuse, quick lookups, and reporting for these mappings and parameters.

Solution overview

There are three parts to the solution described in this post. You’ll find sample code for this solution in this AWS Labs GitHub repository.

  1. DynamoDB table: Used as a central location to store and update all key-value pairs used in the ‘Mappings’ and ‘Parameters’ sections of the CloudFormation template. This could be a centralized table for the whole organization, with a partition key consisting of the team name and environment (for example, development, test, production) and a sort key for the application name. For more information about the types of keys supported by DynamoDB, see Core Components in the Amazon DynamoDB Developer Guide.

Here is the sample data in the table. “teamname-environment” is the partition key and “appname” is the sort key.

{
        "teamname-environment": "team1-dev",
        "appname": "app1",
        "mappings": {
            "elbsubnet1": "subnet-123456",
            "elbsubnet2": "subnet-234567",
            "appsubnet1": "subnet-345678",
            "appsubnet2": "subnet-456789",
            "vpc": "vpc-123456",
            "appname":"app1",
            "costcenter":"123456",
            "teamname":"team1",
            "environment":"dev",
            "certificate":"arn-123456qwertyasdfgh",
            "compliancetype":"pci",
            "amiid":"ami-123456",
            "region":"us-west-1",
            "publichostedzoneid":"Z234asdf1234asdf",
            "privatehostedzoneid":"Z345SDFGCVHD123",
            "hostedzonename":"demo.internal"
        }
}


 

  2. Lambda function: Accepts the primary keys as input, looks up the DynamoDB table, and returns all the key-value data. (A minimal handler sketch follows this list.)
  3. Custom lookup resource: Makes a call to the Lambda function with the primary keys as input (accepted by the Lambda function) and retrieves the key-value data. This generic custom resource can be reused across CloudFormation templates.
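
A minimal sketch of such a handler is shown below; the repository's actual function may differ, and the resource property names are assumptions. It reads the two keys from the custom resource properties, queries the table with boto3, and returns the mappings to CloudFormation through cfnresponse.

import boto3
import cfnresponse  # helper available to inline Lambda code in CloudFormation

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('custom-lookup')

def handler(event, context):
    try:
        props = event['ResourceProperties']
        # Illustrative property names passed in by the custom resource.
        key = {
            'teamname-environment': props['TeamEnvironment'],  # e.g. "team1-dev"
            'appname': props['AppName']                        # e.g. "app1"
        }
        item = table.get_item(Key=key)['Item']
        # Return the mappings so the stack can read them with Fn::GetAtt.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, item['mappings'])
    except Exception as e:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})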

 

The diagram shows the interaction of services and resources in this solution.

  1. Users create a DynamoDB table and, using the DynamoDB console, AWS SDK, or AWS CLI, insert the mappings, parameters, and key-value data.
  2. CloudFormation templates have a custom resource, which calls a Lambda function. The combination of team name and application environment (“teamname-environment”) is the partition key input. Application Name (“appname”) is the sort key input to the Lambda function.
  3. The Lambda function queries the DynamoDB table based on the inputs.
  4. DynamoDB responds to the Lambda function with the results.
  5. The Lambda function responds to the custom resource in the CloudFormation stack, with the key-value data.
  6. The key-value data retrieved by the custom resource is then used by other resources in the stack through the Fn::GetAtt intrinsic function.

 

Using the sample code

To use the sample code for this solution, follow these steps:

  1. Clone the repository.
git clone https://github.com/awslabs/custom-lookup-lambda.git
cd custom-lookup-lambda
  2. The values in the sample-mappings.json file will be inserted into DynamoDB. Each record in sample-mappings.json corresponds to an item in DynamoDB. The mappings object contains the mapping.
  3. This solution uses Python as a programming model for AWS Lambda. If you haven’t set up your local development environment for Python, follow these steps, and then install the awscli python package.
pip install awscli
  4. To prepare your access keys or assume-role to make calls to AWS, configure the AWS Command Line Interface as described here. The IAM user or the assumed role used to make API calls must have, at minimum, this access. You can attach this policy to the IAM user or IAM group if you are using access keys, or to the IAM role if you are assuming a role. For more information, see Attaching Managed Policies in the IAM User Guide.
  5. Run insertrecord.sh to create the DynamoDB table named custom-lookup and insert the items in sample-mappings.json.
./insertrecord.sh

This script does the following:

  • Installs boto3 and requests Python packages through pip.
  • Executes the Python script to create the DynamoDB table (custom-lookup) and puts the data in sample-mappings.json.
  6. Run deployer.sh to package and create the Lambda function.
./deployer.sh

This script packages the Lambda function code and creates the function.

  7. Using sample-stack.yaml, create a stack that makes a call to the Lambda function created in step 6. The function queries the DynamoDB table and produces as output the values corresponding to team1-dev and app1.
aws cloudformation deploy --template-file sample-stack.yaml --stack-name sample-stack
  8. Examine the output in the AWS CloudFormation console. You should see the values retrieved from the DynamoDB table. The Fn::GetAtt function allows you to use the values retrieved by the custom resource Lambda function.

For example, if the custom resource that calls the Lambda function is named CUSTOMLOOKUP in the sample stack, the value of the amiid key can be referenced in the stack with !GetAtt CUSTOMLOOKUP.amiid. Likewise, the value of the vpc key can be referenced with !GetAtt CUSTOMLOOKUP.vpc, and so on.

Conclusion

In this blog post, we showed how to use an AWS CloudFormation custom resource backed by an AWS Lambda function to query Amazon DynamoDB to retrieve key-value data, thereby replacing the Mappings and Parameters sections of the CloudFormation template. This solution provides a more automated approach to managing template parameters and mappings. You can use the DynamoDB table to simplify updates, reuse, quick lookups, and reporting for these mappings and parameters.

Implementing DevSecOps Using AWS CodePipeline

by Ramesh Adabala | in How-to |

DevOps is a combination of cultural philosophies, practices, and tools that emphasizes collaboration and communication between software developers and IT infrastructure teams while automating an organization’s ability to deliver applications and services rapidly, frequently, and more reliably.

CI/CD stands for continuous integration and continuous deployment. These concepts represent everything related to automation of application development and the deployment pipeline — from the moment a developer adds a change to a central repository until that code winds up in production.

DevSecOps covers security of and in the CI/CD pipeline, including automating security operations and auditing. The goals of DevSecOps are to:

  • Embed security knowledge into DevOps teams so that they can secure the pipelines they design and automate.
  • Embed application development knowledge and automated tools and processes into security teams so that they can provide security at scale in the cloud.

The Security Cloud Adoption Framework (CAF) whitepaper provides prescriptive controls to improve the security posture of your AWS accounts. These controls are in line with a DevOps blog post published last year about the control-monitor-fix governance model.

Security CAF controls are grouped into four categories:

  • Directive controls establish the governance, risk, and compliance models on AWS.
  • Preventive controls protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls drive remediation of potential deviations from your security baselines.

To embed the DevSecOps discipline in the enterprise, AWS customers are automating CAF controls using a combination of AWS and third-party solutions.

In this blog post, I will show you how to use a CI/CD pipeline to automate preventive and detective security controls. I’ll use an example that shows how you can take the creation of a simple security group through the CI/CD pipeline stages and enforce security CAF controls at various stages of the deployment. I’ll use AWS CodePipeline to orchestrate the steps in a continuous delivery pipeline.

These resources are being used in this example:

  • An AWS CloudFormation template to create the demo pipeline.
  • A Lambda function to perform the static code analysis of the CloudFormation template.
  • A Lambda function to perform dynamic stack validation for the security groups in scope.
  • An S3 bucket as the sample code repository.
  • An AWS CloudFormation source template file to create the security groups.
  • Two VPCs to deploy the test and production security groups.

These are the high-level security checks enforced by the pipeline:

  • During the Source stage, the pipeline performs static code analysis to detect any open security groups. The pipeline will fail if there are any violations.
  • During the Test stage, the pipeline performs dynamic analysis to make sure port 22 (SSH) is open only to the approved IP CIDR range. The pipeline will fail if there are any violations.


 

These are the pipeline stages:

1. Source stage: In this example, the pipeline gets the CloudFormation code that creates the security group from Amazon S3, which serves as the code repository.

This stage passes the CloudFormation template and pipeline name to a Lambda function, CFNValidateLambda. This function performs the static code analysis. It uses regular expressions to find patterns and identify security group policy violations. If it finds violations, then Lambda fails the pipeline and includes the violation details.

Here is the regular expression that the Lambda function uses for static code analysis of the open SSH port:

"^.*Ingress.*(([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22).*[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0)|[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0).*([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22))"

2. Test stage: After the static code analysis is completed successfully, the pipeline executes the following steps:

a. Create stack: This step creates the stack in the test VPC, as described in the test configuration.

b. Stack validation: This step triggers the StackValidationLambda Lambda function. It passes the stack name and pipeline name in the event parameters. Lambda validates the security group against the required security controls (in this example, that SSH is open only to the approved IP CIDR range). If it finds violations, then Lambda deletes the stack, stops the pipeline, and returns an error message.

The following is the sample Python code used by AWS Lambda to check if the SSH port is open to the approved IP CIDR range (in this example, 72.21.196.67/32):

import boto3

# regions, stackName, result, failReason, and offenders are defined
# earlier in the Lambda handler; they are shown here for context only.
for n in regions:
    client = boto3.client('ec2', region_name=n)
    # Find the security groups created by the CloudFormation stack under test.
    response = client.describe_security_groups(
        Filters=[{'Name': 'tag:aws:cloudformation:stack-name', 'Values': [stackName]}])
    for m in response['SecurityGroups']:
        # Flag any group whose rules do not reference the approved source range.
        if "72.21.196.67/32" not in str(m['IpPermissions']):
            for o in m['IpPermissions']:
                try:
                    # A rule that covers port 22 but allows the wrong source range.
                    if int(o['FromPort']) <= 22 <= int(o['ToPort']):
                        result = False
                        failReason = "Found Security Group with port 22 open to the wrong source IP range"
                        offenders.append(str(m['GroupId']))
                except KeyError:
                    # Rules with IpProtocol "-1" (all traffic) have no port range.
                    if str(o['IpProtocol']) == "-1":
                        result = False
                        failReason = "Found Security Group with port 22 open to the wrong source IP range"
                        offenders.append(str(n) + " : " + str(m['GroupId']))

c. Approve test stack: This step creates a manual approval task for stack review. This step could be eliminated for automated deployments.

d. Delete test stack: After all the stack validations are successfully completed, this step deletes the stack in the test environment to avoid unnecessary costs.

3. Production stage: After the static and dynamic security checks are completed successfully, this stage creates the stack in the production VPC using the production configuration supplied in the template.

a. Create change set: This step creates the change set for the resources in the scope.

b. Execute change set: This step executes the change set and creates/updates the security group in the production VPC.
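
These two steps map to the CloudFormation change set APIs. A minimal boto3 sketch of creating and executing a change set, with a placeholder stack name and template file, looks like this:

import boto3

cfn = boto3.client('cloudformation')

STACK_NAME = 'prod-security-group-stack'   # placeholder stack name

# Create the change set from the template that passed the earlier checks.
cfn.create_change_set(
    StackName=STACK_NAME,
    ChangeSetName='prod-sg-changes',
    TemplateBody=open('security-group.json').read(),  # placeholder template file
    ChangeSetType='UPDATE'  # use 'CREATE' if the stack does not exist yet
)

# Wait until the change set has been created, then execute it.
cfn.get_waiter('change_set_create_complete').wait(
    StackName=STACK_NAME, ChangeSetName='prod-sg-changes')
cfn.execute_change_set(StackName=STACK_NAME, ChangeSetName='prod-sg-changes')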

 

Source code and CloudFormation template

You’ll find the source code at https://github.com/awslabs/automating-governance-sample/tree/master/DevSecOps-Blog-Code

basic-sg-3-cfn.json creates the pipeline in AWS CodePipeline with all the stages previously described. It also creates the static code analysis and stack validation Lambda functions.

The CloudFormation template points to a shared S3 bucket. The codepipeline-lambda.zip file contains the Lambda functions. Before you run the template, upload the zip file to your S3 bucket and then update the CloudFormation template to point to your S3 bucket location.

The CloudFormation template uses the codepipe-single-sg.zip file, which contains the sample security group and test and production configurations. Update these configurations with your VPC details, and then upload the modified zip file to your S3 bucket.

Update these parts of the code to point to your S3 bucket:

 "S3Bucket": {
      "Default": "codepipeline-devsecops-demo",
      "Description": "The name of the S3 bucket that contains the source artifact, which must be in the same region as this stack",
      "Type": "String"
    },
    "SourceS3Key": {
      "Default": "codepipe-single-sg.zip",
      "Description": "The file name of the source artifact, such as myfolder/myartifact.zip",
      "Type": "String"
    },
    "LambdaS3Key": {
      "Default": "codepipeline-lambda.zip",
      "Description": "The file name of the source artifact of the Lambda code, such as myfolder/myartifact.zip",
      "Type": "String"
    },
	"OutputS3Bucket": {
      "Default": "codepipeline-devsecops-demo",
      "Description": "The name of the output S3 bucket that contains the processed artifact, which must be in the same region as this stack",
      "Type": "String"
    },

After the stack is created, AWS CodePipeline executes the pipeline and starts deploying the sample CloudFormation template. In the default template, the security group rules are open to the world (0.0.0.0/0), so the pipeline execution will fail. Update the CloudFormation template in codepipe-single-sg.zip with a more restrictive source CIDR range and then upload the modified zip file to your S3 bucket. Open the AWS CodePipeline console, and choose the Release Change button. This time the pipeline will successfully create the security groups.


You could expand the security checks in the pipeline to include other AWS resources, not just security groups, by enforcing similar controls through the static and dynamic analysis Lambda functions.
If you have feedback about this post, please add it to the Comments section below. If you have questions about implementing the example used in this post, please open a thread on the Developer Tools forum.

Replicating and Automating Sync-Ups for a Repository with AWS CodeCommit

by Cherry Zhou | in How-to |

by Chenwei (Cherry) Zhou, Software Development Engineer


 

Many of our customers have expressed interest in the following scenarios:

  • Backing up or replicating an AWS CodeCommit repository to another AWS region.
  • Automatically backing up repositories currently hosted on other services (for example, GitHub or Bitbucket) to AWS CodeCommit.

In this blog post, we’ll show you how to automate the replication of a source repository to a repository in AWS CodeCommit. Your source repository could be another AWS CodeCommit repository, a local repository, or a repository hosted on other Git services.

To replicate your repository, you’ll first need to set up a repository in AWS CodeCommit to use as your backup/replica repository. After replicating the contents in your source repository to the backup repository, we’ll demonstrate how you can set up a scheduled job to periodically sync up your source repository with the backup/replica.
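
As a simple illustration of the sync-up itself (not the scheduled job described later in the post), a mirror clone followed by a mirror push copies all branches and tags from the source repository to the AWS CodeCommit backup. The repository URLs below are placeholders, and the git CLI with CodeCommit credentials is assumed to be configured.

import subprocess

# Placeholder URLs; replace with your source and CodeCommit backup repositories.
SOURCE_REPO = 'https://github.com/example-org/example-repo.git'
BACKUP_REPO = 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/example-backup'

# A mirror clone brings over all branches and tags from the source.
subprocess.run(['git', 'clone', '--mirror', SOURCE_REPO, 'repo-mirror'], check=True)

# A mirror push replicates everything to the backup repository in AWS CodeCommit.
subprocess.run(['git', 'push', '--mirror', BACKUP_REPO], cwd='repo-mirror', check=True)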


Extending AWS CodeBuild with Custom Build Environments

by John Pignata | in How-to |

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Java, Ruby, Python, Go, Node.js, Android, and Docker. It can be extended through the use of custom build environments to support many more.

Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon EC2 Container Registry (ECR), and reference it in the project configuration. When building your application, CodeBuild will retrieve the Docker image from the container registry specified in the project configuration and use the environment to compile your source code, run your tests, and package your application.

In this post, we’ll create a build environment for PHP applications and walk through the steps to configure CodeBuild to use this environment.
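
The full walkthrough is in the post; as a rough illustration of the final wiring step, once a custom image has been pushed to Amazon ECR, a CodeBuild project can reference it in its environment configuration. The boto3 sketch below uses a placeholder project name, repository, image URI, and service role; it is a sketch of the API, not the walkthrough's exact configuration.

import boto3

codebuild = boto3.client('codebuild')

codebuild.create_project(
    name='php-sample-build',  # placeholder project name
    source={
        'type': 'CODECOMMIT',
        'location': 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/php-sample'
    },
    artifacts={'type': 'NO_ARTIFACTS'},
    environment={
        'type': 'LINUX_CONTAINER',
        # Custom build environment image previously pushed to Amazon ECR.
        'image': '111111111111.dkr.ecr.us-east-1.amazonaws.com/php-build-env:latest',
        'computeType': 'BUILD_GENERAL1_SMALL'
    },
    serviceRole='arn:aws:iam::111111111111:role/codebuild-service-role'  # placeholder
)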


Run Umbraco CMS with Flexible Load Balancing on AWS

by Ihab Shaaban | in How-to |

In version 7.3, Umbraco CMS, the popular open source CMS, introduced the flexible load balancing feature, which makes the setup of load-balanced applications a lot easier. In this blog post, we’ll follow the guidelines in the Umbraco documentation to set up a load-balanced Umbraco application on AWS. We’ll let AWS Elastic Beanstalk manage the deployments, load balancing, auto scaling, and health monitoring for us.

Application Architecture

When you use the flexible load balancing feature, any updates to Umbraco content are stored in a queue in the master database. Each server in the load-balanced environment automatically downloads, processes, and caches the updates from the queue, so no matter which server Elastic Load Balancing selects to handle the request, the user always receives the same content. Umbraco administration doesn’t work correctly if accessed from a load-balanced server. For this reason, we’ll set up a non-load-balanced environment to be accessed only by administrators and editors.


Introducing Git Credentials: A Simple Way to Connect to AWS CodeCommit Repositories Using a Static User Name and Password

by Ankur Agarwal | in How-to, New stuff |

Today, AWS is introducing a simplified way to authenticate to your AWS CodeCommit repositories over HTTPS.

With Git credentials, you can generate a static user name and password in the Identity and Access Management (IAM) console that you can use to access AWS CodeCommit repositories from the command line, Git CLI, or any Git tool that supports HTTPS authentication.

Because these are static credentials, they can be cached using the password management tools included in your local operating system or stored in a credential management utility. This allows you to get started with AWS CodeCommit within minutes. You don’t need to download the AWS CLI or configure your Git client to connect to your AWS CodeCommit repository on HTTPS. You can also use the user name and password to connect to the AWS CodeCommit repository from third-party tools that support user name and password authentication, including popular Git GUI clients (such as TowerUI) and IDEs (such as Eclipse, IntelliJ, and Visual Studio).

So, why did we add this feature? Until today, users who wanted to use HTTPS connections were required to configure the AWS credential helper to authenticate their AWS CodeCommit operations. Customers told us our credential helper sometimes interfered with password management tools such as Keychain Access and Windows Vault, which caused authentication failures. Also, many Git GUI tools and IDEs require a static user name and password to connect with remote Git repositories and do not support the credential helper.

In this blog post, I’ll walk you through the steps for creating an AWS CodeCommit repository, generating Git credentials, and setting up CLI access to AWS CodeCommit repositories.


Git Credentials Walkthrough
Let’s say Dave wants to create a repository on AWS CodeCommit and set up local access from his computer.

Prerequisite: If Dave had previously configured his local computer to use the credential helper for AWS CodeCommit, he must edit his .gitconfig file to remove the credential helper information from the file. Additionally, if his local computer is running macOS, he might need to clear any cached credentials from Keychain Access.

With Git credentials, Dave can now create a repository and start using AWS CodeCommit in four simple steps.

Step 1: Make sure the IAM user has the required permissions
Dave must have the following managed policies attached to his IAM user (or their equivalent permissions) before he can set up access to AWS CodeCommit using Git credentials.

  • AWSCodeCommitPowerUser (or an appropriate CodeCommit managed policy)
  • IAMSelfManageServiceSpecificCredentials
  • IAMReadOnlyAccess

Step 2: Create an AWS CodeCommit repository
Next, Dave signs in to the AWS CodeCommit console and creates a repository, if he doesn’t have one already. He can choose any repository in his AWS account to which he has access. The instructions to create Git credentials are shown in the help panel. (Choose the Connect button if the instructions are not displayed.) When Dave clicks the IAM user link, the IAM console will open and he can generate the credentials.


 

Step 3: Create HTTPS Git credentials in the IAM console
On the IAM user page, Dave selects the Security Credentials tab and clicks Generate in the HTTPS Git credentials for AWS CodeCommit section. This creates and displays the user name and password. Dave can then download the credentials.


Note: This is the only time the password is available to view or download.
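
If Dave prefers to script this step instead of using the console, IAM exposes the same capability through the CreateServiceSpecificCredential API. A minimal boto3 sketch (with a placeholder user name) follows; as with the console, the generated password is only returned once.

import boto3

iam = boto3.client('iam')

# Generate HTTPS Git credentials for AWS CodeCommit for an IAM user.
resp = iam.create_service_specific_credential(
    UserName='Dave',  # placeholder IAM user name
    ServiceName='codecommit.amazonaws.com'
)

cred = resp['ServiceSpecificCredential']
# Store these securely; the password cannot be retrieved again later.
print(cred['ServiceUserName'])
print(cred['ServicePassword'])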

 

Step 4: Clone the repository on the local machine
On the AWS CodeCommit console page for the repository, Dave chooses Clone URL, and then copies the HTTPS link for cloning the repository. At the command line or terminal, Dave uses the link he just copied to clone the repository:

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/TestRepo_Dave

When prompted for user name and password, Dave provides the Git credentials (user name and password) he generated in step 3.

Dave is now ready to start pushing his code to the new repository.

Git credentials can be made active or inactive based on your requirements. You can also reset the password if you would like to use the existing username with a new password.

Next Steps

  1. You can optionally cache your credentials using the Git credentials caching command here.
  2. Want to invite a collaborator to work on your AWS CodeCommit repository? Simply create a new IAM user in your AWS account, create Git credentials for that user, and securely share the repository URL and Git credentials with the person you want to collaborate on the repositories.
  3. Connect to any third-party client that supports connecting to remote Git repositories using Git credentials (a stored user name and password). Virtually all tools and IDEs allow you to connect with static credentials. We’ve tested these:
    • Visual Studio (using the default Git plugin)
    • Eclipse IDE (using the default Git plugin)
    • Git Tower UI

For more information, see the AWS CodeCommit documentation.

We are excited to provide this new way of connecting to AWS CodeCommit. We look forward to hearing from you about the many different tools and IDEs you will be able to use with your AWS CodeCommit repositories.

Integrating Git with AWS CodePipeline

by Jay McConnell and Karthik Thirugnanasambandam | in How-to |


AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. The service currently supports GitHub, AWS CodeCommit, and Amazon S3 as source providers. This blog post will cover how to integrate AWS CodePipeline with GitHub Enterprise, Bitbucket, GitLab, or any other Git server that supports the webhooks functionality available in most Git software.

Note: The steps outlined in this guide can also be used with AWS CodeBuild. AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. Once the “Test a commit” step is completed, the output zip file can be used as an S3 input for a build project. Be sure to include a Build Specification file in the root of your repository.

Architecture overview

Webhooks notify a remote service by issuing an HTTP POST when a commit is pushed to the repository. AWS Lambda receives the HTTP POST through Amazon API Gateway, and then downloads a copy of the repository. It places a zipped copy of the repository into a versioned S3 bucket. AWS CodePipeline can then use the zip file in S3 as a source; the pipeline will be triggered whenever the Git repository is updated.
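
A stripped-down sketch of the zip download flavor of that Lambda function is shown below. It is illustrative only, with placeholder environment variables and a GitHub-style archive URL; the real functions in the CloudFormation template handle authentication, secrets, and multiple providers.

import json
import os
import urllib.request

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # API Gateway delivers the webhook payload in the request body.
    payload = json.loads(event['body'])
    repo = payload['repository']['full_name']     # GitHub-style payload field
    branch = payload['ref'].split('/')[-1]

    # Download an already-zipped copy of the repository from the provider's API.
    archive_url = 'https://api.github.com/repos/{}/zipball/{}'.format(repo, branch)
    request = urllib.request.Request(
        archive_url,
        headers={'Authorization': 'token ' + os.environ['GIT_TOKEN']})  # placeholder token
    zip_bytes = urllib.request.urlopen(request).read()

    # Put the zip into the versioned bucket that AWS CodePipeline watches.
    s3.put_object(
        Bucket=os.environ['OUTPUT_BUCKET'],        # placeholder bucket variable
        Key=repo.replace('/', '_') + '.zip',
        Body=zip_bytes)

    return {'statusCode': 200, 'body': 'OK'}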


There are two methods you can use to get the contents of a repository. Each method exposes Lambda functions that have different security and scalability properties.

  • Zip download uses the Git provider’s HTTP API to download an already-zipped copy of the current state of the repository.
    • No need for external libraries.
    • Smaller Lambda function code.
    • Large repo size limit (500 MB).
  • Git pull uses SSH to pull from the repository. The repository contents are then zipped and uploaded to S3.
    • Efficient for repositories with a high volume of commits, because each time the API is triggered, it downloads only the changed files.
    • Suitable for any Git server that supports hooks and SSH; does not depend on personal access tokens or OAuth2.
    • More extensible because it uses a standard Git library.

Build the required AWS resources

For your convenience, there is an AWS CloudFormation template that includes the AWS infrastructure and configuration required to build out this integration. To launch the CloudFormation stack setup wizard, click the link for your desired region. The region you choose must support all of the services required for this integration.

For a list of services available in AWS regions, see the AWS Region Table.

The stack setup wizard will prompt you to enter several parameters. Many of these values must be obtained from your Git service.

OutputBucketName: The name of the bucket where your zipped code will be uploaded. CloudFormation will create a bucket with this name. For this reason, you cannot use the name of an existing S3 bucket.

Note: By default, there is no lifecycle policy on this bucket, so previous versions of your code will be retained indefinitely.  If you want to control the retention period of previous versions, see Lifecycle Configuration for a Bucket with Versioning in the Amazon S3 User Guide.

AllowedIps: Used only with the git pull method described earlier. A comma-separated list of IP CIDR blocks used for Git provider source IP authentication. The Bitbucket Cloud IP ranges are provided as defaults.

ApiSecret: Used only with the git pull method described earlier. This parameter is used for webhook secrets in GitHub Enterprise and GitLab. If a secret is matched, IP range authentication is bypassed. The secret cannot contain commas (,), backslashes (\), or quotation marks (").

GitToken: Used only with the zip download method described earlier. This is a personal access token generated by GitHub Enterprise or GitLab.

OauthKey/OauthSecret: Used only with the zip download method described earlier. This is an OAuth2 key and secret provided by Bitbucket.

At least one parameter for your chosen method and provider must be set.

The process for setting up webhook secrets and API tokens differs between vendors and product versions. Consult your Git provider’s documentation for details.

After you have entered values for these parameters, you can complete the steps in the wizard and start the stack creation. If your desired values change over time, you can use CloudFormation’s update stack functionality to modify your parameters.

After the CloudFormation stack creation is complete, make a note of the GitPullWebHookApi, ZipDownloadWebHookApi, OutputBucketName and PublicSSHKey. You will need these in the following steps.

Configure the source repository

Depending on the method (git pull or zip download) you would like to use, in your Git provider’s interface, set the destination URL of your webhook to either the GitPullWebHookApi or ZipDownloadWebHookApi. If you create a secret at this point, be sure to update the ApiSecret parameter in your CloudFormation stack.

If you are using the git pull method, the Git repo is downloaded over SSH. For this reason, the PublicSSHKey output must be imported into Git as a deployment key.

Test a commit

After you have set up webhooks on your repository, run the git push command to create a folder structure and zip file in the S3 bucket listed in your CloudFormation output as OutputBucketName. If the zip file is not created, check the Amazon CloudWatch Logs for the Lambda functions for troubleshooting help.

Set up AWS CodePipeline

The final step is to create a pipeline in AWS CodePipeline using the zip file as an S3 source. For information about creating a pipeline, see the Simple Pipeline Walkthrough in the AWS CodePipeline User Guide. After your pipeline is set up, commits to your repository will trigger an update to the zip file in S3, which, in turn, triggers a pipeline execution.

We hope this blog post will help you integrate your Git server. Feel free to leave suggestions or approaches on integration in the comments.