AWS DevOps Blog

Using AWS CodePipeline, AWS CodeBuild, and AWS Lambda for Serverless Automated UI Testing

Testing the user interface of a web application is an important part of the development lifecycle. In this post, I’ll explain how to automate UI testing using serverless technologies, including AWS CodePipeline, AWS CodeBuild, and AWS Lambda.

I built a website for UI testing that is hosted in S3. I used Selenium to perform cross-browser UI testing on Chrome, Firefox, and PhantomJS, a headless WebKit browser with Ghost Driver, an implementation of the WebDriver Wire Protocol. I used Python to create test cases for ChromeDriver, FirefoxDriver, or PhantomJSDriver based on the browser against which the test is being executed.
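
The test script selects a matching WebDriver at runtime based on the target browser. The following is a rough sketch of that idea, not the repository's actual code; the environment variable, test URL, and expected title are placeholders:

# Rough sketch: pick a Selenium WebDriver based on the target browser.
# The BROWSER environment variable, test URL, and expected title are
# illustrative placeholders, not values from the actual test script.
import os

from selenium import webdriver


def create_driver():
    browser = os.environ.get("BROWSER", "phantomjs").lower()
    if browser == "chrome":
        return webdriver.Chrome()    # requires chromedriver on the PATH
    if browser == "firefox":
        return webdriver.Firefox()   # requires geckodriver on the PATH
    return webdriver.PhantomJS()     # headless WebKit via Ghost Driver


driver = create_driver()
driver.get("http://example-ui-test-site.s3-website-us-east-1.amazonaws.com")
assert "Example" in driver.title
driver.quit()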

Resources referred to in this post, including the AWS CloudFormation template, test and status websites hosted in S3, AWS CodeBuild build specification files, AWS Lambda function, and the Python script that performs the test are available in the serverless-automated-ui-testing GitHub repository.

(more…)

Simplify Your Jenkins Builds with AWS CodeBuild

Jeff Bezos famously said, “There’s a lot of undifferentiated heavy lifting that stands between your idea and that success.” He went on to say, “…70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.”

If you subscribe to this maxim, you should not be spending valuable time focusing on operational issues related to maintaining the Jenkins build infrastructure. Companies such as Riot Games have over 1.25 million builds per year and have written several lengthy blog posts about their experiences designing a complex, custom Docker-powered Jenkins build farm. Dealing with Jenkins slaves at scale is a job in itself and Riot has engineers focused on managing the build infrastructure.

Typical Jenkins Build Farm

 

As with all technology, the Jenkins build farm architectures have evolved. Today, instead of manually building your own container infrastructure, there are Jenkins Docker plugins available to help reduce the operational burden of maintaining these environments. There is also a community-contributed Amazon EC2 Container Service (Amazon ECS) plugin that helps remove some of the overhead, but you still need to configure and manage the overall Amazon ECS environment.

There are various ways to create and manage your Jenkins build farm, but there should be a way to significantly reduce your operational overhead.

Introducing AWS CodeBuild

AWS CodeBuild is a fully managed build service that removes the undifferentiated heavy lifting of provisioning, managing, and scaling your own build servers. With CodeBuild, there is no software to install, patch, or update. CodeBuild scales up automatically to meet the needs of your development teams. In addition, CodeBuild is an on-demand service where you pay as you go. You are charged based only on the number of minutes it takes to complete your build.

One AWS customer, Recruiterbox, helps companies hire simply and predictably through their software platform. Two years ago, they began feeling the operational pain of maintaining their own Jenkins build farms. They briefly considered moving to Amazon ECS, but chose an even easier path forward instead. Recruiterbox transitioned to using Jenkins with CodeBuild and are very happy with the results. You can read more about their journey here.

Solution Overview: Jenkins and CodeBuild

To remove the heavy lifting from managing your Jenkins build farm, AWS has developed a Jenkins AWS CodeBuild plugin. After the plugin has been enabled, a developer can configure a Jenkins project to pick up new commits from their chosen source code repository and automatically run the associated builds. After a build succeeds, it creates an artifact that is stored in an S3 bucket that you have configured. If an error is detected, CodeBuild captures the output and sends it to Amazon CloudWatch Logs. In addition to storing the logs in CloudWatch, Jenkins also captures the error so you do not have to go hunting for log files for your build.
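
Behind the scenes, the plugin drives CodeBuild through its public API. As a rough illustration only (this is not the plugin's actual code, and the project name is a placeholder), the equivalent calls with boto3 look like this:

# Rough equivalent of what the Jenkins plugin does: start a CodeBuild build
# and poll until it finishes. The project name is a placeholder.
import time

import boto3

codebuild = boto3.client("codebuild")

build_id = codebuild.start_build(projectName="my-jenkins-project")["build"]["id"]

while True:
    build = codebuild.batch_get_builds(ids=[build_id])["builds"][0]
    if build["buildStatus"] != "IN_PROGRESS":
        break
    time.sleep(10)

print("Build finished with status:", build["buildStatus"])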

 

AWS CodeBuild with Jenkins Plugin

 

The following example uses AWS CodeCommit (Git) as the source control management (SCM) system and Amazon S3 for build artifact storage. Logs are stored in CloudWatch. A development pipeline that uses Jenkins with the CodeBuild plugin looks something like this:

 

AWS CodeBuild Diagram

Initial Solution Setup

To keep this blog post succinct, I assume that you are using the following components on AWS already and have applied the appropriate IAM policies:

  • AWS CodeCommit repo.
  • Amazon S3 bucket for CodeBuild artifacts.
  • SNS notification for text messaging of the Jenkins admin password.
  • IAM user’s key and secret.
  • A role that has a policy with these permissions. Be sure to edit the ARNs with your region, account, and resource name. Use this role in the AWS CloudFormation template referred to later in this post.

 

Jenkins Installation with CodeBuild Plugin Enabled

To make the integration with Jenkins as frictionless as possible, I have created an AWS CloudFormation template here: https://s3.amazonaws.com/proberts-public/jenkins.yaml. Download the template, sign in to the AWS CloudFormation console, and then use the template to create a stack.

 

CloudFormation Inputs

Jenkins Project Configuration

After the stack is complete, log in to the Jenkins EC2 instance using the user name “admin” and the password sent to your mobile device. Now that you have logged in to Jenkins, you need to create your first project. Start with a Freestyle project and configure the parameters based on your CodeBuild and CodeCommit settings.

 

AWS CodeBuild Plugin Configuration in Jenkins

 

Additional Jenkins AWS CodeBuild Plugin Configuration

 

After you have configured the Jenkins project appropriately, you should be able to check your build status in the Jenkins polling log under your project settings:

 

Jenkins Polling Log

 

Now that Jenkins is polling CodeCommit, you can check the CodeBuild dashboard under your Jenkins project to confirm your build was successful:

Jenkins AWS CodeBuild Dashboard

Wrapping Up

In a matter of minutes, you have been able to provision Jenkins with the AWS CodeBuild plugin. This will greatly simplify your build infrastructure management. Now kick back and relax while CodeBuild does all the heavy lifting!


About the Author

Paul Roberts is a Strategic Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps, or Artificial Intelligence, he is often found in Lake Tahoe exploring the various mountain ranges with his family.

Ensuring Security of Your Code in a Cross-Region/Cross-Account Deployment Solution

There are multiple ways you can protect your data while it is in transit and at rest. You can protect your data in transit by using SSL or by using client-side encryption. AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create, control, rotate, and use your encryption keys. AWS KMS allows you to create custom keys. You can then share these keys with AWS Identity and Access Management (IAM) users and roles in your AWS account or in an AWS account owned by someone else.

In my previous post, I described a solution for building a cross-region/cross-account code deployment solution on AWS. In this post, I describe a few options for protecting your source code as it travels between regions and between AWS accounts.

To recap, you deployed the infrastructure as shown in the following diagram.

  • You had your development environment running in Region A in AWS Account A.
  • You had your QA environment running in Region B in AWS Account B.
  • You had a staging or production environment running in Region C in AWS Account C.

An update to the source code in Region A triggered validation and deployment of source code changes in the pipeline in Region A. A successful pass of the source code through all of its AWS CodePipeline stages invoked a Lambda function, which copied the source code into an S3 bucket in Region B. After the source code was copied into this bucket, it triggered a similar chain of processes through the different AWS CodePipeline stages in Region B.

 

Ensuring Security for Your Source Code

You might choose to encrypt the source code .zip file before uploading to the S3 bucket that is in Account A, Region A, using Amazon S3 server-side encryption:

1. Using the Amazon S3 service master key

Refer back to the Lambda function created for you by the CloudFormation stack in the previous post. Go to the AWS Lambda console and your function name should be <stackname>-CopytoDest-XXXXXXX.

 

 

Use the following parameter for the copyObject function – ServerSideEncryption: ‘AES256’

Note: The set-up already uses this option by default.

The copyObject function decrypts the .zip file and copies the object into account B.

 

2. Using an AWS KMS master key

Because KMS keys are constrained to a region, copying the object (the source code .zip file) into a different account across regions requires cross-account access to the KMS key. This access must be granted before Amazon S3 can use that key for encryption and decryption.

Use the following parameter for the copyObject function – ServerSideEncryption: ‘aws:kms’ – and provide an SSEKMSKeyId: ‘<keyid>’.
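
The copy itself is performed by the Lambda function defined in the previous post’s CloudFormation template. Purely as an illustration (not that function’s actual code), an equivalent S3 copy with SSE-KMS looks roughly like this in Python with boto3; the bucket names, object key, and key ID are placeholders:

# Illustration only: copy the source code .zip across buckets with SSE-KMS.
# Bucket names, object key, and KMS key ID are placeholders.
import boto3

s3 = boto3.client("s3")

s3.copy_object(
    CopySource={"Bucket": "source-bucket-region-a", "Key": "source-code.zip"},
    Bucket="target-bucket-region-b",
    Key="source-code.zip",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="<key-id>",   # omit this and use ServerSideEncryption="AES256"
)                             # for the S3-managed key option instead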

To enable cross-account access for the KMS key and use it in the Lambda function:

a. Create a KMS key in the source account (Account A), region B – for example, XRDepTestKey

Note: This key must be created in region B. This is because the source code will be copied into an S3 bucket that exists in region B, and the KMS key must be accessible in that region.

b. To enable the Lambda function to use this KMS key, add lambdaS3CopyRole as a user for this key. The Lambda function and its associated role and policies are defined in the CloudFormation template.

c. Note the ARN of the key that you generated.

d. Provide the external account (Account B) permission to use this key. For more information, see Sharing custom encryption keys securely between accounts.

arn:aws:iam::<Account B ID>:root

e. In Account B, delegate the permission to use this key to the role that AWS CodePipeline is using. In the CloudFormation template, you can see that CodePipelineTrustRole is used. Attach the following policy to the role. Ensure that you update the region and Account ID accordingly.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:<regionB>:<AccountA ID>:key/<KMS Key in Region B ID>"
            ]
        },
        {
            "Sid": "AllowAttachmentOfPersistentResources",
            "Effect": "Allow",
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": [
                "arn:aws:kms:<regionB>:<AccountA ID>:key/<KMS Key in Region B ID>"
            ],
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": true
                }
          }
        }
    ]
}

f. Update the Lambda function, CopytoDest, to use the following in the parameter definition.

 

ServerSideEncryption: 'aws:kms',
SSEKMSKeyId: '<keyid>'
// ServerSideEncryption: 'AES256'

And there you go! You have enabled secure delivery of your source code into your cross-region/cross-account deployment solution.

About the Author


BK Chaurasiya is a Solutions Architect with Amazon Web Services. He provides technical guidance, design advice, and thought leadership to some of the largest and most successful AWS customers and partners.

Automating Blue/Green Deployments of Infrastructure and Application Code using AMIs, AWS Developer Tools, & Amazon EC2 Systems Manager

Previous DevOps blog posts have covered the following use cases for infrastructure and application deployment automation:

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is a security best practice to customize and harden your base AMI with required operating system updates and, if you are using AWS native services for continuous security monitoring and operations, you are strongly encouraged to bake agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs into the base AMI. A customized and hardened AMI is often referred to as a “golden AMI.” The use of golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

In this post, using the DevOps automation capabilities of Systems Manager and the AWS developer tools (CodePipeline, CodeDeploy, CodeCommit, and CodeBuild), I will show you how to use AWS CodePipeline to orchestrate end-to-end blue/green deployments of a golden AMI and application code. Systems Manager Automation is a powerful security feature for enterprises that want to mature their DevSecOps practices.

Here are the high-level phases and primary services covered in this use case:

 

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/Bluegreen-AMI-Application-Deployment-blog.

This sample will create a pipeline in AWS CodePipeline with the building blocks to support the blue/green deployments of infrastructure and application. The sample includes a custom Lambda step in the pipeline that executes Systems Manager Automation to build a golden AMI and update the Auto Scaling group with the golden AMI ID for every rollout of new application code. This guarantees that every new application deployment runs on a fully patched and customized AMI, automating hardened AMI rollout with every new application version in a continuous integration and deployment model.

 

 

We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases.

Part 1 of the AWS CloudFormation stack creates these resources:

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To access the initial Git commands to clone the empty repository to your local machine, click Connect in the AWS CodeCommit console. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

git add .
git commit -a -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push

After all the files are pushed successfully, the repository should look like this:

 

Part 3: Setting up CodePipeline to enable blue/green deployments     

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration with the new AMI details is applied to the Auto Scaling group of the application deployment group. You can watch the progress of the automation in the EC2 console from the Systems Manager –> Automations menu. (A minimal sketch of how this step starts the automation appears after this list.)

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the elastic load balancer to the new instances and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.
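
The following is a minimal sketch, not the sample's actual code, of how the BuildGoldenAMI Lambda step can start the Systems Manager Automation document; the document name and parameters are placeholders, and reporting the job result back to CodePipeline is omitted:

# Minimal sketch: a Lambda step that starts the Systems Manager Automation
# document that builds the golden AMI. Document name and parameters are
# placeholders; CodePipeline job result reporting is omitted.
import boto3

ssm = boto3.client("ssm")


def handler(event, context):
    response = ssm.start_automation_execution(
        DocumentName="GoldenAMIAutomationDoc",
        Parameters={"SourceAmiId": ["ami-0123456789abcdef0"]},
    )
    return {"AutomationExecutionId": response["AutomationExecutionId"]}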

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any consecutive commits to the application code in the CodeCommit repository trigger the pipelines and deploy the infrastructure and code with an updated AMI and code.

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.


About the author

 

Ramesh Adabala is a Solutions Architect in Southeast Enterprise Solution Architecture team at Amazon Web Services.

Build Serverless AWS CodeCommit Workflows using Amazon CloudWatch Events and JGit

Sam Dengler is a Solutions Architect at Amazon Web Services

Summary

Amazon CloudWatch Events now supports AWS CodeCommit Repository State Changes event types for activities like pushing new code to a repository. Using these new event types, customers can build Amazon CloudWatch Event rules to match AWS CodeCommit events and route them to one or more targets like an Amazon SNS Topic, AWS Step Functions state machine, or AWS Lambda function to trigger automated workflows to process repository changes.

In this blog, I will provide three examples for using AWS Lambda and JGit to build cost-effective serverless solutions to securely process AWS CodeCommit repository state changes:

  • Replicate CodeCommit Repository
  • Enforce Git Commit Message Policy
  • Backup Git Archive to Amazon S3

Source code and Amazon CloudFormation templates for the examples are located in the following GitHub repository: https://github.com/awslabs/serverless-codecommit-examples.

AWS CodeCommit CloudWatch Events

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. Below is an example Amazon CloudWatch Event for one of the new AWS CodeCommit Repository State Changes event types, referenceUpdated. Any change to the repository will trigger a referenceUpdated event; however, triggers for particular branches can be filtered using the referenceType and referenceName fields in the event details.

{
    "version": "0",
    "id": "01234567-0123-0123-0123-012345678901",
    "detail-type": "CodeCommit Repository State Change",
    "source": "aws.codecommit",
    "account": "123456789012",
    "time": "2017-06-12T10:23:43Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:codecommit:us-east-1:123456789012:myRepo"
    ],
    "detail": {
        "event": "referenceUpdated",
        "repositoryName": "myRepo",
        "referenceType": "head",
        "referenceName": "myBranch",
        "commitId": "3e5983EXAMPLE",
        "oldCommitId": "1a7813EXAMPLE"
    }
}

We will use the Amazon CloudWatch Event fields to create a pattern that matches the events for which we want to trigger a target, in this case an AWS Lambda function.
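
The CloudFormation templates in the examples repository create these rules for you. Purely as an illustration, an equivalent rule and Lambda target could be created with boto3 as sketched below; the rule name, repository ARN, and function ARN are placeholders, and granting CloudWatch Events permission to invoke the function is omitted:

# Illustration only: create a CloudWatch Events rule that matches CodeCommit
# repository state changes and routes them to a Lambda function.
# The rule name, repository ARN, and function ARN are placeholders.
import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="codecommit-reference-updated",
    EventPattern=json.dumps({
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
        "resources": ["arn:aws:codecommit:us-east-1:123456789012:myRepo"],
        "detail": {"event": ["referenceUpdated"]},
    }),
)

events.put_targets(
    Rule="codecommit-reference-updated",
    Targets=[{
        "Id": "replicate-repository-function",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:ReplicateRepository",
    }],
)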

Accessing AWS CodeCommit using HTTPS URLs

The HTTPS URL method for accessing an AWS CodeCommit repository is particularly suited to a serverless solution because an AWS Lambda execution container already provides temporary AWS IAM credentials associated with the function’s AWS IAM Execution Role. The function’s Execution Role is associated with one or more AWS IAM Policies, in which you specify permissions allowing the function to access AWS resources. For each function, we will limit the AWS IAM policies to only the AWS CodeCommit repository, Amazon S3 bucket, or Amazon SNS topics it needs, following the AWS best practice to grant least privileged access.

For example, the AWS IAM Policy snippet below limits the AWS Lambda function to pulling from the source repository and pushing to the target repository.

Policies:
  - Version: '2012-10-17'
    Statement:
      - Effect: Allow
        Resource: !Sub 'arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${SourceRepositoryName}'
        Action:
          - 'codecommit:GetRepository'
          - 'codecommit:GitPull'
      - Effect: Allow
        Resource: !Sub 'arn:aws:codecommit:${TargetRepositoryRegion}:${AWS::AccountId}:${TargetRepositoryName}'
        Action:
          - 'codecommit:GetRepository'
          - 'codecommit:GitPush'

When using the HTTPS URL access method, a credential helper is configured for the Git client, which executes the “aws codecommit credential-helper” command to provide a SigV4 compatible user name and password using AWS IAM credentials (see more). When using JGit as the Git client, a CredentialsProvider can be supplied to Git commands to achieve the same result.

The Spring Cloud Config project provides an implementation of the JGit CredentialsProvider for AwsCodeCommit (source), which conveniently uses the AWS DefaultAWSCredentialsProviderChain to discover AWS credentials in the standard priority order supported by AWS Lambda. The AwsCodeCommit.calculateCodeCommitPassword method is particularly interesting to review as an example of the SigV4 transformation logic.

Cloning a repository is repeated across examples, and the functionality has been delegated to a supporting CloneCommandBuilder class, shown below.

public class CloneCommandBuilder {

    private File directory;

    public CloneCommandBuilder() throws IOException {
        directory = Files.createTempDirectory(null).toFile();
    }

    public CloneCommand buildCloneCommand(String sourceUrl) {
        return buildCloneCommand(sourceUrl, new AwsCodeCommitCredentialProvider());
    }

    public CloneCommand buildCloneCommand(String sourceUrl,
            AwsCodeCommitCredentialProvider credentialsProvider) {

        return new CloneCommand().setDirectory(directory)
                .setURI(sourceUrl)
                .setCredentialsProvider(credentialsProvider)
                .setBare(true);
    }
}

Next, we’ll look at some examples that process the repository events using AWS Lambda.

Example 1: Replicate CodeCommit Repository

Customers often need to replicate commits from one repository to another to support disaster recovery or cross-region CI/CD pipelines. In this example, the AWS Lambda function will clone a repository from the source and push to the target. This example is intended to update an existing target repository, which should be created prior to configuring replication.

Please note, AWS Lambda functions are limited to 1.5 GB of memory, 512 MB of ephemeral disk capacity (“/tmp” space), and a 5-minute execution time. If your repository cannot be processed within these limits, please see the Replicating and Automating Sync-Ups for a Repository with AWS CodeCommit blog article for an alternative approach to replicate repositories using an Amazon EC2 instance.

Let’s take a look at some code!

public class ReplicateRepositoryHandler
        implements RequestHandler<CodeCommitEvent, HandlerResponse> {

    private static Logger logger = Logger.getLogger(ReplicateRepositoryHandler.class);
    private final String targetUrl;
    private final AwsCodeCommitCredentialProvider credentialsProvider;

    public ReplicateRepositoryHandler() {
        String targetName = System.getenv("TARGET_REPO_NAME");
        String targetRegion = System.getenv("TARGET_REPO_REGION");

        CodeCommitMetadata target = new CodeCommitMetadata(targetName, targetRegion);
        targetUrl = target.getCloneUrlHttp();
        credentialsProvider = new AwsCodeCommitCredentialProvider();
    }

    // ...

On instantiation, the AWS Lambda function discovers the target AWS CodeCommit repository HTTPS URL by querying the repository metadata using the target repository name and region. This discovery process is repeated across the examples, and the code has been delegated to the CodeCommitMetadata class below.

public class CodeCommitMetadata {

    private RepositoryMetadata repositoryMetadata;

    public CodeCommitMetadata(String repoName, String repoRegion) {
        AWSCodeCommitClientBuilder builder = AWSCodeCommitClientBuilder.standard();
        AWSCodeCommit client = builder.withRegion(repoRegion).build();

        GetRepositoryRequest request = new GetRepositoryRequest();
        request.withRepositoryName(repoName);

        GetRepositoryResult result = client.getRepository(request);
        repositoryMetadata = result.getRepositoryMetadata();
    }

    public String getCloneUrlHttp() {
        return repositoryMetadata.getCloneUrlHttp();
    }
}

When the AWS Lambda function is triggered by the Amazon CloudWatch Event, the source repository name and region in the event are used to discover the source repository HTTPS URL. We use JGit to clone the source repository from this URL into a local repository stored in a temporary directory in the AWS Lambda execution container.

public HandlerResponse handleRequest(CodeCommitEvent event, Context context) {
    try {
        String sourceName = event.getDetail().getRepositoryName();
        String sourceRegion = event.getRegion();
   
        // clone source repository
        CodeCommitMetadata source = new CodeCommitMetadata(sourceName, sourceRegion);
        String sourceUrl = source.getCloneUrlHttp();
        Git git = new CloneCommandBuilder().buildCloneCommand(sourceUrl).call();

        // ...

Once we’ve cloned the repository locally in the AWS Lambda execution container, the last step is to set the target AWS CodeCommit repository as a new remote location and push the local references using the reference specification “+refs/*:refs/*”.

// push target repository
git.push().setCredentialsProvider(credentialsProvider)
          .setRemote(targetUrl)
          .setRefSpecs(new RefSpec("+refs/*:refs/*"))
          .call();
// ...

In the next example, we’ll review how we can build a Lambda function to enforce commit message policies.

Example 2: Enforce Git Commit Message Policy

Some customers choose to enforce policies on a Git repository to maintain code quality. In this example, we use the same tools described above to clone a repository and validate the commit messages from the Git log using a regular expression.

public class PolicyEnforcerHandler implements RequestHandler<CodeCommitEvent, HandlerResponse> {

    private static Logger logger = LoggerFactory.getLogger(PolicyEnforcerHandler.class);

    private final String mainBranch;
    private final String snsTopicArn;
    private final Pattern pattern;
    private final AmazonSNS snsClient;

    public PolicyEnforcerHandler() {
        mainBranch = System.getenv("MAIN_BRANCH_NAME");
        snsTopicArn = System.getenv("SNS_TOPIC_ARN");

        String messageRegex = System.getenv("MESSAGE_REGEX");
        pattern = Pattern.compile(messageRegex);

        String snsRegion = snsTopicArn.split(":")[3];
        snsClient = AmazonSNSClientBuilder.standard().withRegion(snsRegion).build();
    }

    // ...

On instantiation, the AWS Lambda function compiles the regular expression for message validation and creates an Amazon SNS client to send notifications.

@Override
public HandlerResponse handleRequest(CodeCommitEvent event, Context context) {
    String sourceName = event.getDetail().getRepositoryName();
    String sourceRegion = event.getRegion();
    String commitId = event.getDetail().getCommitId();
    String oldCommitId = event.getDetail().getOldCommitId();

    try {
        // clone source repository
        CodeCommitMetadata source = new CodeCommitMetadata(sourceName, sourceRegion);
        String sourceUrl = source.getCloneUrlHttp();
        Git git = new CloneCommandBuilder().buildCloneCommand(sourceUrl).call();

        // ...

When the AWS Lambda function is triggered by the Amazon CloudWatch Event, the process to clone the repository is the same: the AWS CodeCommit HTTPS URL is discovered from the event, and a bare Git repository is cloned to the AWS Lambda execution container.

// use the OldCommitId, or default to the main branch
String toGitReference = Optional.ofNullable(oldCommitId).orElse(mainBranch);
Repository repository = git.getRepository();
ObjectId to = repository.resolve(toGitReference);
ObjectId from = repository.resolve(commitId);

// ...

JGit RevWalk is used to determine the range of commits over which to validate the message policy. When commits are added to an existing branch, AWS CodeCommit will emit a referenceUpdated event, which includes commitId and oldCommitId fields that establish the range of commits.

When commits are added to a new branch, AWS CodeCommit will emit a referenceCreated event, which includes a commitId but not the oldCommitId. In this case, we will use the main branch name to determine the common ancestry of the commit chains, called the merge base, in order to establish the range of commits.

// create a RevWalk and set the range of commits
try (RevWalk walk = new RevWalk(repository)) {
    walk.markStart(walk.parseCommit(from));
    walk.markUninteresting(walk.parseCommit(to));

    // iterate the list of commits and validate each message
    for (RevCommit commit : walk) {
        Matcher matcher = pattern.matcher(commit.getShortMessage());

        // publish a message to the topic if the message does not match
        if (!matcher.find()) {
            String message = buildMessage(commit);
            logger.info(message);
            snsClient.publish(snsTopicArn, message);
        }
    }

    walk.dispose();
}

// ...

Once the range has been established, we iterate the list of commit messages, testing each against the message policy regular expression. If a message does not match the regular expression, it is out of compliance with the policy, and a message is published to the Amazon SNS topic for notification.

In the next example, I’ll review how to backup an archive of the files in a Git repository.

Example 3: Backup Git Archive to Amazon S3

The previous examples have focused on the bare Git repository objects; however, there are some use cases for processing the files in the Git repository at a particular reference. In this example, I’ll build a Lambda function to create a zip of the files in the repository and store it in Amazon S3 as a backup.

public class ArchiveRepositoryHandler
        implements RequestHandler<CodeCommitEvent, HandlerResponse> {

    private static Logger logger = LoggerFactory.getLogger(ArchiveRepositoryHandler.class);

    private final String ZIP_FORMAT = "zip";
    private final String targetS3Bucket;
    private final AmazonS3 s3Client;

    public ArchiveRepositoryHandler() {
        targetS3Bucket = System.getenv("TARGET_S3_BUCKET");
        s3Client = AmazonS3ClientBuilder.defaultClient();
        ArchiveCommand.registerFormat(ZIP_FORMAT, new ZipFormat());
    }

    // ...

On instantiation, the AWS Lambda function creates an Amazon S3 client and registers the ZipFormat with JGit.

@Override
public HandlerResponse handleRequest(CodeCommitEvent event, Context context) {
    String sourceName = event.getDetail().getRepositoryName();
    String sourceRegion = event.getRegion();
    String commitId = event.getDetail().getCommitId();

    try {
        // clone source repository
        CodeCommitMetadata source = new CodeCommitMetadata(sourceName, sourceRegion);
        String sourceUrl = source.getCloneUrlHttp();
        Git git = new CloneCommandBuilder().buildCloneCommand(sourceUrl).call();

        // ...

When the AWS Lambda function is triggered by the Amazon CloudWatch Event, the process to clone the repository is the same: the AWS CodeCommit HTTPS URL is discovered from the event, and a bare Git repository is cloned to the AWS Lambda execution container.

// create and upload archive for commitId
File file = Files.createTempFile(null, null).toFile();
try (OutputStream out = new FileOutputStream(file)) {
    ObjectId objectId = git.getRepository().resolve(commitId);
    git.archive().setTree(objectId)
                 .setFormat(ZIP_FORMAT)
                 .setOutputStream(out)
                 .call();

    String key = sourceName + "." + commitId + "." + ZIP_FORMAT;
    s3Client.putObject(targetS3Bucket, key, file);
}

// ...

Once the repository has been cloned, we use a JGit ArchiveCommand to create a zip artifact representing the working files of the repository at the commit that triggered the event. The generated zip artifact is then uploaded to Amazon S3 using the repository name and commit ID as the key.

Conclusion

Amazon CloudWatch Events support for AWS CodeCommit Repository State Changes event types opens possibilities to build event-driven source code workflow automation using the same Amazon CloudWatch Events service that acts as an event bus across many AWS services. Combining this new capability with AWS Lambda, the JGit client, and AWS IAM policy controls gives builders a set of tools to build serverless solutions that securely access AWS resources, scale on demand, and are cost effective.

In this blog, I’ve demonstrated three example solutions built using these tools; however, AWS CodeCommit’s integration with Amazon CloudWatch Events allows you to integrate with other Amazon CloudWatch Events targets, like Amazon SQS or AWS Step Functions.

I encourage you to visit the GitHub repository (https://github.com/awslabs/serverless-codecommit-examples), which has instructions to launch these examples in your own AWS account. Please share your ideas and questions in the comments below, or submit pull requests and issues to the GitHub repository!

Create Multiple Builds from the Same Source Using Different AWS CodeBuild Build Specification Files

In June 2017, AWS CodeBuild announced you can now specify an alternate build specification file name or location in an AWS CodeBuild project.

In this post, I’ll show you how to use different build specification files in the same repository to create different builds. You’ll find the source code for this post in our GitHub repo.

Requirements

The AWS CLI must be installed and configured.

Solution Overview

I have created a C program (cbsamplelib.c) that will be used to create a shared library and another utility program (cbsampleutil.c) to use that library. I’ll use a Makefile to compile these files.

I need to put this sample application in RPM and DEB packages so end users can easily deploy them. I have created a build specification file for RPM. It will use make to compile this code and the RPM specification file (cbsample.rpmspec) configured in the build specification to create the RPM package. Similarly, I have created a build specification file for DEB. It will create the DEB package based on the control specification file (cbsample.control) configured in this build specification.
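
As a hedged sketch (not the post's actual setup), two CodeBuild projects that build from the same repository but use different build specification files could be created with boto3 as follows; every name, ARN, image, and file name below is a placeholder:

# Sketch: two CodeBuild projects built from the same CodeCommit repository,
# each using a different build specification file. All names, ARNs, the image,
# and the buildspec file names are placeholders.
import boto3

codebuild = boto3.client("codebuild")

common = {
    "source": {
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/sample",
    },
    "artifacts": {"type": "S3", "location": "my-artifact-bucket"},
    "environment": {
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:1.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    "serviceRole": "arn:aws:iam::123456789012:role/codebuild-service-role",
}

for name, buildspec in [("sample-rpm", "buildspec-rpm.yml"),
                        ("sample-deb", "buildspec-deb.yml")]:
    project = dict(common, name=name)
    # The buildspec property points each project at its own spec file.
    project["source"] = dict(common["source"], buildspec=buildspec)
    codebuild.create_project(**project)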

(more…)

Validating AWS CloudFormation Templates

For their continuous integration and continuous deployment (CI/CD) pipelines, many companies use tools like Jenkins, Chef, and AWS CloudFormation. Usually, the process is managed by two or more teams. One team is responsible for designing and developing an application, CloudFormation templates, and so on. The other team is generally responsible for integration and deployment.

One of the challenges that a CI/CD team has is to validate the CloudFormation templates provided by the development team. Validation provides early warning about any incorrect syntax and ensures that the development team follows company policies in terms of security and the resources created by CloudFormation templates.

In this post, I focus on the validation of AWS CloudFormation templates for syntax as well as in the context of business rules.
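
Syntax validation itself is a single CloudFormation API call; here is a minimal boto3 sketch (the template file name is a placeholder). Validating business rules requires additional logic on top of this call, such as parsing the template and inspecting the resources it declares.

# Minimal sketch: syntax-validate a CloudFormation template with boto3.
# The template file name is a placeholder.
import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:
    body = f.read()

try:
    result = cfn.validate_template(TemplateBody=body)
    print("Template is valid. Parameters:",
          [p["ParameterKey"] for p in result.get("Parameters", [])])
except ClientError as err:
    print("Validation failed:", err.response["Error"]["Message"])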

(more…)

Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.
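
For reference, a change-set based deployment boils down to two CloudFormation calls, sketched here with boto3. This is not the pipeline's actual configuration; the stack, change set, and template names are placeholders.

# Illustration only: the two CloudFormation calls behind a change-set based
# deployment. Stack name, change set name, and template file are placeholders.
import boto3

cfn = boto3.client("cloudformation")

with open("nested-master.yaml") as f:
    template_body = f.read()

cfn.create_change_set(
    StackName="sample-stack",
    ChangeSetName="sample-change-set",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# In practice, wait for the change set to reach CREATE_COMPLETE and review it
# (this is where the manual approval fits) before executing it.
cfn.execute_change_set(StackName="sample-stack", ChangeSetName="sample-change-set")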

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

(more…)

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.

The AMI Builder launch template is available in the following regions:
  • N. Virginia (us-east-1)
  • Ireland (eu-west-1)

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (Failed at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save its findings to a temporary file (ami_id.txt).
  2. Forcefully make AWS CodeBuild fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t fail if something goes wrong during the AMI creation process. We have to tell AWS CodeBuild to stop by informing it that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts phase instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder so that it not only sends email after the AMI has been created, but also hooks up any of the supported targets to react to the AMI builder event. This event publication means you can decouple the actions you take after AMI completion from Packer and plug in other actions, as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

Cloudwatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new properties (*_tags) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the input in the project deviates from the expected characters (for example, includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ],

We used a shell provisioner to apply OS patches in Part 1. Now, we use the shell provisioner to install Ansible on the target machine and the ansible-local provisioner to import, install, and execute Ansible roles to make our target machine conform to our standards.

Packer then uses a shell provisioner to remove temporary keys before it creates an AMI from the temporary target EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

CIS Benchmark and CloudWatch Logs support are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update the EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.

CloudWatch Events allows the AMI builder to decouple AMI configuration and creation so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to add events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Building a Continuous Delivery Pipeline for AWS Service Catalog (Sync AWS Service Catalog with Version Control)

AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multitier application architectures. You can use AWS Service Catalog to centrally manage commonly deployed IT services. It also helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

However, as the number of Service Catalog portfolios and products increases across an organization, centralized management and scaling can become a challenge. In this blog post, I walk you through a solution that simplifies management of AWS Service Catalog portfolios and related products. This solution also enables portfolio sharing with other accounts, portfolio tagging, and granting access to users. Finally, the solution delivers updates to the products using continuous delivery in AWS CodePipeline. This enables you to maintain them in version control, thereby adopting “Infrastructure as Code” practices.

Solution overview

  1. Authors (developers, operations, architects, etc.) create the AWS CloudFormation templates based on the needs of their organizations. These templates are the reusable artifacts. They can be shared among various teams within the organizations. You can name these templates product-A.yaml or product-B.yaml. For example, if the template creates an Amazon VPC that is based on organization needs, as described in the Amazon VPC Architecture Quick Start, you can save it as product-vpc.yaml.

The authors also define a mapping.yaml file, which includes the list of products that you want to include in the portfolio and related metadata. The mapping.yaml file is the core configuration component of this solution. This file defines your portfolio and its associated permissions and products. This configuration file determines how your portfolio will look in AWS Service Catalog, after the solution deploys it. A sample mapping.yaml is described here. Configuration properties of this mapping.yaml are explained here.

 

  2. Product template files and the mappings are committed to version control. In this example, we use AWS CodeCommit. The folder structure on the file system looks like the following:
    • portfolio-infrastructure (folder name)
      – product-a.yaml
      – product-b.yaml
      – product-c.yaml
      – mapping.yaml
    • portfolio-example (folder name)
      – product-c.yaml
      – product-d.yaml
      – mapping.yaml

    The name of the folder must start with portfolio- because the AWS Lambda function iterates through all folders whose names start with portfolio-, and syncs them with AWS Service Catalog.

    Checking in any code in the repository triggers an AWS CodePipeline orchestration and invokes the Lambda function.

  3. The Lambda function downloads the code from version control and iterates through all folders with names that start with portfolio-. The function gets a list of all existing portfolios in AWS Service Catalog. Then it checks whether the display name of the portfolio matches the “name” property in the mapping.yaml under each folder. If the name doesn’t match, a new portfolio is created. If the name matches, the description and owner fields are updated and synced with what is in the file. There must be only one mapping.yaml file in each folder with a name starting with portfolio-.
  4. and 5. The Lambda function iterates through the list of products in the mapping.yaml file. If the name of the product matches any of the products already associated with the portfolio, a new version of the product is created and associated with the portfolio. If the name of the product doesn’t match, a new product is created. The CloudFormation template file (as specified in the template property for that product in the mapping file) is uploaded to Amazon S3 with a unique ID. A new version of the product is created and pointed to the unique S3 path. (A simplified sketch of this sync logic appears after this list.)
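
Below is a simplified Python sketch of the portfolio portion of this sync logic. It is illustrative only, not the repository's actual Lambda code, and it omits pagination, error handling, and the product sync; the mapping keys follow the name, description, and owner properties described above.

# Simplified sketch of the portfolio sync logic described above. Illustrative
# only; omits pagination, error handling, and product synchronization.
import boto3

sc = boto3.client("servicecatalog")


def sync_portfolio(mapping):
    """Create or update a Service Catalog portfolio from a mapping.yaml dict."""
    existing = {p["DisplayName"]: p
                for p in sc.list_portfolios()["PortfolioDetails"]}

    if mapping["name"] in existing:
        portfolio_id = existing[mapping["name"]]["Id"]
        sc.update_portfolio(
            Id=portfolio_id,
            Description=mapping["description"],
            ProviderName=mapping["owner"],
        )
    else:
        portfolio_id = sc.create_portfolio(
            DisplayName=mapping["name"],
            Description=mapping["description"],
            ProviderName=mapping["owner"],
        )["PortfolioDetail"]["Id"]

    return portfolio_id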

Try it out!

Get started using this solution, which is available in this AWSLabs GitHub repository.

  1. Clone the repository. It contains the AWS CloudFormation templates that we use in this walkthrough.
git clone https://github.com/awslabs/aws-pipeline-to-service-catalog.git
cd aws-pipeline-to-service-catalog
  2. Examine mapping.yaml under the portfolio-infrastructure folder. Replace the account number with the account number with which to share the portfolio. To share the portfolio with multiple other accounts, you can append more account numbers to the list. These account numbers must be valid AWS accounts, and must not include the account number in which this solution is being created. Optionally, edit this file and provide the values you want for the name, description, and owner properties. You can also choose to leave these values as they are, which creates a portfolio with the name, description, and owners described in the file.
  3. Optional – If you don’t have the AWS Command Line Interface (AWS CLI) installed, install it as described here. To prepare your access keys or assumed role to make calls to AWS, configure the AWS CLI as described here.
  4. Create a pipeline. This orchestrates continuous integration with the AWS CodeCommit repository created in step 2, and continuously syncs AWS Service Catalog with the code.
aws cloudformation deploy --template-file pipeline-to-service-catalog.yaml \
--stack-name service-catalog-sync-pipeline --capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides RepositoryName=blogs-pipeline-to-service-catalog

This creates the following resources.

  1. An AWS CodeCommit repository to push the code to. You can get the repository URL to push the code from the outputs of the stack that we just created. Connect, commit, and push code to this repository as described here.
  2. An S3 bucket, which holds the built artifacts (CloudFormation templates) and the Lambda function code.
  3. The AWS IAM roles and policies, with the least privileges required for this solution to work.
  4. An AWS CodeBuild project, which builds the Lambda function. This Python-based Lambda function has the logic, as explained earlier.
  5. A pipeline with the following four stages:
    • Stage-1: Checks out source from the repository created in step 2
    • Stage-2: Builds the Lambda function using AWS CodeBuild, which has the logic to sync the AWS Service Catalog products and portfolios with code.
    • Stage-3: Deploys the Lambda function using CloudFormation.
    • Stage-4: Invokes the Lambda function. Once this stage completes successfully, you see an AWS Service Catalog portfolio and two products created, as shown below.

 

Optional next steps!

You can deploy the Lambda function as we explained in this post to sync AWS Service Catalog products, portfolios, and permissions across multiple accounts that you own with version control. You can create a secure cross-account continuous delivery pipeline, as explained here. To do this:

  1. Delete all the resources created earlier.
aws cloudformation delete-stack --stack-name service-catalog-sync-pipeline
  2. Follow the steps in this blog post. The sample Lambda function, described here, is the same as what I explained in this post.

Conclusion

You can use AWS Lambda to make API calls to AWS Service Catalog to keep portfolios and products in sync with a mapping file. The code includes the CloudFormation templates and the mapping file and folder structure, which resembles the portfolios in AWS Service Catalog. When checked in to an AWS CodeCommit repository, it invokes the Lambda function, orchestrated by AWS CodePipeline.