Category: How-To


AWS Developer Tools Expands Integration to Include GitHub

AWS Developer Tools is a set of services that include AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Together, these services help you securely store and maintain version control of your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment. These services are designed to enable developers and IT professionals to rapidly and safely deliver software.

As part of our continued commitment to extend the AWS Developer Tools ecosystem to third-party tools and services, we’re pleased to announce AWS CodeStar and AWS CodeBuild now integrate with GitHub. This will make it easier for GitHub users to set up a continuous integration and continuous delivery toolchain as part of their release process using AWS Developer Tools.

In this post, I will walk through the following:

  • Integrating GitHub with AWS CodeStar, and working with commits, issues, and pull requests in the CodeStar dashboard.
  • Using GitHub pull requests to automatically trigger builds in AWS CodeBuild.

Prerequisites:

You’ll need an AWS account, a GitHub account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CodeStar, AWS CodeBuild, AWS CodePipeline, Amazon EC2, and Amazon S3.

 

Integrating GitHub with AWS CodeStar

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. Its unified user interface helps you easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, so you can start releasing code faster.

When AWS CodeStar launched in April of this year, it used AWS CodeCommit as the hosted source repository. You can now choose between AWS CodeCommit or GitHub as the source control service for your CodeStar projects. In addition, your CodeStar project dashboard lets you centrally track GitHub activities, including commits, issues, and pull requests. This makes it easy to manage project activity across the components of your CI/CD toolchain. Adding the GitHub dashboard view will simplify development of your AWS applications.

In this section, I will show you how to use GitHub as the source provider for your CodeStar projects. I’ll also show you how to work with recent commits, issues, and pull requests in the CodeStar dashboard.

Sign in to the AWS Management Console and from the Services menu, choose CodeStar. In the CodeStar console, choose Create a new project. You should see the Choose a project template page.

CodeStar Project

Choose an option by programming language, application category, or AWS service. I am going to choose the Ruby on Rails web application that will be running on Amazon EC2.

On the Project details page, you’ll now see the GitHub option. Type a name for your project, and then choose Connect to GitHub.

Project details

You’ll see a message requesting authorization to connect to your GitHub repository. When prompted, choose Authorize, and then type your GitHub account password.

Authorize

This connects your GitHub identity to AWS CodeStar through OAuth. You can always review your settings by navigating to your GitHub application settings.

Installed GitHub Apps

You’ll see AWS CodeStar is now connected to GitHub:

Create project

You can choose a public or private repository. GitHub offers free accounts for users and organizations working on public and open source projects and paid accounts that offer unlimited private repositories and optional user management and security features.

In this example, I am going to choose the public repository option. Edit the repository description, if you like, and then choose Next.

Review your CodeStar project details, and then choose Create Project. On the Choose an Amazon EC2 Key Pair page, choose your key pair, and then choose Create Project.

Key Pair

On the Review project details page, you’ll see Edit Amazon EC2 configuration. Choose this link to configure instance type, VPC, and subnet options. AWS CodeStar requires a service role to create and manage AWS resources and IAM permissions. This role will be created for you when you select the AWS CodeStar would like permission to administer AWS resources on your behalf check box.

Choose Create Project. It might take a few minutes to create your project and resources.

Review project details

When you create a CodeStar project, you’re added to the project team as an owner. If this is the first time you’ve used AWS CodeStar, you’ll be asked to provide the following information, which will be shown to others:

  • Your display name.
  • Your email address.

This information is used in your AWS CodeStar user profile. User profiles are not project-specific, but they are limited to a single AWS region. If you are a team member in projects in more than one region, you’ll have to create a user profile in each region.

User settings

User settings

Choose Next. AWS CodeStar will create a GitHub repository with your configuration settings (for example, https://github.com/biyer/ruby-on-rails-service).

When you integrate your integrated development environment (IDE) with AWS CodeStar, you can continue to write and develop code in your preferred environment. The changes you make will be included in the AWS CodeStar project each time you commit and push your code.

IDE

After setting up your IDE, choose Next to go to the CodeStar dashboard. Take a few minutes to familiarize yourself with the dashboard. You can easily track progress across your entire software development process, from your backlog of work items to recent code deployments.

Dashboard

After the application deployment is complete, choose the endpoint that will display the application.

Pipeline

This is what you’ll see when you open the application endpoint:

The Commit history section of the dashboard lists the commits made to the Git repository. Choosing a commit ID or the Open in GitHub option takes you directly to the corresponding page in your GitHub repository.

Commit history

Your AWS CodeStar project dashboard is where you and your team view the status of your project resources, including the latest commits to your project, the state of your continuous delivery pipeline, and the performance of your instances. This information is displayed on tiles that are dedicated to a particular resource. To see more information about any of these resources, choose the details link on the tile. The console for that AWS service will open on the details page for that resource.

Issues

You can also filter issues based on their status and the assigned user.

Filter

AWS CodeBuild Now Supports Building GitHub Pull Requests

CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can use prepackaged build environments to get started quickly or you can create custom build environments that use your own build tools.

We recently announced support for GitHub pull requests in AWS CodeBuild. This functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild. You can use the AWS CodeBuild or AWS CodePipeline consoles to run AWS CodeBuild. You can also automate the running of AWS CodeBuild by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the AWS CodeBuild Plugin for Jenkins.
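
If you prefer to script this, a minimal AWS CLI sketch for starting and inspecting a build follows; the project name is a placeholder for whatever you call your CodeBuild project.

# Start a build for an existing CodeBuild project (project name is hypothetical)
aws codebuild start-build --project-name my-github-project

# List recent builds for the project and inspect one of them
aws codebuild list-builds-for-project --project-name my-github-project
aws codebuild batch-get-builds --ids <build-id>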

AWS CodeBuild

In this section, I will show you how to trigger a build in AWS CodeBuild with a pull request from GitHub through webhooks.

Open the AWS CodeBuild console at https://console.aws.amazon.com/codebuild/. Choose Create project. If you already have a CodeBuild project, you can choose Edit project, and then follow along. CodeBuild can connect to AWS CodeCommit, Amazon S3, Bitbucket, and GitHub to pull source code for builds. For Source provider, choose GitHub, and then choose Connect to GitHub.

Configure

After you’ve successfully linked GitHub and your CodeBuild project, you can choose a repository in your GitHub account. CodeBuild also supports connections to any public repository. You can review your settings by navigating to your GitHub application settings.

GitHub Apps

On Source: What to Build, for Webhook, select the Rebuild every time a code change is pushed to this repository check box.

Note: You can select this option only if, under Repository, you chose Use a repository in my account.
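
If you script your project setup, the same webhook can be created with the AWS CLI once the project exists; a hedged sketch (the project name is a placeholder):

# Create the GitHub webhook that triggers builds when code is pushed
aws codebuild create-webhook --project-name my-github-project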

Source

In Environment: How to build, for Environment image, select Use an image managed by AWS CodeBuild. For Operating system, choose Ubuntu. For Runtime, choose Base. For Version, choose the latest available version. For Build specification, you can provide a collection of build commands and related settings, in YAML format (buildspec.yml) or you can override the build spec by inserting build commands directly in the console. AWS CodeBuild uses these commands to run a build. In this example, the output is the string “hello.”
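
If you keep the build spec in your repository instead of inserting commands in the console, a minimal buildspec.yml for this “hello” example might look like the following sketch, written here from the shell for convenience:

# Write a minimal buildspec that just prints "hello"
cat > buildspec.yml <<'EOF'
version: 0.2
phases:
  build:
    commands:
      - echo "hello"
EOF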

Environment

On Artifacts: Where to put the artifacts from this build project, for Type, choose No artifacts. (This is also the type to choose if you are just running tests or pushing a Docker image to Amazon ECR.) You also need an AWS CodeBuild service role so that AWS CodeBuild can interact with dependent AWS services on your behalf. Unless you already have a role, choose Create a role, and for Role name, type a name for your role.

Artifacts

If you expand Show advanced settings, you’ll see options for customizing your build (in this example, leave them at their defaults), including:

  • A build timeout.
  • A KMS key to encrypt all the artifacts that the builds for this project will use.
  • Options for building a Docker image.
  • Elevated permissions during your build action (for example, accessing Docker inside your build container to build a Dockerfile).
  • Resource options for the build compute type.
  • Environment variables (built-in or custom). For more information, see Create a Build Project in the AWS CodeBuild User Guide.

Advanced settings

You can use the AWS CodeBuild console to create a parameter in Amazon EC2 Systems Manager. Choose Create a parameter, and then follow the instructions in the dialog box. (In that dialog box, for KMS key, you can optionally specify the ARN of an AWS KMS key in your account. Amazon EC2 Systems Manager uses this key to encrypt the parameter’s value during storage and decrypt during retrieval.)
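
The same parameter can also be created from the AWS CLI; a sketch with a hypothetical parameter name and an optional customer-managed KMS key:

# Create a SecureString parameter; omit --key-id to use the default SSM key
aws ssm put-parameter \
  --name /codebuild/db_password \
  --value "example-secret" \
  --type SecureString \
  --key-id arn:aws:kms:us-east-1:<account-id>:key/<key-id>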

Create parameter

Choose Continue. On the Review page, either choose Save and build or choose Save to run the build later.

Choose Start build. When the build is complete, the Build logs section should display detailed information about the build.

Logs

To demonstrate a pull request, I will fork the repository as a different GitHub user, make commits to the forked repo, check in the changes to a newly created branch, and then open a pull request.

Pull request

As soon as the pull request is submitted, you’ll see CodeBuild start executing the build.

Build

GitHub sends an HTTP POST payload to the webhook’s configured URL (highlighted here), which CodeBuild uses to download the latest source code and execute the build phases.

Build project

If you expand the Show all checks option for the GitHub pull request, you’ll see that CodeBuild has completed the build, all checks have passed, and a deep link is provided in Details, which opens the build history in the CodeBuild console.

Pull request

Summary:

In this post, I showed you how to use GitHub as the source provider for your CodeStar projects and how to work with recent commits, issues, and pull requests in the CodeStar dashboard. I also showed you how you can use GitHub pull requests to automatically trigger a build in AWS CodeBuild — specifically, how this functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild.


About the author:

Balaji Iyer is an Enterprise Consultant for the Professional Services Team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly scalable distributed systems, serverless architectures, large scale migrations, operational security, and leading strategic AWS initiatives. Before he joined Amazon, Balaji spent more than a decade building operating systems, big data analytics solutions, mobile services, and web applications. In his spare time, he enjoys experiencing the great outdoors and spending time with his family.

 

Using AWS CodePipeline, AWS CodeBuild, and AWS Lambda for Serverless Automated UI Testing

Testing the user interface of a web application is an important part of the development lifecycle. In this post, I’ll explain how to automate UI testing using serverless technologies, including AWS CodePipeline, AWS CodeBuild, and AWS Lambda.

I built a website for UI testing that is hosted in S3. I used Selenium to perform cross-browser UI testing on Chrome, Firefox, and PhantomJS, a headless WebKit browser with Ghost Driver, an implementation of the WebDriver Wire Protocol. I used Python to create test cases for ChromeDriver, FirefoxDriver, or PhantomJSDriver, based on the browser against which the test is being executed.

Resources referred to in this post, including the AWS CloudFormation template, test and status websites hosted in S3, AWS CodeBuild build specification files, AWS Lambda function, and the Python script that performs the test are available in the serverless-automated-ui-testing GitHub repository.


Ensuring Security of Your Code in a Cross-Region/Cross-Account Deployment Solution

There are multiple ways you can protect your data while it is in transit and at rest. You can protect your data in transit by using SSL or by using client-side encryption. AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create, control, rotate, and use your encryption keys. AWS KMS allows you to create custom keys. You can then share these keys with AWS Identity and Access Management (IAM) users and roles in your AWS account or in an AWS account owned by someone else.

In my previous post, I described a solution for building a cross-region/cross-account code deployment solution on AWS. In this post, I describe a few options for protecting your source code as it travels between regions and between AWS accounts.

To recap, you deployed the infrastructure as shown in the following diagram.

  • You had your development environment running in Region A in AWS Account A.
  • You had your QA environment running in Region B in AWS Account B.
  • You had a staging or production environment running in Region C in AWS Account C.

An update to the source code in Region A triggered validation and deployment of source code changes in the pipeline in Region A. When the source code passed successfully through all of its AWS CodePipeline stages, a Lambda function was invoked, which copied the source code into an S3 bucket in Region B. After the source code was copied into this bucket, it triggered a similar chain of processes through the AWS CodePipeline stages in Region B.

 

Ensuring Security for Your Source Code

You might choose to encrypt the source code .zip file before uploading to the S3 bucket that is in Account A, Region A, using Amazon S3 server-side encryption:

1. Using the Amazon S3 service master key

Refer back to the Lambda function created for you by the CloudFormation stack in the previous post. Go to the AWS Lambda console and your function name should be <stackname>-CopytoDest-XXXXXXX.

 

 

Use the following parameter for the copyObject function – ServerSideEncryption: ‘AES256’

Note: The set-up already uses this option by default.

The copyObject function decrypts the .zip file and copies the object into account B.

 

2. Using an AWS KMS master key

Because KMS keys are scoped to a region, copying the object (the source code .zip file) into a different account and region requires cross-account access to the KMS key. This must occur before Amazon S3 can use that key for encryption and decryption.

Use the following parameters for the copyObject function – ServerSideEncryption: 'aws:kms' and SSEKMSKeyId: '<key-id>'
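
For a quick way to verify either encryption setting outside the Lambda function, here is a hedged AWS CLI equivalent of the copy; the bucket names, object key, and key ARN are placeholders.

# Copy with SSE-S3 (AES256), the default described in option 1
aws s3api copy-object \
  --copy-source source-bucket-region-a/source-code.zip \
  --bucket dest-bucket-region-b \
  --key source-code.zip \
  --server-side-encryption AES256

# Copy with SSE-KMS, as described in option 2
aws s3api copy-object \
  --copy-source source-bucket-region-a/source-code.zip \
  --bucket dest-bucket-region-b \
  --key source-code.zip \
  --server-side-encryption aws:kms \
  --ssekms-key-id arn:aws:kms:<regionB>:<AccountA ID>:key/<key-id>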

To enable cross-account access for the KMS key and use it in the Lambda function:

a. Create a KMS key in the source account (Account A), region B – for example, XRDepTestKey

Note: This key must be created in region B. This is because the source code will be copied in an S3 bucket that exists in region B and the KMS key must be accessible in this region.
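
If you prefer the CLI to the console for this step, a sketch follows; the region is a placeholder for region B, and the alias matches the example name.

# Create the KMS key in region B and give it a friendly alias
key_id=$(aws kms create-key --region <regionB> \
  --description "Key for cross-region/cross-account source code copy" \
  --query KeyMetadata.KeyId --output text)
aws kms create-alias --region <regionB> \
  --alias-name alias/XRDepTestKey --target-key-id "$key_id"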

b. To enable the Lambda function to be able to use this KMS key, add lambdaS3CopyRole as a user for this key. The Lambda function and associated role and policies are defined in the CloudFormation template.

c. Note the ARN of the key that you generated.

d. Provide the external account (Account B) permission to use this key. For more information, see Sharing custom encryption keys securely between accounts.

arn:aws:iam::<Account B ID>:root

e. In Account B, delegate the permission to use this key to the role that AWS CodePipeline is using. In the CloudFormation template, you can see that CodePipelineTrustRole is used. Attach the following policy to the role. Ensure that you update the region and Account ID accordingly.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:<regionB>:<AccountA ID>:key/<KMS Key in Region B ID>"
            ]
        },
        {
            "Sid": "AllowAttachmentOfPersistentResources",
            "Effect": "Allow",
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": [
                "arn:aws:kms:<regionB>:<AccountA ID>:key/<KMS Key in Region B ID>"
            ],
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": true
                }
          }
        }
    ]
}
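
Assuming the policy above is saved locally as kms-use-policy.json, it can be attached with a command like the following; the inline policy name is illustrative, and the role name comes from the CloudFormation template.

# Attach the KMS usage policy to the CodePipeline role in Account B
aws iam put-role-policy \
  --role-name CodePipelineTrustRole \
  --policy-name AllowUseOfRegionBKmsKey \
  --policy-document file://kms-use-policy.json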

f. Update the Lambda function, CopytoDest, to use the following in the parameter definition.

 

ServerSideEncryption: 'aws:kms',
SSEKMSKeyId: '<key-id>'
// ServerSideEncryption: 'AES256'

And there you go! You have enabled secure delivery of your source code into your cross-region/cross-account deployment solution.

About the Author


BK Chaurasiya is a Solutions Architect with Amazon Web Services. He provides technical guidance, design advice, and thought leadership to some of the largest and most successful AWS customers and partners.

Create Multiple Builds from the Same Source Using Different AWS CodeBuild Build Specification Files

In June 2017, AWS CodeBuild announced you can now specify an alternate build specification file name or location in an AWS CodeBuild project.

In this post, I’ll show you how to use different build specification files in the same repository to create different builds. You’ll find the source code for this post in our GitHub repo.

Requirements

The AWS CLI must be installed and configured.

Solution Overview

I have created a C program (cbsamplelib.c) that will be used to create a shared library and another utility program (cbsampleutil.c) to use that library. I’ll use a Makefile to compile these files.

I need to put this sample application in RPM and DEB packages so end users can easily deploy them. I have created a build specification file for RPM. It will use make to compile this code and the RPM specification file (cbsample.rpmspec) configured in the build specification to create the RPM package. Similarly, I have created a build specification file for DEB. It will create the DEB package based on the control specification file (cbsample.control) configured in this build specification.
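
One way to use the two build specification files is to point each build at its own file; a hedged sketch using the CLI override (the project and file names are placeholders and may differ from the ones used in the repository):

# Run the same source through two different build specs
aws codebuild start-build --project-name cbsample --buildspec-override buildspec-rpm.yml
aws codebuild start-build --project-name cbsample --buildspec-override buildspec-deb.yml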


Validating AWS CloudFormation Templates

For their continuous integration and continuous deployment (CI/CD) pipeline path, many companies use tools like Jenkins, Chef, and AWS CloudFormation. Usually, the process is managed by two or more teams. One team is responsible for designing and developing an application, CloudFormation templates, and so on. The other team is generally responsible for integration and deployment.

One of the challenges that a CI/CD team has is to validate the CloudFormation templates provided by the development team. Validation provides early warning about any incorrect syntax and ensures that the development team follows company policies in terms of security and the resources created by CloudFormation templates.

In this post, I focus on the validation of AWS CloudFormation templates for syntax as well as in the context of business rules.
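
Syntax validation on its own is a single AWS CLI call; a minimal sketch (the template file name is a placeholder), with the business-rule checks layered on top as described in the rest of this post:

# Fails with a descriptive error if the template is not valid CloudFormation syntax
aws cloudformation validate-template --template-body file://template.yaml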


Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)
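
Under the hood, the Staging and Production stages rely on CloudFormation change sets; a hedged sketch of the equivalent CLI calls (stack, change set, and template names are placeholders):

# Create a change set for review, then execute it after manual approval
aws cloudformation create-change-set \
  --stack-name nested-stacks-demo \
  --change-set-name pipeline-change-set \
  --template-body file://master.yaml \
  --capabilities CAPABILITY_NAMED_IAM
aws cloudformation execute-change-set \
  --stack-name nested-stacks-demo \
  --change-set-name pipeline-change-set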


How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or to fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.
The AMI Builder CloudFormation template can be launched in the following regions:

  • N. Virginia (us-east-1)
  • Ireland (eu-west-1)

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (Failed at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose the Outputs tab.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save its findings to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t fail if something goes wrong during the AMI creation process. We have to tell AWS CodeBuild to stop by informing it that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts sequence instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder to not only send email after the AMI has been created, but also to hook up any of the supported targets to react to the AMI builder event. This event publication means you can decouple the actions you might take after AMI completion from Packer and plug in other actions, as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}
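
If you were wiring this up by hand instead of through the CloudFormation template, the rule and its SNS target could be created with CLI calls along these lines; the rule name, topic ARN, and file name are placeholders.

# Create the rule from the event pattern above (saved as rule.json) and point it at an SNS topic
aws events put-rule --name AmiBuilderNotify --event-pattern file://rule.json
aws events put-targets --rule AmiBuilderNotify \
  --targets "Id"="sns-email","Arn"="arn:aws:sns:eu-west-1:<account-id>:ami-builder-topic"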

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the project input deviates from the allowed characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

In Part 1, we used the shell provisioner to apply OS patches. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles to make our target machine conform to our standards.

Finally, Packer uses the shell provisioner to remove temporary keys before it creates an AMI from the temporary EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update the EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.

CloudWatch Events allows the AMI builder to decouple AMI configuration and creation so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to add events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Building a Continuous Delivery Pipeline for AWS Service Catalog (Sync AWS Service Catalog with Version Control)

AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multitier application architectures. You can use AWS Service Catalog to centrally manage commonly deployed IT services. It also helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

However, as the number of Service Catalog portfolios and products increases across an organization, centralized management and scaling can become a challenge. In this blog post, I walk you through a solution that simplifies management of AWS Service Catalog portfolios and related products. This solution also enables portfolio sharing with other accounts, portfolio tagging, and granting access to users. Finally, the solution delivers updates to the products using a continuous delivery in AWS CodePipeline. This enables you to maintain them in version control, thereby adopting “Infrastructure as Code” practices.

Solution overview

  1. Authors (developers, operations, architects, etc.) create the AWS CloudFormation templates based on the needs of their organizations. These templates are the reusable artifacts. They can be shared among various teams within the organizations. You can name these templates product-A.yaml or product-B.yaml. For example, if the template creates an Amazon VPC that is based on organization needs, as described in the Amazon VPC Architecture Quick Start, you can save it as product-vpc.yaml.

The authors also define a mapping.yaml file, which includes the list of products that you want to include in the portfolio and related metadata. The mapping.yaml file is the core configuration component of this solution. This file defines your portfolio and its associated permissions and products. This configuration file determines how your portfolio will look in AWS Service Catalog, after the solution deploys it. A sample mapping.yaml is described here. Configuration properties of this mapping.yaml are explained here.

 

  2. Product template files and the mappings are committed to version control. In this example, we use AWS CodeCommit. The folder structure on the file system looks like the following:
    • portfolio-infrastructure (folder name)
      – product-a.yaml
      – product-b.yaml
      – product-c.yaml
      – mapping.yaml
    • portfolio-example (folder name)
      – product-c.yaml
      – product-d.yaml
      – mapping.yaml

    The name of the folder must start with portfolio- because the AWS Lambda function iterates through all folders whose names start with portfolio-, and syncs them with AWS Service Catalog.

    Checking in any code in the repository triggers an AWS CodePipeline orchestration and invokes the Lambda function.

  3. The Lambda function downloads the code from version control and iterates through all folders with names that start with portfolio-. The function gets a list of all existing portfolios in AWS Service Catalog. Then it checks whether the display name of the portfolio matches the “name” property in the mapping.yaml under each folder. If the name doesn’t match, a new portfolio is created. If the name matches, the description and owner fields are updated and synced with what is in the file. There must be only one mapping.yaml file in each folder with a name starting with portfolio-.
  4. and 5. The Lambda function iterates through the list of products in the mapping.yaml file. If the name of a product matches any of the products already associated with the portfolio, a new version of the product is created and is associated with the portfolio. If the name of the product doesn’t match, a new product is created. The CloudFormation template file (as specified in the template property for that product in the mapping file) is uploaded to Amazon S3 with a unique ID. A new version of the product is created and is pointed to the unique S3 path.
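
The Lambda function’s sync logic maps onto a handful of AWS Service Catalog API operations. The following CLI sketch shows the same sequence for a single portfolio and product; all names, IDs, and the S3 URL are placeholders, and the exact parameter syntax may differ from what the function uses.

# Look up existing portfolios, and create one if the display name from mapping.yaml is not found
aws servicecatalog list-portfolios
aws servicecatalog create-portfolio --display-name "portfolio-infrastructure" \
  --provider-name "Infrastructure team" --description "Example portfolio"

# Create a product whose first version points to the template uploaded to S3,
# then associate the product with the portfolio
aws servicecatalog create-product --name "product-a" --owner "Infrastructure team" \
  --product-type CLOUD_FORMATION_TEMPLATE \
  --provisioning-artifact-parameters 'Name=v1,Type=CLOUD_FORMATION_TEMPLATE,Info={LoadTemplateFromURL=https://s3.amazonaws.com/<bucket>/<unique-id>/product-a.yaml}'
aws servicecatalog associate-product-with-portfolio \
  --product-id <product-id> --portfolio-id <portfolio-id>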

Try it out!

Get started using this solution, which is available in this AWSLabs GitHub repository.

  1. Clone the repository. It contains the AWS CloudFormation templates that we use in this walkthrough.
git clone https://github.com/awslabs/aws-pipeline-to-service-catalog.git
cd aws-pipeline-to-service-catalog
  2. Examine mapping.yaml under the portfolio-infrastructure folder. Replace the account number with the number of the account with which you want to share the portfolio. To share the portfolio with multiple other accounts, you can append more account numbers to the list. These account numbers must be valid AWS accounts, and must not include the account number in which this solution is being created. Optionally, edit this file and provide the values you want for the name, description, and owner properties. You can also choose to leave these values as they are, which creates a portfolio with the name, description, and owners described in the file.
  3. Optional – If you don’t have the AWS Command Line Interface (AWS CLI) installed, install it as described here. To prepare your access keys or assumed role to make calls to AWS, configure the AWS CLI as described here.
  4. Create a pipeline. This orchestrates continuous integration with the AWS CodeCommit repository created in step 2, and continuously syncs AWS Service Catalog with the code.
aws cloudformation deploy --template-file pipeline-to-service-catalog.yaml \
--stack-name service-catalog-sync-pipeline --capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides RepositoryName=blogs-pipeline-to-service-catalog

This creates the following resources.

  1. An AWS CodeCommit repository to push the code to. You can get the repository URL to push the code from the outputs of the stack that we just created. Connect, commit, and push code to this repository as described here.
  2. An S3 bucket, which holds the built artifacts (CloudFormation templates) and the Lambda function code.
  3. The AWS IAM roles and policies, with least privileges for this solution to work.
  4. An AWS CodeBuild project, which builds the Lambda function. This Python-based Lambda function has the logic, as explained earlier.
  5. A pipeline with the following four stages:
    • Stage-1: Checks out source from the repository created in step 2.
    • Stage-2: Builds the Lambda function using AWS CodeBuild, which has the logic to sync the AWS Service Catalog products and portfolios with code.
    • Stage-3: Deploys the Lambda function using CloudFormation.
    • Stage-4: Invokes the Lambda function. Once this stage completes successfully, you see an AWS Service Catalog portfolio and two products created, as shown below.

 

Optional next steps!

You can deploy the Lambda function as we explained in this post to sync AWS Service Catalog products, portfolios, and permissions across multiple accounts that you own with version control. You can create a secure cross-account continuous delivery pipeline, as explained here. To do this:

  1. Delete all the resources created earlier.
aws cloudformation delete-stack --stack-name service-catalog-sync-pipeline
  2. Follow the steps in this blog post. The sample Lambda function, described here, is the same as what I explained in this post.

Conclusion

You can use AWS Lambda to make API calls to AWS Service Catalog to keep portfolios and products in sync with a mapping file. The code includes the CloudFormation templates and the mapping file and folder structure, which resembles the portfolios in AWS Service Catalog. When checked in to an AWS CodeCommit repository, it invokes the Lambda function, orchestrated by AWS CodePipeline.

Database Continuous Integration and Automated Release Management Workflow with AWS and Datical DB

Just as a herd can move only as fast as its slowest member, companies must increase the speed of all parts of their release process, especially the database change process, which is often manual. One bad database change can bring down an app or compromise data security.

We need to make database code deployment as fast and easy as application release automation, while eliminating risks that cause application downtime and data security vulnerabilities. Let’s take a page from the application development playbook and bring a continuous deployment approach to the database.

By creating a continuous deployment database, you can:

  • Discover mistakes more quickly.
  • Deliver updates faster and frequently.
  • Help developers write better code.
  • Automate the database release management process.

The database deployment package can be promoted automatically with application code changes. With database continuous deployment, application development teams can deliver smaller, less risky deployments, making it possible to respond more quickly to business or customer needs.

In our previous post, Building End-to-End Continuous Delivery and Deployment Pipelines in AWS, we walked through steps for implementing a continuous deployment and automated delivery pipeline for your application.

In this post, we walk through steps for building a continuous deployment workflow for databases using AWS CodePipeline (a fully managed continuous delivery service) and Datical DB (a database release automation application). We use AWS CodeCommit for source code control and Amazon RDS for database hosting to demonstrate end-to-end database change management — from check-in to final deployment.

As part of this example, we will show how a database change that does not meet standards is rejected automatically and actionable feedback is provided to the developer. Just like a code unit test, Datical DB evaluates changes and enforces your organization’s standards. In the sample use case, database table indexes of more than three columns are disallowed. In some cases, this type of index can slow performance.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CodePipeline, AWS CodeCommit, Amazon RDS, Amazon EC2, and Amazon S3.

From Datical DB, you’ll need access to the software.datical.com portal, your license key, a database, and JDBC drivers. You can request a free trial of Datical here.

Overview

Here are the steps:

  1. Install and configure Datical DB.
  2. Create an RDS database instance running the Oracle database engine.
  3. Configure Datical DB to manage database changes across your software development life cycle (SDLC).
  4. Set up database version control using AWS CodeCommit.
  5. Set up a continuous integration server to stage database changes.
  6. Integrate the continuous integration server with Datical DB.
  7. Set up automated release management for your database through AWS CodePipeline.
  8. Enforce security governance and standards with the Datical DB Rules Engine.

1. Install and configure Datical DB

Navigate to https://software.datical.com and sign in with your credentials. From the left navigation menu, expand the Common folder, and then open the Datical_DB_Folder. Choose the latest version of the application by reviewing the date suffix in the name of the folder. Download the installer for your platform — Windows (32-bit or 64-bit) or Linux (32-bit or 64-bit).

Verify the JDK Version

In a terminal window, run the following command to ensure you’re running JDK version 1.7.x or later.

# java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) Client VM (build 24.75-b04, mixed mode, sharing)

The Datical DB installer contains a graphical (GUI) and command line (CLI) interface that can be installed on Windows and Linux operating systems.

Install Datical DB (GUI)

  1. Double-click on the installer
  2. Follow the prompts to install the application.
  3. When prompted, type the path to a valid license.

Install JDBC drivers

  1. Open the Datical DB application.
  2. From the menu, choose Help, and then choose Install New Software.
  3. From the Work with drop-down list, choose Database Drivers – http://update.datical.com/drivers/updates.
  4. Follow the prompts to install the drivers.

Install Datical DB (CLI)

Datical DB (CLI only) can be installed on a headless Linux system. Select the correct 32-bit or 64-bit Linux installer for your system.

  1. Run the installer as root and install it to /usr/local/DaticalDB.
    sudo java -jar ../installers/<Datical Installer>.jar -console
  2. Follow the prompts to install the application.
  3. When prompted, type the path to a valid license.

Install JDBC drivers

  1. Copy JDBC drivers to /usr/local/DaticalDB/jdbc_drivers.
    sudo mkdir /usr/local/DaticalDB/jdbc_drivers
    copy JDBC Drivers from software.datical.com to /usr/local/DaticalDB/jdbc_drivers
  2. Copy the license file to /usr/local/DaticalDB/repl.
    sudo cp <license_filename> /usr/local/DaticalDB/repl
    sudo chmod 777 /usr/local/DaticalDB/repl/<license_filename>

2. Create an RDS instance running the Oracle database engine

Datical DB supports database engines like Oracle, MySQL, Microsoft SQL Server, PostgreSQL, and IBM DB2. The example in this post uses a DB instance running Oracle. To create a DB instance running Oracle, follow these steps.

Make sure that you can access the Oracle port (1521) from the location where you will be using Datical DB. Just like SQLPlus or other database management tools, Datical DB must be able to connect to the Oracle port. When you configure the security group for your RDS instance, make sure you can access port 1521 from your location.
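
If your workstation reaches the database over the Internet, the ingress rule can also be added with the AWS CLI; the security group ID and source CIDR below are placeholders.

# Allow inbound Oracle traffic (port 1521) from your IP to the RDS instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 1521 \
  --cidr 203.0.113.10/32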

3. Manage database changes across the SDLC

This one-time process is required to ensure databases are in sync so that you can manage database changes across the SDLC:

  1. Create a Datical DB deployment plan with connections to the databases to be managed.
  2. Baseline the first database (DEV/CI). This must be the original or best configured database – your reference database.
  3. For each additional database (TEST and PROD):
    a. Compare databases to ensure the application schemas are in sync.
    b. Resolve any differences.
    c. Perform a change log sync to get each setup for Datical DB management.

Datical DB creates an initial model change log from one of the databases. It also creates in each database a metadata table named DATABASECHANGELOG that will be used to track the state. Now the databases look like this:

Datical DB Model

Note: In the preceding figure, the Datical DB model and metadata table are a representation of the actual model.

Create a deployment plan

    1. In Datical DB, right-click Deployment Plans, and choose New.
    2. On the New Deployment Plan page, type a name for your project (for example, AWS-Sample-Project), and then choose Next.
    3. Select Oracle 11g Instant Client, type a name for the database (for example, DevDatabase), and then choose Next.
    4. On the following page, provide the database connection information.
      a. For Hostname, enter the RDS endpoint.
      b. Select SID, and then type ORCL.
      c. Type the user name and password used to connect to the RDS instance running Oracle.
      d. Before you choose Finish, choose the Test Connection button.

When Datical DB creates the project, it also creates a baseline snapshot that captures the current state of the database schema. Datical DB stores the snapshot in Datical change sets for future forecasting and modification.

Create a database change set

A change set describes the change/refactoring to apply to the database.
From the AWS-Sample-Project project in the left pane, right-click Change Log, select New, and then select Change Set. Choose the type of change to make, and then choose Next. In this example, we’re creating a table. For Table Name, type a name. Choose Add Column, and then provide information to add one or more columns to the new table. Follow the prompts, and then choose Finish.

Add Columns

The new change set will be added at the end of your current change log. You can tag change sets with a sprint label. Depending on the environment, changes can be deployed based on individual labels or by the higher-level grouping construct.
Datical DB also provides an option to load SQL scripts into a database, where the change sets are labeled and captured as objects. This makes them ready for deployment in other environments.

Best practices for continuous delivery

Change sets are stored in an XML file inside the Datical DB project. The file, changelog.xml, is stored inside the Changelog folder. (In the Datical DB UI, it is called Change Log.)

Just like any other files stored in your source code repository, the Datical DB change log can be branched and merged to support agile software development, where individual work spaces are isolated until changes are merged into the parent branch.

To implement this best practice, your Datical DB project should be checked into the same location as your application source code. That way, branches and merges will be applied to your Datical DB project automatically. Use unique change set IDs to avoid collisions with other scrum teams.

4. Set up database version control using AWS CodeCommit

To create a new CodeCommit repository, follow these steps.

Note: On some versions of Windows and Linux, you might see a pop-up dialog box asking for your user name and password. This is the built-in credential management system, but it is not compatible with the credential helper for AWS CodeCommit. Choose Cancel.

Commit the contents located in the Datical working directory (for example, ~/datical/AWS-Sample-Project) to the AWS CodeCommit repository.
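
A hedged sketch of that commit, assuming the repository is named datical-sample and lives in us-east-1 (your repository URL will differ):

# Clone the empty CodeCommit repository, copy the Datical project into it, and push
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/datical-sample
cp -r ~/datical/AWS-Sample-Project/* datical-sample/
cd datical-sample
git add .
git commit -m "Add Datical DB project"
git push origin master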

5. Set up a continuous integration server to stage database changes

In this example, Jenkins is the continuous integration server. To create a Jenkins server, follow these steps. Be sure your instance security group allows port 8080 access.

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
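
The wget line above only registers the Jenkins yum repository. The remaining install steps typically look like the following sketch; the key URL and Java package name may vary with your distribution and the Jenkins version you install.

# Import the Jenkins package signing key, then install and start Jenkins
sudo rpm --import https://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install -y java-1.8.0-openjdk jenkins
sudo service jenkins start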

For more information about installing Jenkins, see the Jenkins wiki.

After setup, connect to your Jenkins server, and create a job.

  1. Install the following Jenkins plugins:

    1. AWS CodeCommit plugin
    2. DaticalDB4Jenkins plugin
    3. Hudson Post Build Task plugin
    4. HTML Publisher plugin
  2. To configure Jenkins for AWS CodeCommit, follow these steps.
  3. To configure Jenkins with Datical DB, navigate to Jenkins, choose Manage Jenkins, and then choose Configure System. In the Datical DB section, provide the requested directory information.

For example:

Add a build step:

Go to your newly created Jenkins project and choose Configure. On the Build tab, under Build, choose Add build step, and choose Datical DB.

In Project Dir, enter the Datical DB project directory (in this example, /var/lib/jenkins/workspace/demo/MyProject). You can use Jenkins environment variables like $WORKSPACE. The first build action is Check Drivers. This allows you to verify that Datical DB and Jenkins are configured correctly.

Choose Save. Choose Build Now to test the configuration.

After you’ve verified the drivers are installed, add forecast and deploy steps.

Add forecast and deploy steps:


Choose Save. Then choose Build Now to test the configuration.

6. Configure the continuous integration server to publish Datical DB reports

In this step, we will configure Jenkins to publish Datical DB forecast and HTML reports. In your Jenkins project, select Delete workspace before build starts.

Add post-build steps

1. Archive the Datical DB reports, logs, and snapshots

Archive
To expose Datical DB reports in Jenkins, you must create a post-build task step that copies the forecast and deployment HTML reports to a location from which they can be published, and then publish the HTML reports.

2. Copy the forecast and deploy HTML reports

mkdir /var/lib/jenkins/workspace/Demo/MyProject/report
cp -rv /var/lib/jenkins/workspace/Demo/MyProject/Reports/*/*/*/forecast*/* /var/lib/jenkins/workspace/Demo/MyProject/report 2>/dev/null
cp -rv /var/lib/jenkins/workspace/Demo/MyProject/Reports/*/*/*/deploy*/deployReport.html /var/lib/jenkins/workspace/Demo/MyProject/report 2>/dev/null

Post build task

 

3. Publish HTML reports

Use the information in the following screen shot. Depending on the location where you configured Jenkins to build, your details might be different.

Note: Datical DB HTML reports use CSS, which Jenkins' default Content-Security-Policy header blocks in published HTML, so update the JENKINS_JAVA_OPTIONS in your config file as follows:

Edit /etc/sysconfig/jenkins and set JENKINS_JAVA_OPTIONS to:

JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dhudson.model.DirectoryBrowserSupport.CSP= "
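After editing the file, restart Jenkins so the new options take effect (this assumes the service installed through yum earlier):

sudo service jenkins restart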

7. Enable automated release management for your database through AWS CodePipeline

To create an automated release process for your databases using AWS CodePipeline, follow these instructions.

  1. Sign in to the AWS Management Console and open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline.
  2. On the introductory page, choose Get started. If you see the All pipelines page, choose Create pipeline.
  3. In Step 1: Name, in Pipeline name, type DatabasePipeline, and then choose Next step.
  4. In Step 2: Source, in Source provider, choose AWS CodeCommit. In Repository name, choose the name of the AWS CodeCommit repository you created earlier. In Branch name, choose the name of the branch that contains your latest code update. Choose Next step.

  5. In Step 3: Build, choose Jenkins.

To complete the deployment workflow, follow steps 6 through 9 in the Create a Simple Pipeline Tutorial.
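Once the pipeline exists, you can also start and inspect runs from the AWS CLI, using the pipeline name chosen above:

aws codepipeline start-pipeline-execution --name DatabasePipeline
aws codepipeline get-pipeline-state --name DatabasePipeline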

8. Enforce database standards and compliance with the Datical DB Rules Engine

The Datical DB Dynamic Rules Engine automatically inspects the Virtual Database Model to make sure that proposed database code changes are safe and in compliance with your organization’s database standards. The Rules Engine also makes it easy to identify complex changes that warrant further review and empowers DBAs to efficiently focus only on the changes that require their attention. It also provides application developers with a self-service validation capability that uses the same automated build process established for the application. The consistent evaluation provided by the Dynamic Rules Engine removes uncertainty about what is acceptable and empowers application developers to write safe, compliant changes every time.

Earlier, you created a Datical DB project with no changes. To demonstrate rules, you will now create changes that violate a rule.

First, create a table with four columns. Then try to create an index on the table that comprises all four columns. For some databases, having more than three columns in an index can cause performance issues. For this reason, create a rule that will prevent the creation of an index on more than three columns, long before the change is proposed for production. Like a unit test that will fail the build, the Datical DB Rules Engine fails the build at the forecast step and provides feedback to the development team about the rule and the change to fix.

Create a Datical DB rule

To create a Datical DB rule, open the Datical DB UI and navigate to your project. Expand the Rules folder. In this example, you will create a rule in the Forecast folder.

Right-click the Forecast folder, and then select Create Rules File. In the dialog box, type a unique file name for your rule. Use a .drl extension.


In the editor window that opens, type the following:

package com.datical.hammer.core.forecast
import java.util.Collection;
import java.util.List;
import java.util.Arrays;
import java.util.ArrayList;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.collections.ListUtils;
import com.datical.db.project.Project;
import com.datical.hammer.core.rules.Response;
import com.datical.hammer.core.rules.Response.ResponseType;

// ************************************* Models *************************************

// Database Models

import com.datical.dbsim.model.DbModel;
import com.datical.dbsim.model.Schema;
import com.datical.dbsim.model.Table;
import com.datical.dbsim.model.Index;
import com.datical.dbsim.model.Column;
import org.liquibase.xml.ns.dbchangelog.CreateIndexType;
import org.liquibase.xml.ns.dbchangelog.ColumnType;


/* @return false if validation fails; true otherwise */
function boolean validate(List columns)
{
    // FAIL if more than 3 columns are included in the new index
    if (columns.size() > 3)
        return false;
    else
        return true;
}

// Fail the forecast when a proposed CreateIndex change includes more than three columns
rule "Index Too Many Columns Error"
    salience 1
    when
        $createIndex : CreateIndexType($indexName: indexName, $columns: columns, $tableName: tableName, $schemaName: schemaName)
        eval(!validate($columns))
    then
        String errorMessage = "The new index [" + $indexName + "] contains more than 3 columns.";
        insert(new Response(ResponseType.FAIL, errorMessage, drools.getRule().getName()));
end

Save the new rule file, and then right-click the Forecast folder, and select Check Rules. You should see “Rule Validation returned no errors.”

Now check your rule into source code control and request a new build. The build will fail, which is expected. Go back to Datical DB, and change the index to comprise only three columns. After your check-in, you will see a successful deployment to your RDS instance.
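Checking the rule (or the subsequent fix) in is an ordinary Git commit; the rule file name and its path inside the repository below are illustrative:

git add Rules/Forecast/index-column-limit.drl
git commit -m "Fail forecast when an index has more than three columns"
git push origin master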

The following forecast report shows the Datical DB rule violation:

To integrate database continuous delivery into your existing continuous delivery process, consider creating a separate project for your database changes that runs the Datical DB forecast at the same time unit tests run on your application code. This will catch database changes that violate standards before deployment.

Summary

In this post, you learned how to build a modern database continuous integration and automated release management workflow on AWS. You also saw how Datical DB can be seamlessly integrated with AWS services to enable database release automation, while eliminating risks that cause application downtime and data security vulnerabilities. This fully automated delivery mechanism for databases can accelerate every organization’s ability to deploy software rapidly and reliably while improving productivity, performance, compliance, and auditability, and increasing data security. These methodologies simplify process-related overhead and make it possible for organizations to serve their customers efficiently and compete more effectively in the market.

I hope you enjoyed this post. If you have any feedback, please leave a comment below.


About the Authors

 

Balaji Iyer

Balaji Iyer is an Enterprise Consultant for the Professional Services Team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly scalable distributed systems, serverless architectures, large scale migrations, operational security, and leading strategic AWS initiatives. Before he joined Amazon, Balaji spent more than a decade building operating systems, big data analytics solutions, mobile services, and web applications. In his spare time, he enjoys experiencing the great outdoors and spending time with his family.

Robert Reeves is a Co-Founder & Chief Technology Officer at Datical. In this role, he advocates for Datical’s customers and provides technical architecture leadership. Prior to cofounding Datical, Robert was a Director at the Austin Technology Incubator. At ATI, he provided real-world entrepreneurial expertise to ATI member companies to aid in market validation, product development, and fundraising efforts. Robert cofounded Phurnace Software in 2005. He invented and created the flagship product, Phurnace Deliver, which provides middleware infrastructure management to multiple Fortune 500 companies. As Chief Technology Officer for Phurnace, he led technical evangelism efforts, product vision, and large account technical sales efforts. After BMC Software acquired Phurnace in 2009, Robert served as Chief Architect and led worldwide technology evangelism.


How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

It’s an operational and security best practice to create and maintain custom Amazon Machine Images. Because it’s also a best practice to maintain infrastructure as code, it makes sense to use automated tooling to script the creation and configuration of AMIs that are used to quickly launch Amazon EC2 instances.

In this first of two posts, we’ll use AWS CodeBuild to programmatically create AMIs for use in our environment. As a part of the AMI generation, we will apply OS patches, configure a banner statement, and install some common software, forming a solid base for future Amazon EC2-based deployments.
(more…)