AWS DevOps Blog

Introducing Amazon CloudWatch Logs Integration for AWS OpsWorks Stacks

AWS OpsWorks Stacks now supports Amazon CloudWatch Logs. This integration benefits anyone who wants to stream log files from OpsWorks instances to CloudWatch Logs in order to take advantage of features such as centralized log archival, real-time monitoring of log data, and CloudWatch alarms. Until now, OpsWorks customers had to manually install and configure the CloudWatch Logs agent on every instance they wanted to ship log data from. With the built-in CloudWatch Logs integration, these features are available with just a few clicks across all instances in a layer.


Using AWS Lambda and Amazon DynamoDB in an Automated Approach to Managing AWS CloudFormation Template Parameters and Mappings

AWS CloudFormation gives you an easy way to codify the creation and management of related AWS resources. The optional Mappings and Parameters sections of CloudFormation templates help you organize and parameterize your templates so you can quickly customize your stack. As organizations adopt Infrastructure as Code best practices, the number of mappings and parameters can quickly grow. In this blog post, we’ll discuss how AWS Lambda and Amazon DynamoDB can be used to simplify updates, reuse, quick lookups, and reporting for these mappings and parameters.

Solution overview

There are three parts to the solution described in this post. You’ll find sample code for this solution in this AWS Labs GitHub repository.

  1. DynamoDB table: Used as a central location to store and update all key-value pairs used in the ‘Mappings’ and ‘Parameters’ sections of the CloudFormation template. This could be a centralized table for the whole organization, with a partition key consisting of the team name and environment (for example, development, test, production) and a sort key for the application name. For more information about the types of keys supported by DynamoDB, see Core Components in the Amazon DynamoDB Developer Guide.

Here is the sample data in the table. “teamname-environment” is the partition key and “appname” is the sort key.

{
    "teamname-environment": "team1-dev",
    "appname": "app1",
    "mappings": {
        "elbsubnet1": "subnet-123456",
        "elbsubnet2": "subnet-234567",
        "appsubnet1": "subnet-345678",
        "appsubnet2": "subnet-456789",
        "vpc": "vpc-123456",
        "appname": "app1",
        "costcenter": "123456",
        "teamname": "team1",
        "environment": "dev",
        "certificate": "arn-123456qwertyasdfgh",
        "compliancetype": "pci",
        "amiid": "ami-123456",
        "region": "us-west-1",
        "publichostedzoneid": "Z234asdf1234asdf",
        "privatehostedzoneid": "Z345SDFGCVHD123",
        "hostedzonename": "demo.internal"
    }
}


 

  2. Lambda function: Accepts the primary key values as input, looks up the matching item in the DynamoDB table, and returns all of its key-value data (a minimal sketch of such a function appears after the flow description below).
  3. Custom lookup resource: Calls the Lambda function with the primary key values and retrieves the key-value data. Any CloudFormation template can reuse this generic custom resource.

 

The following steps describe the interaction of services and resources in this solution.

  1. Users create a DynamoDB table and, using the DynamoDB console, AWS SDK, or AWS CLI, insert the mapping and parameter key-value data.
  2. Each CloudFormation template includes a custom resource that calls a Lambda function. The combination of team name and environment (“teamname-environment”) is the partition key input, and the application name (“appname”) is the sort key input to the Lambda function.
  3. The Lambda function queries the DynamoDB table based on these inputs.
  4. DynamoDB returns the matching item to the Lambda function.
  5. The Lambda function responds to the custom resource in the CloudFormation stack with the key-value data.
  6. The key-value data retrieved by the custom resource is then used by other resources in the stack through the Fn::GetAtt intrinsic function.
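To make the flow concrete, here is a minimal Python 3 sketch of what the lookup Lambda function could look like. This is not the exact code from the AWS Labs repository; the property names TeamEnvironment and AppName are placeholders for whatever the custom resource actually passes, and error handling is omitted.

import json
import urllib.request

import boto3

TABLE_NAME = 'custom-lookup'  # the table created by insertrecord.sh


def lambda_handler(event, context):
    # The custom resource passes its lookup keys through ResourceProperties.
    props = event['ResourceProperties']
    table = boto3.resource('dynamodb').Table(TABLE_NAME)
    item = table.get_item(Key={
        'teamname-environment': props['TeamEnvironment'],  # e.g. "team1-dev"
        'appname': props['AppName'],                        # e.g. "app1"
    })['Item']

    # A Lambda-backed custom resource reports its result by sending a response
    # document to the pre-signed URL that CloudFormation provides in the event.
    response = {
        'Status': 'SUCCESS',
        'PhysicalResourceId': context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': item['mappings'],  # becomes the attributes readable via Fn::GetAtt
    }
    request = urllib.request.Request(event['ResponseURL'],
                                     data=json.dumps(response).encode('utf-8'),
                                     method='PUT')
    urllib.request.urlopen(request)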

 

Using the sample code

To use the sample code for this solution, follow these steps:

  1. Clone the repository.
git clone https://github.com/awslabs/custom-lookup-lambda.git
cd custom-lookup-lambda
  2. Review sample-mappings.json. The values in this file will be inserted into DynamoDB; each record corresponds to an item in DynamoDB, and the mappings object contains its key-value pairs.
  3. This solution uses Python for the AWS Lambda function. If you haven’t set up your local development environment for Python, follow these steps, and then install the awscli Python package.
pip install awscli
  4. Configure the AWS Command Line Interface as described here so that it can make calls to AWS with your access keys or an assumed role. The IAM user or the assumed role used to make API calls must have, at minimum, this access. You can attach this policy to the IAM user or IAM group if you are using access keys, or to the IAM role if you are assuming a role. For more information, see Attaching Managed Policies in the IAM User Guide.
  5. Run insertrecord.sh to create the DynamoDB table named custom-lookup and insert the items in sample-mappings.json.
./insertrecord.sh

This script does the following:

  • Installs the boto3 and requests Python packages through pip.
  • Runs a Python script that creates the custom-lookup DynamoDB table and loads the data from sample-mappings.json into it.
  6. Run deployer.sh to package and create the Lambda function.
./deployer.sh

This script packages the Lambda function code and creates the function.

  7. Using sample-stack.yaml, create a stack that makes a call to the Lambda function created in step 6. The function queries the DynamoDB table and returns the values corresponding to team1-dev and app1.
aws cloudformation deploy --template-file sample-stack.yaml --stack-name sample-stack
  8. Examine the output in the AWS CloudFormation console. You should see the values retrieved from the DynamoDB table. The Fn::GetAtt intrinsic function allows other resources in the stack to use the values retrieved by the custom resource Lambda function.

For example, if the custom resource that invokes the Lambda function is named CUSTOMLOOKUP in the sample stack, the value of the amiid key can be referenced in the stack as !GetAtt CUSTOMLOOKUP.amiid. Likewise, the value of the vpc key can be referenced as !GetAtt CUSTOMLOOKUP.vpc, and so on.

Conclusion

In this blog post, we showed how to use an AWS CloudFormation custom resource backed by an AWS Lambda function to query Amazon DynamoDB for key-value data, thereby replacing the Mappings and Parameters sections of the CloudFormation template. This solution provides a more automated approach to managing template parameters and mappings. You can use the DynamoDB table to simplify updates, reuse, quick lookups, and reporting for these mappings and parameters.

Implementing DevSecOps Using AWS CodePipeline

DevOps is a combination of cultural philosophies, practices, and tools that emphasizes collaboration and communication between software developers and IT infrastructure teams, and that automates the processes an organization uses to deliver applications and services rapidly, frequently, and reliably.

CI/CD stands for continuous integration and continuous deployment. These concepts represent everything related to automation of application development and the deployment pipeline — from the moment a developer adds a change to a central repository until that code winds up in production.

DevSecOps covers security of and in the CI/CD pipeline, including automating security operations and auditing. The goals of DevSecOps are to:

  • Embed security knowledge into DevOps teams so that they can secure the pipelines they design and automate.
  • Embed application development knowledge and automated tools and processes into security teams so that they can provide security at scale in the cloud.

The Security Cloud Adoption Framework (CAF) whitepaper provides prescriptive controls to improve the security posture of your AWS accounts. These controls are in line with a DevOps blog post published last year about the control-monitor-fix governance model.

Security CAF controls are grouped into four categories:

  • Directive controls establish the governance, risk, and compliance models on AWS.
  • Preventive controls protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls drive remediation of potential deviations from your security baselines.

To embed the DevSecOps discipline in the enterprise, AWS customers are automating CAF controls using a combination of AWS and third-party solutions.

In this blog post, I will show you how to use a CI/CD pipeline to automate preventive and detective security controls. I’ll use an example that shows how you can take the creation of a simple security group through the CI/CD pipeline stages and enforce security CAF controls at various stages of the deployment. I’ll use AWS CodePipeline to orchestrate the steps in a continuous delivery pipeline.

These resources are being used in this example:

  • An AWS CloudFormation template to create the demo pipeline.
  • A Lambda function to perform the static code analysis of the CloudFormation template.
  • A Lambda function to perform dynamic stack validation for the security groups in scope.
  • An S3 bucket as the sample code repository.
  • An AWS CloudFormation source template file to create the security groups.
  • Two VPCs to deploy the test and production security groups.

These are the high-level security checks enforced by the pipeline:

  • During the Source stage, the pipeline runs static code analysis to detect any open security groups. The pipeline will fail if there are any violations.
  • During the Test stage, the pipeline runs dynamic analysis to make sure port 22 (SSH) is open only to the approved IP CIDR range. The pipeline will fail if there are any violations.


 

These are the pipeline stages:

1. Source stage: In this example, the pipeline gets the CloudFormation code that creates the security group from Amazon S3, which serves as the code repository.

This stage passes the CloudFormation template and pipeline name to a Lambda function, CFNValidateLambda. This function performs the static code analysis. It uses regular expressions to find patterns and identify security group policy violations. If it finds violations, the function fails the pipeline and includes the violation details.

Here is the regular expression the Lambda function uses for static code analysis of the open SSH port:

"^.*Ingress.*(([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22).*[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0)|[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0).*([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22))"

2. Test stage: After the static code analysis is completed successfully, the pipeline executes the following steps:

a. Create stack: This step creates the stack in the test VPC, as described in the test configuration.

b. Stack validation: This step triggers the StackValidationLambda Lambda function, passing the stack name and pipeline name in the event parameters. The function validates the security groups in the stack against the security controls in scope. If it finds violations, it deletes the stack, stops the pipeline, and returns an error message.

The following is the sample Python code used by AWS Lambda to check if the SSH port is open to the approved IP CIDR range (in this example, 72.21.196.67/32):

# boto3 is imported and `regions`, `stackName`, `offenders`, `result`, and
# `failReason` are defined earlier in the validation function.
for n in regions:
    client = boto3.client('ec2', region_name=n)
    # Find the security groups created by this CloudFormation stack.
    response = client.describe_security_groups(
        Filters=[{'Name': 'tag:aws:cloudformation:stack-name', 'Values': [stackName]}])
    for m in response['SecurityGroups']:
        # Flag any group whose ingress rules do not reference the approved CIDR range.
        if "72.21.196.67/32" not in str(m['IpPermissions']):
            for o in m['IpPermissions']:
                try:
                    # Port 22 falls inside this rule's port range.
                    if int(o['FromPort']) <= 22 <= int(o['ToPort']):
                        result = False
                        failReason = "Found Security Group with port 22 open to the wrong source IP range"
                        offenders.append(str(m['GroupId']))
                except KeyError:
                    # Rules with IpProtocol "-1" allow all traffic and have no port range.
                    if str(o['IpProtocol']) == "-1":
                        result = False
                        failReason = "Found Security Group with port 22 open to the wrong source IP range"
                        offenders.append(str(n) + " : " + str(m['GroupId']))

c. Approve test stack: This step creates a manual approval task for stack review. This step could be eliminated for automated deployments.

d. Delete test stack: After all the stack validations are successfully completed, this step deletes the stack in the test environment to avoid unnecessary costs.

3. Production stage: After the static and dynamic security checks are completed successfully, this stage creates the stack in the production VPC using the production configuration supplied in the template.

a. Create change set: This step creates a change set for the resources in scope.

b. Execute change set: This step executes the change set and creates/updates the security group in the production VPC.
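In the pipeline, these two actions are standard AWS CloudFormation action types in the Deploy stage. For reference, here is a minimal boto3 sketch of the same create-and-execute sequence; the stack name, change set name, and template file are placeholders, not values from the demo.

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

# Describe what will change in the production stack before touching it.
cfn.create_change_set(
    StackName='prod-security-groups',   # placeholder stack name
    ChangeSetName='prod-sg-changes',    # placeholder change set name
    ChangeSetType='CREATE',             # use 'UPDATE' for an existing stack
    TemplateBody=open('security-group.json').read(),
)
cfn.get_waiter('change_set_create_complete').wait(
    StackName='prod-security-groups', ChangeSetName='prod-sg-changes')

# Apply the change set, creating or updating the security group in the production VPC.
cfn.execute_change_set(
    StackName='prod-security-groups', ChangeSetName='prod-sg-changes')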

 

Source code and CloudFormation template

You’ll find the source code at https://github.com/awslabs/automating-governance-sample/tree/master/DevSecOps-Blog-Code

basic-sg-3-cfn.json creates the pipeline in AWS CodePipeline with all the stages previously described. It also creates the static code analysis and stack validation Lambda functions.

The CloudFormation template points to a shared S3 bucket. The codepipeline-lambda.zip file contains the Lambda functions. Before you run the template, upload the zip file to your S3 bucket and then update the CloudFormation template to point to your S3 bucket location.

The CloudFormation template uses the codepipe-single-sg.zip file, which contains the sample security group and test and production configurations. Update these configurations with your VPC details, and then upload the modified zip file to your S3 bucket.

Update these parts of the code to point to your S3 bucket:

 "S3Bucket": {
      "Default": "codepipeline-devsecops-demo",
      "Description": "The name of the S3 bucket that contains the source artifact, which must be in the same region as this stack",
      "Type": "String"
    },
    "SourceS3Key": {
      "Default": "codepipe-single-sg.zip",
      "Description": "The file name of the source artifact, such as myfolder/myartifact.zip",
      "Type": "String"
    },
    "LambdaS3Key": {
      "Default": "codepipeline-lambda.zip",
      "Description": "The file name of the source artifact of the Lambda code, such as myfolder/myartifact.zip",
      "Type": "String"
    },
	"OutputS3Bucket": {
      "Default": "codepipeline-devsecops-demo",
      "Description": "The name of the output S3 bucket that contains the processed artifact, which must be in the same region as this stack",
      "Type": "String"
    },

After the stack is created, AWS CodePipeline executes the pipeline and starts deploying the sample CloudFormation template. In the default template, the security group rules are open to the world (0.0.0.0/0), so the pipeline execution will fail. Update the CloudFormation template in codepipe-single-sg.zip with a more restrictive source IP range, and then upload the modified zip file to your S3 bucket. Open the AWS CodePipeline console, and choose the Release Change button. This time the pipeline will successfully create the security groups.


You could expand the security checks in the pipeline to include other AWS resources, not just security groups. The following table shows the sample controls you could enforce in the pipeline using the static and dynamic analysis Lambda functions.

[Table: sample controls enforced through the static and dynamic analysis Lambda functions]
If you have feedback about this post, please add it to the Comments section below. If you have questions about implementing the example used in this post, please open a thread on the Developer Tools forum.

Replicating and Automating Sync-Ups for a Repository with AWS CodeCommit

by Chenwei (Cherry) Zhou, Software Development Engineer


 

Many of our customers have expressed interest in the following scenarios:

  • Backing up or replicating an AWS CodeCommit repository to another AWS region.
  • Automatically backing up repositories currently hosted on other services (for example, GitHub or BitBucket) to AWS CodeCommit.

In this blog post, we’ll show you how to automate the replication of a source repository to a repository in AWS CodeCommit. Your source repository could be another AWS CodeCommit repository, a local repository, or a repository hosted on other Git services.

To replicate your repository, you’ll first need to set up a repository in AWS CodeCommit to use as your backup/replica repository. After replicating the contents in your source repository to the backup repository, we’ll demonstrate how you can set up a scheduled job to periodically sync up your source repository with the backup/replica.


Extending AWS CodeBuild with Custom Build Environments

by John Pignata

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Java, Ruby, Python, Go, Node.js, Android, and Docker. It can be extended through the use of custom build environments to support many more.

Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon EC2 Container Registry (ECR), and reference it in the project configuration. When building your application, CodeBuild will retrieve the Docker image from the container registry specified in the project configuration and use the environment to compile your source code, run your tests, and package your application.
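To give a sense of how a custom image is referenced, here is a hedged boto3 sketch of creating a CodeBuild project that points at an image in ECR. The repository URI, role ARN, and source location are placeholders; the image itself would be built and pushed to ECR first, as described in the post.

import boto3

codebuild = boto3.client('codebuild', region_name='us-east-1')

codebuild.create_project(
    name='php-sample-build',
    source={
        'type': 'CODECOMMIT',
        'location': 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/php-sample',
    },
    artifacts={'type': 'NO_ARTIFACTS'},
    environment={
        'type': 'LINUX_CONTAINER',
        # The custom build environment: a PHP image pushed to your ECR registry.
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/codebuild-php:latest',
        'computeType': 'BUILD_GENERAL1_SMALL',
    },
    serviceRole='arn:aws:iam::123456789012:role/codebuild-service-role',
)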

In this post, we’ll create a build environment for PHP applications and walk through the steps to configure CodeBuild to use this environment.


Run Umbraco CMS with Flexible Load Balancing on AWS

by Ihab Shaaban

In version 7.3, Umbraco CMS, the popular open source content management system, introduced the flexible load balancing feature, which makes the setup of load-balanced applications a lot easier. In this blog post, we’ll follow the guidelines in the Umbraco documentation to set up a load-balanced Umbraco application on AWS. We’ll let AWS Elastic Beanstalk manage the deployments, load balancing, auto scaling, and health monitoring for us.

Application Architecture

When you use the flexible load balancing feature, any updates to Umbraco content are stored in a queue in the master database. Each server in the load-balanced environment automatically downloads, processes, and caches the updates from the queue, so no matter which server Elastic Load Balancing selects to handle the request, the user always receives the same content. Umbraco administration doesn’t work correctly when accessed through a load-balanced server, so we’ll also set up a non-balanced environment to be accessed only by administrators and editors.


Registering Spot Instances with AWS OpsWorks Stacks

AWS OpsWorks Stacks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component, including package installation, software configuration, and more.

Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Because Spot instances are often available at a discount compared to On-Demand instances, you can significantly reduce the cost of running your applications, grow your applications’ compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

You can use Spot instances with AWS OpsWorks Stacks in the following ways:

  • As part of an Auto Scaling group, as described in this blog post. Follow the steps in that post and, in the launch configuration described in step 5, choose the Spot instance option.
  • By provisioning a Spot instance in the EC2 console and having it automatically register with an OpsWorks stack, as described in the walkthrough below.

The walkthrough assumes that your stack and the resources you create are located in the US East (N. Virginia) region (us-east-1). If you want to use another region, be sure to set the region parameter accordingly. You will create the following resources:

IAM instance profile: an IAM profile that grants your instances permission to register themselves with OpsWorks.

Lambda function: a function that deregisters your instances from an OpsWorks stack.

Spot instance: the Spot instance that will run your application.

CloudWatch Events rule: a rule that will trigger the Lambda function whenever your Spot instance is terminated.

Step 1: Create an IAM instance profile

When a Spot instance starts, it must be able to make an API call to register itself with an OpsWorks stack. By assigning an IAM instance profile to the instance, you allow it to make calls to OpsWorks.

Open the IAM console at https://console.aws.amazon.com/iam/, choose Roles, and then choose Create New Role. Type a name for the role, and then choose Next Step. Choose the Amazon EC2 role, and then select the check box next to the AWSOpsWorksInstanceRegistration policy. Finally, select Next Step, and then choose Create Role. As its name suggests, the AWSOpsWorksInstanceRegistration policy allows the instance to make only the API calls needed to register itself. Because the user data script in step 4 also assigns the instance to a layer and tags it, add the following policy to the role you’ve just created.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "opsworks:AssignInstance",
                "opsworks:DescribeInstances",
                "ec2:CreateTags"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}


Step 2: Create a Lambda function

This Lambda function deregisters an instance from your OpsWorks stack. It will be invoked whenever the Spot instance is terminated.

Open the AWS Lambda console at https://console.aws.amazon.com/lambda/home, and choose the option to create a Lambda function. If you are prompted to choose a blueprint, choose Skip. Type a name for the Lambda function, and from the Runtime drop-down list, select Python 2.7.

Next, paste the following code into the Lambda Function Code text box:

import boto3

def lambda_handler(event, context):
    # The CloudWatch Events rule passes the terminated instance's EC2 instance ID.
    ec2_instance_id = event['detail']['instance-id']
    ec2 = boto3.client('ec2')
    # Read the opsworks_stack_id tag that the user data script added at registration time.
    for tag in ec2.describe_instances(InstanceIds=[ec2_instance_id])['Reservations'][0]['Instances'][0]['Tags']:
        if (tag['Key'] == 'opsworks_stack_id'):
            opsworks_stack_id = tag['Value']
            opsworks = boto3.client('opsworks', 'us-east-1')
            # Find the OpsWorks instance that corresponds to this EC2 instance and deregister it.
            for instance in opsworks.describe_instances(StackId=opsworks_stack_id)['Instances']:
                if ('Ec2InstanceId' in instance):
                    if (instance['Ec2InstanceId'] == ec2_instance_id):
                        print("Deregistering OpsWorks instance " + instance['InstanceId'])
                        opsworks.deregister_instance(InstanceId=instance['InstanceId'])
    return ec2_instance_id


In the Lambda function handler and role section, create a custom role. Edit the policy document to allow the Lambda function to access the required resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "opsworks:DescribeInstances",
        "opsworks:DeregisterInstance"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

 

Step 3: Create a CloudWatch event

Whenever the Spot instance is terminated, the Lambda function from step 2 must be triggered to deregister the instance from its associated stack.

Open the AWS CloudWatch console at https://console.aws.amazon.com/cloudwatch/home, choose Events, and then choose the Create rule button. From Event selector, choose Amazon EC2. Select Specific state(s), and then choose Terminated. Under Targets, for Function, select the Lambda function you created earlier. Finally, choose the Configure details button.
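If you prefer to script this rule instead of using the console, here is a minimal boto3 sketch of the equivalent setup. The rule name, function name, and ARN are placeholders for the Lambda function you created in step 2.

import json

import boto3

events = boto3.client('events', region_name='us-east-1')
lambda_client = boto3.client('lambda', region_name='us-east-1')

# Fire whenever an EC2 instance enters the "terminated" state.
rule_arn = events.put_rule(
    Name='spot-instance-terminated',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['terminated']},
    }),
)['RuleArn']

# Allow CloudWatch Events to invoke the deregistration function, then register it as the target.
lambda_client.add_permission(
    FunctionName='deregister-opsworks-instance',   # placeholder name from step 2
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)
events.put_targets(
    Rule='spot-instance-terminated',
    Targets=[{
        'Id': 'deregister-opsworks-instance',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:deregister-opsworks-instance',
    }],
)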


Step 4: Create a Spot instance

Open the EC2 console at https://console.aws.amazon.com/ec2sp/v1/spot/home, and choose the Request Spot Instances button. Use the latest release of Amazon Linux. On the details page, under IAM instance profile, choose the instance profile you created in step 1. Paste the following script into the User data field:

#!/bin/bash
# Remove the requiretty setting so commands in this script can use sudo without a TTY.
sed -i'' -e 's/.*requiretty.*//' /etc/sudoers
pip install --upgrade awscli
STACK_ID=3464f35f-16b4-44dc-8073-a9cd19533ad5
LAYER_ID=ba04682c-6e32-481d-9d0e-e2fa72b55314
# Register this instance with the OpsWorks stack and capture the OpsWorks instance ID.
INSTANCE_ID=$(/usr/bin/aws opsworks register --use-instance-profile --infrastructure-class ec2 --region us-east-1 --stack-id $STACK_ID --override-hostname $(tr -cd 'a-z' < /dev/urandom |head -c8) --local 2>&1 |grep -o 'Instance ID: .*' |cut -d' ' -f3)
# Look up the underlying EC2 instance ID and tag it with the stack ID so the
# deregistration Lambda function can find the stack later.
EC2_INSTANCE_ID=$(/usr/bin/aws opsworks describe-instances --region us-east-1 --instance-ids $INSTANCE_ID | grep -o '"Ec2InstanceId": "i-.*'| grep -o 'i-[a-z0-9]*')
/usr/bin/aws ec2 create-tags --region us-east-1 --resources $EC2_INSTANCE_ID --tags Key=opsworks_stack_id,Value=$STACK_ID
# Wait for registration to complete, then assign the instance to the layer.
/usr/bin/aws opsworks wait instance-registered --region us-east-1 --instance-id $INSTANCE_ID
/usr/bin/aws opsworks assign-instance --region us-east-1 --instance-id $INSTANCE_ID --layer-ids $LAYER_ID

On boot, this script will register your Spot instance with an OpsWorks stack and layer. Be sure to fill in the following fields:

STACK_ID=YOUR_STACK_ID
LAYER_ID=YOUR_LAYER_ID


Important: Be sure to turn off auto healing for all of the layers in your stack to which you assign Spot instances. Otherwise, auto healing will attempt to revive your instances upon termination.

When the instance has been provisioned and has come online, you’ll see fulfilled displayed in the Status column and active displayed in the State column. This process will take a few minutes. After the instance and request are both in an active state, the instance should be fully booted and registered with your OpsWorks stack and layer.


You can also view the instance and its online state in the OpsWorks console under Spot Instance.


You can manually terminate a Spot instance from the OpsWorks service console. Simply choose the stop button and the Spot instance will be terminated and removed from your stack. Unlike an On-Demand instance, when a Spot instance in OpsWorks is stopped, it cannot be restarted.

If your Spot instance is terminated through other means (for example, in the EC2 console), a CloudWatch event will trigger the Lambda function, which will deregister the instance from your OpsWorks stack.

Conclusion

You can now use OpsWorks Stacks to define your application’s architecture and software configuration while leveraging the attractive pricing of Spot instances. If you have questions or other feedback, please leave it in the comments.

Introducing Git Credentials: A Simple Way to Connect to AWS CodeCommit Repositories Using a Static User Name and Password

Today, AWS is introducing a simplified way to authenticate to your AWS CodeCommit repositories over HTTPS.

With Git credentials, you can generate a static user name and password in the Identity and Access Management (IAM) console that you can use to access AWS CodeCommit repositories from the command line, Git CLI, or any Git tool that supports HTTPS authentication.

Because these are static credentials, they can be cached using the password management tools included in your local operating system or stored in a credential management utility. This allows you to get started with AWS CodeCommit within minutes. You don’t need to download the AWS CLI or configure your Git client to connect to your AWS CodeCommit repository on HTTPS. You can also use the user name and password to connect to the AWS CodeCommit repository from third-party tools that support user name and password authentication, including popular Git GUI clients (such as TowerUI) and IDEs (such as Eclipse, IntelliJ, and Visual Studio).

So, why did we add this feature? Until today, users who wanted to use HTTPS connections were required to configure the AWS credential helper to authenticate their AWS CodeCommit operations. Customers told us our credential helper sometimes interfered with password management tools such as Keychain Access and Windows Vault, which caused authentication failures. Also, many Git GUI tools and IDEs require a static user name and password to connect with remote Git repositories and do not support the credential helper.

In this blog post, I’ll walk you through the steps for creating an AWS CodeCommit repository, generating Git credentials, and setting up CLI access to AWS CodeCommit repositories.


Git Credentials Walkthrough
Let’s say Dave wants to create a repository on AWS CodeCommit and set up local access from his computer.

Prerequisite: If Dave had previously configured his local computer to use the credential helper for AWS CodeCommit, he must edit his .gitconfig file to remove the credential helper information from the file. Additionally, if his local computer is running macOS, he might need to clear any cached credentials from Keychain Access.

With Git credentials, Dave can now create a repository and start using AWS CodeCommit in four simple steps.

Step 1: Make sure the IAM user has the required permissions
Dave must have the following managed policies attached to his IAM user (or their equivalent permissions) before he can set up access to AWS CodeCommit using Git credentials.

  • AWSCodeCommitPowerUser (or an appropriate CodeCommit managed policy)
  • IAMSelfManageServiceSpecificCredentials
  • IAMReadOnlyAccess

Step 2: Create an AWS CodeCommit repository
Next, Dave signs in to the AWS CodeCommit console and creates a repository, if he doesn’t have one already. He can choose any repository in his AWS account to which he has access. The instructions to create Git credentials are shown in the help panel. (Choose the Connect button if the instructions are not displayed.) When Dave clicks the IAM user link, the IAM console opens and he can generate the credentials.


 

Step 3: Create HTTPS Git credentials in the IAM console
On the IAM user page, Dave selects the Security Credentials tab and clicks Generate in the HTTPS Git credentials for AWS CodeCommit section. This creates and displays the user name and password. Dave can then download the credentials.


Note: This is the only time the password is available to view or download.
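The same credentials can also be generated programmatically. Here is a minimal boto3 sketch that does what the Generate button does; the user name is a placeholder, and the caller needs the IAMSelfManageServiceSpecificCredentials permissions mentioned in step 1.

import boto3

iam = boto3.client('iam')
credential = iam.create_service_specific_credential(
    UserName='Dave',                         # the IAM user from step 1
    ServiceName='codecommit.amazonaws.com',  # HTTPS Git credentials for CodeCommit
)['ServiceSpecificCredential']

# As in the console, this is the only time the password is returned.
print(credential['ServiceUserName'])
print(credential['ServicePassword'])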

 

Step 4: Clone the repository on the local machine
On the AWS CodeCommit console page for the repository, Dave chooses Clone URL and copies the HTTPS link for cloning the repository. Then, at the command line or terminal, Dave uses the link to clone the repository:

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/TestRepo_Dave

When prompted for user name and password, Dave provides the Git credentials (user name and password) he generated in step 3.

Dave is now ready to start pushing his code to the new repository.

Git credentials can be made active or inactive based on your requirements. You can also reset the password if you would like to use the existing username with a new password.

Next Steps

  1. You can optionally cache your credentials using the Git credentials caching command here.
  2. Want to invite a collaborator to work on your AWS CodeCommit repository? Simply create a new IAM user in your AWS account, create Git credentials for that user, and securely share the repository URL and Git credentials with the person you want to collaborate with on the repository.
  3. Connect to any third-party client that supports connecting to remote Git repositories using Git credentials (a stored user name and password). Virtually all tools and IDEs allow you to connect with static credentials. We’ve tested these:
    • Visual Studio (using the default Git plugin)
    • Eclipse IDE (using the default Git plugin)
    • Git Tower UI

For more information, see the AWS CodeCommit documentation.

We are excited to provide this new way of connecting to AWS CodeCommit. We look forward to hearing from you about the many different tools and IDEs you will be able to use with your AWS CodeCommit repositories.

DevOps and Continuous Delivery at re:Invent 2016 – Wrap-up

The AWS re:Invent 2016 conference was packed with some exciting announcements and sessions around DevOps and Continuous Delivery. We launched AWS CodeBuild, a fully managed build service that eliminates the need to provision, manage, and scale your own build servers. You now have the ability to run your continuous integration and continuous delivery process entirely on AWS by plugging AWS CodeBuild into AWS CodePipeline, which automates building, testing, and deploying code each time you push a change to your source repository. If you are interested in learning more about AWS CodeBuild, you can sign up for the webinar on January 20th here.

The DevOps track had over 30 different breakout sessions ranging from customer stories to deep dive talks to best practices. If you weren’t able to attend the conference or missed a specific session, here is a link to the entire playlist.

 

There were a number of talks that can help you get started with your own DevOps practices for rapid software delivery. Here are some introductory sessions to give you the proper background:
DEV201: Accelerating Software Delivery with AWS Developer Tools
DEV211: Automated DevOps and Continuous Delivery

After you understand the big picture, you can dive into automating your software delivery. Here are some sessions on how to deploy your applications:
DEV310: Choosing the Right Software Deployment Technique
DEV403: Advanced Continuous Delivery Techniques
DEV404: Develop, Build, Deploy, and Manage Services and Applications

Finally, to maximize your DevOps efficiency, you’ll want to automate the provisioning of your infrastructure. Here are a couple sessions on how to manage your infrastructure:
DEV313: Infrastructure Continuous Delivery Using AWS CloudFormation
DEV319: Automating Cloud Management & Deployment

If you’re a Lambda developer, be sure to watch this session and read this documentation on how to practice continuous delivery for your serverless applications:
SVR307: Application Lifecycle Management in a Serverless World

For all 30+ DevOps sessions, click here.

Deploy an App to an AWS OpsWorks Layer Using AWS CodePipeline


AWS CodePipeline lets you create continuous delivery pipelines that automatically track code changes from sources such as AWS CodeCommit, Amazon S3, or GitHub. Now, you can use AWS CodePipeline as a code change-management solution for apps, Chef cookbooks, and recipes that you want to deploy with AWS OpsWorks.

This blog post demonstrates how you can create an automated pipeline for a simple Node.js app by using AWS CodePipeline and AWS OpsWorks. After you configure your pipeline, every time you update your Node.js app, AWS CodePipeline passes the updated version to AWS OpsWorks. AWS OpsWorks then deploys the updated app to your fleet of instances, leaving you to focus on improving your application. AWS makes sure that the latest version of your app is deployed.

Step 1: Upload app code to an Amazon S3 bucket

The Amazon S3 bucket must be in the same region in which you later create your pipeline in AWS CodePipeline. For now, AWS CodePipeline supports the AWS OpsWorks provider in the us-east-1 region only; all resources in this blog post should be created in the US East (N. Virginia) region. The bucket must also be versioned, because AWS CodePipeline requires a versioned source. For more information, see Using Versioning.

Upload your app to an Amazon S3 bucket

  1. Download a ZIP file of the AWS OpsWorks sample Node.js app and save it to a convenient location on your local computer: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-app.zip.
  2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Choose Create Bucket. Be sure to enable versioning.
  3. Choose the bucket that you created and upload the ZIP file that you saved in step 1.
  4. In the Properties pane for the uploaded ZIP file, make a note of the S3 link to the file. You will need the bucket name and the ZIP file name portion of this link to create your pipeline.
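If you prefer to script the bucket setup and upload, here is a minimal boto3 sketch of steps 2 and 3; the bucket name is a placeholder and must be globally unique.

import boto3

s3 = boto3.client('s3', region_name='us-east-1')
bucket = 'my-opsworks-codepipeline-demo'  # placeholder; choose your own unique name

s3.create_bucket(Bucket=bucket)  # us-east-1 needs no LocationConstraint
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={'Status': 'Enabled'},  # CodePipeline requires a versioned source
)
s3.upload_file('opsworks-nodejs-demo-app.zip', bucket, 'opsworks-nodejs-demo-app.zip')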

Step 2: Create an AWS OpsWorks to Amazon EC2 service role

1.     Go to the Identity and Access Management (IAM) service console, and choose Roles.
2.     Choose Create Role, and name it aws-opsworks-ec2-role-with-s3.
3.     In the AWS Service Roles section, choose Amazon EC2, and then choose the policy called AmazonS3ReadOnlyAccess.
4.     The new role should appear in the Roles dashboard.


Step 3: Create an AWS OpsWorks Chef 12 Linux stack

To use AWS OpsWorks as a provider for a pipeline, you must first have an AWS OpsWorks stack, a layer, and at least one instance in the layer. As a reminder, the Amazon S3 bucket to which you uploaded your app must be in the same region in which you later create your AWS OpsWorks stack and pipeline, US East (N. Virginia).

1.     In the OpsWorks console, choose Add Stack, and then choose a Chef 12 stack.
2.     Set the stack’s name to CodePipeline Demo and make sure the Default operating system is set to Linux.
3.     Enable Use custom Chef cookbooks.
4.     For Repository type, choose HTTP Archive, and then use the following cookbook repository on S3: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip. This repository contains a set of Chef cookbooks that include Chef recipes you’ll use to install the Node.js package and its dependencies on your instance. You will use these Chef recipes to deploy the Node.js app that you prepared in step 1.1.

Step 4: Create and configure an AWS OpsWorks layer

Now that you’ve created an AWS OpsWorks stack called CodePipeline Demo, you can create an OpsWorks layer.

1.     Choose Layers, and then choose Add Layer in the AWS OpsWorks stack view.
2.     Name the layer Node.js App Server. For Short Name, type app1, and then choose Add Layer.
3.     After you create the layer, open the layer’s Recipes tab. In the Deploy lifecycle event, type nodejs_demo. Later, you will link this to a Chef recipe that is part of the Chef cookbook you referenced when you created the stack in step 3.4. This Chef recipe runs every time a new version of your application is deployed.


4.     Now, open the Security tab, choose Edit, and choose AWS-OpsWorks-WebApp from the Security groups drop-down list. You will also need to set the EC2 Instance Profile to use the service role you created in step 2.2 (aws-opsworks-ec2-role-with-s3).


Step 5: Add your App to AWS OpsWorks

Now that your layer is configured, add the Node.js demo app to your AWS OpsWorks stack. When you create the pipeline, you’ll be required to reference this demo Node.js app.

  1. Have the Amazon S3 bucket link from step 1.4 ready. You will need the link to the bucket in which you stored your test app.
  2. In AWS OpsWorks, open the stack you created (CodePipeline Demo), and in the navigation pane, choose Apps.
  3. Choose Add App.
  4. Provide a name for your demo app (for example, Node.js Demo App), and set the Repository type to an S3 Archive. Paste your S3 bucket link (s3://bucket-name/file name) from step 1.4.
  5. Now that your app appears in the list on the Apps page, add an instance to your OpsWorks layer.

Step 6: Add an instance to your AWS OpsWorks layer

Before you create a pipeline in AWS CodePipeline, set up at least one instance within the layer you defined in step 4.

  1. Open the stack that you created (CodePipeline Demo), and in the navigation pane, choose Instances.
  2. Choose +Instance, and accept the default settings, including the hostname, size, and subnet. Choose Add Instance.


  3. By default, the instance is in a stopped state. Choose start to start the instance.

Step 7: Create a pipeline in AWS CodePipeline

Now that you have a stack and an app configured in AWS OpsWorks, create a pipeline with AWS OpsWorks as the provider to deploy your app to your specified layer. If you update your app or your Chef deployment recipes, the pipeline runs again automatically, triggering the deployment recipe to run and deploy your updated app.

This procedure creates a simple pipeline that includes only one Source and one Deploy stage. However, you can create more complex pipelines that use AWS OpsWorks as a provider.

To create a pipeline

  1. Open the AWS CodePipeline console in the U.S. East (N. Virginia) region.
  2. Choose Create pipeline.
  3. On the Getting started with AWS CodePipeline page, type MyOpsWorksPipeline, or a pipeline name of your choice, and then choose Next step.
  4. On the Source Location page, choose Amazon S3 from the Source provider drop-down list.
  5. In the Amazon S3 details area, type the Amazon S3 bucket path to your application, in the format s3://bucket-name/file name. Refer to the link you noted in step 1.4. Choose Next step.
  6. On the Build page, choose No Build from the drop-down list, and then choose Next step.
  7. On the Deploy page, choose AWS OpsWorks as the deployment provider.
  8. Specify the names of the stack, layer, and app that you created earlier, then choose Next step.
  9. On the AWS Service Role page, choose Create Role. On the IAM console page that opens, you will see the role that will be created for you (AWS-CodePipeline-Service). From the Policy Name drop-down list, choose Create new policy. Review the policy document, and then choose Allow.
    For more information about the service role and its policy statement, see Attach or Edit a Policy for an IAM Service Role.
  10. On the Review your pipeline page, confirm the choices shown on the page, and then choose Create pipeline.

The pipeline should now start deploying your app to your OpsWorks layer on its own.  Wait for deployment to finish; you’ll know it’s finished when Succeeded is displayed in both the Source and Deploy stages.


Step 8: Verifying the app deployment

To verify that AWS CodePipeline deployed the Node.js app to your layer, browse to the instance you added in step 6. You should be able to see and use the Node.js web app.

  1. On the AWS OpsWorks dashboard, choose the stack and the layer to which you just deployed your app.
  2. In the navigation pane, choose Instances, and then choose the public IP address of your instance to view the web app. The running app will be displayed in a new browser tab.
  3. To test the app, on the app’s web page, in the Leave a comment text box, type a comment, and then choose Send. The app adds your comment to the web page. You can add more comments to the page, if you like.


Wrap-up

You now have a working and fully automated pipeline. As soon as you make changes to your application’s code and update the S3 bucket with the new version of your app, AWS CodePipeline automatically collects the artifact and uses AWS OpsWorks to deploy it to your instance, by running the OpsWorks deployment Chef recipe that you defined on your layer. The deployment recipe starts all of the operations on your instance that are required to support a new version of your artifact.

To learn more about Chef cookbooks and recipes: https://docs.chef.io/cookbooks.html

To learn more about the AWS OpsWorks and AWS CodePipeline integration: https://docs.aws.amazon.com/opsworks/latest/userguide/other-services-cp.html