

Introducing Git Credentials: A Simple Way to Connect to AWS CodeCommit Repositories Using a Static User Name and Password

by Ankur Agarwal | in How-to, New stuff

Today, AWS is introducing a simplified way to authenticate to your AWS CodeCommit repositories over HTTPS.

With Git credentials, you can generate a static user name and password in the Identity and Access Management (IAM) console that you can use to access AWS CodeCommit repositories from the command line, Git CLI, or any Git tool that supports HTTPS authentication.

Because these are static credentials, they can be cached using the password management tools included in your local operating system or stored in a credential management utility. This allows you to get started with AWS CodeCommit within minutes. You don’t need to download the AWS CLI or configure your Git client to connect to your AWS CodeCommit repository over HTTPS. You can also use the user name and password to connect to the AWS CodeCommit repository from third-party tools that support user name and password authentication, including popular Git GUI clients (such as TowerUI) and IDEs (such as Eclipse, IntelliJ, and Visual Studio).

So, why did we add this feature? Until today, users who wanted to use HTTPS connections were required to configure the AWS credential helper to authenticate their AWS CodeCommit operations. Customers told us our credential helper sometimes interfered with password management tools such as Keychain Access and Windows Vault, which caused authentication failures. Also, many Git GUI tools and IDEs require a static user name and password to connect with remote Git repositories and do not support the credential helper.

In this blog post, I’ll walk you through the steps for creating an AWS CodeCommit repository, generating Git credentials, and setting up CLI access to AWS CodeCommit repositories.


Git Credentials Walkthrough
Let’s say Dave wants to create a repository on AWS CodeCommit and set up local access from his computer.

Prerequisite: If Dave had previously configured his local computer to use the credential helper for AWS CodeCommit, he must edit his .gitconfig file to remove the credential helper information from the file. Additionally, if his local computer is running macOS, he might need to clear any cached credentials from Keychain Access.

With Git credentials, Dave can now create a repository and start using AWS CodeCommit in four simple steps.

Step 1: Make sure the IAM user has the required permissions
Dave must have the following managed policies attached to his IAM user (or their equivalent permissions) before he can set up access to AWS CodeCommit using Git credentials.

  • AWSCodeCommitPowerUser (or an appropriate CodeCommit managed policy)
  • IAMSelfManageServiceSpecificCredentials
  • IAMReadOnlyAccess

Step 2: Create an AWS CodeCommit repository
Next, Dave signs in to the AWS CodeCommit console and creates a repository, if he doesn’t have one already. He can choose any repository in his AWS account to which he has access. The instructions for creating Git credentials are shown in the help panel. (Choose the Connect button if the instructions are not displayed.) When Dave clicks the IAM user link, the IAM console opens and he can generate the credentials.


Step 3: Create HTTPS Git credentials in the IAM console
On the IAM user page, Dave selects the Security Credentials tab and clicks Generate in the HTTPS Git credentials for AWS CodeCommit section. This creates and displays the user name and password. Dave can then download the credentials.


Note: This is the only time the password is available to view or download.


Step 4: Clone the repository on the local machine
On the AWS CodeCommit console page for the repository, Dave chooses Clone URL, and then copies the HTTPS link for cloning the repository.


Then, at the command line or terminal, Dave uses the link he just copied and types:

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/TestRepo_Dave

When prompted for a user name and password, Dave provides the Git credentials (user name and password) he generated in step 3.

Dave is now ready to start pushing his code to the new repository.

Git credentials can be made active or inactive based on your requirements. You can also reset the password if you would like to use the existing user name with a new password.

Next Steps

  1. You can optionally cache your credentials using Git’s built-in credential caching, as shown in the sketch after this list.
  2. Want to invite a collaborator to work on your AWS CodeCommit repository? Simply create a new IAM user in your AWS account, create Git credentials for that user, and securely share the repository URL and Git credentials with the person you want to collaborate with.
  3. Connect to any third-party client that supports connecting to remote Git repositories using Git credentials (a stored user name and password). Virtually all tools and IDEs allow you to connect with static credentials. We’ve tested these:
    • Visual Studio (using the default Git plugin)
    • Eclipse IDE (using the default Git plugin)
    • Git Tower UI
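
For example, Dave could cache the generated credentials with Git’s built-in cache helper. This is a minimal sketch; the timeout value (in seconds) is illustrative:

$ git config --global credential.helper 'cache --timeout=900'

On macOS, he could instead store them in Keychain Access by using the osxkeychain helper:

$ git config --global credential.helper osxkeychain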

For more information, see the AWS CodeCommit documentation.

We are excited to provide this new way of connecting to AWS CodeCommit. We look forward to hearing from you about the many different tools and IDEs you will be able to use with your AWS CodeCommit repositories.

DevOps and Continuous Delivery at re:Invent 2016 – Wrap-up

by Frank Li | in New stuff

The AWS re:Invent 2016 conference was packed with some exciting announcements and sessions around DevOps and Continuous Delivery. We launched AWS CodeBuild, a fully managed build service that eliminates the need to provision, manage, and scale your own build servers. You now have the ability to run your continuous integration and continuous delivery process entirely on AWS by plugging AWS CodeBuild into AWS CodePipeline, which automates building, testing, and deploying code each time you push a change to your source repository. If you are interested in learning more about AWS CodeBuild, you can sign up for the webinar on January 20th here.

The DevOps track had over 30 different breakout sessions ranging from customer stories to deep dive talks to best practices. If you weren’t able to attend the conference or missed a specific session, here is a link to the entire playlist.


There were a number of talks that can help you get started with your own DevOps practices for rapid software delivery. Here are some introductory sessions to give you the proper background:
DEV201: Accelerating Software Delivery with AWS Developer Tools
DEV211: Automated DevOps and Continuous Delivery

After you understand the big picture, you can dive into automating your software delivery. Here are some sessions on how to deploy your applications:
DEV310: Choosing the Right Software Deployment Technique
DEV403: Advanced Continuous Delivery Techniques
DEV404: Develop, Build, Deploy, and Manage Services and Applications

Finally, to maximize your DevOps efficiency, you’ll want to automate the provisioning of your infrastructure. Here are a couple of sessions on how to manage your infrastructure:
DEV313: Infrastructure Continuous Delivery Using AWS CloudFormation
DEV319: Automating Cloud Management & Deployment

If you’re a Lambda developer, be sure to watch this session and read this documentation on how to practice continuous delivery for your serverless applications:
SVR307: Application Lifecycle Management in a Serverless World

For all 30+ DevOps sessions, click here.

Building a Microsoft BackOffice Server Solution on AWS with AWS CloudFormation

by Bill Jacobi | in Best practices, How-to, New stuff

Last month, AWS released the AWS Enterprise Accelerator: Microsoft Servers on the AWS Cloud along with a deployment guide and CloudFormation template. This blog post will explain how to deploy complex Windows workloads and how AWS CloudFormation solves the problems related to server dependencies.

This AWS Enterprise Accelerator solution deploys the four most requested Microsoft servers (SQL Server, Exchange Server, Lync Server, and SharePoint Server) in a highly available, multi-AZ architecture on AWS. It includes Active Directory Domain Services as the foundation. By following the steps in the solution, you can take advantage of the email, collaboration, communications, and directory features provided by these servers on the AWS IaaS platform.

There are a number of dependencies between the servers in this solution, including:

  • Active Directory
  • Internet access
  • Dependencies within server clusters, such as needing to create the first server instance before adding additional servers to the cluster.
  • Dependencies on AWS infrastructure, such as sharing a common VPC, NAT gateway, Internet gateway, DNS, routes, and so on.

The infrastructure and servers are built in three logical layers. The Master template orchestrates the stack builds with one stack per Microsoft server and manages inter-stack dependencies. Each of the CloudFormation stacks uses PowerShell to stand up the Microsoft servers at the OS level. Before it configures the OS, CloudFormation configures the AWS infrastructure required by each Windows server. Together, CloudFormation and PowerShell create a quick, repeatable deployment pattern for the servers. The solution supports 10,000 users. Its modularity at both the infrastructure and application level enables larger user counts.

[Diagram: MSServers solution, with six CloudFormation stacks arranged in three layers]

Managing Stack Dependencies

To explain how we enabled the dependencies between the stacks: SQLStack depends on ADStack because SQL Server depends on Active Directory; similarly, SharePointStack depends on SQLStack, both as required by Microsoft. Lync depends on Exchange because both servers must extend the AD schema independently. In Master, these server dependencies are coded in CloudFormation as follows:

"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",

             "Properties": …
       }
and
"Resources": {
       "ADStack": …AWS::CloudFormation::Stack…
       "SQLStack": {
             "Type": "AWS::CloudFormation::Stack",
             "DependsOn": "ADStack",
             "Properties": …
       },
       "SharePointStack": {
            "Type": "AWS::CloudFormation::Stack",
            "DependsOn": "SQLStack",
            "Properties": …
       }

The "DependsOn" statements in the stack definitions force the order of stack execution to match the diagram. Lower layers are executed and successfully completed before the upper layers. If you do not use "DependsOn", CloudFormation executes your stacks in parallel. An example of parallel execution is what happens after ADStack returns SUCCESS: the two higher-level stacks, SQLStack and ExchangeStack, are executed in parallel at the next level (layer 2). SharePoint and Lync are executed in parallel at layer 3. The arrows in the diagram indicate stack dependencies.

Passing Parameters Between Stacks

To see how infrastructure parameters are passed between the stack layers, let’s use an example in which we want to pass the same VPCCIDR to all of the stacks in the solution. VPCCIDR is defined as a parameter in Master as follows:

"VPCCIDR": {
            "AllowedPattern": "[a-zA-Z0-9]+\..+",
            "Default": "10.0.0.0/16",
            "Description": "CIDR Block for the VPC",
            "Type": "String"
           }

Master defines VPCCIDR and solicits user input for this value; the value is then passed to ADStack through an identically named and typed parameter declared in the stack being called:

"VPCCIDR": {
            "Description": "CIDR Block for the VPC",
            "Type": "String",
            "Default": "10.0.0.0/16",
            "AllowedPattern": "[a-zA-Z0-9]+\..+"
           }
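
For illustration, the wiring in Master’s Resources section might look like the following sketch (the TemplateURL value is a placeholder):

"ADStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/mybucket/adstack.template",
                "Parameters": {
                    "VPCCIDR": { "Ref": "VPCCIDR" }
                }
            }
}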

After Master defines VPCCIDR, ADStack can use "Ref": "VPCCIDR" in any resource (such as the security group DomainController1SG) that needs the VPC CIDR range of the first domain controller. Instead of passing identically named parameters between stacks, another option is to pass outputs from one stack as inputs to the next. For example, if you want to pass VPCID between two stacks, you could accomplish this as follows. Create an output like VPCID in the first stack:

"Outputs" : {
               "VPCID" : {
                          "Value" : { "Ref" : "VPC" },
                          "Description" : "VPC ID"
               }, …
}

In the second stack, create a parameter with the same name and type:

"Parameters" : {
               "VPCID" : {
                          "Type" : "AWS::EC2::VPC::Id"
               }, …
}

When the first template calls the second template, VPCID is passed as an output of the first template to become an input (parameter) to the second.
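
In Master, that hand-off might look like the following sketch ("FirstStack", "SecondStack", and the TemplateURL are placeholders; "Fn::GetAtt" reads the nested first stack’s output):

"SecondStack" : {
            "Type" : "AWS::CloudFormation::Stack",
            "DependsOn" : "FirstStack",
            "Properties" : {
                "TemplateURL" : "https://s3.amazonaws.com/mybucket/second.template",
                "Parameters" : {
                    "VPCID" : { "Fn::GetAtt" : [ "FirstStack", "Outputs.VPCID" ] }
                }
            }
}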

Managing Dependencies Between Resources Inside a Stack

All of the dependencies so far have been between stacks. Another type of dependency is one between resources within a stack. In the Microsoft servers case, an example of an intra-stack dependency is the need to create the first domain controller, DC1, before creating the second domain controller, DC2.

DC1, like many cluster servers, must be fully created first so that it can replicate common state (domain objects) to DC2.  In the case of the Microsoft servers in this solution, all of the servers require that a single server (such as DC1 or Exch1) must be fully created to define the cluster or farm configuration used on subsequent servers.

Here’s another intra-stack dependency example: the Microsoft servers must fully configure the Microsoft software on the Amazon EC2 instances before those instances can be used. So there is a dependency on software completion within the stack, after successful creation of the instance, before the rest of stack execution (such as deploying subsequent servers) can continue. Intra-stack dependencies like "software is fully installed" are managed through the use of wait conditions. Wait conditions are CloudFormation resources, just like EC2 instances, and they allow the "DependsOn" attribute mentioned earlier to manage dependencies inside a stack. For example, to pause the creation of DC2 until DC1 is complete, we configured the following "DependsOn" attribute using a wait condition. See (1) in the following listing:

"DomainController1": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "NATGateway1",
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "configSets": {
                        "config": [
                            "setup",
                            "rename",
                            "installADDS",
                            "configureSites",
                            "installADCS",
                            "finalize"
                        ]
                    }, …
             },
             "Properties" : …
},
"DomainController2": {
             "Type": "AWS::EC2::Instance",
[1]          "DependsOn": "DomainController1WaitCondition",
             "Metadata": …,
             "Properties" : …
},

The WaitCondition (2) relies on a CloudFormation resource called a WaitConditionHandle (3), which receives a SUCCESS or FAILURE signal from the creation of the first domain controller:

"DomainController1WaitCondition": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "DomainController1",
            "Properties": {
                "Handle": {
[2]                    "Ref": "DomainController1WaitHandle"
                },
                "Timeout": "3600"
            }
     },
     "DomainController1WaitHandle": {
[3]            "Type": "AWS::CloudFormation::WaitConditionHandle"
     }

SUCCESS is signaled in (4) by cfn-signal.exe -e 0 during the "finalize" step of DC1, which enables CloudFormation to create DC2 as an EC2 resource via the wait condition.

                "finalize": {
                       "commands": {
                           "a-signal-success": {
                               "command": {
                                   "Fn::Join": [
                                       "",
                                       [
[4]                                            "cfn-signal.exe -e 0 "",
                                           {
                                               "Ref": "DomainController1WaitHandle"

                                            },
                                           """
                                       ]
                                   ]
                               }
                           }
                       }
                   }
               }

If the timeout had been reached in step (2), this would have automatically signaled a FAILURE and stopped stack execution of ADStack and the Master stack.

As we have seen in this blog post, you can create both nested stacks and nested dependencies and can pass parameters between stacks by passing standard parameters or by passing outputs. Inside a stack, you can configure resources that are dependent on other resources through the use of wait conditions and the cfn-signal infrastructure. The AWS Enterprise Accelerator solution uses both techniques to deploy multiple Microsoft servers in a single VPC for a Microsoft BackOffice solution on AWS.  

In a future blog post, we will illustrate how PowerShell can be used to bootstrap and configure Windows instances with downloaded cmdlets, all integrated into CloudFormation stacks.

AWS CodeDeploy Deployments with HashiCorp Consul

by George Huang | in How-to, New stuff

Learn how to use AWS CodeDeploy and HashiCorp Consul together for your application deployments. 

AWS CodeDeploy automates code deployments to Amazon Elastic Compute Cloud (Amazon EC2) and on-premises servers. HashiCorp Consul is an open-source tool providing service discovery and orchestration for modern applications. 

Learn how to get started by visiting the guest post on the AWS Partner Network Blog. You can also see a full list of CodeDeploy product integrations here.

Using Custom JSON on AWS OpsWorks Layers

by Daniel Huesch | in How-to, New stuff

Custom JSON, which has always been available on AWS OpsWorks stacks and deployments, is now also available as a property on layers in stacks using Chef versions 11.10, 12, and 12.2.

In this post I show how you can use custom JSON to adapt a single Chef cookbook to support different use cases on individual layers. To demonstrate, I use the example of a MongoDB setup with multiple shards.

In OpsWorks, each instance belongs to one or more layers, which in turn make up a stack. You use layers to specify details about which Chef cookbooks are run when the instances are set up and configured, among other things. When your stacks have instances that serve different purposes, you use different cookbooks for each.

Sometimes, however, there are only small differences between the layers and they don’t justify using separate cookbooks. For example, when you have a large MongoDB installation with multiple shards, you would have a layer per shard, as shown in the following figure, but your cookbooks wouldn’t necessarily differ.

[Figure: An OpsWorks stack with one MongoDB layer per shard]

Let’s assume I’m using the community cookbook for MongoDB. I would configure this cookbook using attributes; the attribute for setting the shard name would be node[:mongodb][:shard_name]. But let’s say that I want to set a certain attribute for any deployment to any instance in a given layer. I would use custom JSON to set that attribute.
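
For example, the custom JSON on the layer for the first shard might look like the following sketch (the attribute name follows the community MongoDB cookbook; the shard name is illustrative):

{
  "mongodb": {
    "shard_name": "shard1"
  }
}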

When declared on a stack, custom JSON always applies to all instances, no matter which layer they’re in. Custom JSON declared on a deployment is helpful for one-off deployments with special settings; but, the provided custom JSON doesn’t stick to the instances you deploy to, so a subsequent deployment doesn’t know about custom JSON you might have specified in an earlier deployment.

Custom JSON declared on the layer applies to each instance that belongs to that layer. Like custom JSON declared on the stack, it’s permanently stored and applied to all subsequent deployments. So you just need to edit each layer and set the right shard, as shown in the following figure:

[Figure: Editing each layer’s custom JSON to set the shard name]

During a Chef run, OpsWorks makes custom JSON contents available as attributes. That way the settings are available in the MongoDB cookbook and configure the MongoDB server accordingly. For details about using custom JSON content as an attribute, see our documentation.
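
A recipe can then read the merged setting from the node object; a minimal sketch:

# The layer's custom JSON is merged into the node object during the Chef run
shard_name = node[:mongodb][:shard_name]
Chef::Log.info("Configuring MongoDB shard: #{shard_name}")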

Custom JSON declared on the deployment overrides custom JSON declared on the stack. Custom JSON declared on the layer sits in between those two. So you can use it on the layer to override stack settings, and on the deployment to override stack or layer settings.

Using custom JSON gives you a way to tweak a setting for all instances in a given layer without having to affect the entire stack, and without having to provide custom JSON for every deployment.

Continuous Delivery for a PHP Application Using AWS CodePipeline, AWS Elastic Beanstalk, and Solano Labs

by David Nasi | in How-to, New stuff, Web app

My colleague Joseph Fontes, an AWS Solutions Architect, wrote the guest post below to discuss continuous delivery for a PHP application using AWS CodePipeline, AWS Elastic Beanstalk, and Solano Labs.


Solano Labs recently integrated Solano CI with AWS CodePipeline, a continuous delivery service for fast and reliable application updates. Solano Labs provides CI/CD capabilities to a variety of organizations, such as Airbnb, Change.org, and Apptio.  Solano CI is an enterprise-grade, scalable continuous integration (CI) and continuous deployment (CD) SaaS solution.  CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models that you define. You can now take advantage of the CI/CD capabilities long enjoyed by Solano Labs customers from within CodePipeline and with all of the ease of using the AWS Management Console.

In this post, we demonstrate how to use Solano CI with CodePipeline to test a PHP application using PHPUnit, and then deploy the application to AWS Elastic Beanstalk (Elastic Beanstalk). 

You will learn how to:

  • Deploy a sample PHP application to Elastic Beanstalk
  • Create a CD tool chain to push your code to Elastic Beanstalk
  • Connect your GitHub source repository to CodePipeline
  • Set up a Solano CI build stage to build your application and perform integration tests
  • Deploy your tested application to Elastic Beanstalk 

After you have completed these steps, your code will be continuously tested and delivered safely in an automated fashion.

To follow along, you need to set up an account with Solano Labs and have a GitHub account and a repository for the demo. 

1. To create an account with Solano Labs, go to http://docs.solanolabs.com/introduction/.

2. If you don’t already have a GitHub account, create one. Also, create the repository for this demo. For instructions for both, go to

https://help.github.com/articles/signing-up-for-a-new-github-account/

https://help.github.com/articles/creating-a-new-repository/

As a part of the deployment process, you will use the PHPUnit testing framework on the demonstration application we’ve posted to our Elastic Beanstalk environment. For more information about PHPUnit and Solano Labs PHPUnit integration, go to https://phpunit.de/ and http://docs.solanolabs.com/ConfiguringLanguage/php/.

Now, let’s start the demo.

1. Clone the application code from the following location into your Git repository: https://github.com/awslabs/aws-demo-php-simple-app.git.

2. Create the destination application in Elastic Beanstalk by following the instructions at http://docs.aws.amazon.com/gettingstarted/latest/deploy/deploying-with-elastic-beanstalk.html.

You need to choose specific options for your configuration. For Predefined configuration, choose PHP instead of Node.js.

When asked to provide an archive of an application to get started, download the following .zip file to your local machine and upload it to the Elastic Beanstalk application.

3. Finish configuring the Elastic Beanstalk application and environment.

4. Ensure that the configuration is running properly by testing it in a web browser.  You should see a page similar to this:

5. Next, create a CodePipeline pipeline to establish the continuous delivery process.  From the AWS Management Console, choose Services, and then choose AWS CodePipeline.  If this is the first time you’ve created a pipeline, CodePipeline displays the following page:


6. Choose Get started.

7. Enter a name for your pipeline. For this example, we use “solano-eb-build”.   Choose Next step.

8. Now define your source provider.  CodePipeline provides direct integration between GitHub repositories and versioned Amazon S3 locations.  After you define your source, CodePipeline tracks changes committed to the source and performs actions that you will define.

For Source provider, choose GitHub. You may be asked to log in to your GitHub account to proceed.  Under Connect to GitHub, you’ll see a variety of repositories and branches.  Choose the repository and branch that you just created, and then choose Next step.

9. For Build provider, choose Solano CI, and then choose Connect.  You may be asked to log in to your Solano CI account.  When you are redirected to the Solano site, confirm the connection between CodePipeline and Solano CI by choosing Connect.  Then, choose Next step on the Create pipeline page.


10. For Deployment provider, choose AWS Elastic Beanstalk, and then choose the Application name and the Environment name of the Elastic Beanstalk environment you created earlier.  Choose Next step.

An AWS Identity and Access Management (IAM) role provides the permissions necessary for CodePipeline to perform the required build actions and service calls.  If you already have a role that you want to use with the pipeline, choose it for Role name. Otherwise, choose Create role to create a role with sufficient permissions to perform the build and push tasks.  Review these predefined permissions, and then accept them.  For information about IAM, see http://docs.aws.amazon.com/codepipeline/latest/userguide/access-permissions.html. Then choose Next step.


11. Review the information and make any necessary changes, and then choose Create pipeline.

You’ll get confirmation that the pipeline’s been created:

12. Now that you have a pipeline and the initial version of the application running, let’s make some changes.  In your Git directory, open the www/index.php file with a text editor.  Change the value of:

$AppName = "Demo Web App";

to

$AppName = "PHP Web App";

Save the file and check your changes into Git, as follows.
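
A typical sequence looks like this sketch (the commit message is illustrative):

$ git add www/index.php
$ git commit -m "Rename application to PHP Web App"
$ git push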


You can see that the name of the application rendered in the web browser has been changed:

13. Check on the status of the build process by viewing the CodePipeline console:


You can also see the build status on the Solano Labs dashboard and build details page:


You have leveraged Solano’s built-in capabilities to test the functionality of the updated code.

In the Git repository, there are six tests across two files.  If you look at the phpunit directory in the Git working directory, you will see two files: indexTest.php and loadGenTest.php.  loadGenTest.php tests the load generation feature of the demo application.  Testing this functionality ordinarily requires generating load, which takes time.  We can take advantage of Solano CI’s parallel PHPUnit testing to spread that load across parallel workers.

PHPUnit test definitions:
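
For illustration, a test in indexTest.php might look something like the following hypothetical sketch (the actual tests in the demo repository differ):

<?php
// Hypothetical sketch of a PHPUnit test like those in phpunit/indexTest.php.
class IndexTest extends PHPUnit_Framework_TestCase
{
    public function testAppNameIsDefined()
    {
        $source = file_get_contents(__DIR__ . '/../www/index.php');
        $this->assertTrue(strpos($source, '$AppName') !== false);
    }
}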


In the next screenshots, you can see the file responsible for PHPUnit test configuration and the solano.yml configuration file that invokes the PHPUnit testing:

This walkthrough demonstrates just a few of the many unique capabilities available when you integrate Solano CI into the CodePipeline build process.  Using Solano CI expands the portfolio of technologies available for use within your AWS CI/CD implementations.  You can expand the capabilities of this one example by pushing unique GitHub branches to different development, testing, and production environments.  You can also leverage other CodePipeline integrations to take advantage of AWS CodeDeploy and a collection of specialized testing services.

Now that you have the build process running, make some changes and observe the process flow from within the CodePipeline console, AWS Elastic Beanstalk console, and the Solano Labs’ Solano CI console.  After you check changes into the GitHub repository that you used for this demonstration, updates are automatically dispatched to your new Elastic Beanstalk application.


AWS OpsWorks Now Supports Chef 12 for Linux

by Daniel Huesch | in New stuff

Update: In the meantime our friends at Chef published a post that walks you through deploying a Django app on AWS OpsWorks using Chef 12. Go check it out!

In addition to providing Chef 12 support for Windows, AWS OpsWorks (OpsWorks) now supports Chef 12 for Linux operating systems. This release benefits users who want to take advantage of the large selection of community cookbooks or want to build and customize their own cookbooks.

You can use the latest release of Chef 12 on Linux-based stacks, which currently run Chef Client 12.5.1. (For those of you concerned about future Chef Client upgrades, be assured that new versions of the Chef 12.x client will be made available shortly after their public release.) OpsWorks now also prevents cookbook namespace conflicts by using two separate Chef runs (OpsWorks’s Chef run and yours run independently).

Use Chef Supermarket Cookbooks

Because this release focuses on providing you with full control and flexibility when using your own cookbooks, built-in layers and cookbooks (PHP, Rails, Node.js, MySQL, and so on) will no longer be available for Chef 12. Instead, Chef 12 users can use OpsWorks to leverage up-to-date community cookbooks to support the creation of custom layers. A Chef 12 Node.js sample stack (on Windows and Linux) is now available in the OpsWorks console. We’ll provide additional examples in the future.

"With the availability of the Chef 12 Linux client, AWS OpsWorks customers can now leverage shared Chef Supermarket cookbooks for both Windows and Linux workloads. This means our joint customers can maximize the full potential of the vibrant open source Chef Community across the entire stack."

– Ken Cheney, Vice President of Business Development, Chef

Chef 11.10 and earlier versions for Linux will continue to support built-in layers. The built-in cookbooks will continue to be available at https://github.com/aws/opsworks-cookbooks/tree/release-chef-11.10.

Beginning in January 2016, you will no longer be able to create Chef 11.4 stacks using the OpsWorks console. Existing Chef 11.4 stacks will continue to operate normally, and you will continue to be able to create stacks with Chef 11.4 by using the API.

Use Chef Search

With Chef 12 Linux, you can use Chef search, which is the native Chef way to obtain information about stacks, layers, instances, and stack resources, such as Elastic Load Balancing load balancers and RDS DB instances. The following examples show how to use Chef search to get information and to perform common tasks. A complete reference of available search indices is available in our documentation.

Use Chef search to retrieve the stack’s state:

search(:node, "name:web1")
search(:node, "name:web*")

Map OpsWorks layers as Chef roles:

appserver = search(:node, "role:my-app").first
Chef::Log.info("Private IP: #{appserver[:private_ip]}")

Use Chef search to retrieve hostnames, IP addresses, instance types, Amazon Machine Images (AMIs), Availability Zones (AZs), and more:

search(:aws_opsworks_app, "name:myapp")
search(:aws_opsworks_app, "deploy:true")
search(:aws_opsworks_layer, "name:my_layer*")
search(:aws_opsworks_rds_db_instance)
search(:aws_opsworks_volume)
search(:aws_opsworks_ecs_cluster)
search(:aws_opsworks_elastic_load_balancer)
search(:aws_opsworks_user)

Use Chef search for ad hoc resource discovery, for example, to find the database connection information for your applications or to discover all available app server instances when configuring a load balancer, as in the sketch below.
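
Here is a minimal sketch of that last case (the role name, template, and file paths are illustrative):

# Discover all instances in the app server layer via Chef search
app_servers = search(:node, "role:app_server")

# Render the discovered private IPs into a load balancer configuration
template "/etc/haproxy/haproxy.cfg" do
  source "haproxy.cfg.erb"
  variables(servers: app_servers.map { |s| s[:private_ip] })
end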

Explore a Chef 12 Linux or Chef 12.2 Windows Stack

To explore a Chef 12 Linux or Chef 12.2 Windows stack, simply select the “Sample stack” option in the OpsWorks console:

To create a Chef 12 stack based on your Chef cookbooks, choose Linux as the Default operating system:

Use any Chef 12 open source community cookbook from any source, or create your own cookbooks. OpsWorks’s built-in operational tools continue to empower you to manage your day-to-day operations.

Integrating AWS OpsWorks and AWS CodeCommit

by Nitin Verma | in How-to, New stuff

Take advantage of CodeCommit as a repository for OpsWorks now!

AWS OpsWorks (OpsWorks) can automatically fetch apps and Chef cookbooks from Git repositories, among other sources. This post shows how AWS OpsWorks can use the new Git-based repository service, AWS CodeCommit (CodeCommit), to fetch and deploy an application stored in a CodeCommit repository.

Unlike other Git-based services, CodeCommit uses AWS Identity and Access Management (IAM) users, groups, and their policies to allow access to the repositories. To connect to CodeCommit, you need an IAM user with the required CodeCommit access permissions (a public SSH key and the appropriate IAM policies). After you’ve done that, you simply create and deploy your app using OpsWorks.

Step 1: Set up an IAM user and SSH keys

Begin by creating an IAM user and attaching a policy to grant the user access to CodeCommit.

  • In the IAM console, choose Users, and then choose Create new users. On the Create new users page, in the Enter User Name field, type codecommit_deploy, for this example, and then choose Create.
     
  • Attach a policy to grant the user access to CodeCommit. The built-in IAM policy, AWSCodeCommitReadOnly, is sufficient for deployment purposes. In this example, codecommit_deploy is the IAM user. For more information, see Setting Up for AWS CodeCommit.
     
  • Create an SSH key pair on your local machine (for example, by using ssh-keygen), making sure the key pair has no password or passphrase. In this example, ~/.ssh/codecommit_id_rsa is the private SSH key and ~/.ssh/codecommit_id_rsa.pub is the public SSH key. For more information, see Setting Up for SSH Connection.
     
  • Use the IAM console or AWS CLI to upload the SSH public key (~/.ssh/codecommit_id_rsa.pub) to the codecommit_deploy user. If you use the CLI, the CLI user must have an IAM policy with the iam:UploadSSHPublicKey action set to Allow. For example:
{
  "Sid": "Stmt1439339776000",
  "Effect": "Allow",
  "Action": [
    "iam:UploadSSHPublicKey"
  ],
  "Resource": [
    "*"
  ]
}
  • If you use the console, navigate to Users, choose codecommit_deploy, and then scroll down to SSH keys for AWS CodeCommit. Choose Upload SSH Key, and copy and paste the contents of the public SSH key, ~/.ssh/codecommit_id_rsa.pub.
     
  • Make a note of the SSH key ID, which is like a user name and is required to access the CodeCommit repository. In this example, the SSH key ID is APKAJN47QZ7VONJX7A3Q.
     
  • (Optional) Test the SSH credentials on your local machine.
$ cat ~/.ssh/config 
Host git-codecommit.*.amazonaws.com
   User APKAJN47QZ7VONJX7A3Q
   IdentityFile ~/.ssh/codecommit_id_rsa

$ ssh git-codecommit.us-east-1.amazonaws.com
You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. Interactive shells are not supported.
Connection to git-codecommit.us-east-1.amazonaws.com closed by remote host.

Step 2: Create and deploy an OpsWorks app

Now, create your app and deploy it:

  • In the OpsWorks console, choose the stack, then Apps, either add a new app or edit an existing app, and for Application Source, choose Git.
     
  • For Repository URL, type ssh://APKAJN47QZ7VONJX7A3Q@git-codecommit.us-east-1.amazonaws.com:v1:repos:myapp, where the SSH URL is obtained from the CodeCommit console and APKAJN47QZ7VONJX7A3Q is your own SSH key ID (created during step 1).
  • Under Repository SSH key, add the contents of the private SSH key file (~/.ssh/codecommit_id_rsa, in this case).
     
  • Add the app, and then deploy it as usual!

Step 3: Verify the Deployment

Check the OpsWorks deployment logs. If you see something similar to the following, your deployment succeeded:

[2015-07-29T04:29:40+00:00] INFO: deploy[/srv/www/myapp] cloning repo ssh://APKAJN47QZ7VONJX7A3Q@git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp to /srv/www/myapp/shared/cached-copy

[2015-07-29T04:29:46+00:00] INFO: deploy[/srv/www/myapp] checked out branch: HEAD onto: deploy reference: 0a76794607c4a26369be3fdd855acd590c3be7bb

Conclusion

By following these instructions, you should be able to deploy OpsWorks apps or cookbooks from a CodeCommit repository. The AWS CodeCommit team is working to improve its integration with other AWS services. Stay tuned for updates!

Integrating AWS CodeCommit with Review Board

by Wade Matveyenko | in How-to, New stuff

Today we have a guest post from Jeff Nunn, a Solutions Architect at AWS, specializing in DevOps and Big Data solutions.

By now you’ve probably heard of AWS CodeCommit, a secure, highly scalable, managed source control service that hosts private Git repositories. AWS CodeCommit supports the standard functionality of Git, allowing it to work seamlessly with your existing Git-based tools. In addition, CodeCommit works with Git-based code review tools, allowing you and your team to better collaborate on projects. By the end of this post, you will have launched an EC2 instance, configured the AWS CLI, and integrated CodeCommit with a code review tool.

What is Code Review?

Code review (sometimes called "peer review") is the process of making source code available for other collaborators to review. The intention of code review is to catch bugs and errors and improve the quality of code before it becomes part of the product. Most code review systems provide contributors the ability to capture notes and comments about the changes to enable discussion of the change, which is useful when working with distributed teams.

We’ll show you how to integrate AWS CodeCommit into your development workflow using the Review Board code review system.

Getting Started

If you’re reading this, you most likely are familiar with Git and have it installed. To work with files or code in AWS CodeCommit repositories, you must install Git on your local machine, if you haven’t installed it already. AWS CodeCommit supports Git versions 1.7 and later.

In addition, you’ll need to have an AWS Identity and Access Management (IAM) user with an appropriate AWS CodeCommit permissions policy attached. Follow the instructions at Set Up Your IAM User Credentials for Git and AWS CodeCommit to give your user(s) access.

While this post covers integration with Review Board, you can take what you learn here and integrate with your favorite code review tools. We’ll soon publish integration methods for other tools, like Gerrit, Phabricator, and Crucible. When you have completed the above prerequisites, you are ready to continue.

Review Board

Review Board is a web-based collaborative code review tool for reviewing source code, documentation, and other text-based files. Let’s integrate Review Board with a CodeCommit repo. You can integrate CodeCommit with an existing Review Board server or set up a new one. If you already have Review Board set up, you can skip to Step 2: Setting Up the Review Board Server.

Step 1: Creating a Review Board Server

To set up a Review Board server, we turn to the AWS Marketplace. The AWS Marketplace has a rich ecosystem of Independent Software Vendors (ISVs) and partners that AWS works with, and there you will find many pre-built Amazon Machine Images (AMIs) to help save you time and effort when setting up software or application stacks.

We launch an EC2 instance based on a public Review Board AMI from Bitnami. From the EC2 console, click the Launch Instance button. From Choose an Amazon Machine Image (AMI), click the "AWS Marketplace" link, and then search for "review board hvm".

In the search results returned, select "Review Board powered by Bitnami (HVM)". While some products in the AWS Marketplace do have an additional cost to use them, you’ll notice that there is no additional cost to run Review Board from Bitnami. Click the Select button to choose this image, and you are taken to the "Choose Instance Type" step. By default, the Review Board AMI selects an m3.medium instance to launch into, but you can choose any instance type that fits your needs. Click the Review and Launch button to review the settings for your instance. Scroll to the bottom of the screen, click the Edit Tags link, and create a "Name" tag with a descriptive value:

Click the Review and Launch button again, and then click the Launch button. Verify that you have a key pair that will connect to your instance, and then click the Launch Instance button.

After a short time, your instance should successfully launch, and be in a "running" state:

Because we used Bitnami’s prebuilt AMI for our install, the majority of the configuration is done for us, including the creation of an administrative user and password. To retrieve the password, select the instance, click the Actions button, and then click "Get System Log." You can find more information on this process at Bitnami’s website for retrieving AWS Marketplace credentials.

Scroll until you’re near the bottom of the log, and find the "Bitnami application password." You’ll need this to login to your Review Board server in Step 2.

Step 2: Setting Up The Review Board Server

SSH into your EC2 instance. If you’ve installed Review Board with the Bitnami AMI, you’ll need to log in as the "bitnami" user rather than the "ubuntu" user. Download and install the AWS CLI, if you haven’t done so already. This is a prerequisite to interacting with AWS CodeCommit from the command line. For more information, see Getting Set Up with the AWS Command Line Interface.

Note: Although the Review Board AMI comes with Python and pip ("pip" is a package manager for Python), you’ll need to re-install pip before installing the AWS CLI. From the command line, type:

> curl -O https://bootstrap.pypa.io/get-pip.py

and then:

> sudo python get-pip.py

Follow the instructions from the "Install the AWS CLI using pip" section, and then configure the command line to work with your AWS account. Be sure to specify "us-east-1" as your default region, as CodeCommit currently only works from this region.

Configure the AWS CodeCommit Credential Helper

The approach that you take to set up your IAM user credentials for Git and AWS CodeCommit on your local machine depends on the connection protocol (HTTPS) and operating system (Windows, Linux, OS X, or Unix) that you intend to use.

For HTTPS, you allow Git to use a cryptographically signed version of your IAM user credentials whenever Git needs to authenticate with AWS in order to interact with repositories in AWS CodeCommit. To do this, you install and configure on your local machine what we call a credential helper for Git. (Without this credential helper, you would need to manually sign and resubmit a cryptographic version of your IAM user credentials whenever Git needs to authenticate with AWS. The credential helper automatically manages this process for you.)

Follow the steps in Set up the AWS CodeCommit credential helper for Git depending on your desired connection protocol and operating system.

Create or Clone a CodeCommit Repository

Now that you have your Review Board server setup, you’ll need to add an AWS CodeCommit repository to connect to. If you have not yet created an AWS CodeCommit repository, follow the instructions here to create a new repository, and note the new AWS CodeCommit repository’s name.

If you have an existing AWS CodeCommit repository but you do not know its name, you get the name by following the instructions in View Repository Details.

Once you have your AWS CodeCommit repository name, you will create a local repo on the Review Board server. Change to a directory in which you will store the repository, and clone the repository. Cloning to your home directory is shown in the following example:

> cd ~
> git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo my-demo-repo

Setting up a Repository in Review Board

Now that you have cloned your repo to your Review Board server, we need to configure Review Board to watch your repo. Visit the Public DNS address of your EC2 instance, and login to the Review Board administration panel, which will look like http://ec2-dns-address-of-your-instance.amazonaws.com/admin/.  The username is “user,” and the password is the password you saved from the log file in Step 1.

After logging in, you are taken to the admin dashboard. Create a new repository from the Manage section of the admin menu. Fill in the name, keep the “Hosting service” at “None”, and select “Git” as the “Repository type.” For the path, enter the path to your cloned repository, including the “.git” hidden folder, as seen in the example below:


Finally, click the Save button. Back on the Manage section of the admin menu, click the Repositories link to be taken to a dashboard of the repositories on your Review Board system. Next to your repository, click the RBTools Setup link. Follow the instructions to create a .reviewboardrc file in your repository:
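
The generated .reviewboardrc is a small Python-syntax file; a minimal sketch (the server URL and repository name are placeholders for your own values) looks like this:

REVIEWBOARD_URL = "http://ec2-dns-address-of-your-instance.amazonaws.com"
REPOSITORY = "my-demo-repo"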

Commit the file to your repository and push it to AWS CodeCommit with a "git push" command. RBTools will then be able to find the correct server and repository when any developer posts changes for review.

Step 3: Setting up Your Client(s)

Now that you have a Review Board server setup, you’ll need to install AWS CodeCommit and RBTools. For each client machine, configure the AWS CLI and AWS CodeCommit similar to the way you did on the Review Board server. Then install RBTools, which will allow you to post your changes to the Review Board server, and let other collaborators comment on those changes. You may also want to create user accounts in the Review Board dashboard for each developer that will submit reviews. For the purposes of this demo, create at least one additional user to serve as a reviewer for your code review requests.

Step 4: Using Review Board in your AWS CodeCommit Workflow

You now have a Review Board server integrated with AWS CodeCommit, a client from which to send review requests, and an additional Review Board user to assign review requests to. Let’s take a look at a typical Git workflow with Review Board.

In many projects, feature work or bug fixes are first done in a Git branch, then moved into the main branch (sometimes called master or mainline) after a testing or review process.

Let’s go through a simple example to demonstrate branching, merging, and reviewing in an AWS CodeCommit and Review Board workflow. We’ll take a fictitious cookbook project and add recipes to it, and have a reviewer accept your changes before you merge them into your AWS CodeCommit project’s master branch.

Creating a Review Request

Create a branch in your project from which to add a new file:

> git checkout -b pies
Switched to a new branch 'pies'

Now, add a new file to this branch (you could also modify an existing file, but for the sake of this demo, we will create a new one).

> echo "6 cups of sliced, peeled apples." > applepie.txt

You’ve now added the beginning of a new pie recipe to your cookbook. Ideally, you would now run unit tests to verify the validity of your work, or similarly validate that your code was functional and did not break other parts of your code base.

Add this recipe to your repo, and give it a meaningful commit message:

> git add .
> git commit -m "beginning apple pie recipe"
[pies 5d2a678] beginning apple pie recipe
1 file changed, 1 insertion(+)
create mode 100644 applepie.txt

You have added a new file to a branch in your project; let’s share it with your reviewers. We use rbt post, along with our branch name, to post it to the Review Board server for review. On your first post to Review Board, you will be asked for a username and password, and upon a successful post to the Review Board server, you are given a review request URL.

> rbt post pies
Review request #1 posted.

http://ec2-dns-address-of-your-instance.amazonaws.com:80/r/1/
http://ec2-dns-address-of-your-instance.amazonaws.com:80/r/1/diff/

We specified our branch name, "pies", in the "rbt post" command, which automatically chooses the latest commit in our commit history to send to the Review Board server. You can post any commit in your history, however, by specifying the commit ID, which you can retrieve by issuing a "git log" command. For example, if you add additional pie recipes across several commits, you could choose a specific commit to send to the Review Board server.

> git log

commit 1d1bfc579bac494ae656eae9ce6ee23cae3f146b
Author: username <user@email.com>
Date: Mon May 11 10:37:12 2015 -0500

  Blueberry pie 

commit 468f20fc4272691a409ef21dc0d6eaab27c1ab35
Author: username <user@email.com>
Date: Mon May 11 10:35:22 2015 -0500 

  Cherry and chocolate pie recipes 

> rbt post 468f20
Review request #2 posted.

http://ec2-dns-address-of-your-instance.amazonaws.com:80/r/2/
http://ec2-dns-address-of-your-instance.amazonaws.com:80/r/2/diff/

Now that we have sent our apple pie recipe (and any additional recipes you may have created) to the Review Board server for review, let’s log in to walk through the process of assigning them to be reviewed.

Log in to your Review Board account and visit your dashboard. Under the Outgoing menu on the left-hand side, you’ll see a count of "All" and "Open" requests. Click "All," and then click the "[Draft] beginning apple pie recipe" request.

Edit your description or add comments to any testing you have done, and then assign a reviewer by clicking the pencil icon next to "People" under the Reviewers section:

Finally, click the Publish button to publish your review request and assign it to one or more reviewers.

Reviewing a Change Request

Now that we have at least one review request assigned to a user, log in as that user on the Review Board server. On your dashboard under the Incoming section, click the "To Me" link. Find the "beginning apple pie recipe" request and click it to be taken to the review request details page. Click the View Diff button in the summary toolbar to view the changes made to this file. Since this was a new file, you will only see one change. Click the Review button in the summary toolbar to add your review comments to this request. When you are finished with your comments, click the Publish Review button.

As a reviewer, we are satisfied with the modifications to the file. We could check the "Ship It" box, or click the Ship It button in the summary toolbar after we publish the review. We have now indicated that the code is ready to be merged into the master branch.

Log in again as the user who submitted the request, and notice two new icons next to your request. The speech bubble indicates you have comments available to view, and the green check oval indicates that your code is ready to be merged into your master AWS CodeCommit branch.

View the comments from your reviewer, and notice that your code is ready to be shipped.

Merging Your Commit

There are several viable ways to merge a branch into a master branch, like cherry-picking a single commit from a branch or bringing in all commits. We keep things simple here and merge the commit(s) from the pies branch into master. Then we push the updated code to the AWS CodeCommit Git repo.

> git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.

> git merge pies
Updating 304d704..9ab13cf
Fast-forward
 applepie.txt | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 applepie.txt

> git push

Counting objects: 2, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 226 bytes | 0 bytes/s, done.
Total 2 (delta 1), reused 0 (delta 0)

remote:
To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo

   9ab13cf..a0e3119 master -> master

Conclusion

Congratulations! You integrated Review Board with AWS CodeCommit: you created a feature development branch, wrote code, submitted it for review, and merged it into your master branch after acceptance. You seamlessly combined a secure, highly scalable, managed service for your Git repositories with a code review tool, and now you can ship your code faster, cleaner, and with more confidence. In future posts, we’ll show you how to integrate AWS CodeCommit with other common Git tools.

Integrating AWS CodeCommit with Jenkins

by Rob Brigham | in How-to, New stuff

Today we have a guest post written by Emeka Igbokwe, a Solutions Architect at AWS.

This post walks you through the steps to set up Jenkins and AWS CodeCommit to support two simple continuous integration (CI) scenarios.

In the first scenario, you will make a change in your local Git repository, push the change to your AWS CodeCommit hosted repository, and have the change trigger a build in Jenkins.

In the second scenario, you will make a change on a development branch in your local Git repository and push the change to your AWS CodeCommit hosted repository. The change triggers a merge from the development branch to the master branch and a build of the merged master branch; on a successful build, the merged master branch is pushed back to the AWS CodeCommit hosted repository. A sketch of what that second job might run appears below.
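
The shell build step of such a Jenkins job might run commands along these lines (a hypothetical sketch; the branch and file names are illustrative, and Jenkins’s default shell step stops at the first failing command, so the push happens only on a successful build):

# Merge the development branch into master, build, and push on success
git fetch origin
git checkout master
git merge origin/development
javac HelloWorld.java
git push origin master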

For the walkthrough, we will run the Jenkins server on an Amazon Linux Instance and configure your workstation to access the Git repository hosted by AWS CodeCommit.

Set Up IAM Permissions

AWS CodeCommit uses IAM permissions to control access to the Git repositories. 

For this walkthrough, you will create an IAM user, an IAM role, and a managed policy. You will attach the managed policy to the IAM user and the IAM role, granting both the user and role the permissions to push and pull changes to and from the Git repository hosted by AWS CodeCommit.  

You will associate the IAM role with the Amazon EC2 instance you launch to run Jenkins. (Jenkins uses the permissions granted by the IAM role to access the Git repositories.)  

  1. Create an IAM user. Save the access key ID and the secret access key for the new user. 
  2. Attach the managed policy named AWSCodeCommitPowerUser to the IAM user you created.
  3. Create an Amazon EC2 service role named CodeCommitRole and attach the managed policy (AWSCodeCommitPowerUser) to it.

Set Up Your Development Environment

Install Git and the AWS CLI on your workstation.

Windows:

  1. Install Git on Windows
  2. Install the AWS CLI using the MSI Installer.

Linux or Mac:

  1. Install Git on Linux or Mac.
  2. Install the AWS CLI using the Bundled Installer.

After you install the AWS CLI, you must configure it using your IAM user credentials.

aws configure

Enter the AWS access key and AWS secret access key for the IAM user you created; enter us-east-1 for the region name; and enter json for the output format. 

AWS Access Key ID [None]: Type your target AWS access key ID here, and then press Enter
AWS Secret Access Key [None]: Type your target AWS secret access key here, and then press Enter
Default region name [None]: Type us-east-1 here, and then press Enter
Default output format [None]: Type json here, and then press Enter

Configure Git to use your IAM credentials and an HTTP path to access the repositories hosted by AWS CodeCommit.

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.useHttpPath true

Create your central Git repository in AWS CodeCommit. 

aws codecommit create-repository --repository-name DemoRepo --repository-description "demonstration repository"
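
The call returns the repository metadata as JSON; the cloneUrlHttp value is the URL you will clone below (abridged example):

{
    "repositoryMetadata": {
        "repositoryName": "DemoRepo",
        "cloneUrlHttp": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepo",
        ...
    }
}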

Set your user name and email address.

git config --global user.name "Your Name"
git config --global user.email "Your Email Address"

Create a local copy of the repository.

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepo

Change directory to the local repository.

cd DemoRepo

In the editor of your choice, copy and paste the following into a file and save it as HelloWorld.java.

class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!"); 
    }
}
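
You can compile the file locally to confirm it builds; this is the same command you will configure later as the Jenkins build step.

javac HelloWorld.java    # produces HelloWorld.class; no output means the compile succeeded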

In the same directory where you created HelloWorld.java, run the following git commands to commit and push your change.

git add HelloWorld.java
git commit -m "Added HelloWorld.java"
git push origin master

Set Up the Jenkins Server

Create an instance using the Amazon Linux AMI. Make sure you associate the instance with the CodeCommitRole role and configure the security group associated with the instance to allow incoming traffic on ports 22 (SSH) and 8080 (Jenkins). You may further secure your server by restricting access to only the IP addresses of the developer machines connecting to Jenkins.
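
A minimal sketch of that launch with the AWS CLI follows; the AMI ID, key pair name, security group ID, and CIDR range are placeholders for your own values.

# Open the SSH and Jenkins ports to your developers' address range only
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 8080 --cidr 203.0.113.0/24

# Launch the Amazon Linux instance with the CodeCommitRole instance profile attached
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my-key-pair --security-group-ids sg-12345678 --iam-instance-profile Name=CodeCommitRole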

Use SSH to connect to the instance. Update the AWS CLI and install Git and the Java JDK. (You will install Jenkins in the next step.)

sudo yum install -y git java-1.8.0-openjdk-devel
sudo yum update -y aws-cli

Add the Jenkins repository and install Jenkins.

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install -y jenkins 

Configure the AWS CLI.

cd ~jenkins
sudo -u jenkins aws configure

Accept the defaults for the AWS access key and AWS secret access key; because the instance was launched with the CodeCommitRole role, Jenkins inherits its AWS permissions from the instance role and does not need stored keys. Enter us-east-1 for the region name and json for the output format.

AWS Access Key ID [None]: Press Enter
AWS Secret Access Key [None]: Press Enter
Default region name [None]: Type us-east-1 here, and then press Enter
Default output format [None]: Type json here, and then press Enter

Configure Git to use IAM credentials and an HTTP path to access the repositories hosted by AWS CodeCommit.

sudo -u jenkins git config --global credential.helper '!aws codecommit credential-helper $@'
sudo -u jenkins git config --global credential.useHttpPath true
sudo -u jenkins git config --global user.email "me@mycompany.com"
sudo -u jenkins git config --global user.name "MyJenkinsServer"

Start Jenkins. 

sudo service jenkins start
sudo chkconfig jenkins on

Configure global security.

  1. Open the Jenkins home page (http://<public DNS name of EC2 instance>:8080) in your browser.
  2. Select Manage Jenkins and Configure Global Security. 
  3. Select the Enable Security check box.
  4. Under Security Realm, select the Jenkins’ own user database radio button.
  5. Clear the Allow users to sign up check box.
  6. Under Authorization, select the Logged-in users can do anything radio button.

Configure the Git plugin.

  1. Select Manage Jenkins and Manage Plugins. 
  2. On the Available tab, use the Filter box to find Git Plugin.  
  3. Select the Install check box next to Git Plugin.
  4. Choose Download now and install after restart.

After Jenkins has restarted, add a project that will execute a build whenever polling detects a change pushed to the AWS CodeCommit hosted repository.

Scenario 1: Set Up Project 

  1. From the Jenkins home page, select New Item. 
  2. Select Build a free-style software project.   
  3. For the project name, enter "Demo".
  4. For Source Code Management, choose Git.
  5. For the repository URL, enter "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepo". 
  6. For the Build Trigger, select Poll SCM with a schedule of */05 * * * *.
  7. Under Build, choose Add build step, select Execute shell, and in the Command text box, type javac HelloWorld.java.
  8. Click Save.
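
Jenkins polling schedules use the five cron-style fields (minute, hour, day of month, month, day of week), so the schedule above polls the repository every five minutes:

# MINUTE HOUR DOM MONTH DOW
*/05 * * * *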

Scenario 1: Update the Local Git Repository

Now that your development environment is configured and the Jenkins server is set up, modify the source in your local repository and push the change to the central repository hosted on AWS CodeCommit.

On your workstation, change directory to the local repository. In this scenario, you will make your change directly on the master branch.

cd DemoRepo

Use the editor of your choice to modify HelloWorld.java with the content below, and then save the file in the DemoRepo directory.

class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Scenario 1: Build Hello World using Jenkins"); 
    }
}

Run the following git commands to commit and push your change.

git add HelloWorld.java
git commit -m "Modified HelloWorld.java for scenario 1"
git push origin master

Scenario 1: Monitor Build

After five minutes, go to the Jenkins home page. You should see a build.

In the Last Success column, click the build number (for example, #1). This takes you to the build output. Click Console Output to see the build details.

Scenario 2: Modify Project To Support "Pre-Build Branch Merging"

  1. From the Jenkins home page, click on Demo in the Name column. 
  2. Select "Configure" to modify project
  3. Make sure “Branch Specifier” for Branches to build is blank.
  4. For Additional Behaviors, add Merge before Build.
  5. Set the name of the repository to origin.
  6. Set the branch to merge to master.
  7. Add the Post Build Action Git Publisher.
  8. Select Push Only If Build Succeeds.
  9. Select Merge Results.
  10. Select Add Tag.
  11. Set the tag to push to $GIT_COMMIT.
  12. Select Create new tag.
  13. Set the target remote name to origin.
  14. Click Save.
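
Under the hood, this configuration makes each build perform roughly the following sequence (illustrative only; Jenkins drives these steps through the Git plugin, and MyDevBranch stands for the development branch you will create in the next step):

git checkout master               # start from the integration branch
git merge origin/MyDevBranch      # pre-build merge of the pushed development branch
javac HelloWorld.java             # the build step
git tag $GIT_COMMIT               # Git Publisher: tag the merge result on success
git push origin master --tags     # Git Publisher: push the merged master branch (and tag) back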

Scenario 2: Update the Local Git Repository

Now that your development environment is configured and the Jenkins server is set up, modify the source in your local repository and push the change to the central repository hosted on AWS CodeCommit.

On your workstation, change directory to the local repository and create a branch where you will make your changes.

cd DemoRepo
git checkout -b MyDevBranch

Use the editor of your choice to modify HelloWorld.java with the content below, and then save the file in the DemoRepo directory.

class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Build Hello World using Jenkins!"); 
    }
}

Run the following git commands to commit and push your change.

git add HelloWorld.java
git commit -m "Modified HelloWorld.java for scenario 2"
git push origin MyDevBranch

Scenario 2: Monitor Build

After five minutes, go to the Jenkins home page. You should see a build.

In the Last Success column, click the build number (for example, #2). This takes you to the build output. Click Console Output to see the build details.

Scenario 2: Verify the Master Branch Is Updated

Clone the central repository into a second local directory named DemoRepo2, and verify that the master branch includes your changes.

cd ..
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DemoRepo DemoRepo2
cd DemoRepo2

Use the editor of your choice to open HelloWorld.java. It should include the change you made on the development branch, which Jenkins merged into master.
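
From the command line, you can also check the recent history and the tag the Git Publisher pushed (the tag name is the merged commit ID):

git log --oneline -3    # the latest commits on master should include the Jenkins merge commit
git tag                 # lists tags, including the one pushed by the Git Publisher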

We hope this helps you get started using Jenkins with your AWS CodeCommit repositories. Let us know if you have questions, or if there are other product integrations you are interested in.