Tag: OpsWorks


OpsWorks September 2016 Updates

by Daniel Huesch

Over the past few months, the AWS OpsWorks team has introduced several enhancements to existing features and added support for new ones. Let’s discuss some of these new capabilities.

·       Chef client 12.13.37 – Released a new AWS OpsWorks agent version for Chef 12 for Linux, enabling the latest enhancements from Chef. The OpsWorks console now shows the full history of enhancements to its agent software. Here’s an example of what the change log looks like:

·       Node.js 0.12.15 – Provided support for a new version of Node.js in Chef 11, which includes the following fixes:

–        Fixes a bug in the read/write locks implementation for the Windows operating system.
–        Fixes a potential buffer overflow vulnerability.

·       Ruby 2.3.1 – The built-in Chef 11 Ruby layer now supports Ruby 2.3.1, which includes these Ruby enhancements:

–        Introduced a frozen string literal pragma.
–        Introduced a safe navigation operator (lonely operator).
–        Numerous performance improvements.
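
For example (Ruby 2.3 or later):

# frozen_string_literal: true
# The pragma above freezes every string literal in this file.

user = nil
# Safe navigation ("lonely") operator: evaluates to nil instead of
# raising NoMethodError when the receiver is nil.
name = user&.name
puts name.nil?   # => true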

·       Larger EBS volumes – Following the recent announcement from Amazon EBS, you can now use OpsWorks to create provisioned IOPS volumes that store up to 16 TB and process up to 20,000 IOPS, with a maximum throughput of 320 MBps. You can also create general purpose volumes that store up to 16 TB and process up to 10,000 IOPS, with a maximum throughput of 160 MBps.

·       New Linux operating systems – OpsWorks continues to enhance its operating system support and now offers:

–        Amazon Linux 2016.03 (Amazon Linux 2016.09 support will be available soon)
–        Ubuntu 16.04
–        CentOS 7

·       Instance tenancy – You can provision dedicated instances through OpsWorks. Dedicated instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Your dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts.

·       Define root volumes – You can define the size of the root volume of your EBS-backed instances directly from the OpsWorks console. Choose from a variety of volume types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.

·       Instance page – The OpsWorks instance page now displays a summary bar that indicates the aggregated state of all the instances in a selected stack. Summary fields include total instance count, online instances, instances that are in the setting-up stage, instances that are in the shutting-down stage, stopped instances, and instances in an error state.

·       Service role regeneration – You can now use the OpsWorks console to recreate your IAM service role if it was deleted.

Recreate IAM service role

Confirmation of IAM service role creation

As always, we welcome your feedback about features you’re using in OpsWorks. Be sure to visit the OpsWorks user forums, and check out the documentation.


Auto Scaling AWS OpsWorks Instances

by Daniel Huesch

This post will show you how to integrate Auto Scaling groups with AWS OpsWorks so you can leverage the native scaling capabilities of Amazon EC2 alongside the OpsWorks Chef configuration management solution.

Auto Scaling ensures you have the correct number of EC2 instances available to handle your application load.  You create collections of EC2 instances (called Auto Scaling groups), specify desired instance ranges for them, and create scaling policies that define when instances are provisioned or removed from the group.

AWS OpsWorks helps configure and manage your applications.  You create groups of EC2 instances (called stacks and layers) and associate configuration with them, such as volumes to mount or Chef recipes to execute in response to lifecycle events (for example, startup/shutdown).  The service streamlines the instance provisioning and management process, making it easy to launch uniform fleets using Chef and EC2.

The following steps will show how you can use an Auto Scaling group to manage EC2 instances in an OpsWorks stack.

Integrating Auto Scaling with OpsWorks

This example will require you to create the following resources:

Auto Scaling group: This group is responsible for EC2 instance provisioning and release.

Launch configuration: A configuration template used by the Auto Scaling group to launch instances.

OpsWorks stack: Instances provisioned by the Auto Scaling group will be registered with this stack.

IAM instance profile: This profile grants permission to your instances to register with OpsWorks.

Lambda function: This function handles deregistration of instances from your OpsWorks stack.

SNS topic: This topic triggers your deregistration Lambda function after Auto Scaling terminates an instance.

Step 1: Create an IAM instance profile

When an EC2 instance starts, it must make an API call to register itself with OpsWorks.  By assigning an IAM instance profile to the instance, you can grant it permission to make OpsWorks calls.

Open the IAM console, choose Roles, and then choose Create New Role. Type a name for the role, and then choose Next Step.  Choose the Amazon EC2 Role, and then select the check box next to the AWSOpsWorksInstanceRegistration policy.  Finally, choose Next Step, and then choose Create Role. As the name suggests, the AWSOpsWorksInstanceRegistration policy only allows the API calls required to register an instance. Because you will have to make two more calls for this demo,  add the following inline policy to the new role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "opsworks:AssignInstance",
                "opsworks:DescribeInstances"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Step 2: Create an OpsWorks stack

Open the AWS OpsWorks console.  Choose the Add Stack button from the dashboard, and then choose Sample Stack. Make sure the Linux OS option is selected, and then choose Create Stack.  After the stack has been created, choose Explore the sample stack. Choose the layer named Node.js App Server.  You will need the IDs of this sample stack and layer in a later step.  You can extract both from the URL of the layer page, which uses this format:

https://console.aws.amazon.com/opsworks/home?region=us-west-2#/stack/YOUR-OPSWORKS-STACK-ID/layers/YOUR-OPSWORKS-LAYER-ID

Step 3: Create a Lambda function

This function is responsible for deregistering an instance from your OpsWorks stack.  It will be invoked whenever an EC2 instance in the Auto Scaling group is terminated.

Open the AWS Lambda console and choose the option to create a Lambda function.  If you are prompted to choose a blueprint, choose Skip.  You can give the function any name you like, but be sure to choose the Python 2.7 option from the Runtime drop-down list.

Next, paste the following code into the Lambda Function Code text entry box:

import json
import boto3

def lambda_handler(event, context):
    # The Auto Scaling notification arrives as a JSON string inside
    # the SNS message envelope.
    message = json.loads(event['Records'][0]['Sns']['Message'])

    if (message['Event'] == 'autoscaling:EC2_INSTANCE_TERMINATE'):
        ec2_instance_id = message['EC2InstanceId']
        ec2 = boto3.client('ec2')
        # Read the opsworks_stack_id tag that the Auto Scaling group
        # attached to the instance at launch.
        for tag in ec2.describe_instances(InstanceIds=[ec2_instance_id])['Reservations'][0]['Instances'][0]['Tags']:
            if (tag['Key'] == 'opsworks_stack_id'):
                opsworks_stack_id = tag['Value']
                opsworks = boto3.client('opsworks', 'us-east-1')
                # Find the matching OpsWorks instance and deregister it.
                for instance in opsworks.describe_instances(StackId=opsworks_stack_id)['Instances']:
                    if ('Ec2InstanceId' in instance):
                        if (instance['Ec2InstanceId'] == ec2_instance_id):
                            print("Deregistering OpsWorks instance " + instance['InstanceId'])
                            opsworks.deregister_instance(InstanceId=instance['InstanceId'])
    return message

Then, from the Role drop-down list, choose Basic Execution Role.  On the page that appears, expand View Policy Document, and then choose Edit.

Next, paste the following JSON into the policy text box:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "opsworks:DescribeInstances",
        "opsworks:DeregisterInstance"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
       "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

Choose Allow.  On the Lambda creation page, change the Timeout field to 0 minutes and 15 seconds, and choose Next.  Finally, choose Create Function.

Step 4: Create an SNS topic

The SNS topic you create in this step will be responsible for triggering an execution of the Lambda function you created in step 3.  It is the glue that ties Auto Scaling instance terminations to corresponding OpsWorks instance deregistrations.
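
The function you created in step 3 only reads two fields from that notification. Stripped down to those fields, the message payload delivered through SNS looks like this (the instance ID is a placeholder, and the real notification carries several additional fields):

{
  "Event": "autoscaling:EC2_INSTANCE_TERMINATE",
  "EC2InstanceId": "i-0123456789abcdef0"
}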

Open the Amazon SNS console.  Choose Topics, and then choose Create New Topic.  Type topic and display names, and then choose Create Topic.  Select the check box next to the topic you just created, and from Actions, choose Subscribe to Topic.  From the Protocol drop-down list, choose AWS Lambda.  From the Endpoint drop-down list, choose the Lambda function you created in step 3.  Finally, choose Create Subscription.

Step 5: Create a launch configuration

This configuration contains two important settings: security group and user data.  Because you’re deploying a Node.js app that will listen on port 80, you must use a security group that has this port open. Then there’s the user data script, which is executed when an instance starts; this is where we make the call to register the instance with OpsWorks.


Open the Amazon EC2 console and create a launch configuration. Use the latest release of Amazon Linux, which should be the first operating system in the list. On the details page, under IAM role, choose the instance profile you created in step 1. Expand the Advanced Details area and paste the following code in the User data field. Because this is a template, you will have to replace YOUR-OPSWORKS-STACK-ID and YOUR-OPSWORKS-LAYER-ID with the OpsWorks stack and layer IDs you copied in step 2.

#!/bin/bash
# Remove the requiretty setting so the OpsWorks agent can use sudo
# without a controlling terminal.
sed -i'' -e 's/.*requiretty.*//' /etc/sudoers
pip install --upgrade awscli
# Register this instance with the OpsWorks stack under a random hostname
# and capture the resulting OpsWorks instance ID.
INSTANCE_ID=$(/usr/bin/aws opsworks register --use-instance-profile --infrastructure-class ec2 --region us-east-1 --stack-id YOUR-OPSWORKS-STACK-ID --override-hostname $(tr -cd 'a-z' < /dev/urandom |head -c8) --local 2>&1 |grep -o 'Instance ID: .*' |cut -d' ' -f3)
# Wait until registration completes, then assign the instance to the layer.
/usr/bin/aws opsworks wait instance-registered --region us-east-1 --instance-id $INSTANCE_ID
/usr/bin/aws opsworks assign-instance --region us-east-1 --instance-id $INSTANCE_ID --layer-ids YOUR-OPSWORKS-LAYER-ID

Step 6. Create an Auto Scaling group

On the last page of the Launch Configuration wizard, choose Create an Auto Scaling group using this launch configuration. In the notification settings, add a notification to your SNS topic for the terminate event. In the tag settings, add a tag with key opsworks_stack_id. Use the OpsWorks stack ID you entered in the User data field as the value. Make sure the Tag New Instances check box is selected.

Conclusion

Because the default desired size for your Auto Scaling group is 1, a single instance will be started in EC2 immediately.  You can confirm this through the EC2 console in a few seconds:

A few minutes later, the instance will appear in the OpsWorks console:

To confirm your Auto Scaling group instances will be deregistered from OpsWorks on termination, change the Desired value from 1 to 0.  The instance will disappear from the EC2 console. Within minutes, it will disappear from the OpsWorks console, too.
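
If you prefer scripting this test over clicking through the console, here’s a minimal sketch using the AWS SDK for Ruby (the group name my-asg-group is a placeholder for whatever you named your Auto Scaling group):

require 'aws-sdk'

autoscaling = Aws::AutoScaling::Client.new(region: 'us-east-1')

# Scale the group down to zero. Auto Scaling terminates the instance,
# which publishes to the SNS topic and invokes the deregistration
# Lambda function.
autoscaling.set_desired_capacity(
  auto_scaling_group_name: 'my-asg-group', # placeholder
  desired_capacity: 0
)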

Congratulations! You’ve configured an Auto Scaling group to seamlessly integrate with AWS OpsWorks. Please let us know if this helps you scale instances in OpsWorks or if you have tips of your own.

Using Custom JSON on AWS OpsWorks Layers

by Daniel Huesch | in How-to, New stuff

Custom JSON, which has always been available on AWS OpsWorks stacks and deployments, is now also available as a property on layers in stacks using Chef versions 11.10, 12, and 12.2.

In this post I show how you can use custom JSON to adapt a single Chef cookbook to support different use cases on individual layers. To demonstrate, I use the example of a MongoDB setup with multiple shards.

In OpsWorks, each instance belongs to one or more layers, which in turn make up a stack. You use layers to specify details about which Chef cookbooks are run when the instances are set up and configured, among other things. When your stacks have instances that serve different purposes, you use different cookbooks for each.

Sometimes, however, there are only small differences between the layers and they don’t justify using separate cookbooks. For example, when you have a large MongoDB installation with multiple shards, you would have a layer per shard, as shown in the following figure, but your cookbooks wouldn’t necessarily differ.

custom-json-per-layer-1.png

Let’s assume I’m using the community cookbook for MongoDB. I would configure this cookbook using attributes. The attribute for setting the shard name would be node[:mongodb][:shard_name]. But let’s say that I want to set a certain attribute for any deployment to any instance in a given layer. I would use custom JSON to set that attribute.
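
For example, custom JSON along these lines would set the shard name (shard1 here is just an illustrative value; the attribute path comes from the cookbook, as noted above):

{
  "mongodb": {
    "shard_name": "shard1"
  }
}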

When declared on a stack, custom JSON always applies to all instances, no matter which layer they’re in. Custom JSON declared on a deployment is helpful for one-off deployments with special settings; but, the provided custom JSON doesn’t stick to the instances you deploy to, so a subsequent deployment doesn’t know about custom JSON you might have specified in an earlier deployment.

Custom JSON declared on the layer applies to each instance that belongs to that layer. Like custom JSON declared on the stack, it’s permanently stored and applied to all subsequent deployments. So you just need to edit each layer and set the right shard, as shown in the following figure:

custom-json-per-layer-2.png

During a Chef run, OpsWorks makes custom JSON contents available as attributes. That way the settings are available in the MongoDB cookbook and configure the MongoDB server accordingly. For details about using custom JSON content as an attribute, see our documentation.
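
As a minimal sketch, a recipe could read the layer’s setting back like this:

# Custom JSON from the stack, layer, and deployment surfaces as node
# attributes during the Chef run.
shard_name = node[:mongodb][:shard_name]
Chef::Log.info("Configuring MongoDB for shard #{shard_name}")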

Custom JSON declared on the deployment overrides custom JSON declared on the stack. Custom JSON declared on the layer sits in between those two. So you can use it on the layer to override stack settings, and on the deployment to override stack or layer settings.

Using custom JSON gives you a way to tweak a setting for all instances in a given layer without having to affect the entire stack, and without having to provide custom JSON for every deployment.

AWS OpsWorks Now Supports Chef 12 for Linux

by Daniel Huesch | in New stuff

Update: In the meantime our friends at Chef published a post that walks you through deploying a Django app on AWS OpsWorks using Chef 12. Go check it out!

In addition to providing Chef 12 support for Windows, AWS OpsWorks (OpsWorks) now supports Chef 12 for Linux operating systems. This release benefits users who want to take advantage of the large selection of community cookbooks or want to build and customize their own cookbooks.

You can use the latest release of Chef 12 to support Linux-based stacks, which currently run Chef client 12.5.1. (For those of you concerned about future Chef client upgrades, be assured that new versions of the Chef 12.x client will be made available shortly after their public release.) OpsWorks now also prevents cookbook namespace conflicts by using two separate Chef runs (OpsWorks’s Chef run and yours run independently).

Use Chef Supermarket Cookbooks

Because this release focuses on providing you with full control and flexibility when using your own cookbooks, built-in layers and cookbooks (PHP, Rails, Node.js, MySQL, etc.) will no longer be available for Chef 12. Instead, Chef 12 users can use OpsWorks to leverage up-to-date community cookbooks to support the creation of custom layers. A Chef 12 Node.js sample stack (on Windows and Linux) is now available in the OpsWorks console. We’ll provide additional examples in the future.

"With the availability of the Chef 12 Linux client, AWS OpsWorks customers can now leverage shared Chef Supermarket cookbooks for both Windows and Linux workloads. This means our joint customers can maximize the full potential of the vibrant open source Chef Community across the entire stack."

– Ken Cheney, Vice President of Business Development, Chef

Chef 11.10 and earlier versions for Linux will continue to support built-in layers. The built-in cookbooks will continue to be available at https://github.com/aws/opsworks-cookbooks/tree/release-chef-11.10.

Beginning in January 2016, you will no longer be able to create Chef 11.4 stacks using the OpsWorks console. Existing Chef 11.4 stacks will continue to operate normally, and you will continue to be able to create stacks with Chef 11.4 by using the API.

Use Chef Search

With Chef 12 Linux, you can use Chef search, which is the native Chef way to obtain information about stacks, layers, instances, and stack resources, such as Elastic Load Balancing load balancers and RDS DB instances. The following examples show how to use Chef search to get information and to perform common tasks. A complete reference of available search indices is available in our documentation.

Use Chef search to retrieve the stack’s state:

search(:node, "name:web1")
search(:node, "name:web*")

Map OpsWorks layers as Chef roles:

appserver = search(:node, "role:my-app").first
Chef::Log.info("Private IP: #{appserver[:private_ip]}")

Use Chef search to retrieve hostnames, IP addresses, instance types, Amazon Machine Images (AMIs), Availability Zones (AZs), and more:

search(:aws_opsworks_app, "name:myapp")
search(:aws_opsworks_app, "deploy:true")
search(:aws_opsworks_layer, "name:my_layer*")
search(:aws_opsworks_rds_db_instance)
search(:aws_opsworks_volume)
search(:aws_opsworks_ecs_cluster)
search(:aws_opsworks_elastic_load_balancer)
search(:aws_opsworks_user)

Use Chef search for ad-hoc resource discovery, for example, to find the database connection information for your applications or to discover all available app server instances when configuring a load-balancer.
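
For example, a recipe might look up the stack’s registered RDS instance and the app server fleet like this (a sketch; my-app is a placeholder layer shortname):

# Find the database connection information for the stack.
rds = search(:aws_opsworks_rds_db_instance).first
Chef::Log.info("Database endpoint: #{rds[:address]}") if rds

# Discover all instances that belong to the my-app layer.
search(:node, "role:my-app").each do |server|
  Chef::Log.info("App server: #{server[:hostname]} (#{server[:private_ip]})")
end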

Explore a Chef 12 Linux or Chef 12.2 Windows Stack

To explore a Chef 12 Linux or Chef 12.2 Windows stack, simply select the “Sample stack” option in the OpsWorks console:

To create a Chef 12 stack based on your Chef cookbooks, choose Linux as the Default operating system:

Use any Chef 12 open source community cookbook from any source, or create your own cookbooks. OpsWorks’s built-in operational tools continue to empower you to manage your day-to-day operations.

Using Capistrano to run arbitrary commands on AWS OpsWorks instances

by Daniel Huesch | in How-to

AWS OpsWorks customers frequently request the ability to run arbitrary commands. While OpsWorks sets up and manages the Amazon EC2 instances your application runs on, and manages users’ access to those instances, it doesn’t offer a built-in way to run a one-off command. Let’s say, for example, that you wanted to run uptime across your fleet. You could create a custom Chef recipe that executes uptime, create an execute_recipes deployment, and check the log files for the output. However, that’s fairly heavyweight for something as simple as running uptime across your fleet.

While OpsWorks doesn’t natively support running arbitrary commands, it’s not too difficult to come up with a decent solution using existing tools. In this post, I’m going to show how you can use Capistrano to run arbitrary commands on OpsWorks instances.

Capistrano, according to its website, is "a remote server automation and deployment tool written in Ruby." It can run commands on remote machines in parallel, collecting their return status and outputs. The commands are organized into tasks that are described with a simple Ruby Domain Specific Language (DSL) — much like Chef recipes. If you haven’t heard of Capistrano, feel free to get familiar with it before reading on.

At the time of this post, the Capistrano website actually includes an example that executes the uptime command:

role :demo, %w{example.com example.org example.net}

task :uptime do
  on roles(:demo), in: :parallel do |host|
    uptime = capture(:uptime)
    puts "#{host.hostname} reports: #{uptime}"
  end
end

As explained on the Capistrano website, you set up Capistrano by running cap install in your project’s root directory. This will create the following files and directories, as needed.

.
+-- Capfile
+-- config
|   +-- deploy
|   |   +-- production.rb
|   |   -- staging.rb
|   -- deploy.rb
-- lib
    -- capistrano
        -- tasks

My example files are located in a GitHub repository. Instead of using cap install, we will use the files from this repository.

The repository’s root directory structure looks like this:

.
+-- config
|   +-- deploy
|   |   -- meta.rb
|   -- deploy.rb
-- lib
    -- capistrano
        -- tasks
            -- run.rake

Capistrano uses the concepts of stages, servers, and roles. In the generated example files, the stages are called production and staging. Each stage has a set of servers that can have many roles. Technically, each stage doesn’t need to have its own set of servers, but that’s what is commonly done in practice.

Here’s how OpsWorks concepts translate into their Capistrano counterparts.

OpsWorks     Capistrano
--------     ----------
Stack        Stage
Layer        Role
Instance     Server

To do anything on your servers, you first need to let Capistrano know about them. That’s what the stage-specific config files are for. Here are the contents of the generated staging.rb file.

set :stage, :staging

# Simple Role Syntax
# ==================
# Supports bulk-adding hosts to roles, the primary
# server in each group is considered to be the first
# unless any hosts have the primary property set.
role :app, %w{example.com}
role :web, %w{example.com}
role :db,  %w{example.com}

# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server
# definition into the server list. The second argument
# is something that quacks like a hash and can be used
# to set extended properties on the server.

server 'example.com', roles: %w{web app}, my_property: :my_value

With Capistrano, each role corresponds to one or more servers, and a server can have multiple roles. When you run a command, you specify the roles, and then Capistrano runs the command on the associated servers. You set up the roles for each server in the stage config files. The last line in this example tells Capistrano about your servers and roles.

In this example, I use the AWS SDK for Ruby to generate Capistrano config files that reflect stacks and running instances in OpsWorks. This example can easily be made more or less dynamic. By making API calls to update the list of servers before running commands, you will never miss an instance that just launched. But, the additional API calls will take some time. On the other hand, having the list of servers change while you’re executing commands might lead to unexpected results. Anyway, keep in mind that the example shown here is just one of many ways to use Capistrano with OpsWorks.

To model your stacks, layers, and instances in Capistrano, you just need to iterate over all of them and use Capistrano’s DSL to declare each instance as a server. What makes this slightly more complex than just three nested loops is the fact that in OpsWorks an instance can be in multiple layers. To ensure that we have all of the layers we need for the server definition, we need to determine which layers each instance belongs to.

Capistrano uses the config/deploy directory for stage configuration and holds task definitions in lib/capistrano/tasks. The files that do the heavy lifting in my example are meta.rb and run.rake. deploy.rb can be empty, but Capistrano complains if it’s missing, so we will just let it sit there.

In meta.rb we iterate over all stacks and generate a stage config file for each stack. The meta.rb file defines two commands:

·      populate creates a set of stage files.

·      extinguish removes the created stage files.

You run these commands with cap meta commandname. After making sure you have Capistrano installed, the first command you run is cap meta populate. As described earlier, Capistrano’s top-level organizational unit is the stage. Commands are executed on stages, so populate iterates over stacks, layers, and instances, and then writes one stage file per stack, with one server entry per instance.

extinguish simply removes all files created by populate. I will not use it in this example.
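
Condensed to its essence, the logic behind populate looks something like the following sketch (the real meta.rb in the repository handles more detail, such as picking the right SSH user for each operating system; this assumes version 2 or later of the aws-sdk gem):

require 'aws-sdk'

opsworks = Aws::OpsWorks::Client.new(region: 'us-east-1')

opsworks.describe_stacks.stacks.each do |stack|
  layers    = opsworks.describe_layers(stack_id: stack.stack_id).layers
  instances = opsworks.describe_instances(stack_id: stack.stack_id).instances

  File.open("config/deploy/#{stack.name}.rb", "w") do |file|
    instances.select { |i| i.status == 'online' }.each do |instance|
      # An instance can belong to several layers, so collect all of its
      # layer shortnames as Capistrano roles.
      roles = layers.select { |l| instance.layer_ids.include?(l.layer_id) }
                    .map(&:shortname)
      file.puts %(server "#{instance.public_ip}", roles: #{roles.inspect})
    end
  end
end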

Before we generate stage files, let’s have a look at the stacks I have in my account.

aws opsworks describe-stacks --output table --query 'Stacks[*].[StackId,Name]'
----------------------------------------------------------
|                     DescribeStacks                     |
+---------------------------------------+----------------+
|  b23dd487-1469-42bd-8d87-8f9e7aabdbc7 |  interview     |
|  122caab8-e407-4b44-8709-72c255b1fef2 |  demo          |
|  d9af7eb2-aeb6-4290-9522-f4f85793ed25 |  os-benchmark  |
+---------------------------------------+----------------+

Now run bundle exec cap meta populate to generate stages in config/deploy.

As I just explained, cap meta populate should have generated some stage config files. Let’s have a look at our project directory again.

.
+-- config
|   +-- deploy
|   |   +-- demo.rb
|   |   +-- interview.rb
|   |   +-- meta.rb
|   |   -- os-benchmark.rb
|   -- deploy.rb
-- lib
    -- capistrano
        -- tasks
            -- run.rake

Now let’s look at the instances in one of those stacks:

aws opsworks describe-instances --stack-id d9af7eb2-aeb6-4290-9522-f4f85793ed25 --output table --query 'Instances[*].[Hostname,Os,Status]'                                                              
-------------------------------------------------------------
|                     DescribeInstances                     |
+------------------------+------------------------+---------+
|  amazon-linux-2014-03i |  Amazon Linux          |  online |
|  amazon-linux-2014-09i |  Amazon Linux 2014.09  |  online |
|  ubuntu-12-04-ltsi     |  Ubuntu 12.04 LTS      |  online |
|  ubuntu-14-04-ltsi     |  Ubuntu 14.04 LTS      |  online |
+------------------------+------------------------+---------+

Let’s look at the stage config file for that stack.

role "blank", []
server "54.188.203.46", {:user=>"ec2-user", :roles=>["blank"]}
server "54.190.72.211", {:user=>"ec2-user", :roles=>["blank"]}
server "54.70.104.224", {:user=>"ubuntu", :roles=>["blank"]}
server "54.70.157.58", {:user=>"ubuntu", :roles=>["blank"]}

This means that Capistrano now knows about that stack’s instances, and we should be able to run commands on those instances.

Hint: I have set up private key authentication for all my instances, so there won’t be any password prompts. Here’s how to set this up.

In Capistrano, there are two modes for running commands: interactive and non-interactive. The interactive mode is helpful when you need immediate feedback, like when you don’t know yet which command you want to run on your servers. Use non-interactive mode when you know which command you want to use and just want to see the result.

Here’s an example of the output for interactive mode.

cap os-benchmark console
capistrano console - enter command to execute on os-benchmark
os-benchmark> uptime
INFO[477ac70f] Running /usr/bin/env uptime on 54.70.104.224
DEBUG[477ac70f] Command: /usr/bin/env uptime
INFO[496f9726] Running /usr/bin/env uptime on 54.190.72.211
DEBUG[496f9726] Command: /usr/bin/env uptime
INFO[92222ac7] Running /usr/bin/env uptime on 54.188.203.46
DEBUG[92222ac7] Command: /usr/bin/env uptime
INFO[8391c30a] Running /usr/bin/env uptime on 54.70.157.58
DEBUG[8391c30a] Command: /usr/bin/env uptime
DEBUG[496f9726]       16:03:37 up 21 min,  0 users,  load average: 0.00, 0.01, 0.05
INFO[496f9726] Finished in 3.021 seconds with exit status 0 (successful).
DEBUG[92222ac7]       16:03:37 up 20 min,  0 users,  load average: 0.02, 0.02, 0.05
INFO[92222ac7] Finished in 3.079 seconds with exit status 0 (successful).
DEBUG[8391c30a]       16:03:37 up  8:48,  0 users,  load average: 0.00, 0.01, 0.05
INFO[8391c30a] Finished in 3.076 seconds with exit status 0 (successful).
DEBUG[477ac70f]       16:03:37 up 20 min,  0 users,  load average: 0.02, 0.02, 0.05
INFO[477ac70f] Finished in 3.336 seconds with exit status 0 (successful).

This shows that we executed the uptime command on all running servers. Let’s say that we want to run uptime again, but in non-interactive mode, specifying the command with an environment variable. Here’s what that output looks like.

COMMAND=uptime cap os-benchmark run
INFO[6967488c] Running /usr/bin/env uptime on 54.70.157.58
DEBUG[6967488c] Command: /usr/bin/env uptime
INFO[b68600cf] Running /usr/bin/env uptime on 54.190.72.211
DEBUG[b68600cf] Command: /usr/bin/env uptime
INFO[a108126d] Running /usr/bin/env uptime on 54.70.104.224
DEBUG[a108126d] Command: /usr/bin/env uptime
INFO[e5e1af8f] Running /usr/bin/env uptime on 54.188.203.46
DEBUG[e5e1af8f] Command: /usr/bin/env uptime
DEBUG[b68600cf]       16:05:09 up 22 min,  0 users,  load average: 0.00, 0.01, 0.05
INFO[b68600cf] Finished in 3.054 seconds with exit status 0 (successful).
DEBUG[6967488c]       16:05:09 up  8:50,  0 users,  load average: 0.00, 0.01, 0.05
INFO[6967488c] Finished in 3.094 seconds with exit status 0 (successful).
DEBUG[e5e1af8f]       16:05:09 up 22 min,  0 users,  load average: 0.00, 0.01, 0.05
INFO[e5e1af8f] Finished in 3.055 seconds with exit status 0 (successful).
DEBUG[a108126d]       16:05:09 up 22 min,  0 users,  load average: 0.00, 0.01, 0.05
INFO[a108126d] Finished in 3.128 seconds with exit status 0 (successful).

It’s very simple to pass any command through Capistrano. console is one of Capistrano’s built-in commands. run is a custom command I created for this example; it’s located in run.rake.

desc "Run arbitrary command on hosts"
task :run do
  on roles(:all) do |host|
    execute(ENV["COMMAND"])
  end
end

The roles(:all) part doesn’t necessarily mean that each command has to run on every instance in every role in your stack. It just means that the list of roles isn’t constrained in the source code. By default, Capistrano runs against all roles that appear in config files. Feel free to narrow this down using Capistrano’s command line switches, for example, -r.

This should get you started with Capistrano and OpsWorks. For more information, see the Capistrano website and the GitHub repository that contains the complete source code mentioned in this example.

Using OpsWorks to Perform Operational Tasks

by Chris Barclay | in How-to, New stuff

Today Jeff Barr blogged about a new feature that gives users the ability to deploy and operate applications on existing Amazon EC2 instances and on-premises servers with AWS OpsWorks. You may know OpsWorks as a service that lets users deploy and manage applications. However, OpsWorks can also perform operational tasks that simplify server management. This post includes three examples of how to use OpsWorks to manage instances. The examples create EC2 instances using OpsWorks, but you can also use the newly launched features to register on-premises servers or existing EC2 instances.

Example 1: Use OpsWorks to perform tasks on instances  

Server administrators must often perform routine tasks on multiple instances, such as installing software updates. In the past you might have logged in with SSH to each instance and run the commands manually. With OpsWorks you can now perform these tasks on every instance with a single command as often as you like by using predefined scripts and Chef recipes. You can even have OpsWorks run your recipes automatically at key points in the instance’s life cycle, such as after the instance boots or when you deploy an app. This example will show how you can run a simple shell command and get the response back on the console.

Step 1: Create a stack

To get started, open the AWS Management Console. Your first task is to create a stack:
  1. Select Add a Stack to create an OpsWorks stack.
  2. Give it a name and select Advanced.
  3. Set Use custom Chef Cookbooks to Yes.
  4. Set Repository type to Git.
  5. Set the Repository URL to https://github.com/amazonwebservices/opsworks-first-cookbook
  6. Accept the defaults for the other settings and click the Add Stack button at the bottom of the page to create the stack.

Step 2: Add a Layer

An OpsWorks layer is a template that specifies how to configure a related set of EC2 instances. For this example:
  1. Select Add a Layer
  2. Choose a Custom layer; give it a Name and Short Name. The short name should be all lower case with no spaces or punctuation.

Step 3: Add an Instance

You now need to add some instances to the layer: 
  1. Click Instances in the navigation pane and under the layer you just created click + Instance to create a new EC2 instance. You can also Register an on-premises instance in this step.
  2. For this walkthrough, just accept the default settings and click Add Instance to add the instance to the layer.
  3. Click start in the row’s Actions column and OpsWorks will then launch a new EC2 instance. The instance’s status will change to online when it’s ready.

Step 4: Run a command

This step shows how to run a command that executes one of the custom recipes that you installed earlier. It detects whether the instance is vulnerable to Shellshock.

  1. Click Stack
  2. Click Run Command
  3. Select “Execute Recipes” from the drop down
  4. Set Recipes to execute to shellout 
  5. Select Advanced
  6. Copy the following to the Custom Chef JSON box:

    { "shellout" : { "code" : "env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'" } }
  7. Click Execute Recipes
 
Step 5: View the results
 
Once the recipe run has completed, you can view the results by selecting the View link under Logs. About halfway down the log file you should see the output:

[2014-12-03T23:49:03+00:00] INFO: @@@
this is a test
@@@
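
The shellout recipe itself is essentially a thin wrapper around the command you pass in. A rough sketch of the idea (the actual recipe is in the cookbook you configured in step 1):

require 'mixlib/shellout'

# Run the command supplied via custom Chef JSON and log its output
# between @@@ markers so it's easy to spot in the deployment log.
cmd = Mixlib::ShellOut.new(node[:shellout][:code])
cmd.run_command
Chef::Log.info("@@@\n#{cmd.stdout}@@@")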

Next steps

It’s usually a better practice to put each script you plan to run into a Chef recipe. It improves consistency and avoids incorrect results. You can easily include Bash, Python, and Ruby scripts in recipes. For example, the following recipe is basically a wrapper for a one-line Bash script:

bash "change system greeting" do
  user "root"
  code <<-EOH
     echo "Hello OpsWorks World" > /etc/motd
  EOH
end

Example 2: Manage operating system users and ssh/sudo access

It is often useful to be able to grant multiple users SSH access to an EC2 instance. However, Amazon EC2 installs only one SSH key when it launches an instance. With OpsWorks, each user can have their own SSH key, and you can use OpsWorks to grant SSH and sudo permissions to selected users. OpsWorks then automatically adds the users’ keys to the instance’s authorized_keys file. If a user no longer needs SSH access, you remove those permissions and OpsWorks automatically removes the key.

Step 1: Import users into AWS OpsWorks

  1. Sign in to AWS OpsWorks as an administrative user or as the account owner.
  2. Click Users on the upper right to open the Users page.
  3. Click Import IAM Users to display the users that have not yet been imported.
  4. Select the users you want, then click Import to OpsWorks.

Step 2: Edit user settings

  1. On the Users page, click edit in the user’s Actions column.
  2. Enter a public SSH key for the user and give the user the corresponding private key. The public key will appear on the user’s My Settings page. For more information, see Setting an IAM User’s Public SSH Key. If you enable self-management, the user can specify his or her own key.
  3. Set the user’s permissions levels for the stack you created in Example 1 to include "SSH" access. You can also set permissions separately by using each stack’s Permissions page. 

Step 3: SSH to the instance

  1. Click Dashboard on the upper right to open the Dashboard page.
  2. Select the stack you created in Example 1 and navigate to Instances.
  3. Select the instance you created in Example 1.
  4. In the Logs section you will see the execute_recipes command that added the user and the user’s public key to the instance. When this command has completed, as indicated by the green check, select the SSH button at the top of the screen to launch an SSH client. You can then sign into the instance with your username and private key.

Example 3: Archive a file to Amazon S3

There are times when you may want to archive a file, for example to investigate a problem later. This script will send a file from an instance to S3.

Step 1: Create or select an existing S3 bucket

Open the S3 console and create a new bucket or select an existing bucket to use for this example.

Step 2: Run a command to push a file to S3

  1. Using the stack you created in Example 1, navigate to Stack
  2. Select Run Command
  3. Select “Execute Recipes” from the drop down menu
  4. Set Recipes to execute to sample::push-s3
  5. Select Advanced
  6. Set Custom Chef JSON to

    {
      "s3": {
        "filename": "opsworks-agent.log",
        "bucketname": "your-s3-bucket-name",
        "filepath": "/var/log/aws/opsworks/opsworks-agent.log"
      }
    }
    

    The sample::push-s3 recipe was included in the cookbook that you installed earlier. It gets the required information from the JSON and uses the AWS Ruby SDK to upload the file to S3.

  7. Click Execute Recipes

Step 3: View the file in S3

The file you selected in step 2 should now be in your bucket.
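
For reference, the heart of a recipe like sample::push-s3 can be sketched as follows (the actual recipe lives in the cookbook from Example 1; this version assumes a current aws-sdk gem is available on the instance):

ruby_block "upload file to S3" do
  block do
    require 'aws-sdk'
    # Read the target from the custom JSON supplied with the command.
    s3 = Aws::S3::Resource.new(region: 'us-east-1') # adjust to your bucket's region
    s3.bucket(node[:s3][:bucketname])
      .object(node[:s3][:filename])
      .upload_file(node[:s3][:filepath])
  end
end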

These examples demonstrate three ways that OpsWorks can be used for more than software configuration. See the documentation for more information on how to manage on-premises and EC2 instances with OpsWorks.

Running Docker on AWS OpsWorks

by Chris Barclay | in How-to
AWS OpsWorks lets you deploy and manage applications of all shapes and sizes. OpsWorks layers let you create blueprints for EC2 instances to install and configure any software that you want. This post will show you how to create a custom layer for Docker. For an overview of Docker, see https://www.docker.com/tryit. Docker lets you precisely define the runtime environment for your application and deploy your application code and its runtime as a Docker container. You can use Docker containers to support new languages like Go or to incorporate your dev and test workflows seamlessly with AWS OpsWorks.
 
The Docker layer uses Chef recipes to install Docker and deploy containers to the EC2 instances running in that layer. Simply provide a Dockerfile and OpsWorks will automatically run the recipes to build and run the container. A stack can have multiple Docker layers and you can deploy multiple Docker containers to each layer. You can extend or change the Chef example recipes to use Docker in the way that works best for you. If you aren’t familiar with Chef recipes, see Cookbooks 101 for an introduction.
 

Step 1: Create Recipes

First, create a repository to store your Chef recipes. OpsWorks supports Git and Subversion, or you can store an archive bundle on Amazon S3. The structure of a cookbook repository is described in the OpsWorks documentation.
 
The docker::install recipe installs the necessary Docker software on your instances:
 
# Install the Docker package for the instance's platform.
case node[:platform]
when "ubuntu","debian"
  package "docker.io" do
    action :install
  end
when 'centos','redhat','fedora','amazon'
  package "docker" do
    action :install
  end
end

# Make sure the Docker daemon is running.
service "docker" do
  action :start
end

The docker::docker-deploy recipe deploys your docker containers (specified by a Dockerfile):

include_recipe 'deploy'

node[:deploy].each do |application, deploy|
  
  if node[:opsworks][:instance][:layers].first != deploy[:environment_variables][:layer]
    Chef::Log.debug("Skipping deploy::docker application #{application} as it is not deployed to this layer")
    next
  end

  opsworks_deploy_dir do
    user deploy[:user]
    group deploy[:group]
    path deploy[:deploy_to]
  end

  opsworks_deploy do
    deploy_data deploy
    app application
  end

  bash "docker-cleanup" do
    user "root"
    code <<-EOH
      if docker ps | grep #{deploy[:application]}; 
      then
        docker stop #{deploy[:application]}
        sleep 3
        docker rm #{deploy[:application]}
        sleep 3
      fi
      if docker images | grep #{deploy[:application]}; 
      then
        docker rmi #{deploy[:application]}
      fi
    EOH
  end

  bash "docker-build" do
    user "root"
    cwd "#{deploy[:deploy_to]}/current"
    code <<-EOH
     docker build -t=#{deploy[:application]} . > #{deploy[:application]}-docker.out
    EOH
  end
  
  dockerenvs = " "
  deploy[:environment_variables].each do |key, value|
    dockerenvs=dockerenvs+" -e "+key+"="+value
  end
  
  bash "docker-run" do
    user "root"
    cwd "#{deploy[:deploy_to]}/current"
    code <<-EOH
      docker run #{dockerenvs} -p #{node[:opsworks][:instance][:private_ip]}:#{deploy[:environment_variables][:service_port]}:#{deploy[:environment_variables][:container_port]} --name #{deploy[:application]} -d #{deploy[:application]}
    EOH
  end

end

Then create a repository to store your Dockerfile. Here’s a sample Dockerfile to get you going:

FROM ubuntu:12.04

RUN apt-get update
RUN apt-get install -y nginx zip curl

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN curl -o /usr/share/nginx/www/master.zip -L https://codeload.github.com/gabrielecirulli/2048/zip/master
RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip

EXPOSE 80

CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"]

Step 2: Create an OpsWorks Stack

Now you’re ready to use these recipes with OpsWorks. Open the OpsWorks console
  1. Select Add a Stack to create an OpsWorks stack.
  2. Give it a name and select Advanced.
  3. Set Use custom Chef Cookbooks to Yes.
  4. Set Repository type to Git.
  5. Set the Repository URL to the repository where you stored the recipes created in the previous step.
  6. Click the Add Stack button at the bottom of the page to create the stack.

Step 3: Add a Layer

  1. Select Add Layer
  2. Choose Custom Layer, set the name to “Docker”, shortname to “docker”, and click Add Layer. 
  3. Click the layer’s edit Recipes action and scroll to the Custom Chef recipes section. You will notice there are several headings—Setup, Configure, Deploy, Undeploy, and Shutdown—which correspond to OpsWorks lifecycle events. OpsWorks triggers these events at key points in the instance’s lifecycle, which runs the associated recipes.
  4. Enter docker::install in the Setup box and click + to add it to the list
  5. Enter docker::docker-deploy in the Deploy box and click + to add it to the list
  6. Click the Save button at the bottom to save the updated configuration. 

Step 4: Add an Instance

The Layer page should now show the Docker layer. However, the layer just controls how to configure instances. You now need to add some instances to the layer. Click Instances in the navigation pane and under the Docker layer, click + Instance. For this walkthrough, just accept the default settings and click Add Instance to add the instance to the layer. Click start in the row’s Actions column to start the instance. OpsWorks will then launch a new EC2 instance and run the Setup recipes to configure Docker. The instance’s status will change to online when it’s ready.
 

Step 5: Add an App & Deploy

Once you’ve started your instances:
  1. In the navigation pane, click Apps and on the Apps page, click Add an app.
  2. On the App page, give it a Name.
  3. Set the app’s type to other.
  4. Specify the app’s repository type. 
  5. Specify the app’s repository URL. This is where your Dockerfile lives and is usually a separate repository from the cookbook repository specified in step 2.
  6. Set the following environment variables:

    • container_port – Set this variable to the port specified by the EXPOSE parameter in your Dockerfile.
    • service_port – Set this variable to the port your container will expose on the instance to the outside world. Note: Be sure that your security groups allow inbound traffic for the port specified in service_port.
    • layer – Set this variable to the shortname of the layer that you want this container deployed to (from Step 3). This lets you have multiple docker layers with different apps deployed on each, such as a front-end web app and a back-end worker. 
    • For our example, set container_port=80, service_port=80, and layer=docker. You can also define additional environment variables that are automatically passed onto your Docker container, for example a database endpoint that your app connects with. 
  7. Keep the default values for the remaining settings and click Add App
  8. To install the code on the server, you must deploy the app. It will take some time for your instances to boot up completely. Once they show up as “online” in the Instances view, navigate to Apps and click deploy in the Actions column. If you create multiple docker layers, note that although the deployment defaults to all instances in the stack, the containers will only be deployed to the layer specified in the layer environment variable.
  9. Once the deployment is complete, you can see your app by clicking the public IP address of the server. You can update your Dockerfile and redeploy at any time.

Step 6: Making attributes dynamic

The recipes written for this blog pass environment variables into the Docker container when it is started. If you need to update the configuration while the app is running, such as a database password, solutions like etcd can make attributes dynamic. An etcd server can run on each instance and be populated by the instance’s OpsWorks attributes. You can update OpsWorks attributes, and thereby the values stored in etcd, at any time, and those values are immediately available to apps running in Docker containers. A future blog post will cover how to create recipes to install etcd and pass OpsWorks attributes, including app environment variables, to the Docker containers.
 

Summary

These instructions have demonstrated how to use AWS OpsWorks and Docker to deploy applications, represented by Dockerfiles. You can also use Docker layers with other AWS OpsWorks features, including automatic instance scaling and integration with Elastic Load Balancing and Amazon RDS.
 

AWS OpsWorks supports application environment variables

by Chris Barclay | in How-to
AWS OpsWorks now allows users to define environment variables per application. Instead of creating a custom recipe and managing environment variables as Chef attributes, you define the environment variables on each app, and OpsWorks takes care of securely storing them and sending them from your OpsWorks app definition to your instances, adding them to the application server context. All you have to do is reference the environment variables in your application’s code using the methods defined in the Java, Ruby, PHP, and Node.js application servers. The environment variables are passed to the application server during instance setup and can be updated on each application deployment.

Environment variables can also be defined as protected values, so that they cannot be read by OpsWorks users. For example, a user can set separate environment variables for database endpoint, username, and password. The password environment variable can be defined as a protected value, so it cannot be viewed in the console, SDK, or CLI and is only made available to the defined application.
 
1. To get started, create a stack with an application server layer as described in steps 1-3 of the OpsWorks walk-through
 
2. Create the following sample PHP application as index.php in a source repository to display the environment variable database_endpoint that is set in step 3.
 
<?php
  $d = array('environment' => getenv("database_endpoint"));
  echo json_encode($d);
?>

3. Once you have a stack, PHP layer and instance created, start the instance and navigate to Apps to add an app. Point to the repository where your app’s code is stored. Scroll down to the app’s Environment Variables section, then enter the environment variables as keys and values making sure to include an environment variable database_endpoint with a value of your choosing.

 
Check the “protected value” box next to values such as passwords that you do not want to be displayed.  Once you have finished entering the environment variables, click Add App.
 
4. When you edit an app you can add, update, and delete environment variables that are set in OpsWorks. Notice that protected values are not displayed in the console, CLI or SDK. You can update the value by selecting the Update value link. Use OpsWorks permissions to choose which users can edit these values.
 
 
5. Finally, deploy your app. When the app is deployed, select the instance’s IP address to view the values that were passed to the app. You will see:
 
{"environment":"mydbinstance.us-east-1.rds.amazonaws.com"}

 
The environment variables were sent to your instance as attributes that you can also use in custom recipes:
 
  "deploy": {
    "myapp": {
      "application": "myapp",
      "application_type": "php",
      "environment": {
        "database_endpoint": "mydbinstance.us-east-1.rds.amazonaws.com",
     …
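
A custom recipe could then read the endpoint back out of those attributes, for example (using the app name myapp from the snippet above):

endpoint = node[:deploy][:myapp][:environment][:database_endpoint]
Chef::Log.info("Database endpoint: #{endpoint}")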

For more information including examples for other application servers, see the documentation.


Using New Relic to monitor applications on AWS OpsWorks

by Chris Barclay | in How-to, Partners
A good practice for maintaining highly available applications is to monitor the metrics that impact performance and service levels. AWS OpsWorks includes built-in integration with 14 Amazon CloudWatch metrics, including load, CPU and memory, but you may also want to monitor other metrics such as disk space utilization or application-level metrics such as error rates.
 
That’s where a monitoring solution such as New Relic can help. In addition to server-level metrics, New Relic offers application metrics that give deeper visibility into how your application is running. 
 
This blog post explains how to use community Chef cookbooks to enable New Relic monitoring on your application’s instances using the OpsWorks PHP walk-through as an example.
 

1. Create a cookbook repository 

Before starting the walk-through, first create a repository that references the community cookbook using a Berksfile, for example on GitHub:
 
 
The Berksfile needs two lines to reference the community cookbook:
 
source "https://api.berkshelf.com"

cookbook "newrelic", git: 'git://github.com/escapestudios-cookbooks/newrelic.git', tag: '1.0.6'
 
And that’s it; no other files are necessary in the cookbook repository. 
 
You might be curious about the recipes we are going to use from the New Relic cookbook. This post uses two recipes: newrelic::default, which installs the New Relic server monitoring agent, and newrelic::php-agent, which installs the New Relic PHP agent. You’ll add both to the layer in step 3.
 

2. Create a stack

Now you’re ready to create an OpsWorks stack using the OpsWorks console:
 
  1. Select Add a Stack to create an OpsWorks stack.
  2. Give it a name and select Advanced.
  3. Set Use custom Chef Cookbooks to Yes.
  4. Set Repository type to Git.
  5. Set the Repository URL to the repository created in the previous step
  6. Set Manage Berkshelf to Yes.
  7. In the Custom JSON box, add your New Relic license code and some options that override the default values in the community recipe. You can get a free trial license when you sign up for New Relic.

    {
        "newrelic": {
            "license": "—ADD-YOUR-NEWRELIC-LICENSE-HERE—",
            "php-agent": {
                "php_recipe": "mod_php5_apache2::default",
                "config_file": "/etc/php.d/newrelic.ini"
            }
        }
    }
  8. Click the Add Stack button at the bottom of the page to create the stack.

3. Create and configure a Layer

Now that the stack is set up, you need to create a layer. For this example we will use the PHP layer.
 
  1. Select Add Layer.
  2. Choose the PHP App Server layer. Click Add Layer. 
  3. Click the layer’s edit Recipes action and scroll to the Custom Chef recipes section. You will notice there are several headings—Setup, Configure, Deploy, Undeploy, and Shutdown—that correspond to OpsWorks lifecycle events. OpsWorks triggers these events at these key points in the instance’s lifecycle, which runs the associated recipes.
  4. Enter newrelic::default, newrelic::php-agent next to Setup, click + to add it to the list and click the Save button at the bottom to save the updated configuration. OpsWorks will then run the recipes whenever you initially boot an instance in this layer.

4. Launch Instances

After you’ve saved your layer, add some instances and start them. 
 

5. Create and deploy an app.

Once you’ve started your instances:
  1. In the navigation pane, click Apps and on the Apps page, click Add an app.
  2. On the App page, give it a Name.
  3. Enter the app’s repository type. The example app is stored in a Git repository.
  4. Enter the app’s repository URL. The example app’s repository URL is: git://github.com/amazonwebservices/opsworks-demo-php-simple-app.git
  5. Enter the app’s branch or version. This part of the walkthrough uses the version1 branch.
  6. Keep the default values for the remaining settings and click Add App.
  7. To install the code on the server, you must deploy the app. It will take some time for your instances to boot up completely. Once they show up as “online” in the Instances view, navigate to Apps and click deploy in the Actions column.
  8. Once the deployment is complete, you can see your app by selecting the public IP address of the server.

6. Check New Relic

Your servers should now be ready to view in the New Relic Servers view.
 
 
When you navigate to the New Relic Applications view, you can see the application metrics for the running app servers.
 

Conclusion:

This example demonstrates the combined power of OpsWorks, Berkshelf, community cookbooks, and New Relic. OpsWorks custom JSON allows you to provide your license code without having to put it in the recipe and exposing your license key in a repository. 
 
Finally, note that this example pins a specific tag (1.0.6) in the Berksfile. Pinning a version or tag like this is a good practice before using something similar on production servers: if the New Relic cookbook is updated and a newer version is not compatible, your servers will still use the one that worked before. You can read more about versions and Berkshelf in general at http://berkshelf.com/.

Customer Highlights from Around the Web: June 2014

by Evan Brown

June was an exciting month with a number of interesting, technical posts from customers using AWS Elastic Beanstalk and OpsWorks. We’ve aggregated some of these posts and grouped them by service below. If we missed your post – or you’d like to be included in next month’s roundup – shoot me an e-mail at evbrown at amazon.com.

Elastic Beanstalk

Travis, Docker, and Elastic Beanstalk: http://paulbutcher.com/2014/06/20/travis-docker-and-elastic-beanstalk/ Paul Butcher talks integrating Travis with Elastic Beanstalk’s recent support for Docker containers.

10 Steps Deploying Docker Containers on Elastic Beanstalk: http://flux7.com/blogs/docker/10-steps-deploying-docker-containers-on-elastic-beanstalk/ The folks at Flux7 labs have a structured tutorial for getting up and running with Docker and Elastic Beanstalk.

Drupal Deployment Automation using AWS Elastic Beanstalk for Docker: http://x-team.com/2014/05/drupal-deployment-automation-using-aws-elastic-beanstalk-for-docker/ Paul de Paula at x-team.com has a great post with a tutorial for deploying Drupal to Elastic Beanstalk.

Bleacher Report’s Continuous Integration & Delivery Methodology: Continuous Delivery Through Elastic Beanstalk: http://sauceio.com/index.php/2014/06/continuous-delivery-through-elastic-beanstalk/ Felix Rodriguez talks about Elastic Beanstalk, operations, and CI/CD in production.

Deploying to Elastic Beanstalk from your Continuous Integration System: https://nudaygames.squarespace.com/blog/2014/5/26/deploying-to-elastic-beanstalk-from-your-continuous-integration-system?utm_content=buffer7830c&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer Nuday Games talks Clojure web apps, Elastic Beanstalk, and CI.

OpsWorks

Cluster-wide Java/Scala application deployments with Docker, Chef, and AWS OpsWorks: http://www.warski.org/blog/2014/06/cluster-wide-javascala-application-deployments-with-docker-chef-and-amazon-opsworks/ Adam Warski gives an overview of how to create custom recipes on OpsWorks to deploy Docker images with Java/Scala applications.

OpsWorks and Fabric: Automation and Continuous Integration: https://www.youtube.com/watch?v=Vj47nXHFcW4 The video from a SendHub meetup where Max Smythe discusses how to use OpsWorks with Python’s Fabric for a full-featured continuous integration platform.

Scalable Python Application Deployments on AWS OpsWorks: http://blog.jazkarta.com/2014/06/03/scalable-python-application-deployments-on-aws-opsworks/ The folks at Jazkarta have shared their OpsWorks cookbooks for installing Python and Plone.