
Elastic JBoss AS 7 clustering in AWS using EC2, S3, ELB and Chef

I always like it when customers tell us about their experience with AWS services. This time, Sascha Möllering from Zanox wrote a guest post on how to run a scalable JBoss AS 7 environment in AWS. Many thanks to Sascha!

Steffen

In most cases you want to set up your infrastructure in an elastic manner: it should grow if the load increases and shrink if the load decreases. In this case, we're using CPU load as the metric. If you're using JBoss 7 as a Java EE-compliant application server, this is simple to set up as long as your application is stateless. But if your application is stateful and needs to share state between nodes (e.g. if you're using clustered 2nd-level caches for Hibernate), this can be pretty complicated. The easiest way to achieve this is to set up the JBoss cluster using UDP multicast, but this won't work in AWS, for good reasons: EC2 does not support multicast. A different way of setting up a cluster is using TCP and TCPPING, but this is a really static approach. JBoss introduced an additional way to set up a cluster in AWS using the so-called S3_PING module. This module stores the cluster membership information in an S3 bucket instead of a static node list in the XML configuration. By using this module, the JBoss cluster can be set up in an elastic way using EC2, S3, ELB, IAM, Auto Scaling and Chef, even without using the AWS console.

First of all we need to set up an IAM user (in our case jboss) that JBoss will use to access the S3 bucket:

aws iam create-user --user-name jboss
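JBoss authenticates against S3 with access keys, so the new user also needs a key pair; the values go into the Chef attributes file shown later. A minimal sketch:

# Generate an access key pair for the jboss user; note the
# AccessKeyId and SecretAccessKey from the output, they are
# needed in the Chef attributes file below.
aws iam create-access-key --user-name jboss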

Next, we create the S3 bucket that will store the cluster config (replace the bucket name with your own, since bucket names are globally unique):

aws s3api create-bucket --bucket jboss7clusterconfig --create-bucket-configuration '{ "LocationConstraint":"eu-west-1" }'

and a user policy that allows access to S3:

aws iam put-user-policy --user-name jboss --policy-name jbosspolicy --policy-document file:///tmp/s3_permissions.json

The policy document (s3_permissions.json) contains the following permissions:

{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::jboss7clusterconfig",
                "arn:aws:s3:::jboss7clusterconfig/*"
            ]
        }
    ]
}
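If you want to double-check that the policy was attached correctly, you can read it back; a quick sanity check:

# Print the inline policy we just attached to the jboss user
aws iam get-user-policy --user-name jboss --policy-name jbosspolicy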

After setting up and configuring the S3 bucket and the user, we set up two JBoss 7 instances in EC2 based on Amazon Linux (ami-c7c0d6b3 for eu-west-1). Usually this is a very time-consuming step, but in this case we're using Chef (http://www.opscode.com/chef/) as an automation platform. If you don't have a local Chef installation on your workstation or use AWS OpsWorks, take a look at the excellent documentation by Opscode (http://docs.opscode.com/install_workstation.html).

The required Chef cookbook for JBoss can be found here: https://github.com/SaschaMoellering/jboss7. In the default attributes file you have to add your AWS credentials (the access key pair created above) and the name of the S3 bucket.

jboss7/attributes/default.rb:

default['aws']['s3']['access_key'] = ""
default['aws']['s3']['secret_access_key'] = ""
default['aws']['s3']['bucket'] = ""

The whole "magic" for the clustering part using S3 is done in JBoss's standalone-full-ha.xml file:

<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="s3ping">
    …
    <stack name="s3ping">
        <transport type="TCP" socket-binding="jgroups-tcp" diagnostics-socket-binding="jgroups-diagnostics"/>
        <protocol type="S3_PING">
            <property name="access_key"><%= @s3_access_key %></property>
            <property name="secret_access_key"><%= @s3_secret_access_key %></property>
            <property name="prefix"><%= @s3_bucket %></property>
            <property name="timeout">60000</property>
        </protocol>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="BARRIER"/>
        <protocol type="pbcast.NAKACK"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
    </stack>
    …
</subsystem>

In the JBoss 7 cookbook, Chef's templating mechanism is used to replace @s3_access_key, @s3_secret_access_key and @s3_bucket with the values from the attributes file.

Additionally, we need the standard Java cookbook from Opscode (https://github.com/opscode-cookbooks/java). This cookbook installs OpenJDK 1.7 by default. To start the installation, fire up knife with the following parameters:

knife ec2 server create -I ami-c7c0d6b3 -i <your/pem/file.pem> -S knife -r "recipe[java],recipe[jboss7]"
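Once knife has finished bootstrapping, it can't hurt to verify that both machines registered with the Chef server; a quick check:

# Both freshly bootstrapped EC2 nodes should show up here
knife node list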

If your JBoss instances were created without errors, ssh into the machines and start the JBoss service using /etc/init.d/jboss start. Right now we have two clustered JBoss instances but no application using them. A test application is located here:

https://github.com/SaschaMoellering/JBossCluster

Basically this is a "mavenized" version of a clustering demo app from "Mastertheboss" (http://www.mastertheboss.com/jboss-cluster/clustering-jboss-as-7). To build this app, you have to install Maven (http://maven.apache.org/) on your local workstation (installation instructions are located at http://maven.apache.org/download.cgi#Installation) and enter mvn package on your command line. The deployable war file (JBossCluster-1.0-SNAPSHOT.war) can then be found in the target folder. This war file has to be copied to our running JBoss instances under /srv/jboss/jboss-7.1.1/standalone/deployments, as sketched below.
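A minimal build-and-deploy sketch; <jboss-node> is a placeholder for an instance's public DNS name, the PEM file is the one used with knife above, and ec2-user is the default user on Amazon Linux (you may need sudo or adjusted permissions on the deployments directory):

# Build the war file locally
mvn package

# Copy it to each JBoss node; the deployment scanner picks it up
scp -i <your/pem/file.pem> target/JBossCluster-1.0-SNAPSHOT.war ec2-user@<jboss-node>:/srv/jboss/jboss-7.1.1/standalone/deployments/

# Optional: confirm the nodes discovered each other via S3_PING
# by listing the membership entries in the bucket
aws s3 ls s3://jboss7clusterconfig/ --recursive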

The next step is to create an AMI of one machine; the returned image ID is the <JBOSS-AMI-ID> we need later for the Auto Scaling configuration:

aws ec2 create-image --instance-id <JBOSS-INSTANCE-ID> --name jboss-node
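If you lose the returned image ID, you can look it up again; a quick way, assuming the AMI name from above:

# List your own AMIs to find the ID of the freshly created image
aws ec2 describe-images --owners self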

Now we have to create an ELB for our JBoss instances:

aws elb create-load-balancer --load-balancer-name jboss-elb --listeners Protocol=HTTP,LoadBalancerPort=80,InstancePort=8080 --availability-zones "eu-west-1a"

The load balancer listens on port 80 and forwards requests to port 8080, the standard HTTP port for JBoss. Additionally, the ELB needs a health check configuration. The JBoss cookbook contains a status.txt file that is deployed in the web root.

aws elb configure-health-check --load-balancer-name jboss-elb --health-check Target=HTTP:8080/status.txt,Interval=30,Timeout=3,UnhealthyThreshold=2,HealthyThreshold=2
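Before wiring everything together, it is worth checking that the health check target actually answers on one of the instances; <jboss-node> is again a placeholder for an instance's public DNS name (port 8080 must be open in the security group):

# The ELB marks an instance healthy only if this returns HTTP 200
curl -i http://<jboss-node>:8080/status.txt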

Now we have to register the JBoss instances with the ELB:

aws elb register-instances-with-load-balancer --load-balancer-name jboss-elb --instances ID1 ID2
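At this point the cluster is reachable through the load balancer. You can fetch the ELB's DNS name and hit the test application through it, assuming the demo war keeps its default context path (the file name without .war):

# Look up the public DNS name of the load balancer
aws elb describe-load-balancers --load-balancer-names jboss-elb

# Then request the demo app through the ELB (port 80)
curl http://<elb-dns-name>/JBossCluster-1.0-SNAPSHOT/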

After registering the instances, we have to create a launch configuration based on our recently created JBoss AMI:

aws autoscaling create-launch-configuration --launch-configuration-name jbossscalelc --image-id <JBOSS-AMI-ID> --instance-type m1.medium --key-name <yourKeyPair>

It is very important to add your key pair to the launch configuration; otherwise it would not be possible to ssh into the EC2 instances created by Auto Scaling. We need this launch configuration so that Auto Scaling can start additional JBoss nodes based on the AMI we created earlier. The launch configuration also needs an Auto Scaling group that defines how many JBoss nodes may be running (0 min, 4 max):

aws autoscaling create-auto-scaling-group --auto-scaling-group-name jbossscalesg --launch-configuration-name jbossscalelc --min-size 0 --max-size 4 --load-balancer-names jboss-elb --availability-zones eu-west-1a
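You can verify the group's wiring (launch configuration, attached ELB, size limits) with a describe call:

# Show the new Auto Scaling group, its launch configuration and limits
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names jbossscalesg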

The next step is to add a scaling policy that defines the scaling adjustment (in our case 1) and the cooldown phase (the amount of time, in seconds, after a scaling activity completes before the next scaling activity can start). In this step we define the policy for scaling up:

aws autoscaling put-scaling-policy --auto-scaling-group-name jbossscalesg --policy-name ScaleUpPolicy --scaling-adjustment 1 --adjustment-type ChangeInCapacity --cooldown 300
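put-scaling-policy returns the policy's ARN, which the CloudWatch alarms below need as their --alarm-actions value (the ARNs shown there are from our account; yours will differ). If you didn't note it down, you can retrieve it again:

# Print the scaling policies (including their ARNs) for the group
aws autoscaling describe-policies --auto-scaling-group-name jbossscalesg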

The last thing we have to do for the scale-up policy is to define a CloudWatch alarm: it fires if the average CPU utilization exceeds 80% for at least 60 seconds.

aws cloudwatch put-metric-alarm --alarm-name HighCPUAlarm --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 60 --statistic Average --threshold 80 --alarm-actions arn:aws:autoscaling:eu-west-1:851073193649:scalingPolicy:7c480462-1fcc-45d3-83ef-09f37be96412:autoScalingGroupName/jbossscalesg:policyName/ScaleUpPolicy --dimensions Name=AutoScalingGroupName,Value=jbossscalesg

Of course we need a scale-down policy as well: it decreases the number of JBoss instances by 1, again with a cooldown of 300 seconds.

aws autoscaling put-scaling-policy --auto-scaling-group-name jbossscalesg --policy-name ScaleDownPolicy --scaling-adjustment -1 --adjustment-type ChangeInCapacity --cooldown 300

For the scale-down policy we need a CloudWatch alarm as well: if the average CPU utilization stays below 40% for 600 seconds, the alarm fires.

aws cloudwatch put-metric-alarm --alarm-name MyLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 40 --alarm-actions arn:aws:autoscaling:eu-west-1:851073193649:scalingPolicy:d3fb0c39-b3b5-40c5-9a4f-c4a918fd4ac3:autoScalingGroupName/jbossscalesg:policyName/ScaleDownPolicy --dimensions Name=AutoScalingGroupName,Value=jbossscalesg

Now we have everything we need to create an elastic JBoss 7 cluster in AWS without using the console.