AWS DevOps & Developer Productivity Blog
Use AWS CodeDeploy to Deploy to Amazon EC2 Instances Behind an Elastic Load Balancer
- Set up the environment described above
- Create your artifact bundle, which includes the deployment scripts, and upload it to Amazon S3
- Create an AWS CodeDeploy application and a deployment group
- Start the zero-downtime deployment
- Monitor your deployment
1. Set up the environment
- An Auto Scaling group and its launch configuration. The Auto Scaling group launches three Amazon EC2 instances by default. The AWS CloudFormation template installs Apache on each of these instances to run a sample website. It also installs the AWS CodeDeploy agent, which performs the deployments on the instance. The template creates a service role that grants AWS CodeDeploy access to add deployment lifecycle event hooks to your Auto Scaling group so that it can kick off a deployment whenever Auto Scaling launches a new Amazon EC2 instance.
- The Auto Scaling group spins up Amazon EC2 instances and monitors their health for you. It spans all Availability Zones within the region for fault tolerance.
- An Elastic Load Balancing load balancer, which distributes the traffic across all of the Amazon EC2 instances in the Auto Scaling group.
```bash
aws cloudformation create-stack \
    --stack-name "CodeDeploySampleELBIntegrationStack" \
    --template-url "http://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_ELB_Integration.json" \
    --capabilities "CAPABILITY_IAM" \
    --parameters "ParameterKey=KeyName,ParameterValue=<my-key-pair>"
```
Note: AWS CloudFormation will change your AWS account's security configuration by adding two roles. These roles enable AWS CodeDeploy to perform actions on your AWS account's behalf, including identifying Amazon EC2 instances by their tags or Auto Scaling group names and deploying applications from Amazon S3 buckets to instances. For more information, see the AWS CodeDeploy service role and IAM instance profile documentation.
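Stack creation takes a few minutes. Before moving on, you can block until the stack is ready; a minimal sketch (the stack name matches the create-stack call above, and the `aws` CLI is assumed to be configured):

```shell
#!/bin/bash
# Sketch: wait for the CloudFormation stack from step 1 to finish creating,
# then print its final status as confirmation.
wait_for_stack() {
    local stack_name=$1
    # Polls until the stack reaches CREATE_COMPLETE; fails if creation errors out.
    aws cloudformation wait stack-create-complete --stack-name "$stack_name" || return 1
    # Print the final stack status.
    aws cloudformation describe-stacks --stack-name "$stack_name" \
        --query 'Stacks[0].StackStatus' --output text
}

# Usage:
# wait_for_stack "CodeDeploySampleELBIntegrationStack"
```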
2. Create your artifact bundle, which includes the deployment scripts, and upload it to Amazon S3
```yaml
version: 0.0
os: linux
files:
  - source: /html
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/deregister_from_elb.sh
      timeout: 400
    - location: scripts/stop_server.sh
      timeout: 120
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 120
      runas: root
    - location: scripts/register_with_elb.sh
      timeout: 120
```
- BeforeInstall deployment lifecycle event
First, it deregisters the instance from the load balancer (deregister_from_elb.sh). I have increased the timeout for the deregistration script to 400 seconds, above the 300 seconds the load balancer waits by default for in-flight connections to finish when connection draining is enabled.
After that, it stops the Apache web server (stop_server.sh).
- Install deployment lifecycle event
Next, the host agent copies the HTML pages defined in the 'files' section from the '/html' folder in the archive to '/var/www/html' on the server.
- ApplicationStart deployment lifecycle event
It starts the Apache Web Server (start_server.sh).
It then registers the instance with the load balancer (register_with_elb.sh).
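Once the appspec.yml and the scripts are in place, the bundle has to be zipped and uploaded to Amazon S3. A minimal sketch using the `aws deploy push` convenience command (the bucket name passed in is a placeholder; the walkthrough itself uses a prebuilt sample bundle in a later step):

```shell
#!/bin/bash
# Sketch: zip the current directory (appspec.yml, html/, scripts/) and
# upload it to S3 as a CodeDeploy revision in one step.
push_bundle() {
    local bucket=$1
    aws deploy push \
        --application-name "SampleELBWebApp" \
        --s3-location "s3://$bucket/SampleApp_ELB_Integration.zip" \
        --source .
}

# Usage (bucket name is a placeholder):
# push_bundle "<my-bucket>"
```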
- The script gets the instance ID (and AWS region) from the Amazon EC2 metadata service.
- It checks if the instance is part of an Auto Scaling group.
- After that, the script deregisters the instance from the load balancer by putting the instance into standby mode in its Auto Scaling group.
- The script keeps polling the Auto Scaling API every second until the instance is in standby mode, which means it has been deregistered from the load balancer.
- The deregistration might take a while if connection draining is enabled. The server has to finish processing the ongoing requests first before we can continue with the deployment.
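The polling step above can be sketched as a small loop. This is not the script's actual `wait_for_state` implementation; `get_lifecycle_state` is a hypothetical helper standing in for the `describe-auto-scaling-instances` call that reads the instance's lifecycle state:

```shell
#!/bin/bash
# Sketch of the 1-second polling loop: keep asking for the instance's
# lifecycle state until it matches the target state or we run out of attempts.
WAITER_INTERVAL=1
WAITER_ATTEMPTS=300

wait_for_state() {
    local instance_id=$1 target_state=$2 attempt=0
    while [ $attempt -lt $WAITER_ATTEMPTS ]; do
        # get_lifecycle_state is a hypothetical helper wrapping
        # 'aws autoscaling describe-auto-scaling-instances'.
        if [ "$(get_lifecycle_state "$instance_id")" = "$target_state" ]; then
            return 0    # instance reached the target state
        fi
        attempt=$((attempt + 1))
        sleep $WAITER_INTERVAL
    done
    return 1            # timed out
}
```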
```bash
# Get this instance's ID
INSTANCE_ID=$(get_instance_id)
if [ $? != 0 -o -z "$INSTANCE_ID" ]; then
    error_exit "Unable to get this instance's ID; cannot continue."
fi

msg "Checking if instance $INSTANCE_ID is part of an AutoScaling group"
asg=$(autoscaling_group_name $INSTANCE_ID)
if [ $? == 0 -a -n "$asg" ]; then
    msg "Found AutoScaling group for instance $INSTANCE_ID: $asg"
    msg "Attempting to put instance into Standby"
    autoscaling_enter_standby $INSTANCE_ID $asg
    if [ $? != 0 ]; then
        error_exit "Failed to move instance into standby"
    else
        msg "Instance is in standby"
        exit 0
    fi
fi
```
```bash
autoscaling_enter_standby() {
    local instance_id=$1
    local asg_name=$2

    msg "Putting instance $instance_id into Standby"
    $AWS_CLI autoscaling enter-standby \
        --instance-ids $instance_id \
        --auto-scaling-group-name $asg_name \
        --should-decrement-desired-capacity
    if [ $? != 0 ]; then
        msg "Failed to put instance $instance_id into standby for ASG $asg_name."
        return 1
    fi

    msg "Waiting for move to standby to finish."
    wait_for_state "autoscaling" $instance_id "Standby"
    if [ $? != 0 ]; then
        local wait_timeout=$(($WAITER_INTERVAL * $WAITER_ATTEMPTS))
        msg "$instance_id did not make it to standby after $wait_timeout seconds"
        return 1
    fi

    return 0
}
```
```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:Describe*",
        "autoscaling:EnterStandby",
        "autoscaling:ExitStandby",
        "cloudformation:Describe*",
        "cloudformation:GetTemplate",
        "s3:Get*"
      ],
      "Resource": "*"
    }
  ]
}
```
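If you set up the instances yourself rather than through the CloudFormation template, the policy has to be attached to the instance profile role. A minimal sketch, assuming the policy JSON is saved as standby-policy.json and `<instance-profile-role>` is a placeholder for your role name:

```shell
#!/bin/bash
# Sketch: attach the standby permissions above to an instance profile role
# so the deployment scripts can call the Auto Scaling API.
attach_standby_policy() {
    local role_name=$1
    aws iam put-role-policy \
        --role-name "$role_name" \
        --policy-name "codedeploy-elb-standby" \
        --policy-document file://standby-policy.json
}

# Usage (role name is a placeholder):
# attach_standby_policy "<instance-profile-role>"
```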
3. Create an AWS CodeDeploy application and a deployment group
```bash
# Create a new AWS CodeDeploy application.
aws deploy create-application --application-name "SampleELBWebApp"

# Get the AWS CodeDeploy service role ARN and Auto Scaling group name
# from the AWS CloudFormation stack.
output_parameters=$(aws cloudformation describe-stacks \
    --stack-name "CodeDeploySampleELBIntegrationStack" \
    --output text \
    --query 'Stacks[0].Outputs[*].OutputValue')
service_role_arn=$(echo $output_parameters | awk '{print $2}')
autoscaling_group_name=$(echo $output_parameters | awk '{print $3}')

# Create an AWS CodeDeploy deployment group that uses
# the Auto Scaling group created by the AWS CloudFormation template.
# Set up the deployment group so that it deploys to
# only one instance at a time.
aws deploy create-deployment-group \
    --application-name "SampleELBWebApp" \
    --deployment-group-name "SampleELBDeploymentGroup" \
    --auto-scaling-groups "$autoscaling_group_name" \
    --service-role-arn "$service_role_arn" \
    --deployment-config-name "CodeDeployDefault.OneAtATime"
```
4. Start the zero-downtime deployment
```bash
aws deploy create-deployment \
    --application-name "SampleELBWebApp" \
    --s3-location "bucket=aws-codedeploy-us-east-1,key=samples/latest/SampleApp_ELB_Integration.zip,bundleType=zip" \
    --deployment-group-name "SampleELBDeploymentGroup"
```
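The create-deployment call returns a deployment ID immediately while the rollout proceeds in the background. If you want a script to block until the deployment finishes, a minimal sketch (the deployment ID shown is a placeholder):

```shell
#!/bin/bash
# Sketch: block until the given deployment succeeds (or fail fast on error),
# then print the final deployment status.
wait_for_deployment() {
    local deployment_id=$1
    # Polls until the deployment reaches a terminal successful state.
    aws deploy wait deployment-successful --deployment-id "$deployment_id" || return 1
    aws deploy get-deployment --deployment-id "$deployment_id" \
        --query 'deploymentInfo.status' --output text
}

# Usage (deployment ID is a placeholder):
# wait_for_deployment "d-EXAMPLE111"
```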
5. Monitor your deployment
```bash
watch -n1 aws autoscaling describe-scaling-activities \
    --auto-scaling-group-name "$autoscaling_group_name" \
    --query 'Activities[*].Description'
```
```
Every 1.0s: aws autoscaling describe-scaling-activities [...]

[
    "Moving EC2 instance out of Standby: i-d308b93c",
    "Moving EC2 instance to Standby: i-d308b93c",
    "Moving EC2 instance out of Standby: i-a9695458",
    "Moving EC2 instance to Standby: i-a9695458",
    "Moving EC2 instance out of Standby: i-2478cade",
    "Moving EC2 instance to Standby: i-2478cade",
    "Launching a new EC2 instance: i-d308b93c",
    "Launching a new EC2 instance: i-a9695458",
    "Launching a new EC2 instance: i-2478cade"
]
```
```bash
# Get the URL output parameter of the AWS CloudFormation stack.
aws cloudformation describe-stacks \
    --stack-name "CodeDeploySampleELBIntegrationStack" \
    --output text \
    --query 'Stacks[0].Outputs[?OutputKey==`URL`].OutputValue'
```
- Graceful shut-down of your application
You do not want to kill a process that is still handling requests. Make sure that running threads have enough time to finish their work before you shut down your application.
- Connection draining
The AWS CloudFormation template sets up an Elastic Load Balancing load balancer with connection draining enabled. The load balancer does not send any new requests to the instance when the instance is deregistering, and it waits until any in-flight requests have finished executing. (For more information, see Enable or Disable Connection Draining for Your Load Balancer.)
- Sanity test
It is important to check that the instance is healthy and the application is running before the instance is added back to the load balancer after the deployment.
- Backward-compatible changes (for example, database changes)
Both application versions must work side by side until the deployment finishes, because only a part of the fleet is updated at a time.
- Warming of the caches and service
This ensures that no request suffers degraded performance after the deployment.
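The sanity test and warm-up points above could be combined in a single hook script run before the instance rejoins the load balancer (for example, as an additional ApplicationStart or ValidateService step). A minimal sketch; the URL and request count are illustrative, not part of the sample application:

```shell
#!/bin/bash
# Sketch: verify the web server answers locally, then issue a few requests
# to warm caches before the instance is registered with the load balancer.
validate_and_warm() {
    local url=${1:-http://localhost/}
    # Fail the deployment if the application does not answer.
    curl --silent --fail --output /dev/null "$url" || return 1
    # A few warm-up requests so caches are populated before real traffic.
    for _ in 1 2 3 4 5; do
        curl --silent --output /dev/null "$url"
    done
}

# Usage:
# validate_and_warm "http://localhost/"
```

A non-zero exit code from a hook script causes AWS CodeDeploy to mark the lifecycle event as failed, which is exactly what you want when the sanity test does not pass.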