AWS Partner Network (APN) Blog

Facilitating a Migration to AWS with CloudEndure by Leveraging Automation

By Carmen Puccio and Mandus Momberg, Partner Solutions Architects at AWS focused on Migration

It’s no secret that migrating software and services from an on-premises environment to the cloud entails unique considerations and requirements. To provide confidence in the outcome of your migration, your migration strategy needs to scale easily. This means that a large part of your workflow must be automated.

There is no shortage of documentation on why automation in the cloud is important. In this post, we will show you how to perform an automated migration utilizing AWS Advanced Technology Partner CloudEndure, with a focus on incorporating automated tests so you can be confident that your application is working as expected post-migration.

The migration of a workload from on-premises to AWS requires careful planning and precise execution. There are many different strategies for moving to the cloud, and there are also numerous tools that help facilitate migration. All migration tools share common goals: to facilitate a migration to AWS by minimizing downtime and application workload impact, and to ensure that data loss is minimized.

Customers who want to quickly move their workloads to the cloud typically follow the rehost method, i.e., lift and shift. One of the challenges in executing a rehost is the amount of time it takes to manually confirm that a migrated application is performing as expected. Migrations that incorporate automation and rapid testing pipelines to validate a proper migration are not only more likely to succeed, but also improve efficiency as you take advantage of repeatable processes and decrease manual verification times.

Solution Overview

The solution we’ll describe in this blog post uses CloudEndure and AWS Database Migration Service (AWS DMS) to migrate a Go Git Service (Gogs) deployment from a source Amazon VPC to a destination Amazon VPC, simulating a live on-premises to AWS migration. Although we are using two different VPCs for the purposes of this demo, the automation and combination of tools shown here can just as easily be used in your toolkit to facilitate a true on-premises to AWS migration. For the setup of the mock source environment, which runs CentOS 7, we chose a combination of AWS CloudFormation and Ansible so you can follow along in your test AWS environment.

CloudEndure is responsible for migrating the application server, and AWS DMS is responsible for replatforming the Gogs DB from a MySQL server running on an EC2 instance to a fully managed Amazon RDS database. We decided to leverage DMS for the purposes of this demonstration to show you how to replatform a database to RDS; another option would have been to use CloudEndure to rehost the database to EC2.
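The DMS side of this replatform can be sketched with boto3. The sketch below is illustrative, not the code used in this demo: the task identifier, schema name, and the ARNs you would pass in are all placeholder assumptions.

```python
import json


def build_table_mappings(schema="gogs"):
    """Build DMS table-mapping rules selecting every table in the Gogs schema.

    The schema name "gogs" is an assumption for this sketch.
    """
    return json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-gogs",
            "object-locator": {"schema-name": schema, "table-name": "%"},
            "rule-action": "include",
        }]
    })


def create_migration_task(dms, source_arn, target_arn, instance_arn):
    """Create a full-load-and-CDC replication task.

    `dms` is a boto3 DMS client; this call only succeeds with real AWS
    credentials and real endpoint/instance ARNs.
    """
    return dms.create_replication_task(
        ReplicationTaskIdentifier="gogs-to-rds",  # placeholder name
        SourceEndpointArn=source_arn,
        TargetEndpointArn=target_arn,
        ReplicationInstanceArn=instance_arn,
        MigrationType="full-load-and-cdc",
        TableMappings=build_table_mappings(),
    )
```

The full-load-and-CDC migration type matters here: it keeps the RDS target in sync with ongoing writes to the source database until you cut over.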

CloudEndure has the ability to invoke custom post-processing scripts on the migrated instance upon launch. Using this capability enables you to do custom configuration and to run automated acceptance tests to prove that the application is working as expected on the migrated server.
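A post-processing acceptance test in this spirit might look like the following Python sketch. The health-check URL, topic ARN, and message shape are assumptions for illustration; the actual scripts used in this demo live in the GitHub repo.

```python
import json
import urllib.request


def build_result_message(instance_id, passed):
    """Build the pass/fail payload to publish to the SNS topic."""
    return json.dumps({"instanceId": instance_id, "Pass": str(passed)})


def service_is_up(url, timeout=5):
    """Return True if the service answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def report(sns, topic_arn, instance_id, url):
    """Run the acceptance check and publish the result.

    `sns` is a boto3 SNS client; publishing requires AWS credentials and a
    real topic ARN.
    """
    passed = service_is_up(url)
    sns.publish(TopicArn=topic_arn, Message=build_result_message(instance_id, passed))
    return passed
```

In this walkthrough the equivalent check would hit the Gogs port (3000) on the freshly launched instance before notifying the Pass/Fail topic.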

For migration confidence, we leverage AWS Lambda, Amazon SNS, Amazon SQS, and CloudEndure’s post-processing capabilities to build an automated testing pipeline that performs a series of tests. If all tests complete successfully, we automatically launch an AWS CloudFormation template that deploys a highly available Gogs environment using the images built from your source environment.

The following diagram illustrates the migration process covered in this post.

Here is how the process works:

1. Ansible installs the AWS Application Discovery Service agent, the CloudEndure agent, and the scripts that will be used to reconfigure and test Gogs on the source server.

2. AWS DMS migrates the Gogs source DB server to the destination RDS instance.

3. Once the CloudEndure agent is running, it starts a block-level copy to perform the initial sync of the Gogs source server to AWS.

4. Once CloudEndure has completed the initial sync, its Continuous Data Protection (CDP) engine commences a real-time sync of any new data, and the server is marked as ready for testing in AWS. The CloudEndure.py script then initiates the migration based on the hosttomigrate variable in the config.yml file. (This variable appears as Instance Name in the CloudEndure dashboard.)

5. The CloudEndure.py script calls the CloudEndure API and launches a test instance from the latest snapshot of the source instance.

6. CloudEndure launches a new instance in the destination from the latest snapshot and runs the CloudEndure.sh post-provisioning script, which does the following:

a. Reconfigures Gogs to point to the RDS instance that DMS is replicating to, and restarts the Gogs service.

b. Checks to see if the Gogs service is up and running. If yes, the CloudEndure.sh post-provisioning script calls the CloudEndure_PostProcessing.py script, which sends a success notification to the CloudEndure Pass/Fail SNS topic. An example message would look like this:

"Message": "{\"instanceId\": \"i-0bb669daff4b1eea1\", \"Pass\": \"True\"}"

c. The CloudEndure Lambda function is subscribed to the CloudEndure Pass/Fail SNS topic and looks for a success message. If it receives one, it creates an Amazon Machine Image (AMI) from the incoming instance ID and posts the AMI information to Amazon SQS. You can track the status in the CloudWatch logs for the Lambda function:

7. The CloudEndure.py script constantly polls the SQS queue for a message about the migrated instance. Once it receives a message, it checks to see if the AMI is ready. If it’s ready, the script launches the Gogs CloudFormation template and passes the AMI ID as a parameter. The CloudFormation template deploys a highly available environment that looks like this:
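Steps 6c and 7 hinge on a small amount of glue code. The following is a hypothetical sketch of the Lambda side: the queue URL, AMI naming scheme, and return values are placeholders, not the function shipped with this demo.

```python
import json


def parse_result(sns_record):
    """Extract the instance ID and pass/fail flag from an SNS-delivered record.

    strip() tolerates stray whitespace around the instance ID, as seen in the
    sample message above.
    """
    body = json.loads(sns_record["Sns"]["Message"])
    return body["instanceId"].strip(), body["Pass"] == "True"


def handler(event, context):
    """On a success message, image the instance and post the AMI ID to SQS."""
    import boto3  # deferred so the parsing logic is importable without AWS

    instance_id, passed = parse_result(event["Records"][0])
    if not passed:
        return {"status": "failed", "instanceId": instance_id}
    ec2 = boto3.client("ec2")
    ami = ec2.create_image(InstanceId=instance_id, Name=f"migrated-{instance_id}")
    boto3.client("sqs").send_message(
        # placeholder queue URL
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/CloudEndureQueue",
        MessageBody=json.dumps({"ami": ami["ImageId"], "instanceId": instance_id}),
    )
    return {"status": "imaged", "ami": ami["ImageId"]}
```

On the other side, CloudEndure.py would poll the queue, wait for the AMI to reach the available state, and only then launch the CloudFormation stack.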

Getting Started

Now that you know how the migration process works, let’s get started. First, you’ll need to set up an account with CloudEndure. If you don’t have one, you can register via the CloudEndure Migration product page on the AWS SaaS Subscription marketplace.[1]

Once your account is set up and you’ve followed the getting started guide on the CloudEndure website, you’ll need to familiarize yourself with the following files. The full solution is hosted on GitHub for further detail.

Ansible playbooks, variables, and files:

  • playbooks/files/CloudEndure.sh – This file will be deployed to /boot/ce_conversion, which is where CloudEndure executes post-migration scripts. It is used to reconfigure Gogs to point to RDS and test the service.
    • The reinvent-ent312-source-instances.yml CloudFormation template replaces all occurrences of ent312.five0.ninja in this file with your Amazon Route 53 domain alias that you want to point to your ELB load balancer for a highly available Gogs environment with Auto Scaling. This value is passed into the template via the GogsDNS parameter in the CloudFormation Template.
  • playbooks/cloudendure_agent_install.yml
    • The reinvent-ent312-source-instances.yml CloudFormation template sets your CloudEndure UserName and Password in this Ansible playbook, in the section called “Install CloudEndure,” based on the CloudEndureUser and CloudEndurePassword parameters in the CloudFormation template.

Migration script config.yml used by the CloudEndure.py script:

Edit the file to provide the following information:

  • username – User name for CloudEndure
  • password – Password for CloudEndure
  • hosttomigrate – Name of host to migrate in the CloudEndure dashboard. This value won’t be available in the dashboard until after CloudEndure starts the initial replication process.
  • stackname – Name of your CloudFormation stack. Only change this if you choose to change the default value of CloudEndureBlogDemo when naming your CloudFormation stack.
  • keypairname – Key pair for launching the Gogs Auto Scaling stack
  • gogsdns – Route 53 domain alias that you want to map to your ELB load balancer for Gogs Auto Scaling
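Put together, a filled-in config.yml might look like the following (every value here is an illustrative example, not a real credential):

```yaml
username: user@example.com         # CloudEndure account user name
password: YourCloudEndurePassword  # CloudEndure account password
hosttomigrate: gogs-source         # INSTANCE NAME shown in the CloudEndure dashboard
stackname: CloudEndureBlogDemo     # change only if you renamed your stack
keypairname: my-ec2-keypair        # key pair for the Gogs Auto Scaling stack
gogsdns: gogs.example.com          # Route 53 alias for the Gogs ELB
```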

CloudFormation template:

  • reinvent-ent312-migrated-gogs.template
    • The GogsDNSName parameter is the Route 53 domain alias that you want to map to your ELB load balancer for Gogs Auto Scaling. It is passed in based on the gogsdns value in config.yml when the CloudEndure.py script is run.
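To illustrate the hand-off, here is a hypothetical sketch of how a script like CloudEndure.py could launch the template with boto3. GogsDNSName comes from the template above; the KeyName and AMIID parameter names are assumptions made for this sketch.

```python
def build_parameters(config, ami_id):
    """Map config.yml values and the freshly created AMI onto template parameters.

    GogsDNSName is named in the post; KeyName and AMIID are assumed names.
    """
    return [
        {"ParameterKey": "GogsDNSName", "ParameterValue": config["gogsdns"]},
        {"ParameterKey": "KeyName", "ParameterValue": config["keypairname"]},
        {"ParameterKey": "AMIID", "ParameterValue": ami_id},
    ]


def launch_stack(cfn, config, ami_id, template_url):
    """Create the highly available Gogs stack.

    `cfn` is a boto3 CloudFormation client; this call needs AWS credentials.
    CAPABILITY_NAMED_IAM matches the IAM acknowledgement made in the console.
    """
    return cfn.create_stack(
        StackName=config["stackname"],
        TemplateURL=template_url,
        Parameters=build_parameters(config, ami_id),
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
```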

Deploying the Solution Using AWS CloudFormation

Now let’s take a look at the migration in detail and go through each step. In this demonstration, the CloudFormation template will spin up the source environment in a separate virtual private cloud (VPC) in your AWS account and will migrate it to a destination VPC within the same account.

First, you will need to deploy the AWS CloudFormation template into your AWS account.

You can also download the template to use it as a starting point for your own implementation.

On the Select Template page, keep the default setting for the template URL, and then choose Next.

Leave the default Stack name or enter a name for the stack, and fill out the values as shown in the following screenshots.

Take note of the values you set for Source Database Username and Source Database Password, as you will need them when you configure Gogs. Choose Next, and Next again on the following two screens, then select the check box that says “I acknowledge that AWS CloudFormation might create IAM resources with custom names.” Then choose Create.

It will take a couple of minutes for CloudFormation to create the resources in your account. When you see the stack with {YourStackName}-SourceInstanceResources marked as CREATE_COMPLETE, you can log in and configure Gogs.

The custom DMS task we created in CloudFormation depends on the Gogs DB being present, so the DMS stack will not finish until you install and configure Gogs. (At the time of this writing, CloudFormation does not natively support DMS resources, but we wanted to show you one way to build automation around certain aspects of your migration.)

In the Outputs tab for your stack, find AnsibleSourceInstance. SSH into the instance using that value:

ssh -i {KeyPairYouAssociatedWithTheStack} centos@{ValueFromAnsibleSourceInstance}

After you SSH into the instance, run the following command to make sure the updates and CloudFormation user data steps are complete.

sudo tail -f /var/log/cloud-init.log

Once cloud-init finishes bootstrapping the instance, you should see a message like the following:

Mar  7 18:30:29 ip-10-10-138-101 cloud-init: Cloud-init v. 0.7.5 finished at Tue, 07 Mar 2017 18:30:29 +0000. Datasource DataSourceEc2.  Up 369.01 seconds

Now you need to add the key pair to the instance so Ansible can use it to SSH into the source instances and configure Gogs. On your local machine, from the directory where you’ve stored the key pair, copy the private key to your clipboard (the pbcopy command is macOS-specific; on Linux, copy the output of cat manually):

cat {KeyPairYouAssociatedWithTheStack}.pem | pbcopy

On the Ansible source instance, run the command:

vi key.pem

Paste the private key into the vi window and save the file. Then change the permissions by running the command:

chmod 400 key.pem

Ensure that ssh-agent is enabled by running the following command. You should receive an agent pid (for example, Agent pid 417).

eval `ssh-agent`

Then add the SSH key to ssh-agent and press Enter for the empty passphrase:

ssh-add key.pem

Now you can provision the source Gogs DB via Ansible:

ansible-playbook -i playbooks/hosts playbooks/database_provision.yml

Provision the source Gogs instance:

ansible-playbook -i playbooks/hosts playbooks/gogs_provision.yml

Once Gogs is configured via Ansible you can log in and configure Gogs in the source environment. You will need the value from GogsSourceInstance in the Outputs tab of your SourceInstanceResources stack in CloudFormation:

http://{ValueFromGogsSourceInstance}:3000

In the Gogs User and Password fields enter the values you noted earlier from the Source Database Username and Source Database Password in CloudFormation:

You can then register a user name and password of your choice with Gogs. Take note of them for later in this demonstration.

When you see that the DMS stack in CloudFormation is complete, you can inspect the setup. You should see a replication instance:

You should also see both the source and destination endpoints:

You should also see a task that performs database synchronization:

When you’ve finished verifying DMS, return to the AnsibleSourceInstance SSH window and run the following to install the Application Discovery Service and CloudEndure:

ansible-playbook -i playbooks/hosts playbooks/aws_cli_ads_agent_install.yml

ansible-playbook -i playbooks/hosts playbooks/cloudendure_agent_install.yml

Log into the CloudEndure Dashboard and you should see your server. It may take a while for it to show as ready for testing, because CloudEndure is in the process of completing the initial block-level sync to AWS.

The value for INSTANCE NAME in the CloudEndure Dashboard is the value you need to set for the hosttomigrate variable in the config.yml file.

Run the CloudEndure.py script to initialize the migration:

python scripts/CloudEndure.py

To see an example output of the script, please look at the README.

Once the script finishes, you should see the highly available Gogs environment with Auto Scaling spinning up, using the AMI that was created by the Lambda function.

It will take a couple of minutes for the highly available Gogs environment to pass health checks and go into service behind the ELB load balancer. Eventually, though, you should be able to access the migrated Gogs environment, now configured to use RDS, by signing in with the user name you created in the source environment. This proves that the DMS task successfully migrated your source Gogs database to RDS.

Summary

In this post, we demonstrated how you can incorporate automation and testing into your toolkit to speed up your migrations from your on-premises environment to AWS. With careful planning and configuration at the outset, you will have a series of tools you can reuse across your migration scenarios. This will enable you to migrate your workloads faster and will give you greater confidence that your applications are working as expected post-migration.


The content in this blog is not an endorsement of a third-party product. This blog is intended for informational purposes.

[1] Please note that you are responsible for any costs incurred while following the steps in this blog post.