AWS Cloud Operations Blog

How to import migrated Amazon EC2 instances into infrastructure code

Modeling Infrastructure as Code (IaC) enables you to automate the lifecycle of AWS resources. However, the timing for IaC adoption can vary. AWS customers often move quickly in the beginning by performing block-level replication of their servers to the cloud. This is suitable when hundreds or thousands of servers need to exit their data center on a strict timeline with minimal involvement of application teams and other vendors. After the migration, customers look for ways to manage their cloud environments more efficiently. They also look for ways to reduce the number of outdated Amazon Machine Images (AMIs).

This blog discusses the use case of server migrations using AWS Application Migration Service. We describe how to simplify the CloudFormation import and manage your cutover instances. We use Former2, an open-source IaC template creation utility. Former2 supports AWS CloudFormation, the AWS Cloud Development Kit (AWS CDK), and Terraform, among others. Post-launch actions ensure every cutover instance has the AWS Systems Manager Agent installed. This offers a simple way to automate administrative tasks such as AMI management, helping you maintain migration speed and reduce the effort of operating at scale.

Considerations and prerequisites

The following are recommended for continuing with this post:

Important note

Since Former2 is an open-source utility, we recommend a thorough review of its output prior to importing it into CloudFormation.

Solution overview

We assume the use case of a rehost migration factory that executes server migrations using Application Migration Service in batches of servers, also called migration waves. The solution is equally applicable if you are using the Cloud Migration Factory on AWS Solution or another orchestrator to manage Application Migration Service jobs.

Identify resources outside of CloudFormation management

In the case of a rehost migration factory with multiple migration waves, we can build an event-driven workflow to add cutover instances into IaC. This can be achieved using an Amazon EventBridge rule listening for cutover events. The cutover events store data about the instance ID and the associated Application Migration Service job ID. We can deliver them to an Amazon Simple Storage Service (Amazon S3) bucket using Amazon Kinesis Data Firehose (see Figure 1).

With an Amazon EventBridge rule and a Kinesis Data Firehose delivery stream as its target, you can deliver Application Migration Service cutover events to an S3 bucket. Amazon S3 event notifications invoke an AWS Lambda function, which reads the server ID from the event log. Then, it invokes Former2, which we run as a task in AWS Fargate. Former2 puts the infrastructure code for the associated server ID in an S3 bucket. The code should be reviewed and then imported into a CloudFormation stack.

Figure 1. Add existing AWS resources to infrastructure code

Import resources into CloudFormation management

We use Amazon S3 event notifications to invoke AWS Lambda each time a new object is put to Amazon S3. Lambda fetches the server ID from Amazon S3 and starts Former2, which we run as a task in AWS Fargate. Former2 generates the infrastructure code and puts it in a designated S3 bucket. Then, you can review and import your resource into a CloudFormation stack using the AWS CLI or AWS Management Console, without having to recreate the environment.

Enable post-launch actions

You can enable post-launch actions at any time for all cutover instances. This will ensure every instance has the Systems Manager Agent installed and reports itself into the AWS Systems Manager Inventory. Once instances are added to the Systems Manager Inventory, you can apply in-place operating system (OS) patches using Systems Manager Patch Manager and automate administrative tasks using Systems Manager Run Command. Systems Manager Automation runbooks can perform an automated upgrade by launching a new instance and upgrading it in place, resulting in a new Amazon Machine Image (AMI). This allows you to test the new AMI while the existing instance continues to serve user traffic, and then switch traffic to the new instance. The following figure visualizes the workflow.

Enabling post-launch actions triggers the installation of the Systems Manager Agent and attaches the Systems Manager IAM role to all cutover instances. Once Systems Manager has added the cutover instances to its Inventory, you can use Patch Manager to patch AMIs and Automation runbooks to upgrade cutover instances.

Figure 2. Enabling automated upgrades for cutover instances
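
As an illustration of the upgrade step, the following is a minimal sketch that starts a Systems Manager Automation runbook from Python. The AMI ID, instance profile, and automation role are placeholders, and AWS-UpdateLinuxAmi is just one example runbook; check the parameters of the runbook you choose.

import boto3

ssm = boto3.client('ssm')

# Start an Automation execution that launches a temporary instance from the source AMI,
# applies OS updates, and produces a new AMI. All parameter values are placeholders.
response = ssm.start_automation_execution(
    DocumentName='AWS-UpdateLinuxAmi',
    Parameters={
        'SourceAmiId': ['ami-0123456789abcdef0'],
        'IamInstanceProfileName': ['ManagedInstanceProfile'],
        'AutomationAssumeRole': ['arn:aws:iam::111122223333:role/AutomationServiceRole']
    }
)
print(response['AutomationExecutionId'])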

Implementation procedures

Step 1. Identify the relevant events

Application Migration Service creates transient Amazon Elastic Compute Cloud (Amazon EC2) resources prior to the cutover event. Therefore, it’s important to only fetch the cutover instances. Do this by setting up the EventBridge rule to listen for FinalizeCutover API calls with the CUTOVER lifecycle state, as follows.

{
  "source": ["aws.mgn"],
  "detail": {
    "eventName": ["FinalizeCutover"],
    "responseElements": {
      "lifeCycle": {
        "state": ["CUTOVER"]
      }
    }
  }
}
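
If you prefer to script the setup, the rule and its delivery stream target can be created with boto3. The following is a minimal sketch; the rule name, delivery stream ARN, and IAM role ARN are placeholders, and the role must allow EventBridge to put records to the stream.

import json
import boto3

events = boto3.client('events')

event_pattern = {
    "source": ["aws.mgn"],
    "detail": {
        "eventName": ["FinalizeCutover"],
        "responseElements": {"lifeCycle": {"state": ["CUTOVER"]}}
    }
}

# Create (or update) the rule on the default event bus
events.put_rule(
    Name='mgn-cutover-events',  # hypothetical rule name
    EventPattern=json.dumps(event_pattern),
    State='ENABLED'
)

# Deliver matching events to the Kinesis Data Firehose delivery stream
events.put_targets(
    Rule='mgn-cutover-events',
    Targets=[{
        'Id': 'firehose-delivery',
        'Arn': 'arn:aws:firehose:us-east-1:111122223333:deliverystream/mgn-cutover',  # placeholder
        'RoleArn': 'arn:aws:iam::111122223333:role/eventbridge-to-firehose'  # placeholder
    }]
)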

Step 2. Create destination for Amazon S3 event notifications

Amazon S3 event notifications initiate our Lambda function, which fetches the associated source server ID (the value Application Migration Service also writes to the AWSApplicationMigrationServiceSourceServerID tag) from the cutover event and uses it to formulate the generate call for our Former2 Docker image. We implement this logic using Python, as follows. The associated Lambda resource needs Amazon S3 read access to parse cutover events as well as ecs:RunTask permission to invoke Former2.

import json
import boto3

def lambda_handler(event, context):
    # Read S3 Event and identify event file
    s3 = boto3.resource('s3')

    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    
    content_object = s3.Object(bucket, key)
    
    data = json.load(content_object.get()['Body'])
    
    sourceServerID = data['detail']['responseElements']['sourceServerID']
    
    print(sourceServerID)
    run_former2(sourceServerID)

def run_former2(sourceServerID):
    # Run the Former2 CLI as a Fargate task, using the source server ID as a search filter
    client = boto3.client('ecs')

    override = {
        "containerOverrides": [
            {
                'name': 'former-poc',
                'command': ['--cfn-deletion-policy', 'Retain', '--services', 'EC2', '--search-filter', sourceServerID]
            }
        ]
    }

    response = client.run_task(
        cluster='former',  # name of the cluster
        launchType='FARGATE',
        taskDefinition='task-former-poc:8',  # replace with your task definition name and revision
        count=1,
        platformVersion='LATEST',
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': [
                    'subnet-0272da36749d3cbc1'
                ],
                'assignPublicIp': 'ENABLED'
            }
        },
        overrides=override
    )
    return str(response)
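
For completeness, the bucket receiving the Firehose output must be allowed and configured to invoke this function. The following is a minimal sketch, assuming hypothetical bucket and function names; you can also configure the event notification in the console.

import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

bucket = 'mgn-cutover-events'  # hypothetical bucket holding the Firehose output
function_arn = 'arn:aws:lambda:us-east-1:111122223333:function:invoke-former2'  # placeholder

# Allow Amazon S3 to invoke the Lambda function
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='s3-invoke-former2',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn=f'arn:aws:s3:::{bucket}'
)

# Invoke the function for every object created in the bucket
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': function_arn,
            'Events': ['s3:ObjectCreated:*']
        }]
    }
)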

Step 3. Set up Former2

Former2 provides a CLI that you can run directly from the command line. It’s available as a Docker image that can be pushed to Amazon Elastic Container Registry (Amazon ECR). Follow the steps in the Amazon ECR user guide to push the Former2 CLI Docker image to Amazon ECR. Then, you can proceed with the Former2 task definition in Fargate, which requires no port mappings or mount points. Remember to create an IAM role for the task with AWS read-only access as well as permission to put objects to Amazon S3.
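
As a sketch of what the task definition can look like when scripted, the following registers the Former2 container for Fargate with boto3. The image URI, role ARNs, and CPU/memory sizing are placeholders; the task role carries the read-only and S3 permissions mentioned above.

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='task-former-poc',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='512',      # 0.5 vCPU
    memory='1024',  # 1 GB
    executionRoleArn='arn:aws:iam::111122223333:role/ecsTaskExecutionRole',  # placeholder
    taskRoleArn='arn:aws:iam::111122223333:role/former2-task-role',  # placeholder: read-only + s3:PutObject
    containerDefinitions=[{
        'name': 'former-poc',
        'image': '111122223333.dkr.ecr.us-east-1.amazonaws.com/former2:latest',  # placeholder ECR image URI
        'essential': True
    }]
)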

Step 4. CloudFormation import

The CloudFormation user guide describes the import steps. Each resource must have a DeletionPolicy attribute if you want to import it into an existing CloudFormation stack; Former2 adds this attribute automatically.
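
For illustration, the following is a minimal boto3 sketch of an IMPORT change set. The stack name, template location, logical ID, and instance ID are placeholders; when importing into an existing stack, the template must describe all resources already in the stack as well as the instance being imported.

import boto3

cfn = boto3.client('cloudformation')

# Create an IMPORT change set from the reviewed Former2 output
cfn.create_change_set(
    StackName='migrated-app-stack',  # hypothetical existing stack
    ChangeSetName='import-cutover-instance',
    ChangeSetType='IMPORT',
    TemplateURL='https://former2-output.s3.amazonaws.com/template.yaml',  # placeholder template location
    ResourcesToImport=[{
        'ResourceType': 'AWS::EC2::Instance',
        'LogicalResourceId': 'CutoverInstance',  # must match the logical ID in the template
        'ResourceIdentifier': {'InstanceId': 'i-0123456789abcdef0'}  # placeholder instance ID
    }]
)

# Review the change set, then execute it to bring the instance under stack management
cfn.execute_change_set(
    ChangeSetName='import-cutover-instance',
    StackName='migrated-app-stack'
)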

For any ancillary resources, such as Elastic Load Balancing load balancers, that make up your application stack, you can use the describe-stack-resources AWS Command Line Interface (AWS CLI) command and pass the resource's physical ID to confirm whether it belongs to an existing CloudFormation stack.
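
The equivalent check with boto3 is shown below as a minimal sketch; the physical resource ID is a placeholder.

import boto3
from botocore.exceptions import ClientError

cfn = boto3.client('cloudformation')

try:
    # Look up the stack that contains the given physical resource, if any
    response = cfn.describe_stack_resources(PhysicalResourceId='my-load-balancer')  # placeholder ID
    for resource in response['StackResources']:
        print(resource['StackName'], resource['LogicalResourceId'], resource['ResourceType'])
except ClientError:
    print('Resource does not belong to an existing CloudFormation stack')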

We recommend keeping stacks small to reduce blast radius and de-risk follow-on changes. Resources managed by different teams and with different frequency of changes should be separated into different stacks. If your application servers are managed by one team, while database servers are managed by another, we recommend splitting the layers into separate stacks.

Step 5. Enable Application Migration Service post-launch actions

Activate post-launch actions in AWS Application Migration Service at any time for all cutover instances by navigating to Settings and editing the Post-launch actions template. Consult the user guide for more information. This will also create the associated AWS Identity and Access Management (IAM) role for AWS Systems Manager to interact with instances.

Cleaning up

Remember to delete example resources if you no longer need them, to avoid incurring future costs; a scripted cleanup sketch follows the list. This includes:

  • Delete or disable the EventBridge rule
  • Clean up the Kinesis Data Firehose delivery stream in the AWS Management Console: go to Amazon Kinesis, then Delivery streams, select the delivery stream you want to delete, and choose Delete.
  • Delete the S3 buckets with cutover events and infrastructure code
  • Delete the Lambda function triggering Former2: in the AWS Management Console, go to the Functions section in AWS Lambda, select the function, choose Actions, and then choose Delete.
  • Deregister the Former2 Fargate task definition
  • Terminate any Amazon EC2 instances (including cutover instances) used to test this solution
  • Delete any Application Migration Service jobs used to test the presented solution
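
The following is a minimal cleanup sketch using the hypothetical resource names from the earlier examples; adjust it to the names in your account before running it.

import boto3

events = boto3.client('events')
firehose = boto3.client('firehose')
lambda_client = boto3.client('lambda')
ecs = boto3.client('ecs')
s3 = boto3.resource('s3')

# Remove the EventBridge rule and its Firehose target
events.remove_targets(Rule='mgn-cutover-events', Ids=['firehose-delivery'])
events.delete_rule(Name='mgn-cutover-events')

# Delete the delivery stream, the Lambda function, and the Former2 task definition revision
firehose.delete_delivery_stream(DeliveryStreamName='mgn-cutover')
lambda_client.delete_function(FunctionName='invoke-former2')
ecs.deregister_task_definition(taskDefinition='task-former-poc:8')

# Empty and delete the buckets with cutover events and generated infrastructure code
for bucket_name in ['mgn-cutover-events', 'former2-output']:
    bucket = s3.Bucket(bucket_name)
    bucket.objects.all().delete()  # use bucket.object_versions for versioned buckets
    bucket.delete()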

You can view your costs and usage using the AWS Cost Explorer user interface.

Conclusion

In this post, we simplified the import of your migrated Amazon EC2 instances into infrastructure code using Former2. We showed how you can set up an event-driven workflow and continuously import cutover instances into CloudFormation. This keeps the number of resources outside the IaC state to a minimum. The presented solution can be adapted to different use cases including rapid prototyping and experimentation with new AWS resources in sandbox environments. You can also initiate Former2 from the command line and control which AWS resources are included in the stack.

Want to learn more?

About the authors:

Rostislav Markov

Rostislav is principal architect with AWS Professional Services. As technical leader in Strategic Industries, he works with AWS customers and partners on their cloud transformation programs. Outside of work, he enjoys spending time with his family outdoors and exploring New York City culture.

Carlos Antonio Perea Gomez

Carlos is a Builder with AWS Professional Services. He enables customers to become AWSome during their journey to the cloud. When not up in the cloud he enjoys scuba diving deep in the waters.

Torsten Reitemeyer

Torsten is Senior Practice Manager with AWS Professional Services. A software developer at heart, he works with the next generation of builders and enables them to build faster together. Outside of work, he enjoys spending time with his family, attending concerts, and traveling abroad.