AWS Cloud Operations & Migrations Blog

Import existing AWS Control Tower accounts to Account Factory for Terraform

AWS Control Tower Account Factory for Terraform (AFT) allows customers to provision and customize their accounts in AWS Control Tower using Terraform. AFT can also import existing AWS Control Tower managed accounts into AFT management, allowing you to manage global and account-specific customizations at scale using Terraform. We hear from customers that they want additional guidance, best practices, and troubleshooting tips when importing accounts to AFT. In this blog post, you will learn the steps to import accounts into AFT and take a deep dive into troubleshooting.

Prerequisites
Before you can import existing accounts to AFT, the following prerequisites must be satisfied.

  1. You should have deployed AFT. Follow the guide in this post to set up AFT if you need additional guidance
  2. You need AFT version 1.3.1 or higher to support the account import feature. To check the AFT version, inspect the AWS Systems Manager Parameter Store parameter /aft/config/aft/version in the AFT Management account
  3. You need to identify an existing AWS Control Tower managed target account that you wish to import to AFT

For this exercise, take note of the account name, the account root email address, and the target organizational unit (OU). If the target account is not yet managed by AWS Control Tower, follow the instructions to enroll an existing AWS account in AWS Control Tower.
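The version check from prerequisite 2 can be scripted. Below is a minimal sketch: the boto3 retrieval is shown only as a comment (it assumes credentials for the AFT Management account), so the comparison logic stands on its own.

```python
# Sketch: verify that the deployed AFT version supports account import (>= 1.3.1).
# The version string would come from the /aft/config/aft/version SSM parameter
# in the AFT Management account, e.g.:
#   boto3.client("ssm").get_parameter(Name="/aft/config/aft/version")

def supports_account_import(version: str) -> bool:
    """Return True when the AFT version is 1.3.1 or higher."""
    parts = [int(p) for p in version.lstrip("v").split(".")]
    return parts >= [1, 3, 1]

print(supports_account_import("1.6.0"))  # expected: True
print(supports_account_import("1.2.7"))  # expected: False
```

Note that a two-component version string such as "1.3" compares as lower than [1, 3, 1]; adjust the parsing if your parameter uses a different format.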

Importing account to AFT
Let’s review the steps required to import an account into AFT. Remember that all steps must be performed from the AFT Management account.

  1. Start by editing the aft-account-request git repository from your IDE of choice. To distinguish imported accounts from new account requests, create a new Terraform configuration file called account_import.tf
  2. Use a separate Terraform manifest file for every account you import following this blog. Enter the required variables, including AccountEmail, AccountName, ManagedOrganizationalUnit, SSOUserEmail, SSOUserFirstName, and SSOUserLastName. You can find this information in the AWS Control Tower console
  3. To import accounts into AFT, build a new Terraform configuration file with the import configuration. An example configuration file is listed below
module "aft-import-sandbox" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "sandbox@example.com"
    AccountName               = "Sandbox-01"
    ManagedOrganizationalUnit = "SandboxOU"
    SSOUserEmail              = "sandbox@example.com"
    SSOUserFirstName          = "FirstName"
    SSOUserLastName           = "LastName"
  }

  account_tags = {
    "Sandbox" = "Sandbox-01"
  }

  change_management_parameters = {
    change_requested_by = "Account_Infra"
    change_reason       = "Import a Sandbox account in AFT"
  }

  custom_fields = {
    group = "sandbox"
  }

  account_customizations_name = "sandbox"
}

Example-1: Terraform Account Import manifest file

In the Control Tower parameters, always set a unique value for AccountName. You can also add optional variables such as custom_fields and account_customizations_name, depending on the customizations to be added. When you are finished, commit and push the changes to the aft-account-request repository. In the next section, we will review how to validate a successful account import.
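Before committing, you can sanity-check the request locally. The sketch below is a hypothetical helper (not part of AFT) that verifies the required control_tower_parameters keys are present and that account_tags avoid the reserved "aws:" prefix:

```python
# Sketch of a local pre-commit check for an AFT account request
# (hypothetical helper, not part of AFT itself).

REQUIRED_CT_PARAMS = {
    "AccountEmail", "AccountName", "ManagedOrganizationalUnit",
    "SSOUserEmail", "SSOUserFirstName", "SSOUserLastName",
}

def validate_request(ct_params: dict, account_tags: dict) -> list:
    """Return a list of problems found; an empty list means the request looks sane."""
    problems = []
    missing = REQUIRED_CT_PARAMS - ct_params.keys()
    if missing:
        problems.append(f"missing control_tower_parameters: {sorted(missing)}")
    for key in account_tags:
        # Tag keys starting with "aws:" (any casing) are reserved by AWS
        if key.lower().startswith("aws:"):
            problems.append(f"reserved tag prefix on '{key}'")
    return problems

print(validate_request(
    {"AccountEmail": "sandbox@example.com", "AccountName": "Sandbox-01",
     "ManagedOrganizationalUnit": "SandboxOU", "SSOUserEmail": "sandbox@example.com",
     "SSOUserFirstName": "FirstName", "SSOUserLastName": "LastName"},
    {"Sandbox": "Sandbox-01"},
))  # expected: []
```

You would feed it the values from your account_import.tf; catching a typo here is much faster than waiting for the ct-aft-account-request pipeline to fail.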

Validating account import in AFT
There are a few steps you can take to confirm that the AWS account has successfully become AFT managed. Perform these validation steps in the AFT Management account. The list of AFT managed accounts is stored in Amazon DynamoDB tables. Review account imports by checking the contents of the aft-request-metadata DynamoDB table. You can also check the aft-request DynamoDB table to verify the request.

AWS console showing DynamoDB items in a tabular format from aft-request-metadata table. Each row represents an account managed by AFT

Figure-1: AWS Console view of aft-request-metadata DynamoDB table
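This check can also be scripted. The sketch below assumes the items come from a table scan in the AFT Management account, and that the table has an "email" attribute; verify the attribute names against your table's actual schema before relying on this.

```python
# Sketch: confirm an imported account appears in the aft-request-metadata table.
# The items would come from a scan in the AFT Management account, e.g.:
#   boto3.resource("dynamodb").Table("aft-request-metadata").scan()["Items"]
# The "email" attribute name is an assumption; check your table's schema.

def is_aft_managed(items: list, account_email: str) -> bool:
    """Return True if any table item matches the account root email."""
    return any(item.get("email") == account_email for item in items)

items = [{"email": "sandbox@example.com", "account_name": "Sandbox-01"}]
print(is_aft_managed(items, "sandbox@example.com"))  # expected: True
```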

Then check that a new account customization pipeline was created in AWS CodePipeline and that the pipeline ran successfully. The pipeline name will have the account number as the prefix.

AWS console showing CodePipeline pipelines, with pipeline names in the format Account-ID first, then suffixed by “customizations-pipeline”

Figure-2: AWS Console view of AWS CodePipeline

Finally, verify that the AWS Step Functions state machine aft-account-provisioning-framework is in the Succeeded state. Open the recent executions and inspect the execution input to verify that the execution corresponds to the imported account.

AWS console showing AWS Step Functions state machine named “aft-account-provisioning-framework”, with a tabular entry at the bottom showing the list of state machine executions, with status, execution start and end timestamps

Figure-3: AWS console of view of aft-account-provisioning-framework

The successful execution of the aft-account-provisioning-framework Step Functions state machine verifies that all the AFT account import steps were successful.
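The state machine check can be scripted as well: list the recent executions and confirm that the most recent one succeeded. A sketch, assuming the execution records carry "status" and "startDate" fields as returned by the Step Functions list_executions API:

```python
# Sketch: check the most recent aft-account-provisioning-framework execution.
# The executions list would come from, e.g.:
#   boto3.client("stepfunctions").list_executions(stateMachineArn=arn)["executions"]

def latest_execution_succeeded(executions: list) -> bool:
    """Return True when the most recently started execution is SUCCEEDED."""
    if not executions:
        return False
    latest = max(executions, key=lambda e: e["startDate"])
    return latest["status"] == "SUCCEEDED"

# Local sample data; startDate is a timestamp here for simplicity
# (the real API returns datetime objects)
executions = [
    {"status": "SUCCEEDED", "startDate": 1679446218},
    {"status": "FAILED", "startDate": 1679359818},
]
print(latest_execution_succeeded(executions))  # expected: True
```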

Deep dive into account import in AFT
It is useful to understand the workflow of the AFT account import process, as it helps you troubleshoot if an account import fails. Below is a high-level workflow of importing an existing AWS Control Tower managed account into AFT.

Workflow diagram showing an action initiated by the AFT Admin, who commits an account import manifest into a Git repository, which in turn triggers about a dozen AWS components to customize the account and import it into AFT, as detailed in the following sequence

Figure-4: High-level workflow of Account Import process

  1. AFT Admin submits the account request Terraform manifest to AWS CodeCommit or a supported VCS repository
  2. An item with this account request is inserted into the aft-request Amazon DynamoDB table
  3. As the DynamoDB table is updated, a DynamoDB stream triggers the aft-account-request-action-trigger AWS Lambda function
  4. The aft-account-request-action-trigger Lambda function checks AWS Service Catalog to verify whether a Control Tower Account Factory provisioned product exists for the target account
  5. If a provisioned product exists, then the aft-account-provisioning-framework AWS Step Functions state machine is executed. This state machine invokes multiple Lambda functions to perform the following actions:
      • Validate account details from the AWS Control Tower (CT) management account
      • Deploy the AWSAFTService and AFTExecution AWS Identity and Access Management (IAM) roles in the imported account
      • Create tags for the imported account
      • Create the custom parameters in AWS Systems Manager Parameter Store in the imported account

    State diagram showing sequence of actions that could run as part of aft-account-provisioning-framework state machine.

    Figure-5: State diagram of aft-account-provisioning-framework state machine

  6. The process is handed over to the aft-feature-options AWS Step Functions state machine, which invokes multiple AWS Lambda functions to perform the following actions:
      • Delete the default Amazon Virtual Private Cloud (Amazon VPC), if opted for in the Terraform manifest file
      • Enroll for AWS Enterprise Support, if opted
      • Enable AWS CloudTrail data events, if opted

    State diagram showing sequence of actions that could run as part of aft-feature-options state machine

    Figure-6: State diagram of aft-feature-options state machine

  7. The state machine aft-account-provisioning-framework invokes the aft-account-provisioning-customizations state machine, which is responsible for managing customizations. This state machine can use AWS Lambda functions, for example, to communicate with external applications. This stage runs before the global and account-level customizations stage
  8. Lastly, the state machine invokes the CodeBuild project aft-create-pipeline. This CodeBuild project creates a new AWS CodePipeline pipeline for the newly imported account. This pipeline runs automatically the first time after it is provisioned

Troubleshooting account import in AFT
In this section, we will discuss several troubleshooting tips for importing accounts to AFT. Perform the following troubleshooting steps in the AFT Management account. The first inspection point is to ensure there are no errors in the account request:

  1. Make sure your account import request was inserted correctly by AFT
  2. Inspect the Terraform module aft-account-request. Check the account_import.tf file in your aft-account-request repository (your file name might be different)
  3. Check for typos in the mandatory variables such as AccountEmail, AccountName, or ManagedOrganizationalUnit. If using account_tags, make sure you don’t use a reserved prefix such as “aws:” or “AWS:”. If you notice any error, make the correction, push another commit to your repository, and validate again
  4. Validate that the latest execution of the CodePipeline ct-aft-account-request completed successfully. Note: Errors in the pipeline typically indicate input errors in your account_import.tf file
  5. If the CodePipeline ct-aft-account-request runs successfully, move your attention to the Amazon DynamoDB table aft-request. Confirm that there is an item in the table that matches your input from account_import.tf
  6. The next inspection point is DynamoDB. At this stage, the account import request has made it to the DynamoDB table and there are no typos or errors in the request. A successful update to the DynamoDB table invokes a DynamoDB stream. Here are the steps to inspect:
    • Ensure the DynamoDB stream is configured to invoke the AWS Lambda function
    • Locate the AWS Lambda function aft-account-request-action-trigger and scan the associated Amazon CloudWatch logs for any errors
    • Match the timestamp around the time you pushed the commit to the repository
    • A successful Lambda invocation should indicate that the message was sent to the Amazon SQS queue. Refer to the following example of a successful log; notice that HTTPStatusCode 200 indicates that SQS received the message
{
    "time_stamp": "2023-03-22 00:50:18,207",
    "log_level": "INFO",
    "log_message": "Sending SQS message to https://sqs.us-east-1.amazonaws.com/027320203244/aft-account-request.fifo"
}
...
{
    "time_stamp": "2023-03-22 00:50:18,301",
    "log_level": "INFO",
    "log_message": {
        ...
        "ResponseMetadata": {
            "RequestId": "bdc0eb2f-f48b-507c-b9d6-b7bba7818a03",
            "HTTPStatusCode": 200,
            "HTTPHeaders": {
                ...
            },
            "RetryAttempts": 0
        }
    }
}

Example-2: Amazon CloudWatch log showing message sent to SQS queue
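When scanning these logs, the HTTPStatusCode is the key signal. A small sketch that parses a log entry like the one in Example-2 and flags whether the SQS send succeeded:

```python
import json

# Sketch: check a CloudWatch log entry (as in Example-2) for a successful
# SQS send from aft-account-request-action-trigger.

def sqs_send_succeeded(log_entry: str) -> bool:
    """Return True if the entry reports HTTPStatusCode 200 from SQS."""
    entry = json.loads(log_entry)
    message = entry.get("log_message")
    # Informational entries carry a string message; the SQS response is a dict
    if not isinstance(message, dict):
        return False
    meta = message.get("ResponseMetadata", {})
    return meta.get("HTTPStatusCode") == 200

entry = """{"time_stamp": "2023-03-22 00:50:18,301",
 "log_level": "INFO",
 "log_message": {"ResponseMetadata": {"HTTPStatusCode": 200, "RetryAttempts": 0}}}"""
print(sqs_send_succeeded(entry))  # expected: True
```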

One common error found in the aft-account-request-action-trigger CloudWatch log is access denied when listing AWS Service Catalog provisioned products from the AWS Control Tower Management account. To resolve this issue, check that the AFT execution role has access to the Service Catalog Account Factory portfolio.

The third inspection point is the AWS Lambda function aft-account-request-processor. As in the previous steps, scan the CloudWatch logs for any errors. Pro tip: don’t rely on the latest log stream; the error log might be in an older log stream. Match the timestamp with the latest invocation of aft-account-request-action-trigger to triangulate the timestamps.

  1. Ensure the aft-lambda-account-request-processor EventBridge rule is enabled and scheduled to invoke aft-account-request-processor

    EventBridge rule showing that the rule aft-lambda-account-request-processor is scheduled to invoke the aft-account-request-processor Lambda function every 5 minutes

    Figure-7: EventBridge rule aft-lambda-account-request-processor

  2. A common error found in this Lambda function is invalid input, such as an illegal character in the account name
  3. Another common error occurs when the imported account has a Service Catalog Account Factory provisioned product in an unhealthy status. Refer to the following sample error log as a reference. This error can be caused by drift in the Service Catalog provisioned product
    {
        "time_stamp": "2023-03-22 00:54:49,426",
        "log_level": "ERROR",
        "log_message": "Account Email: account1@example.com already used in Organizations"
    }
    ...
    {
        "time_stamp": "2023-03-22 00:54:49,584",
        "log_level": "ERROR",
        "log_message": "CT Request is not valid"
    }

    Example-3: Amazon CloudWatch sample error log

  4. Next, check the AWS Step Functions state machine aft-account-provisioning-framework
  5. The aft_account_provisioning_framework_create_pipeline step launches the AWS CodeBuild project aft-create-pipeline to build the account-specific pipeline. Inspect the CodeBuild logs for any errors. Finally, locate the account-specific AWS CodePipeline pipeline with the name format <account_id>-customizations-pipeline. Check for a successful execution of the pipeline, which indicates that the customizations were applied successfully.
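The account-specific pipeline can be located by its naming convention. A sketch that filters a list of pipeline names (such as those returned by the CodePipeline list_pipelines API) for the <account_id>-customizations-pipeline format:

```python
import re

# Sketch: find the customizations pipeline for an account from a list of
# pipeline names, following the <account_id>-customizations-pipeline format.

def find_customizations_pipeline(names: list, account_id: str):
    """Return the matching pipeline name, or None if not found."""
    pattern = re.compile(rf"^{re.escape(account_id)}-customizations-pipeline$")
    return next((n for n in names if pattern.match(n)), None)

names = ["ct-aft-account-request", "123456789012-customizations-pipeline"]
print(find_customizations_pipeline(names, "123456789012"))
# expected: 123456789012-customizations-pipeline
```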

Summary

Let’s do a quick recap. In this post, we showed you how to import an AWS Control Tower managed account into AFT. You should now understand the various AFT components that are important for importing accounts successfully. Common problems that can cause issues with your account import include an incorrect email address, an incorrect Organizational Unit (OU) name, an incorrect account name, a reserved tag prefix, and an unhealthy AWS Service Catalog Account Factory provisioned product. Correcting the problem in the source repository (aft-account-request) will trigger AFT to reattempt the import process. If you are still blocked by other errors not covered in this blog, check the troubleshooting guide and look for relevant GitHub issues in the AFT repository. Do you want hands-on experience with Control Tower and AFT? Don’t forget to check out the AFT workshop.

Authors:

Ramesh Rajan

Ramesh Thiagarajan is a Senior Solutions Architect based out of San Francisco. He holds a Bachelor of Science in Applied Sciences and a master’s in Cyber Security and Information Assurance. He specializes in cloud migration, cloud security, compliance, and risk management. Outside of work, he is a passionate gardener, and has an avid interest in real estate and home improvement projects.

Welly Siauw

Welly Siauw is a Principal Partner Solution Architect at Amazon Web Services (AWS). He spends his day working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless, and artificial intelligence (AI) and machine learning (ML). He has authored several AWS blogs and actively leads AWS Immersion Days and Activation Days. Welly spends his free time tinkering with his espresso machine and hiking outdoors.

Kingsly Theodar Rajasekar

Kingsly Theodar Rajasekar is a Senior DevOps Consultant based out of Georgia. He holds a Bachelor of Engineering in Computer Science and a master’s in Software Engineering. He specializes in assisting customers with DevOps adoption and automation, cloud migration, and application modernization. Outside work, he enjoys spending time with his family outdoors, teaching his kids, and acquiring new skills.

Kriti Bhandari

Kriti Bhandari has worked as a Cloud Infrastructure Architect with AWS Professional Services where she helped customers across varied industries to create, modernize and operate their multi-account cloud environments. She holds a Masters in Computer Networking and was also a member of the AWS Data Center Network Engineering team as a Network Development Engineer. Currently she is implementing the Cloud Network Architecture at Ripple Labs.