Integration & Automation

Manage multiaccount and multi-Region infrastructure in Terraform using AWS Cloud9

Begin your HashiCorp Terraform journey using security best practices with AWS. HashiCorp Terraform is a popular infrastructure as code (IaC) tool for automating infrastructure on any cloud. In this post, we show how to create an AWS Cloud9 instance as a web-based integrated development environment (IDE). We use Amazon Simple Storage Service (Amazon S3) as a remote backend and Amazon DynamoDB for state locking. AWS CodeCommit serves as the repository for version control of our Terraform files, and AWS Identity and Access Management (IAM) roles provide cross-account access. Use this post as a guide to set up infrastructure on AWS for Terraform deployments.

AWS Cloud9 can be the central point from which to deploy Terraform code, and it integrates well with CodeCommit. We use CodeCommit to create repositories for both our Terraform modules (a module is a container for multiple resources that are used together) and our implementation code.

Terraform manages multiple clouds and other platforms (such as Kubernetes and Active Directory) through Terraform providers. However, when you use Terraform on AWS, you may encounter the following challenges:

  1. Managing AWS security credentials (access keys and secret keys) for multiple AWS accounts.
  2. Local storage of IaC, dependency lock files, and state files can be a single point of failure.
  3. Tracking infrastructure changes is difficult without version control.
  4. Securely deploying and managing infrastructure across multiple accounts and Regions.

This post addresses these challenges in the following ways:

  1. To avoid managing secret keys and access keys, we assign an IAM role to the AWS Cloud9 instance (in the admin or central account). The admin IAM role in each spoke account has a trust relationship that allows this central role to assume it.
  2. We show you how to quickly deploy all the resources, such as a DynamoDB table for maintaining locks and an Amazon S3 bucket for storing state files securely, without a single point of failure.
  3. With version control of our Terraform infrastructure files, we can track all the changes made to the infrastructure and, if needed, revert to the last working change in the event of failures. If our AWS Cloud9 instance is shut down, our files remain safe and can be used to continue building our infrastructure.
  4. We gain better control over our multiaccount infrastructure until we have the confidence and automation in place to create a CI/CD pipeline for our infrastructure with Terraform.
About this blog post

Time to read: ~11 min
Time to complete: ~15 min
Cost to complete: ~$0-$2, depending on instance size
Learning level: Advanced (300)
AWS services: AWS Cloud9, AWS CloudFormation, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, AWS CodeCommit, AWS Identity and Access Management (IAM), AWS Security Token Service (AWS STS), Amazon Elastic Compute Cloud (Amazon EC2), AWS Systems Manager

Overview

Figure 1 shows the architecture that we use to demonstrate Terraform deployments from AWS Cloud9.


Figure 1. Terraform–AWS Cloud9 architecture diagram

  1. AWS Cloud9 holds the Terraform code and the backend configuration, with Amazon S3 and DynamoDB storing the state and locks. The AWS Cloud9 instance has an instance profile with a central AWS Cloud9 role.
  2. In the Terraform provider, AWS Security Token Service (AWS STS) is used to assume the cross-account Terraform spoke role, which has a trust policy that trusts the central AWS Cloud9 role.
  3. The command terraform apply creates a virtual private cloud (VPC) and a security group in the spoke account and the Region specified in the provider configuration.

Set up AWS resources for Terraform deployments

In the central account and central Region, we create an AWS Cloud9 environment, an IAM role for the AWS Cloud9 instance, an Amazon S3 bucket for state files, a DynamoDB table for locks, and a CodeCommit repository.

In the spoke account, we create an IAM role that allows the AWS Cloud9 instance to deploy a VPC and a security group in the spoke account.

The AWS Cloud9 environment is used to deploy resources in the spoke account (in any Region) by using the AssumeRole capability of the Terraform AWS provider.

Prerequisites

  • Two AWS accounts: central account and spoke account
  • Access to AWS CloudFormation
  • Permission to create resources in both central and spoke accounts
  • Understanding of Terraform with providers and remote backends
  • Understanding of Git
  • Familiarity with DynamoDB and Amazon S3 in the context of Terraform deployments, and with CodeCommit, AWS Cloud9, and cross-account access using IAM roles

Target technology stack (tools)

  • AWS CloudFormation: To securely deploy the initial infrastructure (AWS Cloud9, IAM roles, CodeCommit (Git), Amazon S3 buckets, and a DynamoDB table)
  • AWS Cloud9: As an IDE or jump box for cross-account or cross-Region infrastructure deployments
  • CodeCommit: Version control for Terraform code
  • Amazon S3: Used as the backend configuration to store state files of the Terraform infrastructure
  • DynamoDB: Amazon S3 backends support state locking and consistency checking through DynamoDB, which you enable by setting the dynamodb_table field to an existing DynamoDB table name. A single DynamoDB table can lock multiple remote state files, because Terraform generates lock key names that include the values of the bucket and key variables (see the sketch after this list)
  • IAM: Provides the role assigned to the Amazon Elastic Compute Cloud (Amazon EC2) instance through an instance profile, which enables cross-account access
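
The Cloud9CFN.yaml template in Step 1 provisions the state bucket and lock table for you. For reference only, a minimal Terraform sketch of equivalent backend resources might look like the following (resource and bucket names here are hypothetical; the one hard requirement from Terraform is that the lock table's partition key is named LockID):

resource "aws_s3_bucket" "terraform_state" {
  # Bucket names must be globally unique; this name is illustrative.
  bucket = "my-terraform-backend-bucket"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  # Versioning keeps a history of state files in case of accidental overwrites.
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # Terraform state locking requires this exact key name

  attribute {
    name = "LockID"
    type = "S"
  }
}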

Repository

Clone this GitHub repository locally.

Walkthrough

Step 1. Deploy the central account infrastructure (create a CloudFormation stack to create resources)

  1. Sign in to the AWS Management Console, and open the AWS CloudFormation console.
  2. Create the CloudFormation stack from the template Cloud9CFN.yaml. For more information, refer to Creating a stack on the AWS CloudFormation console.
  3. For the stack parameter TerraformBackendBucketName, enter an appropriate, globally unique bucket name.
  4. After the stack creation completes, copy the following values from the Outputs section to a local text editor:
    • BackendDynamoDbTable
    • S3BackendName
    • TerraformCloud9Role

Notes:

  • By default, the template creates a no-ingress Amazon EC2 instance to maintain instance security. The security group for this type of Amazon EC2 instance doesn’t have any inbound rules.
  • We recommend placing the instance in a private subnet and hosting a NAT gateway in a public subnet so that the instance can communicate with the internet.
  • If you create AWS Cloud9 in a public subnet (not recommended), attach an internet gateway to the VPC, and add an internet gateway route to the public subnet. This enables the SSM Agent on the instance to connect to AWS Systems Manager.

Step 2. Create the spoke account infrastructure deployment

  1. Sign in to the AWS Management Console, and open the AWS CloudFormation console.
  2. Log in to the spoke account.
  3. Create the AWS CloudFormation stack from the template SpokeCFN.yaml.
  4. Add the stack parameter CentralAccount. Use the account number where you created the AWS Cloud9 CloudFormation stack.

This creates an IAM role with a trust relationship to the role associated with the AWS Cloud9 instance, which enables cross-account access from that AWS Cloud9 instance.
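
For reference, the trust relationship that SpokeCFN.yaml defines can be sketched in Terraform as follows. This is only an illustration of the pattern, not the template's exact definition; the principal ARN format is an assumption, and you would substitute the central account ID and the TerraformCloud9Role value from the Step 1 stack outputs:

resource "aws_iam_role" "terraform_spoke_role" {
  name = "TerraformSpokeRole"

  # Trust policy: only the central account's AWS Cloud9 role may assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::<CENTRAL_ACCOUNT_ID>:role/<CENTRAL_CLOUD9_ROLE_NAME>"
      }
    }]
  })

  # The template also grants this role the permissions that Terraform needs in
  # the spoke account (for example, to create the VPC and security group).
}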

Step 3. Open AWS Cloud9 in the central account

  1. Log out of the spoke account.
  2. Sign in to the AWS Management Console, and open the AWS Cloud9 console.
  3. In the AWS Cloud9 console, choose your environment, and then choose Open IDE.

Note: If the Open IDE option is unavailable, make sure that the user/role that you used to create the AWS CloudFormation stack is the same user/role that you’re using to access the AWS Cloud9 environment.

Step 4. Configure AWS credentials for your AWS Cloud9 workspace

  1. Choose the AWS Cloud9 logo.
  2. Choose Preferences.
  3. In the Preferences tab, choose AWS Settings.
  4. Turn off the AWS managed temporary credentials setting.

Close the Preferences tab.


Figure 2. AWS Cloud9 settings

This ensures that the IAM role attached to the Amazon EC2 instance, rather than your user or role credentials, is used to establish cross-account access.

Note: If you get a “Session Token Expired” error, make sure you repeat the step to turn off AWS managed temporary credentials on the Preferences page of AWS Cloud9.

Step 5. Clone the CodeCommit repository

  1. Sign in to the AWS Management Console, and open the CodeCommit console.
  2. Choose the TerraformCodeCommit repository.
  3. Choose Clone HTTPS.


    Figure 3. Cloning HTTPS in your repository

  4. Return to the AWS Cloud9 IDE, and run the following command in the terminal (Figure 4):

git clone <Paste the Repo Copied Above>


Figure 4. Git clone repo

An empty repository is cloned into your AWS Cloud9 environment.

Step 6. Use the AWS CloudFormation stack to create resources in the spoke account

Repeat Step 2 (“Create the spoke account infrastructure deployment”).

Terraform deployment

Create a VPC and security group from sample Terraform code

  1. Sign in to the AWS Management Console, and open the AWS Cloud9 console.
  2. In the AWS Cloud9 console, choose your environment, and then choose Open IDE.
  3. Copy all files from the Terraform directory of the repository you cloned to your local machine into the empty TerraformCodeCommit directory in AWS Cloud9 (see Figure 5).


    Figure 5. Move files to TerraformCodeCommit directory

  4. Make changes to the following files:
    • backend.tf
      • bucket: The S3BackendName output value
      • region: The Region where you created the AWS CloudFormation stack
      • dynamodb_table: The BackendDynamoDbTable output value
terraform {
  backend "s3" {
    bucket         = <BACKEND_S3_BUCKET>
    key            = "vpc_securitygroup_sample/terraform.tfstate"
    region         = <REGION>
    dynamodb_table = <LOCK_DYNAMODB>
    encrypt        = true
  }
}

Use the value of S3BackendName in place of <BACKEND_S3_BUCKET>, the Region where you created the AWS CloudFormation stack in place of <REGION>, and the value of BackendDynamoDbTable in place of <LOCK_DYNAMODB>.

    • provider.tf
      • region: Where you want to deploy this VPC and the security group
provider "aws" {
    region = <DESTINATION_REGION>
    assume_role {
        role_arn = "arn:aws:iam::${var.spoke_account}:role/TerraformSpokeRole"
    }
}
    • terraform.tfvars
      • spoke_account: The account number of the spoke account (where you deployed the spoke AWS CloudFormation template) in which you want to create these resources

spoke_account = <SPOKEACCOUNT>

Note: Add terraform.tfvars to .gitignore (when using a Git repository) and make it local to the directory. This file might contain production secrets or variables local to an environment.
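
For orientation, the sample code in the repository declares the spoke_account variable and creates the VPC and security group. A minimal sketch of what such files might contain is shown below; the repository's code is authoritative, and the names, CIDR range, and egress rule here are illustrative assumptions:

variable "spoke_account" {
  description = "Spoke account ID, supplied through terraform.tfvars"
  type        = string
}

resource "aws_vpc" "sample" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "terraform-sample-vpc"
  }
}

resource "aws_security_group" "sample" {
  name        = "terraform-sample-sg"
  description = "Sample security group with no inbound rules"
  vpc_id      = aws_vpc.sample.id

  # Allow all outbound traffic; no ingress rules are defined.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}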

After the CloudFormation stack creation is complete

  1. Right-click the directory where all the files reside, and choose Open Terminal Here (see Figure 6).


    Figure 6. TerraformCodeCommit directory

  2. Run the following commands in the AWS Cloud9 workspace terminal:

terraform init

terraform plan -out=tfplan -input=false

terraform apply -input=false tfplan

  3. Use these Git commands to add the files to the CodeCommit repository:

git add *

git commit -m "First Commit"

git push

This keeps the files version-controlled.

Validate this information

  • The VPC and the security group were created in the spoke account, in the Region specified in the provider.tf file.
  • The backend state files were created in the Amazon S3 bucket (check the Amazon S3 console).
  • You can find the Terraform files in the CodeCommit repository by navigating to the CodeCommit console.

Cleanup

  1. Sign in to the AWS Management Console, and open the AWS Cloud9 console.
  2. From the same directory, run the following command to destroy all the resources created with Terraform:

terraform destroy

  3. Empty the Amazon S3 bucket for successful deletion of the AWS CloudFormation stack. Use the Amazon S3 console to empty a bucket, which deletes all the objects in the bucket without deleting the bucket.
    1. Open the Amazon S3 console.
    2. From the bucket name list, choose the option next to the name of the bucket that you want to empty, and then choose Empty.
    3. On the Empty bucket page, enter the bucket name into the text field, and then choose Empty.
    4. Monitor the progress of the process on the Empty bucket: Status page.
  4. Delete the AWS CloudFormation stack from both the spoke account and the AWS Cloud9 central account.

Conclusion

In this blog post, we discussed how to deploy multi-Region infrastructure across multiple accounts using Terraform infrastructure as code with AWS Cloud9. We demonstrated how locks and state files can be externalized and maintained centrally for resources across accounts and Regions. We also showed how infrastructure can be deployed securely by using IAM roles without managing any credentials.

About the authors

Badal Davda

Badal Davda is a senior cloud architect on the AWS Professional Services India team. His focus is on operational excellence and cost optimization for customers, achieved through automating day-to-day tasks using DevOps practices and infrastructure as code while ensuring that security is top of mind in infrastructure design. Recently married, he has been blessed with an amazing wife who is his biggest supporter and a constant source of inspiration. Outside of work, he has a passion for cinema and literature. Additionally, he prioritizes his health and fitness by taking a swim now and then. He’s grateful for the opportunity to serve businesses and provide solutions that help them succeed.

Akshay Pendkar

Akshay Pendkar is a senior cloud architect at AWS based out of Hyderabad, India. He specializes in DevOps, cloud infrastructure, and networking. He’s passionate about helping customers build and deliver scalable, secure, cost-effective, highly available, and well-designed cloud solutions with great outcomes. In his spare time, he enjoys watching movies and going on road trips with his beloved family.