Bootstrapping Terraform Automation with Amazon CodeCatalyst

Learn how to set up Terraform from scratch with Amazon CodeCatalyst and create a CI/CD pipeline for your infrastructure
Published: January 31, 2023
Terraform
CodeCatalyst
CI-CD
Infrastructure as Code
Github-Actions
DevOps
Tutorial
AWS
Olawale Olaleye
AWS experience
200 - Intermediate
Time to complete
30 minutes
Cost to complete
AWS Free Tier eligible
Prerequisites

Sign up for / sign in to an AWS account
A CodeCatalyst account
Terraform 1.3.7+
(Optional) A GitHub account

Last updated
February 22, 2023

Terraform is a fantastic tool for managing all of your infrastructure, but as soon as multiple developers start making changes, things can get messy very quickly without a proper mechanism (a CI/CD pipeline) to manage them. Without one in place, making changes to any infrastructure requires coordination and communication, and the challenge scales quickly with the number of people involved. Picture having to run around shouting, "Hey Bob! Hey Jane! Are you done with the database changes? I need to add a new container build job!". As Jeff Bezos put it:

"Good intentions alone are not enough; you need good mechanisms to make things happen."

This tutorial will show you how to set up a CI/CD pipeline using Amazon CodeCatalyst and Terraform. The pipeline will use pull requests (PRs) to submit, test, and review any changes requested to the infrastructure. It covers the following topics:

  • Using S3 as a backend for the Terraform state file, with DynamoDB for locking, and encrypting the state file at rest with KMS
  • Running CI/CD pipelines in CodeCatalyst to create and update all of your infrastructure

The "chicken or the egg" problem

Automating your infrastructure is a great idea, but the automation itself needs infrastructure to run on. There are three ways to approach this:

  1. Clicking around in the console to set everything up, also known as ClickOps
  2. Scripting resource creation with the CLI, also known as the procedural approach
  3. Bootstrapping with Terraform without storing the state file, then adding the state file configuration afterwards so it is stored

In this tutorial we will use the third approach; see the discussion of the different approaches on Stack Overflow for more detail on the trade-offs. To make the comparison concrete, the sketch after this paragraph shows what the second (CLI scripting) approach could look like.
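
A minimal sketch of approach 2, assuming the placeholder bucket and table names used later in this tutorial; the drawback is that these resources end up outside of Terraform's state and have to be maintained by hand:

# Approach 2 (not used in this tutorial): create the state bucket and lock table with the AWS CLI
aws s3api create-bucket --bucket tf-state-files --region us-east-1

aws dynamodb create-table \
  --table-name TerraformMainStateLock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5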

Initial setup

Let's get everything set up! Make sure you are signed in to your AWS account and your CodeCatalyst account in the same browser.

Setting up a CodeCatalyst space, project, repository, and environment

First, let's set up a CodeCatalyst space and project. Create a new space by clicking Create Space on the CodeCatalyst dashboard, give it a name (we use Terraform CodeCatalyst here), and add the AWS account ID to associate with the space for billing (111122223333 is a placeholder; you can find your account ID in the top right of the AWS Management Console). Follow the prompts to link your AWS account with CodeCatalyst.

Next, we need to create a new project. Click the Create Project button, choose Start from scratch, and give your project a name; we use TerraformCodeCatalyst in this tutorial.

Now we need to create a new repository for our code. Click Code in the left-hand navigation menu, then Source repositories, Add repository, and Create repository. Set a name for the repository (this tutorial uses bootstrapping-terraform-automation-for-amazon-codecatalyst), add a description, and set the .gitignore file to Terraform:

Lastly, we need to set up the AWS environment that our workflows will use. In the left-hand navigation menu, click CI/CD, then Environments, and Create environment. Add an Environment name and Description, select your AWS account from the dropdown under AWS account connection, and then click Create environment.

Setting up a dev environment

Before we can start writing our code, we need a development environment, and we will use the built-in Dev Environments provided by CodeCatalyst. In the left-hand navigation menu, click Dev Environment under Code, then Create Dev Environment, and choose Cloud9 (this tutorial uses Cloud9). Select Clone a repository, choose bootstrapping-terraform-automation-for-amazon-codecatalyst from the Repository dropdown, add the Alias TerraformBootstrap, and then click the Create button.

It takes 1 - 2 minutes to provision the dev environment, and once it is ready, you will see a welcome screen:

The Terraform version installed may not be the latest one; you can check which version is installed by running terraform --version. This tutorial uses version 1.3.7, and to make sure you are using that version, run the following commands:

Note: if you are using a local development environment instead of one managed by CodeCatalyst, the architecture/operating system may differ; see the downloads page to get the appropriate version of Terraform.

TF_VERSION=1.3.7
wget -O terraform.zip https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip
unzip terraform.zip
rm terraform.zip
sudo mv terraform /usr/bin/terraform
sudo chmod +x /usr/bin/terraform

# Confirm correct version
terraform --version

Lastly, we need to add AWS CLI credentials to our dev environment so we can access the resources in our account. Using the root user is not recommended; if you have not yet set up an IAM user, follow the instructions to do so now, and make sure to copy the Access key ID and Secret access key values. Then run aws configure in the dev environment's terminal (you can leave the last two values at their defaults, or enter your preferred values):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]:
Default output format [None]:

You can verify that access is set up correctly by running aws sts get-caller-identity in the terminal:

$ aws sts get-caller-identity
{
 "UserId": "AIDACKCEVSQ6C2EXAMPLE",
 "Account": "111122223333",
 "Arn": "arn:aws:iam::111122223333:user/JaneDoe"
}

Bootstrapping Terraform

Next, we need Terraform to add the required infrastructure to our AWS account. We will be creating the following resources:

  1. IAM roles: provide roles in the account that the workflows can assume - one role for the main branch, and one role for any pull requests (PRs).
  2. IAM policies: scope the permissions of the workflow IAM roles in our account - the main branch has full admin access so it can create infrastructure, while PR branches have ReadOnly permissions so all changes can be validated.
  3. S3 bucket: an S3 bucket to store our Terraform state file in.
  4. S3 bucket versioning: keeps backup copies of the Terraform state file each time it changes.
  5. DynamoDB table: used by Terraform to create a lock while it runs - this prevents multiple CI jobs from making changes when run in parallel.
  6. KMS encryption key: (optional) while the state file is stored in S3, we want to encrypt it at rest with a KMS key. This tutorial uses the pre-existing aws/s3 key; if you prefer to use a different KMS key ($1/month per key), there is a section below covering how to change it.

To create all the required files, use the following commands to create the directory and download the files directly from the sample repository. Run them from the root of the cloned git repository in the dev environment's terminal:

cd bootstrapping-terraform-automation-for-amazon-codecatalyst
mkdir -p _bootstrap
cd _bootstrap
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/main_branch_iam_role.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/pr_branch_iam_role.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/providers.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/state_file_resources.tf
wget https://raw.githubusercontent.com/build-on-aws/bootstrapping-terraform-automation/main/_bootstrap/codecatalyst/variables.tf

The files created will have the following content:

  • variables.tf
  • variable "aws_region" {
     default = "us-east-1"
    }
    
    variable "state_file_bucket_name" {
     default = "tf-state-files"
    }
    
    variable "state_file_lock_table_name" {
     default = "TerraformMainStateLock"
    }
    
    variable "kms_key_alias" {
     default = "Terraform-Main"
    }
  • main_branch_iam_role.tf
  • # Policy allowing the main branch in our repo to assume the role.
    data "aws_iam_policy_document" "main_branch_assume_role_policy" {
      statement {
        actions = ["sts:AssumeRole"]
        principals {
          type        = "Service"
          identifiers = [
            "codecatalyst.amazonaws.com",
            "codecatalyst-runner.amazonaws.com"
          ]
        }
      }
    }
    
    # Role to allow the main branch to use this AWS account
    resource "aws_iam_role" "main_branch" {
      name               = "Main-Branch-Infrastructure"
      assume_role_policy = data.aws_iam_policy_document.main_branch_assume_role_policy.json
    }
    
    # Allow role admin rights in the account to create all infra
    resource "aws_iam_role_policy_attachment" "admin_policy_main_branch" {
      role       = aws_iam_role.main_branch.name
      policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
    }
    
    
  • pr_branch_iam_role.tf
  • # Policy allowing the PR branches in our repo to assume the role. 
    data "aws_iam_policy_document" "pr_branch_assume_role_policy" {
      statement {
        actions = ["sts:AssumeRole"]
        principals {
          type        = "Service"
          identifiers = [
            "codecatalyst.amazonaws.com",
            "codecatalyst-runner.amazonaws.com"
          ]
        }
      }
    }
    
    # Role to allow PR branch to use this AWS account
    resource "aws_iam_role" "pr_branch" {
      name               = "PR-Branch-Infrastructure"
      assume_role_policy = data.aws_iam_policy_document.pr_branch_assume_role_policy.json
    }
    
    # Allow PR Branch read-only access in the account to run `plan`
    resource "aws_iam_role_policy_attachment" "readonly_policy_pr_branch" {
      role       = aws_iam_role.pr_branch.name
      policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
    }
    
    # Additional policy allowing read and write access to the DynamoDB table
    # to create locks when `plan` is run.
    data "aws_iam_policy_document" "pr_branch_lock_table_access" {
      statement {
        sid    = "DynamoDBIndexAndStreamAccess"
        effect = "Allow"
        actions = [
          "dynamodb:GetShardIterator",
          "dynamodb:Scan",
          "dynamodb:Query",
          "dynamodb:DescribeStream",
          "dynamodb:GetRecords",
          "dynamodb:ListStreams"
        ]
        resources = [
          "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/${var.state_file_lock_table_name}/index/*",
          "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/${var.state_file_lock_table_name}/stream/*"
        ]
      }
    
      statement {
        sid    = "DynamoDBTableAccess"
        effect = "Allow"
        actions = [
          "dynamodb:BatchGetItem",
          "dynamodb:BatchWriteItem",
          "dynamodb:ConditionCheckItem",
          "dynamodb:PutItem",
          "dynamodb:DescribeTable",
          "dynamodb:DeleteItem",
          "dynamodb:GetItem",
          "dynamodb:Scan",
          "dynamodb:Query",
          "dynamodb:UpdateItem"
        ]
        resources = [
          "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/${var.state_file_lock_table_name}"
        ]
      }
    
      statement {
        sid    = "DynamoDBDescribeLimitsAccess"
        effect = "Allow"
        actions = [
          "dynamodb:DescribeLimits"
        ]
        resources = [
          "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/${var.state_file_lock_table_name}",
          "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/${var.state_file_lock_table_name}/index/*"
        ]
      }
    
      statement {
        sid    = "KMSS3Acess"
        effect = "Allow"
        actions = [
          "kms:Encrypt",
          "kms:Decrypt",
          "kms:GenerateDataKey"
        ]
        # NB: While we allow "*" access to all KMS resources, we limit it to only the
        # "alias/s3" default key with the `StringLike` condition.
        resources = ["*"]
        condition {
          test     = "StringLike"
          variable = "kms:RequestAlias"
          values = [
            "alias/s3"
          ]
        }
      }
    }
    
    # Create a policy that allows reading and writing to the lock table
    resource "aws_iam_policy" "lock_table_policy_pr_branch" {
      name   = "pr_branch_lock_table_access_policy"
      path   = "/"
      policy = data.aws_iam_policy_document.pr_branch_lock_table_access.json
    }
    
    # Allow PR branch read and write to the lock table for `plan`
    resource "aws_iam_role_policy_attachment" "lock_table_policy_pr_branch" {
      role       = aws_iam_role.pr_branch.name
      policy_arn = aws_iam_policy.lock_table_policy_pr_branch.arn
    }
    
  • providers.tf
  • # Configuring the AWS provider
    provider "aws" {
      region = var.aws_region
    }
    
    # Used to retrieve the AWS Account Id
    data "aws_caller_identity" "current" {}
    
  • state_file_resources.tf
  • # Bucket used to store our state file
    resource "aws_s3_bucket" "state_file" {
      bucket = var.state_file_bucket_name
    
      lifecycle {
        prevent_destroy = true
      }
    }
    
    # Enabling bucket versioning to keep backup copies of the state file
    resource "aws_s3_bucket_versioning" "state_file" {
      bucket = aws_s3_bucket.state_file.id
    
      versioning_configuration {
        status = "Enabled"
      }
    }
    
    # Table used to store the lock to prevent parallel runs causing issues
    resource "aws_dynamodb_table" "state_file_lock" {
      name           = var.state_file_lock_table_name
      read_capacity  = 5
      write_capacity = 5
      hash_key       = "LockID"
    
      attribute {
        name = "LockID"
        type = "S"
      }
    }
    
    ## (Optional) KMS Key and alias to use instead of default `alias/s3` one.
    # resource "aws_kms_key" "terraform" {
    #   description = "Key used for Terraform state files."
    # }
    
    # resource "aws_kms_alias" "terraform" {
    #   name          = "alias/terraform"
    #   target_key_id = aws_kms_key.terraform.key_id
    # }
    

Once done, edit the _bootstrap/variables.tf file and update state_file_bucket_name to the name of the S3 bucket to store the state file in (S3 bucket names are globally unique), optionally update the state_file_lock_table_name variable to the DynamoDB table name to use for locking, and optionally change aws_region to a different region you would like to use.

We will now bootstrap our infrastructure (the body of each Terraform resource in the terraform plan output below is replaced with ...):

terraform init
terraform plan

The output should look like this:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.53.0...
- Installed hashicorp/aws v4.53.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

$ terraform plan
data.aws_caller_identity.current: Reading...
data.aws_iam_policy_document.pr_branch_assume_role_policy: Reading...
data.aws_iam_policy_document.main_branch_assume_role_policy: Reading...
data.aws_iam_policy_document.pr_branch_assume_role_policy: Read complete after 0s [id=2789987180]
data.aws_iam_policy_document.main_branch_assume_role_policy: Read complete after 0s [id=2789987180]
data.aws_caller_identity.current: Read complete after 0s [id=111122223333]
data.aws_iam_policy_document.pr_branch_lock_table_access: Reading...
data.aws_iam_policy_document.pr_branch_lock_table_access: Read complete after 0s [id=813239658]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_dynamodb_table.state_file_lock will be created
  ...
  # aws_iam_policy.lock_table_policy_pr_branch will be created
  ...
  # aws_iam_role.main_branch will be created
  ...
  # aws_iam_role.pr_branch will be created
  ...
  # aws_iam_role_policy_attachment.admin_policy_main_branch will be created
  ...
  # aws_iam_role_policy_attachment.lock_table_policy_pr_branch will be created
  ...
  # aws_iam_role_policy_attachment.readonly_policy_pr_branch will be created
  ...
  # aws_s3_bucket.state_file will be created
  ...
  # aws_s3_bucket_versioning.state_file will be created
  ...

  Plan: 9 to add, 0 to change, 0 to destroy.

Next, we will move the state file we just created, which contains all the details of our infrastructure, to the S3 bucket. To do this, we need to configure the Terraform backend to use S3. Create _bootstrap/terraform.tf with the following content, replacing the bucket and region values with your own:

terraform {
  backend "s3" {
    bucket         = "tf-state-files"
    key            = "terraform-bootstrap-state-files/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "TerraformMainStateLock"
    kms_key_id     = "alias/s3" # Optionally change this to the custom KMS alias you created - "alias/terraform"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.33"
    }
  }

  required_version = "= 1.3.7"
}

We could have referenced the region and state_file_bucket variables directly in the Terraform backend configuration, but unfortunately Terraform does not support any kind of variable or local interpolation there.
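
If you prefer not to hard-code these values, a possible workaround is Terraform's partial backend configuration, where values are left out of the backend block and supplied on the command line at init time; a minimal sketch using the placeholder values above (this tutorial keeps the hard-coded values for simplicity):

terraform init \
  -backend-config="bucket=tf-state-files" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=TerraformMainStateLock"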

To migrate the state file to S3, run terraform init -migrate-state; the output should look like this:

$ terraform init -migrate-state

Initializing the backend...
Do you want to copy existing state to the new backend?
 Pre-existing state was found while migrating the previous "local" backend to the
 newly configured "s3" backend. No existing state was found in the newly
 configured "s3" backend. Do you want to copy this state to the new "s3"
 backend? Enter "yes" to copy and "no" to start with an empty state.

 Enter a value: yes

Releasing state lock. This may take a few moments...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.53.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
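
To confirm that the state file has been copied to the bucket, you can list the prefix with the AWS CLI (using the placeholder bucket name and key from the backend configuration above):

aws s3 ls s3://tf-state-files/terraform-bootstrap-state-files/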

Now we can set up our workflows, but first we need to make sure we commit these changes to our git repository. Run git add ., git commit -m "Terraform bootstrapped", and git push:

$ git add .
$ git commit -m "Terraform bootstrapped"
[main b05ffa8] Terraform bootstrapped
 10 files changed, 494 insertions(+)
 create mode 100644 _bootstrap/.terraform.lock.hcl
 create mode 100644 _bootstrap/main_branch_iam_role.tf
 create mode 100644 _bootstrap/pr_branch_iam_role.tf
 create mode 100644 _bootstrap/providers.tf
 create mode 100644 _bootstrap/state_file_resources.tf
 create mode 100644 _bootstrap/terraform.tf
 create mode 100644 _bootstrap/variables.tf

$ git push
Enumerating objects: 14, done.
Counting objects: 100% (14/14), done.
Delta compression using up to 2 threads
Compressing objects: 100% (13/13), done.
Writing objects: 100% (13/13), 4.22 KiB | 2.11 MiB/s, done.
Total 13 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Validating objects: 100%
To https://git.us-west-2.codecatalyst.aws/v1/Cobus-AWS/TerraformCodeCatalyst/bootstrapping-terraform-automation-for-amazon-codecatalyst
 fa1b9aa..b05ffa8 main -> main

Setting up workflows

In the previous section we created two new IAM roles for the workflows: one for the main branch with permissions to create resources, and one for all pull requests with read-only permissions. We need to add these roles to our CodeCatalyst space. In the top left of the page, click the Space dropdown, then click the name of your space. Go to the AWS accounts tab, click your AWS account number, and then click Manage roles from the AWS Management Console. This opens a new tab; select Add an existing role you have created in IAM, and choose Main-Branch-Infrastructure from the dropdown. Click Add role:

This takes you to a new page with a green banner at the top saying Successfully added IAM role Main-Branch-Infrastructure. Click Add IAM role and follow the same process to add the PR-Branch-Infrastructure role. Once done, you can close this window and go back to the CodeCatalyst one.

Now that the underlying infrastructure for the automation is in place, we can start using workflows for any future changes to our infrastructure. We need a similar Terraform backend configuration for all the resources we will create with our workflows; as mentioned before, we are intentionally keeping the bootstrapping infrastructure separate from the day-to-day infrastructure. In the root of the repository, create terraform.tf with the following content. Note that the key for the bucket is different from the one used for the bootstrapping infrastructure; as before, replace bucket, region, dynamodb_table, and kms_key_id with your values:

terraform {
  backend "s3" {
    bucket         = "tf-state-files"
    key            = "terraform-state-file/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "TerraformMainStateLock"
    kms_key_id     = "alias/s3" # Optionally change this to the custom KMS alias you created - "alias/terraform"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.33"
    }
  }

  required_version = "= 1.3.7"
}

The region set in the code block above indicates where the S3 bucket was created, not where resources will be created. We also need to configure the AWS provider and set the region to use. We will use a variable for this; you could also hard-code it, but it is easier to manage if all the variables are kept in a single variables.tf file. Create providers.tf with the following content:

# Configuring the AWS provider
provider "aws" {
  region = var.aws_region
}

Create a variables.tf file with the following content (you can change the region here to a different one in which to create your resources):

variable "aws_region" {
 default = "us-east-1"
}

Now we can create our workflow file. First, create the workflow directory and file:

cd .. # go to the root folder of the repo
mkdir -p .codecatalyst/workflows
touch .codecatalyst/workflows/main_branch.yml

Open .codecatalyst/workflows/main_branch.yml in your IDE and add the following content, making sure to replace the placeholder AWS account ID 111122223333 with the value for your account, as well as the IAM role name if you changed it (you can choose between the standard CodeCatalyst workflow, or using GitHub Actions with CodeCatalyst):

  • CodeCatalyst workflow
  • # Adaptation of the https://developer.hashicorp.com/terraform/tutorials/automation/github-actions workflow
    Name: TerraformMainBranch
    SchemaVersion: "1.0"
    
    Triggers:
      - Type: Push
        Branches:
          - main
    
    Actions:
      Terraform-Main-Branch-Apply:
        Identifier: aws/build@v1
        Inputs:
          Sources:
            - WorkflowSource
        Environment:
          Connections:
            - Role: Main-Branch-Infrastructure
              Name: "111122223333"
          Name: TerraformBootstrap
        Configuration:
          Steps:
            - Run: export TF_VERSION=1.3.7 && wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
            - Run: unzip terraform.zip && rm terraform.zip && mv terraform /usr/bin/terraform && chmod +x /usr/bin/terraform
            - Run: terraform fmt -check -no-color
            - Run: terraform init -no-color
            - Run: terraform validate -no-color
            - Run: terraform plan -no-color -input=false
            - Run: terraform apply -auto-approve -no-color -input=false
        Compute:
          Type: EC2
  • CodeCatalyst workflow using GitHub Actions
  • # Adaptation of the https://developer.hashicorp.com/terraform/tutorials/automation/github-actions workflow
    Name: TerraformMainBranch
    SchemaVersion: "1.0"
    
    Triggers:
      - Type: Push
        Branches:
          - main
    
    Actions:
      Terraform-Main-Branch-Apply:
        Identifier: aws/github-actions-runner@v1
        Inputs:
          Sources:
            - WorkflowSource
        Environment:
          Connections:
            - Role: Main-Branch-Infrastructure
              Name: "111122223333"
          Name: TerraformBootstrap
        Configuration:
          Steps:
            - name: Setup Terraform
              uses: hashicorp/setup-terraform@v1
              with:
                terraform_version: 1.3.7
            - name: Terraform Format
              run: terraform fmt -check -no-color
            - name: Terraform Init
              run: terraform init -no-color
            - name: Terraform Validate
              run: terraform validate -no-color
            - name: Terraform Plan
              run: terraform plan -no-color -input=false
            - name: Terraform Apply
              run: terraform apply -auto-approve -no-color -input=false
        Compute:
          Type: EC2
    

It is time to test our new workflow. First, we need to stage, commit, and push our changes directly to the main branch. This is required because CodeCatalyst only runs workflows that have been committed to the repository. Run the following commands:

git add . -A
git commit -m "Adding main branch workflow"
git push

Output:

$ git add .
$ git commit -m "Adding main branch workflow"
[main 1b88c0f] Adding main branch workflow
 2 files changed, 54 insertions(+)
 create mode 100644 .codecatalyst/workflows/main_branch.yml
 create mode 100644 terraform.tf

$ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 2 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 1.12 KiB | 1.12 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
remote: Validating objects: 100%
To https://git.us-west-2.codecatalyst.aws/v1/Cobus-AWS/TerraformCodeCatalyst/bootstrapping-terraform-automation-for-amazon-codecatalyst
 b05ffa8..1b88c0f main -> main

In your browser, navigate to the CI/CD -> Workflows page. You should see the workflow running:

Click Recent runs to expand it and see the details of the currently running job. Click the job ID (Run-XXXXX) to view the different stages of the build:

Pull request (PR) workflow

Now that we have the main branch workflow done, we can set up the pull request workflow. It is very similar to the main branch one, with the following differences:

  1. A different workflow name - TerraformPRBranch
  2. It uses the PR-Branch-Infrastructure IAM role to ensure no infrastructure changes can be made from a PR workflow
  3. The terraform apply step is removed
  4. The build is triggered when a PR targeting the main branch is opened or updated (REVISION)

Create a new file for the PR workflow at .codecatalyst/workflows/pr_branch.yml and add the following content (making sure to replace the placeholder AWS account ID 111122223333 with the value for your account, as well as the IAM role name if you changed it); you can choose between the standard CodeCatalyst workflow, or using GitHub Actions with CodeCatalyst:

  • CodeCatalyst workflow
  • Name: TerraformPRBranch
    SchemaVersion: "1.0"
    
    Triggers:
      - Type: PULLREQUEST
        Events:
          - OPEN
          - REVISION
    
    Actions:
      Terraform-PR-Branch-Plan:
        Identifier: aws/build@v1
        Inputs:
          Sources:
            - WorkflowSource
        Environment:
          Connections:
            - Role: PR-Branch-Infrastructure
              Name: "111122223333"
          Name: TerraformBootstrap
        Configuration:
          Steps:
            - Run: export TF_VERSION=1.3.7 && wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
            - Run: unzip terraform.zip && rm terraform.zip && mv terraform /usr/bin/terraform && chmod +x /usr/bin/terraform
            - Run: terraform fmt -check -no-color
            - Run: terraform init -no-color
            - Run: terraform validate -no-color
            - Run: terraform plan -no-color -input=false
        Compute:
          Type: EC2
  • CodeCatalyst workflow using GitHub Actions
  • # Adaptation of the https://developer.hashicorp.com/terraform/tutorials/automation/github-actions workflow
    Name: TerraformPRBranch
    SchemaVersion: "1.0"
    
    # Optional - Set automatic triggers.
    Triggers:
      - Type: PULLREQUEST
        Branches:
          - main
        Events:
          - OPEN
          - REVISION
    
    # Build actions
    Actions:
      Terraform-PR-Branch-Plan:
        Identifier: aws/github-actions-runner@v1
        Inputs:
          Sources:
            - WorkflowSource
        Environment:
          Connections:
            - Role: PR-Branch-Infrastructure
              Name: "111122223333"
          Name: TerraformBootstrap
        Configuration:
          Steps:
            - name: Setup Terraform
              uses: hashicorp/setup-terraform@v1
              with:
                terraform_version: 1.3.7
            - name: Terraform Format
              run: terraform fmt -check -no-color
            - name: Terraform Init
              run: terraform init -no-color
            - name: Terraform Validate
              run: terraform validate -no-color
            - name: Terraform Plan
              run: terraform plan -no-color -input=false
        Compute:
          Type: EC2
    

Before we can trigger a new PR, this workflow needs to be added to the main branch, so let's do that now:

git add .codecatalyst/workflows/pr_branch.yml
git commit -m "Adding PR branch workflow"
git push

The main branch workflow will be triggered since we added a change, but as we did not add any additional Terraform resources, it will not make any changes:

Next, we will add AWS resources with Terraform via a PR. First, we need to create a new branch:

git checkout -b test-pr-workflow

Next, create a new file vpc.tf in the root of the project - we will create a VPC with three public subnets and the required routing tables. Add the following content to the file:

module "vpc" {
 source = "terraform-aws-modules/vpc/aws"

 name = "CodeCatalyst-Terraform"
 cidr = "10.0.0.0/16"

 azs = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
 public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

 enable_nat_gateway = false
 enable_vpn_gateway = false
}

We need to commit our changes and push the branch with --set-upstream origin test-pr-workflow, as the remote branch does not exist yet:

git add vpc.tf
git commit -m "Adding a VPC with only public subnets"
git push --set-upstream origin test-pr-workflow

The output shows that the remote branch was created and that our local branch changes were pushed to it:

$ git push --set-upstream origin test-pr-workflow

Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 2 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 469 bytes | 469.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Validating objects: 100%
To https://git.us-west-2.codecatalyst.aws/v1/Cobus-AWS/TerraformCodeCatalyst/bootstrapping-terraform-automation-for-amazon-codecatalyst
 * [new branch] test-pr-workflow -> test-pr-workflow
branch 'test-pr-workflow' set up to track 'origin/test-pr-workflow'.

This will not trigger the PR branch workflow yet, as we have not opened a pull request. In CodeCatalyst, go to Code, then click Pull requests and Create pull request. Choose test-pr-workflow as the Source branch and main as the Destination branch, then add a Pull request title and a Pull request description. You can also preview the changes the PR will make at the bottom of the page:

Click Create, then go to CI/CD -> Workflows and select All branches from the dropdown at the top of the Workflows menu. With All branches selected, you will see four workflows: the TerraformMainBranch and TerraformPRBranch workflows, each with a copy for the main and test-pr-workflow branches. The TerraformMainBranch workflow shows the error Workflow is inactive, which is expected, as that workflow is limited to run only on our main branch. Click Recent runs under the TerraformPRBranch workflow for the test-pr-workflow branch, then click the Terraform-PR-Branch-Plan job to view the details.

After clicking on the Terraform Plan step, you will see the proposed infrastructure changes listed in the output. You can now review exactly which changes this pull request would make to your infrastructure. In normal day-to-day operations, this is where you would go back to the pull request and decide how to proceed: if the proposed changes have been reviewed and approved, you can merge the pull request, or you can start a conversation on the PR to address any issues or concerns.

We will merge this PR to deploy the infrastructure in our account. Go to Code -> Pull requests, click the Title or ID of the PR, and then click the Merge button. You can choose between a Fast forward merge and a Squash and merge. A fast forward merge takes all the commits from the branch and adds them in sequence to the main branch, as if they had been made there. A squash merge combines all the commits on the test-pr-workflow branch into a single commit before merging that single commit into main. Which one to use depends on how you like to work; in this tutorial we will use Fast forward merge. You can also select the option Delete the source branch after merging this pull request. Source branch: test-pr-workflow, which helps keep the repository tidy by removing branches that are no longer in use.

Click Merge, then go to CI/CD -> Workflows to watch the new VPC being created. Click Recent runs for the TerraformMainBranch workflow that is currently running, click the job ID, and then click the second step to view the progress in the right-hand pane. Once the job has completed, we can verify that the VPC was created by going to the VPC section of the AWS Management Console and clicking the VPC ID of the VPC named CodeCatalyst-Terraform. You should see something similar to the following:
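
If you prefer the command line, you can also verify this from the dev environment with the AWS CLI (assuming the default us-east-1 region and the Name tag that the VPC module applies):

aws ec2 describe-vpcs \
  --region us-east-1 \
  --filters "Name=tag:Name,Values=CodeCatalyst-Terraform" \
  --query "Vpcs[].VpcId"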

Cleaning up resources

Now that we are at the end of the tutorial, you can either keep the current setup and expand on it, or delete everything you created. If you are planning to manage multiple AWS accounts, we recommend reading the Automating multiple environments with Terraform tutorial (which happens to be the next one in this series), in which case you should keep the resources created here.

To remove all the resources created in this project, follow these steps in the dev environment:

1. Run git checkout main to make sure you are on the main branch, then git pull to make sure you have the latest changes, and then run terraform destroy and type yes to confirm; this removes the VPC we created (the commands are shown below).
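
For convenience, the commands for this step, run from the root of the repository:

git checkout main
git pull
terraform destroy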

2. To delete all the bootstrapping resources, first change into the directory by running cd _bootstrap. Before we can delete everything, we need to update the S3 state file bucket: the lifecycle policy needs to be changed to allow deletion, and force_destroy = true needs to be added so that all objects in the bucket are removed as well. Edit _bootstrap/state_file_resources.tf and replace the first aws_s3_bucket resource with:

# Bucket used to store our state file
resource "aws_s3_bucket" "state_file" {
  bucket        = var.state_file_bucket_name
  force_destroy = true

  lifecycle {
    prevent_destroy = false
  }
}

3. Run terraform apply and accept the changes.

4. Run terraform destroy and accept the changes. Errors will be shown at the end, since we are deleting the S3 bucket that Terraform is trying to store the updated state file in, as well as the DynamoDB table it uses to store the lock that prevents parallel runs. The output will look like this:

│ Error: Failed to save state
│ 
│ Error saving state: failed to upload state: NoSuchBucket: The specified bucket does not exist
│ status code: 404, request id: VJDXS21J9YFQ2J5J, host id: aG3pXy1Kfx2jncT1js0iDL5d+5j/rf3mNDVNzRp7aYpa3bCkAIKKJDh8HJQymS2prphHrazmjmo=
╵
╷
│ Error: Failed to persist state to backend
│ 
│ The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the current working directory.
│ 
│ Running "terraform apply" again at this point will create a forked state, making it harder to recover.
│ 
│ To retry writing this state, use the following command:
│ terraform state push errored.tfstate
│ 
╵
╷
│ Error: Error releasing the state lock
│ 
│ Error message: failed to retrieve lock info: ResourceNotFoundException: Requested resource not found
│ 
│ Terraform acquires a lock when accessing your state to prevent others
│ running Terraform to potentially modify the state at the same time. An
│ error occurred while releasing this lock. This could mean that the lock
│ did or did not release properly. If the lock didn't release properly,
│ Terraform may not be able to run future commands since it'll appear as if
│ the lock is held.
│ 
│ In this scenario, please call the "force-unlock" command to unlock the
│ state manually. This is a very dangerous operation since if it is done
│ erroneously it could result in two people modifying state at the same time.
│ Only call this command if you're certain that the unlock above failed and
│ that no one else is holding a lock.

5. Lastly, we need to delete the project we created in CodeCatalyst. In the left-hand navigation, go to Project settings, click Delete project, and follow the instructions to delete the project.

Conclusion

Congratulations! You have now bootstrapped Terraform with CodeCatalyst, and you can deploy any infrastructure changes using a PR workflow.