AWS Database Blog
Building a cross-account continuous delivery pipeline for database migrations
To increase the speed and quality of development, you may use continuous delivery strategies to manage and deploy your application code changes. However, continuous delivery for database migrations is often a manual process.
Adopting continuous integration and continuous delivery (CI/CD) for database migrations has the following benefits:
- An automated multi-account setup simplifies database migrations.
- The confidence level of the changes applied increases.
- The process is easier to reproduce.
- Database migration is faster and more efficient.
- There is no need to use jump stations to connect to a database, which removes the need to provision, manage, and patch Amazon EC2 instances.
This post demonstrates how to set up the AWS Developer Tools suite to automate database migrations in a multi-account setup. The intended audience is developers, solutions architects, and database administrators with knowledge of AWS CloudFormation.
The solution uses four AWS accounts to showcase multi-account and multi-VPC CI/CD database migration.
- Shared Services – A central location for all the tools related to the continuous delivery/deployment services, such as AWS CodePipeline and AWS CodeBuild. The Shared Services account also hosts the AWS CodeCommit repo in which developers can push their code.
- Development, Test, and Production – The target accounts in which you create an Amazon RDS for SQL Server database and store its credentials with Secrets Manager. You also create and configure a CodeBuild project within the same VPC and private subnets as the RDS for SQL Server instance.
Prerequisites
You need four AWS accounts: three for creating development, test, and production stages, and a Shared Services account. For more information, see How do I create and activate a new Amazon Web Services account?
The target accounts (development, test, and production) must each have a virtual private cloud (VPC) containing two private subnets in different Availability Zones, with internet access through a NAT gateway. The RDS instance deploys in the private subnets because you don’t want the database to be accessible from the internet. For more information, see Creating a VPC with Public and Private Subnets for Your Compute Environments.
What you build
- Amazon RDS for SQL Server creates a database in each target account, deployed in private subnets. RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups, so you can focus on giving your applications the fast performance, high availability, security, and compatibility they need.
- CodePipeline orchestrates the CI/CD database migration stages.
- IAM roles and policies allow cross-account access to applicable AWS resources.
- CodeCommit creates a repo to store the SQL statements used when running the database migration.
- Amazon S3 creates a bucket to store pipeline artifacts.
- Flyway is an open-source database migration tool that favors simplicity and convention over configuration. You can write Flyway migrations in an SQL database-specific syntax, such as PL/SQL, T-SQL, or Java. Flyway manages the database changes. For more information, see Flyway.
- CodeBuild creates a project in target accounts using Flyway to apply database changes.
- VPC security groups ensure the secure flow of traffic between a CodeBuild project deployed within a VPC and RDS for SQL Server databases.
- AWS Secrets Manager stores secrets securely and centrally, such as the RDS for SQL Server database username and password.
- Amazon SNS notifies you by email when a developer pushes changes to the CodeCommit repo.
Set up the local development environment
- Create a folder named db-migration in which to store the solution’s artifacts and resources.
- Download templates.zip, which contains the AWS CloudFormation templates, from the S3 bucket, and extract it to the db-migration folder.
- Download cicd-db.zip, which contains the database migration example project, from the S3 bucket, and extract it to the db-migration folder.
- Change directory to the db-migration folder.
- Install the AWS CLI. For more information, see Installing the AWS CLI.
- Create development, test, and production AWS accounts. For more information, see How do I create and activate a new Amazon Web Services account?
- Prepare your access keys or assume-role AWS CLI commands to make calls to AWS. Configure the AWS CLI as described in Configuring the AWS CLI, including setting up separate profiles for each of the accounts.
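The setup steps above can be sketched as shell commands. The S3 URLs are placeholders, because the actual download locations aren’t reproduced here; substitute the real ones:

```shell
# Create a working folder for the solution's artifacts and resources
mkdir db-migration

# Download the CloudFormation templates and the example project
# (placeholder URLs -- substitute the actual S3 locations)
aws s3 cp s3://<templates-bucket>/templates.zip .
aws s3 cp s3://<examples-bucket>/cicd-db.zip .

# Extract both archives into the db-migration folder
unzip templates.zip -d db-migration
unzip cicd-db.zip -d db-migration

# Change into the working folder
cd db-migration
```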
Provision AWS CloudFormation templates
Follow these steps in order. Otherwise, you might not create the resources correctly.
1. For the Shared Services account, deploy the AWS CloudFormation template db-migration-master.yml. It creates the CodePipeline pipeline, CodeBuild project, CodeCommit repo, and S3 bucket that CodePipeline uses to store pipeline artifacts. Changes pushed to the CodeCommit repo trigger an SNS topic, which sends an email to the configured email address. This step also creates the IAM roles and policies that allow cross-account access. The following diagram shows how the resources are connected.
You must set these parameters when deploying the template:
- ProjectName – Name of the project, for example, myProjectX. Used as a prefix when naming the AWS resources that the template creates.
- EmailAddress – Email address to notify when a developer pushes changes to the CodeCommit repository.
- DevAccountId – AWS account ID for the development account.
- TestAccountId – AWS account ID for the test account.
- ProdAccountId – AWS account ID for the production account.
To deploy the db-migration-master.yml template, run the following command. Substitute the parameters applicable for your AWS environment. Make sure that you execute this command from the root level of the db-migration folder you created in the previous section, so the command finds the template file properly.
The following example shows deploying db-migration-master.yml:
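A minimal sketch of the deploy command, assuming an AWS CLI profile named shared-services; the stack name, email address, and account IDs are illustrative:

```shell
aws cloudformation deploy \
  --template-file db-migration-master.yml \
  --stack-name db-migration-master \
  --capabilities CAPABILITY_NAMED_IAM \
  --profile shared-services \
  --parameter-overrides \
    ProjectName=myProjectX \
    EmailAddress=you@example.com \
    DevAccountId=111111111111 \
    TestAccountId=222222222222 \
    ProdAccountId=333333333333
```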
2. For each target account, deploy the db-migration-rds.yml template, which creates the RDS for SQL Server instance and stores its credentials in Secrets Manager.

You must set these parameters when deploying the template:
- ProjectName – Same name as used when deploying db-migration-master.yml.
- TargetEnvironment – Name of the target environment: dev, test, or prod. Used when naming the resources that the AWS CloudFormation template creates.
- PrivateSubnet1A – Private subnet ID in Availability Zone A.
- PrivateSubnet1B – Private subnet ID in Availability Zone B.
- VPCId – VPC ID in which the RDS database deploys.
To deploy the db-migration-rds.yml template, run the following command. Make sure that you use --profile to specify and switch your AWS CLI profile, as instructed previously. Run this command once for each of the three target accounts, so that you deploy the template three times.
The following example shows provisioning db-migration-rds.yml:
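A minimal sketch of the command for the development account, assuming a profile named dev; the stack name, subnet IDs, and VPC ID are illustrative:

```shell
aws cloudformation deploy \
  --template-file db-migration-rds.yml \
  --stack-name db-migration-rds-dev \
  --profile dev \
  --parameter-overrides \
    ProjectName=myProjectX \
    TargetEnvironment=dev \
    PrivateSubnet1A=subnet-0123456789abcdef0 \
    PrivateSubnet1B=subnet-0fedcba9876543210 \
    VPCId=vpc-0123456789abcdef0
```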
In the Outputs section of the AWS CloudFormation console, make a note of the CodeBuildSecurityGroupId and JDBCUrl values. You need them in the next step.
3. Deploy the db-migration-target.yml template, which creates a CodeBuild project deployed in a VPC. To deploy the template, run the following command. Make sure that you use --profile to specify and switch your AWS CLI profile, as instructed previously. Run this command once for each of the three target accounts, so that you deploy the template three times.
For the commands, enter the parameter overrides as follows:
- SharedServicesAccountId – Account ID of the Shared Services account. Used to enable cross-account access from your Shared Services account to the target account.
- ProjectName – Same name that you used when deploying db-migration-master.yml.
- TargetEnvironment – Name of the target environment: dev, test, or prod. Used when naming the resources that the AWS CloudFormation template creates.
- PrivateSubnet1A – Subnet ID for the private subnet in the first Availability Zone. The RDS instance is deployed into private subnets, so to make it possible for the CodeBuild project to access the RDS instance, configure the CodeBuild project in the same subnets as the RDS instance. Specify the same subnet ID as you specified for the RDS instance.
- PrivateSubnet1B – Subnet ID for the private subnet in the second Availability Zone. Make sure that it is the same as the RDS instance’s.
- VPCId – VPC ID in which the CodeBuild project deploys. Use the same VPC as the RDS instance against which you run the database migration.
To deploy the db-migration-target.yml template, run the following command:
The following example shows provisioning db-migration-target.yml:
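A minimal sketch for the development account, assuming a profile named dev; the stack name, Shared Services account ID, subnet IDs, and VPC ID are illustrative:

```shell
aws cloudformation deploy \
  --template-file db-migration-target.yml \
  --stack-name db-migration-target-dev \
  --capabilities CAPABILITY_NAMED_IAM \
  --profile dev \
  --parameter-overrides \
    SharedServicesAccountId=444444444444 \
    ProjectName=myProjectX \
    TargetEnvironment=dev \
    PrivateSubnet1A=subnet-0123456789abcdef0 \
    PrivateSubnet1B=subnet-0fedcba9876543210 \
    VPCId=vpc-0123456789abcdef0
```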
For the test and production accounts, repeat steps 2 and 3, changing the TargetEnvironment parameter to test and prod, respectively. Make sure that you provide the correct account profile with the respective --profile parameter.
If your existing environment doesn’t have three target accounts, apply steps 2 and 3 to the number of target accounts that you have.
4. Push migration scripts to your CodeCommit repo by running the following code.

The following example shows pushing migration scripts to CodeCommit:
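A sketch of the push, assuming the repo is named myProjectX-repo in us-east-1 (both illustrative) and that the example project was extracted to the db-migration folder:

```shell
# Clone the CodeCommit repo created by db-migration-master.yml
# (repo name and Region are illustrative)
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myProjectX-repo
cd myProjectX-repo

# Copy the Flyway migration scripts from the example project
cp -r ../db-migration/cicd-db/* .

# Commit and push to trigger the pipeline
git add .
git commit -m "Add initial database migration scripts"
git push origin master
```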
Go to the AWS CodePipeline console in the Shared Services account to see that the pipeline is triggered.
Summarizing the process
In this example, you commit and push changes to the CodeCommit repo. This action starts all the activity in the pipeline.
The CodePipeline pipeline consists of the following four stages: source, validate, test, and deploy.
In the source stage, pushes made into the source branch of the CodeCommit repo trigger the CodePipeline pipeline. The process pulls, zips, and stores the CodeCommit repo contents in an S3 bucket. The bucket policy is configured to allow development, test, and production accounts to access the S3 bucket in the Shared Services account. As an example, see the following bucket policy:
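A policy along these lines grants the three target accounts read access to the artifact bucket; the account IDs and bucket name are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTargetAccountAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:root",
          "arn:aws:iam::222222222222:root",
          "arn:aws:iam::333333333333:root"
        ]
      },
      "Action": ["s3:GetObject", "s3:GetObjectVersion", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::myprojectx-artifact-bucket",
        "arn:aws:s3:::myprojectx-artifact-bucket/*"
      ]
    }
  ]
}
```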
In the validate stage, CodeBuild launches a Linux container and installs the required dependencies in it. CodeBuild then downloads the zipped file that contains the source code from the S3 bucket, unpacks it, and runs the validation. If the validation is successful, it triggers the CodePipeline test stage.
The purpose of the validation is to do linting of SQL statements. This post’s example doesn’t implement the linting mechanism because the focus is to demonstrate how to manage database changes in a multi-account setup. The validation exists to showcase different possibilities of the stages in CodePipeline.
In the test stage, CodeBuild launches a Linux container, installs dependencies in it, downloads the zipped file that contains the source code from the S3 bucket, unpacks it, and runs the tests. If the tests are successful, it triggers the CodePipeline deploy stage.
In the deploy stage, three CodeBuild project configurations in the Shared Services account trigger the CodeBuild projects in the target accounts. The CodeBuild buildspec specifies what to run when the CodeBuild project starts.
The buildspec performs the following steps:
- Uses the AWS CLI to assume the IAM role (flyway-deploy-role*) in the target account, which returns temporary AWS credentials.
- Sets the temporary credentials as environment variables, which ensures that all subsequent AWS CLI commands run against the specific target account.
- Uses AWS CLI to trigger the CodeBuild project in the target account.
- Uses AWS CLI to poll the status of the CodeBuild project in the target account until the status shows as failed or succeeded.
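The buildspec steps above can be sketched as follows; the role name suffix, account ID, session name, and project name are illustrative:

```shell
# Assume the deployment role in the target account and capture
# the temporary credentials from the response
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/flyway-deploy-role-dev \
  --role-session-name db-migration-deploy \
  --query 'Credentials' --output json)

# Export the temporary credentials so subsequent AWS CLI calls
# run against the target account
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')

# Trigger the CodeBuild project in the target account
BUILD_ID=$(aws codebuild start-build \
  --project-name myProjectX-dev-db-migration \
  --query 'build.id' --output text)

# Poll until the build status is no longer IN_PROGRESS
STATUS=IN_PROGRESS
while [ "$STATUS" = "IN_PROGRESS" ]; do
  sleep 10
  STATUS=$(aws codebuild batch-get-builds --ids "$BUILD_ID" \
    --query 'builds[0].buildStatus' --output text)
done
echo "Target build finished with status: $STATUS"
```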
CodeBuild projects for target accounts
Each target account’s CodeBuild project and buildspec run the following steps:
- Download the SQL files from the S3 bucket in the Shared Services account. Because you configured cross-account bucket policies, the target accounts have access to the bucket.
- Look up the database username and password stored in Secrets Manager, using the AWS CLI. The JSON response from the aws secretsmanager get-secret-value command is parsed with jq to extract the username and password, which are stored as shell environment variables.
- Run the Flyway Docker container, using the docker run command. In the same command, specify which folders to mount, as well as the required parameters (username, password, jdbcUrl) to run the Flyway migration.
- Apply the database migration. The applied changes appear in the CodeBuild project’s log output.
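A sketch of these steps; the bucket name and secret ID are illustrative, and JDBC_URL stands in for the JDBCUrl output of the db-migration-rds.yml stack:

```shell
# Download the migration SQL files from the Shared Services artifact bucket
aws s3 cp s3://myprojectx-artifact-bucket/source.zip .
unzip source.zip -d sql

# Look up the database credentials in Secrets Manager and parse them with jq
SECRET=$(aws secretsmanager get-secret-value \
  --secret-id myProjectX-dev-rds-secret \
  --query 'SecretString' --output text)
DB_USERNAME=$(echo "$SECRET" | jq -r '.username')
DB_PASSWORD=$(echo "$SECRET" | jq -r '.password')

# Run the Flyway container, mounting the SQL folder and passing the
# connection parameters, then apply the migration
docker run --rm -v "$(pwd)/sql:/flyway/sql" flyway/flyway \
  -url="$JDBC_URL" -user="$DB_USERNAME" -password="$DB_PASSWORD" migrate
```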
Conclusion
AWS CI/CD services combined with Flyway can support continuous delivery in a multi-account setup for the database lifecycle. You now have a well-defined deployment process for your database changes, and can also re-create your databases easily. You can also apply the same pattern with other CI/CD tools that aren’t specific to AWS.
By using CI/CD automation for databases, you can speed up deployments and increase the agility of your software teams.
About the Author
Pierre Petersson is a DevOps Consultant with Amazon Web Services. Pierre has been writing code since he first touched a Luxor ABC 80 in the mid-eighties. He inspires builders to unlock the value of the AWS Cloud, using his secret blend of passion, enthusiasm, customer advocacy, curiosity, and creativity. His interests are software architectures and developer tools. Pierre has a developer background, having worked in the field for more than 16 years, and has now shifted his focus to helping customers in their DevOps journey.