AWS Database Blog
Scheduling and running Amazon RDS jobs with AWS Batch and Amazon CloudWatch rules
Database administrators and developers traditionally schedule scripts to run against databases using the system cron on the host where the database is running. As a managed database service, Amazon Relational Database Service (Amazon RDS) doesn't provide access to the underlying infrastructure, so if you migrate such workloads from on premises, you must move these jobs as well. This post provides an alternative way to schedule and run jobs centrally.
AWS Batch is a managed service that abstracts the complexities of provisioning, managing, monitoring, and scaling your computing jobs, and enables you to run jobs easily and efficiently on AWS. Additionally, AWS Batch enables you to build jobs using the language of your choice and deploy them as Docker containers.
This post demonstrates how to use the combination of AWS Batch and Amazon CloudWatch rules to dynamically provision resources and to schedule and run functions or stored procedures on an Amazon RDS for PostgreSQL database. You can use the same process to run jobs on any Amazon RDS database.
Overview of solution
The following diagram illustrates the architecture of the solution.
Prerequisites
Before you get started, complete the following prerequisites:
- Install Docker Desktop on your machine.
- Install git on your machine.
- Set up and configure the AWS CLI. For instructions, see Installing the AWS CLI.
- Provide the comma-separated list of the default subnets and security groups as input parameters in the AWS CloudFormation template.
Walkthrough
The following steps provide a high-level overview of the walkthrough:
- Clone the project from the AWS code samples repository
- Deploy the CloudFormation template to create the required services
- Go to the AWS CloudFormation console and make sure that the resources are created
- Run database scripts and create the required tables and functions
- Build, tag, and push the Docker image to Amazon ECR
- Verify that AWS Batch runs the job successfully based on the CloudWatch rule
This post also includes optional instructions to manage changes to the job and schedule with AWS CodeCommit and AWS CodeBuild.
Cloning source code from AWS samples
Download the files required to set up the environment. See the following code:
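The following is a minimal sketch, assuming the repository name under the aws-samples GitHub organization; substitute the actual URL of the AWS code samples repository if it differs:

```bash
# A sketch; the repository name is an assumption -- use the URL from
# the AWS code samples repository if it differs.
git clone https://github.com/aws-samples/aws-batch-rds-job-scheduling.git
```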
Deploying the CloudFormation template
To run the CloudFormation scripts, complete the following steps:
- On the Amazon VPC console, navigate to the Subnets section.
- Record the Subnet IDs of the VPC that you will be using.
- Record the security group ID that's attached to the subnets and VPC you recorded in the previous step, as shown in the following screenshot.
- Update the comma-separated list of the default subnets and security groups as input parameters in batchenv-cf.yaml.
Run the CloudFormation template to provision the required services. See the following code:
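The following sketch shows the shape of the command; the parameter keys are assumptions, so match them to the parameters defined in batchenv-cf.yaml, and note that embedded commas in the lists must be escaped:

```bash
# A sketch; the parameter keys (Subnets, SecurityGroups) are assumptions --
# match them to the parameters defined in batchenv-cf.yaml.
aws cloudformation create-stack \
    --stack-name batchjob \
    --template-body file://batchenv-cf.yaml \
    --parameters \
        ParameterKey=Subnets,ParameterValue='subnet-0abc1234\,subnet-0def5678' \
        ParameterKey=SecurityGroups,ParameterValue=sg-0abc1234 \
    --capabilities CAPABILITY_NAMED_IAM
```

The stack name batchjob matches the secret name prefix that the job script expects, as described later in this post.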
The template creates the following:
- Docker registry to store the Docker image
- Job definition to define the Docker image, IAM role, and resource requirements for the job
- Queue for jobs until they are ready to run in a compute environment
- Compute environment in which AWS Batch manages the compute resources that jobs use
- PostgreSQL instance
- AWS Secrets Manager secret with the PostgreSQL database login credentials
- CloudWatch rule to run the AWS Batch job based on the schedule
- Roles with appropriate permissions
The following are ancillary services, which are required only if you choose to manage changes to the job and schedule rule using CodeCommit and CodeBuild:
- CodeCommit repository to store buildspec.yml and the src folder
- A CodeBuild project to build, tag, and push Docker images to the registry
Services are created with the CloudFormation stack name as a prefix. The following screenshot shows the details of a successful CloudFormation deployment.
Running database scripts
To run the database scripts, complete the following steps:
- On the Amazon RDS console, choose your database and navigate to the Connectivity & security section.
- Record the database endpoint URL and port, as shown in the following screenshot.
- On the Secrets Manager console, under Secrets, choose your secret.
- Choose Retrieve secret value.
- Make a note of your database credentials.
- Connect to the PostgreSQL database that the CloudFormation template provisioned.
- Download the SQL script CreateSampleDataAndSP.sql from GitHub. Run this script to create the following objects in your database (see the sketch after this list):
- DEPT – Table
- EMP – Table
- LOW_HIGH_SALARIES – Function
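The following is a minimal sketch of connecting and running the script with psql; the database name and user are placeholders, so substitute the endpoint, port, and credentials you recorded in the previous steps:

```bash
# A sketch; replace the placeholders with the endpoint, port, and
# credentials recorded from the RDS console and Secrets Manager.
psql "host=<db-endpoint> port=<db-port> dbname=<db-name> user=<db-user>" \
    -f CreateSampleDataAndSP.sql
```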
Building, tagging, and pushing the Docker image to Amazon ECR
To build, tag, and push your Docker image to Amazon ECR, complete the following steps:
- On your local machine, navigate to the folder with the downloaded source code. See the following code:
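A sketch, assuming the repository folder name from the earlier clone step:

```bash
# A sketch; the path assumes the repository name used when cloning.
cd aws-batch-rds-job-scheduling/src
```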
- Open the Python script (src/runjob.py) and change the following values (if you chose a different stack name and Region when you deployed the CloudFormation template). See the following code:
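The following sketch shows the values to update; the secret name appears later in this post, while the region_name variable name is an assumption, so match it to what runjob.py actually defines:

```python
# In src/runjob.py -- a sketch of the values to change. The secret name
# prefix must match your stack name; region_name is an assumed variable name.
secret_name = "batchjob-secret"   # change if you used a different stack name
region_name = "us-east-1"         # change if you deployed to a different Region
```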
Note that the Python script (src/runjob.py) that connects to the database and runs the job is configured to look for the database secret name with the prefix batchjob in us-east-1.
- To connect to Amazon ECR, enter the following code:
- To package the Python script and all libraries mentioned in requirements.txt as a Docker container, enter the following code:
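A sketch, assuming the Dockerfile and requirements.txt sit in the current folder; the local image name is arbitrary:

```bash
# A sketch; the local image name batchjob is arbitrary -- any tag works,
# because the next steps retag it for ECR.
docker build -t batchjob .
```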
- On the Amazon ECR console, under Repositories, choose your repository link.
- To get your environment-specific commands to tag and push, choose View Push Commands.
The following screenshot shows the locations of your push commands.
- Enter the code listed in Step 3 of the preceding screenshot. See the following example code:
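The command resembles the following sketch; the account ID and repository name are placeholders, so copy the exact command from the console:

```bash
# A sketch; copy the exact tag command from the ECR push commands dialog.
docker tag batchjob:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/batchjob-repository:latest
```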
- Enter the code listed in Step 4 of the preceding screenshot. See the following example code:
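Similarly, the push command resembles the following sketch:

```bash
# A sketch; copy the exact push command from the ECR push commands dialog.
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/batchjob-repository:latest
```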
- Verify that you pushed the Docker image from your local machine to Amazon ECR.
The following screenshot shows the uploaded Docker image on the Amazon ECR console.
Running the AWS Batch job manually
To run the AWS Batch job manually, complete the following steps.
- On the AWS Batch console, choose Jobs.
- Choose Submit job.
- For Job name, enter a name for the job.
- For Job definition, choose the job definition.
- For Job queue, choose the job queue.
- Modify the vCPU and memory settings (the default settings are sufficient for this walkthrough).
- Choose Submit job.
- Under Jobs, choose the job ID link and view the logs to make sure the job completed successfully.
The following screenshot shows the successful completion of the job.
The AWS Batch job is configured to run the LOW_HIGH_SALARIES function from the runjob.py script and display the highest and lowest salaries of employees in the department provided as input; a sketch of the call appears after the following screenshots. You can validate the jobs using the CloudWatch Logs that AWS Batch produced. The following screenshot shows the job details on the AWS Batch console.
The following screenshot shows the CloudWatch Logs for the job.
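As a sketch of what the job runs, assuming psycopg2 as the adapter and a department name argument (check CreateSampleDataAndSP.sql for the actual function signature):

```python
# A sketch, assuming psycopg2 and that LOW_HIGH_SALARIES takes a department
# name; check CreateSampleDataAndSP.sql for the actual signature.
import psycopg2

conn = psycopg2.connect(host="<db-endpoint>", port=5432, dbname="<db-name>",
                        user="<db-user>", password="<db-password>")
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM LOW_HIGH_SALARIES(%s)", ("SALES",))
    print(cur.fetchall())  # highest and lowest salaries in the department
```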
Verifying the scheduled run of AWS Batch jobs
The CloudFormation template also creates a CloudWatch rule to run the job on a schedule. You can change the schedule by editing the rule.
- On the CloudWatch console, under Rules, choose your rule.
- From the Actions drop-down menu, choose Edit.
- Change the schedule based on your requirements.
- For the input to the AWS Batch job queue target, choose Use existing role.
- Choose BatchServiceRole.
- Choose Configure details.
- Choose Update rule.
- Monitor the scheduled job on the Dashboard page.
- To confirm that the job succeeded, on the Jobs page, choose succeeded.
Cleaning up
On the AWS Management Console, navigate to your CloudFormation stack batchjob and delete it.
Alternatively, enter the following code in AWS CLI:
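A sketch, assuming the stack name batchjob used earlier:

```bash
# Deletes the stack and the resources it created.
aws cloudformation delete-stack --stack-name batchjob
```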
Managing changes to the job and schedule rule in CodeCommit and CodeBuild
You can use the following steps to manage changes to the Docker image instead of building, tagging, and pushing it manually to Amazon ECR.
To commit code to the CodeCommit repository, complete the following steps:
- On the console, under Developer Tools, choose CodeCommit.
- Choose Repositories.
- Choose batchjob-codecommit.
- Choose Clone URL.
- Choose Clone HTTPS.
- Clone the batchjob-codecommit repository. See the following code:
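A sketch; the Region in the URL is an assumption, so use the HTTPS clone URL you copied from the console:

```bash
# A sketch; use the clone URL copied from the CodeCommit console.
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/batchjob-codecommit
cd batchjob-codecommit
```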
- Copy the src folder and buildspec.yml that you downloaded from the AWS samples into the repository you cloned from CodeCommit.
The src folder contains the Python code to connect to the database and run functions. The Python script (src/runjob.py) is configured to look for database secrets with the batchjob prefix (for example, secret_name = "batchjob-secret"). If you chose a different prefix, you must change this value. See the following code:
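A sketch of committing and pushing the changes; the branch name is an assumption, because older CodeCommit repositories default to master:

```bash
# Commit and push; the push triggers the CodeBuild project.
git add src buildspec.yml
git commit -m "Add batch job source and buildspec"
git push origin main   # use master if that is your default branch
```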
The following screenshot shows the successful code upload to your repository.
On the CodeBuild console, choose the build run to monitor the build.
The following screenshot shows the build logs.
Successful completion of the build job pushes a Docker image with Python script and other libraries to Amazon ECR.
Conclusion
This post demonstrated how to integrate different AWS services to schedule and run jobs on a PostgreSQL database. You can run jobs or orchestrate complex job workflows on any Amazon RDS database with the same solution by including the compatible Python adapter in the Docker container and importing it in your Python script.
Additionally, this solution helps you manage changes to the job and schedules using the CI/CD toolchain provisioned along with AWS Batch and CloudWatch rules.
About the Authors
Udayasimha Theepireddy (Uday) is a Senior Cloud Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on their large-scale migrations, helping them improve the value of their solutions when using AWS.
Vinod Ramasubbu is a Cloud Application Architect with Amazon Web Services. He works with customers to architect, design, and automate business software at scale on the AWS Cloud.