AWS Big Data Blog
Automating DBA tasks on Amazon Redshift securely using AWS IAM, AWS Lambda, Amazon EventBridge, and stored procedures
As a data warehouse administrator or data engineer, you may need to perform maintenance tasks or some level of custom monitoring on a regular basis. You can combine these activities inside a stored procedure or invoke views to get details. Some of these activities include loading nightly staging tables, stopping idle connections, dropping unused tables, and so on.
In this post, we discuss how you can automate these routine activities for an Amazon Redshift cluster running inside a secure private network. For this solution, we use the following AWS services:
- AWS Lambda – To run a specified query and invoke views or stored procedures inside your Amazon Redshift cluster.
- Amazon EventBridge – To schedule running these SQL statements by triggering a Lambda function. The EventBridge rule supplies the Amazon Redshift cluster details as the input parameters. This gives you the flexibility to provide multiple queries or multiple cluster details.
- AWS Identity and Access Management (IAM) – To provide access to the Amazon Redshift cluster using temporarily generated credentials in a secure way. This avoids the need to store access credentials.
- Amazon API Gateway – To securely connect to the Amazon Redshift API service from a private subnet that has no access to the internet.
Solution architecture
The following architecture diagram provides an overview of the solution.
This architecture has the following workflow:
- We create an EventBridge rule with a schedule using the default event bus to invoke a target. The target for this rule is a Lambda function that connects to an Amazon Redshift cluster and runs a SQL statement. The target is configured to provide input parameters as constants. These parameters include an Amazon Redshift cluster identifier, database name, Amazon Redshift user, and the SQL statement to run.
- The rule is triggered at the scheduled time and sends the input data to the `RedshiftExecuteSQLFunction` function, which is responsible for running the specified query.
- The `RedshiftExecuteSQLFunction` function is connected to the user's Amazon Virtual Private Cloud (Amazon VPC) inside a private subnet that doesn't have access to the internet. However, this function needs to communicate with the Amazon Redshift API service to generate temporary user credentials to securely access the Amazon Redshift cluster. Because the private subnet has no access to the internet (no NAT gateway), the solution uses an Amazon API Gateway with a VPC endpoint to securely communicate with the Amazon Redshift API service. The function passes the Amazon Redshift cluster information through the private subnet to the API Gateway VPC endpoint, which is backed by another function, `RedshiftApiLambda`. This function is responsible for communicating with the Amazon Redshift API service to generate temporary credentials and send them back to the `RedshiftExecuteSQLFunction` function securely via your VPC.
- The `RedshiftExecuteSQLFunction` function uses the Amazon Redshift cluster endpoint, port, and temporary credentials received in the previous step to communicate with the Amazon Redshift cluster running in a private subnet inside the user's VPC. It then runs the SQL statement submitted in the first step.
The architecture is scalable to accommodate multiple rules for different DBA tasks and different Amazon Redshift clusters.
Prerequisites
To get started, you need to have an AWS account.
We have provided an AWS CloudFormation template to demonstrate the solution. You can download and use this template to easily deploy the required AWS resources. This template has been tested in the `us-east-1` Region.
When you’re logged in to your AWS account, complete the following steps:
- You can deploy the resources by using the template to launch the stack on the AWS Management Console. Alternatively, you can launch the stack from the following link:
- Choose Next.
- On the Specify stack details page, enter the following parameters:
- For Lambda VPC Configuration, choose the VPC and subnets inside the VPC. The template allows you to select multiple subnets; however, it only uses the first two subnets that are selected. Make sure the selected VPC subnets have access to the target Amazon Redshift cluster.
- Choose if you want to create or use an existing VPC endpoint for the API Gateway. For an existing VPC endpoint for API Gateway, you need a DNS-enabled interface endpoint.
- Leave the remaining values at their defaults and choose Next.
- On the Configure stack options page, leave everything at its default and choose Next.
- On the Review page, select I acknowledge that AWS CloudFormation might create IAM resources.
- Choose Create stack.
The CloudFormation template can take approximately 5 minutes to deploy the resources.
- When the stack status shows as `CREATE_COMPLETE`, choose the Outputs tab and record the values for `RedshiftExecuteSQLFunction` and `RedshiftExecuteSQLFunctionSecurityGroup`.
You need these values later to create EventBridge rules and to allow access to the Amazon Redshift cluster.
Amazon Redshift stored procedures and security definer
A stored procedure is a user-created object to perform a set of SQL queries and logical operations. Stored procedures are often used to encapsulate logic for data transformation, data validation, and business-specific logic. You can reduce round trips between your applications and the database by combining multiple SQL steps into a stored procedure.
Amazon Redshift supports stored procedures in the PL/pgSQL dialect, which can include variable declarations, control logic, loops, and error raising. The SECURITY attribute controls who has privileges to access which database objects. By default, only superusers and the owner of the stored procedure have permission to perform its actions. With security definer controls, you can create stored procedures that perform functions without giving a user access to the underlying tables. This allows users to perform actions they otherwise don't have permissions to run. For example, they can drop a table created by another user.
For more information about stored procedures, see Creating stored procedures in Amazon Redshift and Security and privileges for stored procedures.
For this post, we create two DBA tasks in the form of a stored procedure and views inside the Amazon Redshift cluster:
- Drop unused tables
- Clean up idle connections
We then schedule the running of these tasks using EventBridge and Lambda.
To make it easier to track the DBA tasks, such as which table is dropped and how many idle connections are cleaned up, we create a helper table and a stored procedure to track stored procedure run details. You can run the SQL statements against the cluster using either the query editor or SQL client tools.
Then you can call this stored procedure in other DBA task stored procedures to log task details. For example, see the following code:
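A minimal sketch of such a helper table and logging procedure might look like the following (the `dba_task_log` and `sp_log_dba_task` names and columns are assumptions for illustration, not from the original listing):

```sql
-- Helper table to track DBA task runs (names are illustrative)
CREATE TABLE IF NOT EXISTS dba_task_log (
    task_name VARCHAR(100),
    detail    VARCHAR(1000),
    run_time  TIMESTAMP DEFAULT GETDATE()
);

-- Logging procedure that other DBA task stored procedures can call
CREATE OR REPLACE PROCEDURE sp_log_dba_task(task VARCHAR(100), msg VARCHAR(1000))
AS $$
BEGIN
    INSERT INTO dba_task_log (task_name, detail) VALUES (task, msg);
END;
$$ LANGUAGE plpgsql;
```

A DBA task procedure could then record its activity with a call such as `CALL sp_log_dba_task('drop_unused_tables', 'dropped table public.tmp_stage');`.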
Dropping unused tables
A user might create tables for short-term usage but forget to delete them. Over time, lots of leftover tables can accumulate in the data warehouse, wasting storage space. In this use case, the DBA needs to clean them up regularly.
We can collect table usage data from system tables and identify tables that haven’t been accessed for a certain period. Then we can target large tables or all unused tables and drop them automatically.
Various users could have created those tables. To drop them, we need to run the stored procedure as a superuser. Create the following stored procedure as a superuser and with SECURITY DEFINER on the Amazon Redshift cluster you need to maintain. This allows the DBA team to run the stored procedure to drop a table without being the owner of the table.
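A sketch of what such a procedure could look like, assuming the name `sp_drop_unused_tables` and a simplified lookup against the `STL_SCAN` system table (note that `STL_SCAN` only retains a few days of history, so a production version would persist usage data to its own table first):

```sql
-- Illustrative sketch: drop tables not scanned in the last
-- unused_days days. Create as a superuser; SECURITY DEFINER lets
-- the DBA team drop tables they don't own.
CREATE OR REPLACE PROCEDURE sp_drop_unused_tables(unused_days INT)
AS $$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN
        SELECT ti."schema" AS schema_name, ti."table" AS table_name
        FROM svv_table_info ti
        WHERE ti.table_id NOT IN (
            -- tables scanned recently; STL_SCAN history is limited
            SELECT DISTINCT tbl
            FROM stl_scan
            WHERE starttime > DATEADD(day, -unused_days, GETDATE())
        )
    LOOP
        EXECUTE 'DROP TABLE "' || rec.schema_name || '"."' || rec.table_name || '"';
        RAISE INFO 'Dropped table %.%', rec.schema_name, rec.table_name;
    END LOOP;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```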
Then you can call this stored procedure to delete all unused tables. Adjust the `unused_days` input parameter based on your workload pattern. For example, to delete tables that haven't been accessed in the past two weeks, enter the following code:
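Assuming a procedure signature like `sp_drop_unused_tables(unused_days INT)`, the two-week cleanup would be:

```sql
-- Drop tables that haven't been accessed in the past 14 days
CALL sp_drop_unused_tables(14);
```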
Cleaning up idle connections
An idle connection can consume system resources, or even hold a table lock if there is a pending transaction, and impact other workloads. As a DBA, keeping an eye on the idle connections and cleaning them up can help your data warehouse be more performant and stable.
First, find all open connections and identify if they’re active or not based on how long the transactions last. For this post, we use a 60-second threshold. Then you can remove those idle connections. The full script is available to download.
The following code deletes connections that have been idle for more than 30 minutes:
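The original listing isn't reproduced here; one illustrative sketch uses `STV_SESSIONS` and `PG_TERMINATE_BACKEND` (the procedure name and the session-age logic are assumptions, with session start time used as a simple proxy for idle age):

```sql
-- Illustrative sketch: end sessions with no query in flight that
-- started more than 30 minutes ago
CREATE OR REPLACE PROCEDURE sp_terminate_idle_sessions()
AS $$
DECLARE
    rec RECORD;
    terminated INT;
BEGIN
    FOR rec IN
        SELECT s.process
        FROM stv_sessions s
        LEFT JOIN stv_inflight i ON s.process = i.pid
        WHERE i.pid IS NULL                         -- no query currently running
          AND s.process <> PG_BACKEND_PID()         -- don't end our own session
          AND s.starttime < DATEADD(minute, -30, GETDATE())
    LOOP
        SELECT INTO terminated PG_TERMINATE_BACKEND(rec.process);
        RAISE INFO 'Terminated idle session %', rec.process;
    END LOOP;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```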
After you test and verify those stored procedures, you may want to run them regularly to clean up your data warehouse automatically. Lambda and EventBridge allow you to run those routine tasks easily.
AWS Lambda
For this post, our Lambda function uses the Python runtime environment, takes the Amazon Redshift cluster details as input, and generates temporary credentials. Amazon Redshift allows users and applications to programmatically generate temporary database user credentials for an AWS Identity and Access Management (IAM) user or role. The IAM role for the function is granted the `redshift:GetClusterCredentials` permission to perform the `GetClusterCredentials` operation with the Amazon Redshift API service. For more information, see Generating IAM database credentials using the Amazon Redshift CLI or API.
These credentials are used to make a connection with the Amazon Redshift cluster and run the SQL statement or stored procedure:
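The following simplified sketch shows the idea. For clarity it calls the Amazon Redshift API directly with boto3, whereas the deployed solution routes that call through the API Gateway VPC endpoint; the `pg8000` driver choice and the `build_connection_params` helper are assumptions for illustration:

```python
import json


def build_connection_params(endpoint, port, database, creds):
    """Combine cluster endpoint details with temporary credentials
    into keyword arguments for a PostgreSQL-protocol driver."""
    return {
        "host": endpoint,
        "port": int(port),
        "database": database,
        "user": creds["DbUser"],
        "password": creds["DbPassword"],
    }


def lambda_handler(event, context):
    # boto3 ships with the Lambda Python runtime; pg8000 is an assumed
    # driver you would bundle with the deployment package
    import boto3
    import pg8000

    redshift = boto3.client("redshift")

    # Look up the endpoint and port from the cluster identifier
    cluster = redshift.describe_clusters(
        ClusterIdentifier=event["Cluster"]
    )["Clusters"][0]

    # Generate temporary credentials instead of storing a password
    creds = redshift.get_cluster_credentials(
        DbUser=event["Username"],
        DbName=event["Database"],
        ClusterIdentifier=event["Cluster"],
        AutoCreate=False,
    )

    params = build_connection_params(
        cluster["Endpoint"]["Address"],
        cluster["Endpoint"]["Port"],
        event["Database"],
        creds,
    )
    conn = pg8000.connect(**params)
    cursor = conn.cursor()
    cursor.execute(event["ExecutionQuery"])
    conn.commit()
    conn.close()
    return {"statusCode": 200, "body": json.dumps("Query executed")}
```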
Providing the RedshiftExecuteSQLFunction function access to the Amazon Redshift cluster
You need to grant the `RedshiftExecuteSQLFunction` function access to the Amazon Redshift cluster where the queries are to be run. On the CloudFormation Outputs tab for the stack you created earlier, you should have the value for `RedshiftExecuteSQLFunctionSecurityGroup`. We use this value to grant access inside the Amazon Redshift cluster's security group.
For information about managing the Amazon Redshift security group on the EC2-Classic platform, see Amazon Redshift cluster security groups. For instructions on managing security groups on the EC2-VPC platform, see Managing VPC security groups for a cluster.
You can manage the security group via the Amazon VPC console or the Amazon Redshift console. For this post, we use the EC2-VPC platform for our Amazon Redshift cluster and use the Amazon Redshift console to update the security group.
- On the Amazon Redshift console, choose Clusters.
- Choose the Amazon Redshift cluster identifier that you need to grant access to.
- On the Properties tab, in the Network and security section, under VPC security group, find the security group for the cluster.
- Choose the security group starting with `sg-`.

This opens a new window to manage the security group.

- In the new window, choose the security group ID that begins with `sg-`.
- On the Inbound rules tab, choose Edit inbound rules to grant access to the Lambda function.
- Choose Add rule.
- For Type, choose Redshift.
This should populate the protocol and port range. If you're using a custom port for the cluster, choose Custom TCP for the type and manually enter the port number relevant to your cluster.
- For Source, enter the `RedshiftExecuteSQLFunctionSecurityGroup` value that you recorded from the CloudFormation stack Outputs tab.
- Optionally, add a description for the rule.
- Choose Save rules.
For more information about your VPC security group, see Security groups for your VPC.
Creating event rules with EventBridge
For this post, we schedule the DBA task to drop unused tables every 12 hours. We're using the `us-east-1` Region. We start by adding an EventBridge rule with an identifiable name.
- On the EventBridge console, choose Create rule.
- For Name, enter `cluster-1-drop-table-12-hours`.
- For Description, add an optional description.
- For Define pattern, select Schedule.
- For Fixed rate every, choose 12 hours.
- In the Select targets section, for Target, choose Lambda function.
- From the Function drop-down menu, choose the function that matches the `RedshiftExecuteSQLFunction` value from the CloudFormation stack Outputs tab.
- In the Configure input section, select Constant (JSON text).
- Add the following JSON data (replace the values for `Cluster`, `Username`, `Database`, and `ExecutionQuery` as appropriate for your cluster). You must provide the cluster identifier for `Cluster`, not the endpoint address. The code locates the endpoint address and port for the cluster.
- Choose Create.
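The constant input for the first rule might look like the following. The key names (`Cluster`, `Username`, `Database`, `ExecutionQuery`) come from the rule configuration described above; the values, including the stored procedure name, are placeholders to replace with your own:

```json
{
  "Cluster": "cluster-1",
  "Username": "dbauser",
  "Database": "dev",
  "ExecutionQuery": "call sp_drop_unused_tables(14)"
}
```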
- Follow the same steps to create a second EventBridge rule.
The following rule triggers the DBA task to stop idle connections every 3 hours. The input data used for this method includes the reference for the stored procedure for stopping the idle connection.
- Add the following JSON data (replacing the values for `Cluster`, `Username`, `Database`, and `ExecutionQuery` as appropriate to your use case):
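For example, the constant input for the idle-connection rule might look like the following (the values, including the stored procedure name, are placeholders for your own cluster details):

```json
{
  "Cluster": "cluster-1",
  "Username": "dbauser",
  "Database": "dev",
  "ExecutionQuery": "call sp_terminate_idle_sessions()"
}
```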
The preceding steps set up two different rules with the same target Lambda function, running two different stored procedures on separate schedules. You can scale this solution by adding rules that run against different Amazon Redshift clusters, or that run multiple SQL statements against the same cluster, each on its own schedule.
Cleaning up
Before you remove the CloudFormation stack, you should remove the EventBridge rule.
- On the EventBridge console, choose Rules.
- Select the first rule you added earlier and choose Delete.
- Choose Delete again to confirm.
- Repeat the same steps for the second rule.
Conclusion
In this post, we provided a solution to automate routine DBA tasks against Amazon Redshift clusters in a secure way. The solution is scalable to support multiple tasks on corresponding schedules on multiple Amazon Redshift clusters. You can extend this solution to handle more routine tasks and simplify your workflow.
About the Authors
Gagan Brahmi is a Specialist Solutions Architect focused on Big Data & Analytics at Amazon Web Services. Gagan has over 15 years of experience in information technology. He helps customers architect and build highly scalable, performant, and secure cloud-based solutions on AWS.
Juan Yu is a Data Warehouse Specialist Solutions Architect at Amazon Web Services, where she helps customers adopt cloud data warehouses and solve analytic challenges at scale. Prior to AWS, she had fun building and enhancing MPP query engines to improve customer experience on Big Data workloads.