AWS Big Data Blog
Centralize near-real-time governance through alerts on Amazon Redshift data warehouses for sensitive queries
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud that delivers powerful and secure insights on all your data with the best price-performance. With Amazon Redshift, you can analyze your data to derive holistic insights about your business and your customers. In many organizations, one or more Amazon Redshift data warehouses run daily for data and analytics purposes. Over time, many Data Definition Language (DDL) or Data Control Language (DCL) queries, such as CREATE, ALTER, DROP, GRANT, or REVOKE, are run on these data warehouses. Such queries are sensitive in nature because they can drop tables or delete data, causing disruptions or outages. Tracking these user queries as part of centralized governance of the data warehouse helps stakeholders understand potential risks and take prompt action to mitigate them, following the operational excellence pillar of the AWS Data Analytics Lens. For a robust governance mechanism, it's therefore crucial to alert or notify the database and security administrators when such sensitive queries are run on the data warehouse, so that prompt remediation actions can be taken if needed.
To address this, in this post we show you how you can automate near-real-time notifications over a Slack channel when certain queries are run on the data warehouse. We also create a simple governance dashboard using a combination of Amazon DynamoDB, Amazon Athena, and Amazon QuickSight.
Solution overview
An Amazon Redshift data warehouse logs information about connections and user activities taking place in databases, which helps monitor the database for security and troubleshooting purposes. These logs can be stored in Amazon Simple Storage Service (Amazon S3) buckets or Amazon CloudWatch. Amazon Redshift logs information in the following log files; this solution uses CloudWatch as the audit log destination:
- Connection log – Logs authentication attempts, connections, and disconnections
- User log – Logs information about changes to database user definitions
- User activity log – Logs each query before it’s run on the database
The following diagram illustrates the solution architecture.
The solution workflow consists of the following steps:
- Audit logging is enabled in each Amazon Redshift data warehouse to capture the user activity log in CloudWatch.
- Subscription filters on CloudWatch capture the required DDL and DCL commands by providing filter criteria.
- The subscription filter triggers an AWS Lambda function for pattern matching.
- The Lambda function processes the event data and sends the notification over a Slack channel using a webhook.
- The Lambda function stores the data in a DynamoDB table over which a simple dashboard is built using Athena and QuickSight.
Prerequisites
Before starting the implementation, make sure the following requirements are met:
- You have an AWS account.
- The AWS Region used for this post is us-east-1. However, this solution is relevant in any other Region where the necessary AWS services are available.
- Permissions to create a Slack workspace.
Create and configure an Amazon Redshift cluster
For this post, we use three Amazon Redshift data warehouses: demo-cluster-ou1, demo-cluster-ou2, and demo-cluster-ou3. All three are provisioned clusters; however, the same solution applies to Amazon Redshift Serverless.

To set up your clusters, complete the following steps:
- To enable audit logging with CloudWatch as the log delivery destination, open an Amazon Redshift cluster and go to the Properties tab.
- On the Edit menu, choose Edit audit logging.
- Select Turn on under Configure audit logging.
- Select CloudWatch for Log export type.
- Select all three options for User log, Connection log, and User activity log.
- Choose Save changes.
- Create a parameter group with enable_user_activity_logging set to true for each of the clusters.
- Modify each cluster to attach its new parameter group.
For this post, we create three custom parameter groups: custom-param-grp-1, custom-param-grp-2, and custom-param-grp-3, one for each cluster.
Note that if you enable only the audit logging feature but not the associated parameter, the database audit logs capture information for the connection log and user log only, not the user activity log.
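If you prefer to script these steps, the following AWS CLI sketch shows hedged equivalents of the preceding console actions, assuming a recent AWS CLI version; repeat for each cluster and parameter group:

```bash
# Hedged CLI sketch of the console steps above; repeat per cluster.
# Enable audit logging with CloudWatch as the destination.
aws redshift enable-logging \
  --cluster-identifier demo-cluster-ou1 \
  --log-destination-type cloudwatch \
  --log-exports connectionlog userlog useractivitylog

# Set enable_user_activity_logging in the custom parameter group.
aws redshift modify-cluster-parameter-group \
  --parameter-group-name custom-param-grp-1 \
  --parameters ParameterName=enable_user_activity_logging,ParameterValue=true

# Attach the parameter group to the cluster.
aws redshift modify-cluster \
  --cluster-identifier demo-cluster-ou1 \
  --cluster-parameter-group-name custom-param-grp-1
```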
- On the CloudWatch console, choose Log groups under Logs in the navigation pane.
- Search for /aws/redshift/cluster/demo.
This will show all the log groups created for the Amazon Redshift clusters.
Create a DynamoDB audit table
To create your audit table, complete the following steps:
- On the DynamoDB console, choose Tables in the navigation pane.
- Choose Create table.
- For Table name, enter demo_redshift_audit_logs.
- For Partition key, enter partKey with the data type String.
- Keep the table settings as default.
- Choose Create table.
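Alternatively, a hedged AWS CLI equivalent of these console steps could look like the following; the billing mode here is an assumption, and the console's default table settings may differ:

```bash
# Hedged sketch: create the audit table from the CLI.
aws dynamodb create-table \
  --table-name demo_redshift_audit_logs \
  --attribute-definitions AttributeName=partKey,AttributeType=S \
  --key-schema AttributeName=partKey,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```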
Create Slack resources
Slack Incoming Webhooks expect a JSON request with a message string corresponding to a "text" key. They also support message customization, such as adding a user name and icon, or overriding the webhook's default channel. For more information, see Sending messages using Incoming Webhooks on the Slack website.
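For example, a minimal webhook payload carrying only the required "text" key looks like this (the message string itself is illustrative):

```json
{"text": "Sensitive query detected on demo-cluster-ou1"}
```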
The following resources are created for this post:
- A Slack workspace named demo_rc
- A channel named #blog-demo in the newly created Slack workspace
- A new Slack app in the Slack workspace named demo_redshift_ntfn (using the From Scratch option)
- Note down the incoming webhook URL, which is used later in this post for sending the notifications
Create an IAM role and policy
In this section, we create an AWS Identity and Access Management (IAM) policy that will be attached to an IAM role. The role is then used to grant a Lambda function access to a DynamoDB table. The policy also includes permissions to allow the Lambda function to write log files to Amazon CloudWatch Logs.
- On the IAM console, choose Policies in the navigation pane.
- Choose Create policy.
- In the Create policy section, choose the JSON tab and enter the IAM policy (a sketch is shown after these steps). Make sure you replace XXXXXXXX with your AWS account ID in the policy.
- Choose Next: Tags, then choose Next: Review.
- Provide the policy name demo_post_policy and choose Create policy.
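The following is a minimal sketch of such a policy, granting the Lambda function write access to the DynamoDB table and to CloudWatch Logs; it assumes the table name and Region used in this post:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DynamoDBWriteAccess",
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:XXXXXXXX:table/demo_redshift_audit_logs"
    },
    {
      "Sid": "CloudWatchLogsAccess",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:XXXXXXXX:*"
    }
  ]
}
```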
To apply demo_post_policy to a Lambda function, you first have to attach the policy to an IAM role.
- On the IAM console, choose Roles in the navigation pane.
- Choose Create role.
- Select AWS service and then select Lambda.
- Choose Next.
- On the Add permissions page, search for demo_post_policy.
- Select demo_post_policy from the list of returned search results, then choose Next.
- On the Review page, enter demo_post_role for the role name and an appropriate description, then choose Create role.
Create a Lambda function
We create a Lambda function with Python 3.9. In the following code, replace the slack_hook parameter with the Slack webhook URL you copied earlier.
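The following is a minimal sketch of such a handler, not a definitive implementation. It assumes the event is a CloudWatch Logs subscription payload (gzip-compressed and base64-encoded under awslogs.data), writes to the demo_redshift_audit_logs table with partition key partKey, and stores the illustrative attributes logGroup, sqlTimestamp, and message:

```python
import base64
import gzip
import json
import uuid
from datetime import datetime, timezone

import boto3
import urllib3

# Replace with the incoming webhook URL you noted earlier.
slack_hook = "https://hooks.slack.com/services/XXXX/XXXX/XXXX"

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("demo_redshift_audit_logs")
http = urllib3.PoolManager()


def lambda_handler(event, context):
    # CloudWatch Logs subscription payloads arrive gzip-compressed
    # and base64-encoded under awslogs.data.
    payload = base64.b64decode(event["awslogs"]["data"])
    log_data = json.loads(gzip.decompress(payload))

    log_group = log_data["logGroup"]
    for log_event in log_data["logEvents"]:
        message = log_event["message"]
        # Format the event time to match the dashboard's expected
        # yyyy-MM-dd'T'HH:mm:ssZZ pattern.
        sql_timestamp = datetime.fromtimestamp(
            log_event["timestamp"] / 1000, tz=timezone.utc
        ).strftime("%Y-%m-%dT%H:%M:%S%z")

        # Send the notification to the Slack channel via the webhook.
        slack_payload = {
            "text": f"Sensitive query detected on {log_group} at {sql_timestamp}:\n{message}"
        }
        http.request(
            "POST",
            slack_hook,
            body=json.dumps(slack_payload),
            headers={"Content-Type": "application/json"},
        )

        # Persist the event in DynamoDB for the governance dashboard.
        table.put_item(
            Item={
                "partKey": str(uuid.uuid4()),
                "logGroup": log_group,
                "sqlTimestamp": sql_timestamp,
                "message": message,
            }
        )

    return {"statusCode": 200}
```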
Create your function with the following steps:
- On the Lambda console, choose Create function.
- Select Author from scratch and for Function name, enter demo_function.
- For Runtime, choose Python 3.9.
- For Execution role, select Use an existing role and choose demo_post_role as the IAM role.
- Choose Create function.
- On the Code tab, enter the preceding Lambda function and replace the Slack webhook URL.
- Choose Deploy.
Create a CloudWatch subscription filter
We need to create the CloudWatch subscription filter on the useractivitylog log group created by the Amazon Redshift clusters.
- On the CloudWatch console, navigate to the log group /aws/redshift/cluster/demo-cluster-ou1/useractivitylog.
- On the Subscription filters tab, on the Create menu, choose Create Lambda subscription filter.
- Choose demo_function as the Lambda function.
- For Log format, choose Other.
- Provide the subscription filter pattern as ?create ?alter ?drop ?grant ?revoke.
- Provide the filter name as Sensitive Queries demo-cluster-ou1.
- Test the filter by selecting the actual log stream. If it has any queries with a matching pattern, you can see some results. For testing, use a sample log line like the one shown after this list and choose Test pattern.
- Choose Start streaming.
- Repeat the same steps for /aws/redshift/cluster/demo-cluster-ou2/useractivitylog and /aws/redshift/cluster/demo-cluster-ou3/useractivitylog, giving each subscription filter a unique name.
- Complete the preceding steps to create a second subscription filter for each of the Amazon Redshift data warehouses with the filter pattern ?CREATE ?ALTER ?DROP ?GRANT ?REVOKE, so that uppercase SQL commands are also captured by this solution.
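For illustration, a user activity log event generally takes the following shape; the db, user, pid, and xid values here are placeholders, and you can paste a line like this into the test input:

```
'2024-01-15T10:23:45Z UTC [ db=dev user=awsuser pid=12345 userid=100 xid=567890 ]' LOG: create table customer_demo (id integer);
```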
Test the solution
In this section, we test the solution in the three Amazon Redshift clusters created in the previous steps, verifying both the Slack notifications triggered by the CloudWatch subscription filters and the data ingested into the DynamoDB table. We use the following commands to test the solution; however, the solution is not restricted to these commands. You can test with other DDL or DCL commands that match the filter criteria in your Amazon Redshift cluster.
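As a hedged sample, the following throwaway statements match the filter patterns and will trigger notifications:

```sql
-- Illustrative DDL/DCL statements that match the subscription filter patterns.
CREATE TABLE sales_demo (id INT, amount DECIMAL(10,2));
ALTER TABLE sales_demo ADD COLUMN region VARCHAR(20);
GRANT SELECT ON sales_demo TO PUBLIC;
REVOKE SELECT ON sales_demo FROM PUBLIC;
DROP TABLE sales_demo;
```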
In the Slack channel, details of the notifications look like the following screenshot.
To get the results in DynamoDB, complete the following steps:
- On the DynamoDB console, choose Explore items under Tables in the navigation pane.
- In the Tables pane, select demo_redshift_audit_logs.
- Select Scan, then choose Run to get the results in the table.
Athena federation over the DynamoDB table
The Athena DynamoDB connector enables Athena to communicate with DynamoDB so that you can query your tables with SQL. As part of the prerequisites for this, deploy the connector to your AWS account using the Athena console or the AWS Serverless Application Repository. For more details, refer to Deploying a data source connector or Using the AWS Serverless Application Repository to deploy a data source connector. For this post, we use the Athena console.
- On the Athena console, under Administration in the navigation pane, choose Data sources.
- Choose Create data source.
- Select the data source as Amazon DynamoDB, then choose Next.
- For Data source name, enter dynamo_db.
- For Lambda function, choose Create Lambda function to open a new window with the Lambda console.
- Under Application settings, enter the following information:
  - For Application name, enter AthenaDynamoDBConnector.
  - For SpillBucket, enter the name of an S3 bucket.
  - For AthenaCatalogName, enter dynamo.
  - For DisableSpillEncryption, enter false.
  - For LambdaMemory, enter 3008.
  - For LambdaTimeout, enter 900.
  - For SpillPrefix, enter athena-spill-dynamo.
- Select I acknowledge that this app creates custom IAM roles and choose Deploy.
- Wait for the function to deploy, then return to the Athena window and choose the refresh icon next to Lambda function.
- Select the newly deployed Lambda function and choose Next.
- Review the information and choose Create data source.
- Navigate back to the query editor, then choose dynamo_db for Data source and default for Database.
- Run the following query in the editor to check the sample data:
Visualize the data in QuickSight
In this section, we create a simple governance dashboard in QuickSight using Athena in direct query mode to query the record set, which is persistently stored in a DynamoDB table.
- Sign up for QuickSight on the QuickSight console.
- Select Amazon Athena as a resource.
- Choose Lambda and select the Lambda function created for DynamoDB federation.
- Create a new dataset in QuickSight with Athena as the source.
- Provide the data source name as demo_blog.
- Choose dynamo_db for Catalog, default for Database, and demo_redshift_audit_logs for Table.
- Choose Edit/Preview data.
- For the sqlTimestamp column, choose String and change it to Date.
- In the dialog box that appears, enter the data format yyyy-MM-dd'T'HH:mm:ssZZ.
- Choose Validate and Update.
- Choose PUBLISH & VISUALIZE.
- Choose Interactive sheet and choose CREATE.
This will take you to the visualization page to create the analysis on QuickSight.
- Create a governance dashboard with the appropriate visualization type.
Refer to the Amazon QuickSight learning videos in the QuickSight community for authoring guidance from basic to advanced levels. The following screenshot is a sample visualization created from this data.
Clean up
Clean up your resources with the following steps:
- Delete all the Amazon Redshift clusters.
- Delete the Lambda function.
- Delete the CloudWatch log groups for Amazon Redshift and Lambda.
- Delete the Athena data source for DynamoDB.
- Delete the DynamoDB table.
Conclusion
Amazon Redshift is a powerful, fully managed data warehouse that can offer significantly increased performance and lower cost in the cloud. In this post, we discussed a pattern to implement a governance mechanism to identify and notify sensitive DDL/DCL queries on an Amazon Redshift data warehouse, and created a quick dashboard to enable the DBA and security team to take timely and prompt action as required. Additionally, you can extend this solution to include DDL commands used for Amazon Redshift data sharing across clusters.
Operational excellence is a critical part of overall data governance when creating a modern data architecture, because it's a key enabler for driving business value. Ideally, any data governance implementation is a combination of people, process, and technology that organizations use to ensure the quality and security of their data throughout its lifecycle. Use these instructions to set up an automated notification mechanism that fires as sensitive queries are detected, and to create a quick QuickSight dashboard to track those activities over time.
About the Authors
Rajdip Chaudhuri is a Senior Solutions Architect with Amazon Web Services specializing in data and analytics. He enjoys working with AWS customers and partners on data and analytics requirements. In his spare time, he enjoys soccer and movies.
Dhiraj Thakur is a Solutions Architect with Amazon Web Services. He works with AWS customers and partners to provide guidance on enterprise cloud adoption, migration, and strategy. He is passionate about technology and enjoys building and experimenting in the analytics and AI/ML space.