AWS Database Blog

Monitor and alert on DDL and DCL changes in Amazon RDS for MySQL, Amazon RDS for MariaDB, and Amazon Aurora MySQL

Amazon Relational Database Service (Amazon RDS) monitoring and alerting using Amazon CloudWatch, Amazon RDS Performance Insights, and Amazon RDS Enhanced Monitoring is robust and secure, and often a top choice when troubleshooting Amazon RDS-related issues. You can use CloudWatch alerts for performance metrics like FreeStorageSpace, CPUUtilization, DatabaseConnections, FreeableMemory, NetworkReceiveThroughput, NetworkTransmitThroughput, BufferCacheHitRatio, and more. You can use Enhanced Monitoring and Performance Insights for troubleshooting database performance issues. You can also use Amazon RDS events notifications that go through Amazon Simple Notification Service (Amazon SNS).

A lot comes out of the box, and Amazon RDS alerting covers most focus areas. However, in some cases you may need to be notified of DDL and DCL changes that can affect the availability and stability of your RDS DB instances, and there is no readily available mechanism that provides alerts for this use case.

In this post, we go over setting up logs for Amazon RDS for MySQL, Amazon RDS for MariaDB, and Amazon Aurora MySQL-Compatible Edition for all DDL and DCL changes by integrating with other AWS services such as AWS Lambda and Amazon SNS. Amazon Aurora optionally provides you with database activity streams that you can use to monitor and report activities in your Aurora cluster. For this post, we focus on the Aurora advanced auditing feature.

Solution overview

The MariaDB Audit Plugin is an open-source plugin that provides auditing functionality for the MySQL DB engine. Amazon RDS offers this plugin for MySQL database instances as an option that can be added to an option group. The plugin logs database activity for each client session and records information such as who connected to the server, what queries were run, which tables were accessed, and which server variables were changed in a log file. This audit plugin for MySQL has also been open sourced by AWS and is available on GitHub as audit-plugin-for-mysql.

Aurora MySQL-Compatible uses the advanced auditing feature with a custom DB cluster parameter group to capture database activity. The records stored in the log file are published to CloudWatch Logs for processing, and we use Lambda to parse the published database audit logs and send notifications using Amazon SNS.

The following diagram illustrates the architecture of our solution.

To deploy this solution, we perform the following steps:

  1. For Amazon RDS for MySQL or Amazon RDS for MariaDB, turn on the audit plugin using an option group, and for Aurora MySQL-Compatible, turn on advanced auditing using a parameter group.
  2. Configure and enable auditing to capture DDL and DCL database activities.
  3. Publish the MySQL audit logs to CloudWatch Logs, and adjust each log group's retention policy to control how long log data is stored.
  4. Create an SNS topic and a subscription.
  5. Create a Lambda function and add a trigger that initiates the Lambda function to parse the database logs.

Prerequisites

Follow the instructions for creating the database on the AWS Management Console.

Follow the instructions for setting up the AWS Command Line Interface (AWS CLI) so you can interact with AWS services using commands in your command line shell.

Turn on the audit plugin for Amazon RDS for MySQL

The default option group for Amazon RDS for MySQL does not have the audit plugin enabled. Because the default option group can’t be modified, we create a custom option group and add an option to apply it to the RDS instance. To turn on the audit plugin, complete the following steps:

  1. On the Amazon RDS console, choose Option groups in the navigation pane.
  2. Choose Create group.
  3. For Name, enter a name (for example, custom-option-grp-mysql-5-7).
  4. For Description, enter a description.
  5. For Engine, choose mysql.
  6. For Major Engine Version, choose your engine version (5.7).
  7. Choose Create.
  8. On the Option groups page, select your option group and choose Add option.
  9. For Option name, choose MARIADB_AUDIT_PLUGIN.
  10. In the Option settings section, modify the required parameter values as needed:
    1. For Option setting, use SERVER_AUDIT_EVENTS.
    2. For Value, use QUERY_DDL,QUERY_DCL.
  11. For Apply immediately, select Yes.
  12. Choose Add option.

To use the AWS CLI, in a terminal window in the Amazon Linux 2 EC2 instance, run the following command to create an option group:

aws rds create-option-group \
    --option-group-name custom-option-grp-mysql-5-7 \
    --engine-name mysql \
    --major-engine-version 5.7 \
    --option-group-description "MySQL 5.7 Audit Logging enabled"

To add an option to the option group, use the following code:

aws rds add-option-to-option-group \
 --option-group-name custom-option-grp-mysql-5-7 \
 --options '[{"OptionSettings":[{"Name":"SERVER_AUDIT_EVENTS","Value":"QUERY_DDL,QUERY_DCL"}],"OptionName":"MARIADB_AUDIT_PLUGIN"}]' \
 --apply-immediately
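The inline JSON passed to --options is easy to break with shell quoting. As a quick local sanity check (not an AWS call, and not part of the CLI workflow itself), you can parse the same payload in Python before running the command:

```python
import json

# The same payload passed to --options above; parsing it locally
# catches quoting or structure mistakes before the CLI call.
options = '[{"OptionSettings":[{"Name":"SERVER_AUDIT_EVENTS","Value":"QUERY_DDL,QUERY_DCL"}],"OptionName":"MARIADB_AUDIT_PLUGIN"}]'

parsed = json.loads(options)
assert parsed[0]["OptionName"] == "MARIADB_AUDIT_PLUGIN"
assert parsed[0]["OptionSettings"][0]["Value"] == "QUERY_DDL,QUERY_DCL"
print("payload OK")
```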

Turn on advanced auditing for Aurora MySQL-Compatible

To enable the advanced audit feature in Aurora MySQL-Compatible, you must create a custom DB cluster parameter group and enable advanced auditing. To achieve this, complete the following steps:

  1. On the Amazon RDS console, choose Parameter groups.
  2. Choose Create parameter group.
  3. For Parameter group family, choose aurora-mysql5.7.
  4. For Type, choose DB Cluster Parameter Group.
  5. For Group name, enter a name (for example, custom-aurora-mysql-cluster-5-7).
  6. For Description, enter a description.
  7. Choose Create.
  8. On the Parameter groups page, select your parameter group and choose Edit on the Actions menu.
  9. In the Parameters section, modify the parameter values as needed:
    1. For server_audit_logging, set the value to 1.
    2. For server_audit_events, set the value to QUERY_DDL,QUERY_DCL.
  10. Choose Save changes.

To use the AWS CLI, in a terminal window in the Amazon Linux 2 EC2 instance, run the following command to create a parameter group:

aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name custom-aurora-mysql-cluster-5-7 \
    --db-parameter-group-family aurora-mysql5.7 \
    --description "Aurora MySQL cluster parameter group with audit enabled"

To enable auditing in the parameter group, use the following code:

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name custom-aurora-mysql-cluster-5-7 \
    --parameters "ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=server_audit_events,ParameterValue='QUERY_DDL,QUERY_DCL',ApplyMethod=immediate"
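If you prefer scripting this step with boto3 instead of the AWS CLI, the same two settings map to a Parameters list. A minimal sketch; the API call itself needs AWS credentials, so it is shown commented out:

```python
# Sketch: the parameter changes above expressed for boto3's
# modify_db_cluster_parameter_group. The commented-out API call
# requires AWS credentials and is not executed here.
parameters = [
    {"ParameterName": "server_audit_logging", "ParameterValue": "1", "ApplyMethod": "immediate"},
    {"ParameterName": "server_audit_events", "ParameterValue": "QUERY_DDL,QUERY_DCL", "ApplyMethod": "immediate"},
]

# import boto3
# rds = boto3.client("rds")
# rds.modify_db_cluster_parameter_group(
#     DBClusterParameterGroupName="custom-aurora-mysql-cluster-5-7",
#     Parameters=parameters,
# )
print(len(parameters))
```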

Modify Amazon RDS for MySQL (or MariaDB) or Aurora MySQL-Compatible Edition

You now assign the custom option group to the Amazon RDS for MySQL DB instance or the cluster parameter group to the Aurora MySQL-Compatible DB instance. To achieve this, complete the following steps:

  1. On the Amazon RDS console, choose Databases in the navigation pane.
  2. Select your DB instance and choose Modify.
  3. Under Additional configuration, in the Database options section, modify the following:
    1. For Amazon RDS for MySQL or MariaDB, choose the custom option group you created.
    2. For Aurora MySQL-Compatible, choose the custom parameter group you created.

Note that attaching this new parameter group to a DB instance requires you to reboot the DB instance. For Aurora MySQL-Compatible, it requires that you reboot the cluster member DB instance after you modify the cluster.

  4. Choose Continue.
  5. Under Schedule modifications, select Apply immediately.
  6. Choose Modify DB instance or Modify cluster.
  7. To validate the changes, run the following in a terminal window in the Amazon Linux 2 EC2 instance:
mysql -h <DB_ENDPOINT> -u <USERNAME> -p
Enter password:

MySQL [(none)]> SHOW VARIABLES LIKE '%server_audit_logging%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| server_audit_logging | ON    |
+----------------------+-------+
1 row in set (0.00 sec)

MySQL [(none)]> SHOW VARIABLES LIKE '%server_audit_events%';
+---------------------+---------------------+
| Variable_name       | Value               |
+---------------------+---------------------+
| server_audit_events | QUERY_DDL,QUERY_DCL |
+---------------------+---------------------+
1 row in set (0.00 sec)
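If you want to script that validation (for example, with a MySQL client such as PyMySQL), you only need to compare the returned rows against the expected values. A small sketch; check_audit_settings and the sample rows are illustrative, not part of the original post:

```python
def check_audit_settings(rows):
    """Return True if SHOW VARIABLES rows confirm auditing is configured.

    rows: iterable of (variable_name, value) tuples, as a MySQL client
    would return them for SHOW VARIABLES LIKE 'server_audit%'.
    """
    settings = dict(rows)
    return (
        settings.get("server_audit_logging") == "ON"
        and settings.get("server_audit_events") == "QUERY_DDL,QUERY_DCL"
    )

# Sample rows matching the console output shown above
sample = [("server_audit_logging", "ON"),
          ("server_audit_events", "QUERY_DDL,QUERY_DCL")]
print(check_audit_settings(sample))  # True
```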

To use the AWS CLI, in a terminal window in the Amazon Linux 2 EC2 instance, run the following command to modify Amazon RDS for MySQL or MariaDB to associate with the new option group:

aws rds modify-db-instance \
    --db-instance-identifier rds-mysql-01 \
    --option-group-name custom-option-grp-mysql-5-7 \
    --apply-immediately

Use the following code to validate:

aws rds describe-db-instances \
    --db-instance-identifier rds-mysql-01

To modify Aurora MySQL-Compatible to associate with the new parameter group, use the following code:

aws rds modify-db-cluster \
    --db-cluster-identifier aurora-mysql-01 \
    --db-cluster-parameter-group-name custom-aurora-mysql-cluster-5-7 \
    --apply-immediately

Use the following code to validate:

aws rds describe-db-clusters \
    --db-cluster-identifier aurora-mysql-01

We get the following sample output:

DBCLUSTERMEMBERS        pending-reboot  aurora-mysql-01-instance-1       True

To reboot the DB cluster member for the changes to take effect, use the following code:

aws rds reboot-db-instance \
    --db-instance-identifier aurora-mysql-01-instance-1

Publish the MySQL audit logs to CloudWatch Logs

After you enable the audit option, you publish the database logs to the CloudWatch log group. To achieve this, complete the following steps:

  1. On the Amazon RDS console, choose Databases in the navigation pane.
  2. Select your DB instance and choose Modify.
  3. Under Additional configuration, in the Log exports section, select Audit log (optionally, select Error log, General log, and Slow query log).
  4. Choose Continue.
  5. Under Schedule modifications, select Apply immediately.
  6. Choose Modify DB instance or Modify cluster.

To use the AWS CLI, in a terminal window in the Amazon Linux 2 EC2 instance, run the following command to modify for Amazon RDS for MySQL:

aws rds modify-db-instance \
    --db-instance-identifier rds-mysql-01 \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'

To modify for Aurora MySQL-Compatible, use the following code:

aws rds modify-db-cluster \
    --db-cluster-identifier aurora-mysql-01 \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'

Create an SNS topic

An SNS topic acts as a communication channel. To create an SNS topic, complete the following steps:

  1. On the Amazon SNS console, choose Topics in the navigation pane.
  2. Choose Create topic.
  3. For Type, select Standard.
  4. For Name, enter a name (for example, lambda-rds-notification).
  5. Provide any optional details as per your enterprise standards.
  6. Choose Create topic.

To use the AWS CLI, in a terminal window in the Amazon Linux 2 EC2 instance, run the following command:

aws sns create-topic --name lambda-rds-notification
aws sns set-topic-attributes --topic-arn arn:aws:sns:[aws-region]:[your-account-number]:lambda-rds-notification --attribute-name "KmsMasterKeyId" --attribute-value KMS-KEY

Create an SNS subscription

To receive messages published to an SNS topic, you must subscribe an endpoint to the topic; the endpoint then receives every message published to that topic. You have multiple endpoint types to choose from; to integrate with external services and products, use the HTTPS protocol with an integration URL. For this post, we use email as the protocol and an email address as the endpoint. To create your subscription, complete the following steps:

  1. On the Amazon SNS console, choose Subscriptions in the navigation pane.
  2. Choose Create subscription.
  3. For Topic ARN, choose the SNS topic you created earlier.
  4. For Protocol, choose Email.
  5. For Endpoint, enter the email address to receive event notifications.
  6. Provide any optional details as per your enterprise standards.
  7. Choose Create subscription.

To use the AWS CLI, in a terminal window in the Amazon Linux 2 EC2 instance, run the following command:

aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:00000:lambda-rds-notification \
    --protocol email \
    --notification-endpoint emailuser@domain.com

You should receive an email asking to confirm the subscription.

  8. Choose Confirm Subscription.

Create an IAM policy and role for the Lambda function

A Lambda function’s execution role is an AWS Identity and Access Management (IAM) role that grants the function permission to access AWS services and resources. Here we create an execution role that has permission to access CloudWatch and Amazon SNS. You provide an execution role when you create a function. When you invoke your function, Lambda automatically provides your function with temporary credentials by assuming this role. You don’t have to call sts:AssumeRole in your function code. In order for Lambda to properly assume your execution role, the role’s trust policy must specify the Lambda service principal (lambda.amazonaws.com) as a trusted service. To create your policy and role, complete the following steps:

  1. Create an IAM role with the following trust policy (lambda_trust_RDS_notification.json) to allow Lambda to assume the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Principal": {
                "Service": [
                    "lambda.amazonaws.com"
                ]
            }
        }
    ]
}
  2. Create the role:
aws iam create-role \
    --role-name lambdarole_RDS_notification \
    --assume-role-policy-document file://lambda_trust_RDS_notification.json
  3. Add the following inline policy (lambda_policy_RDS_notification.json) to allow Lambda to access the needed resources:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:logs:[aws-region]:[your-account-number]:log-group:[AuditLogGroup]:*",
                "arn:aws:sns:[aws-region]:[your-account-number]:[SNS-TOPIC]",
                "arn:aws:lambda:[aws-region]:[your-account-number]:function:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:[aws-region]:[your-account-number]:log-group:/aws/lambda/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:CreateNetworkInterface",
                "ec2:DeleteNetworkInterface",
                "ec2:DescribeInstances",
                "ec2:AttachNetworkInterface"
            ],
            "Resource": "*"
        }
    ]
}
  4. Attach the policy to the role:
aws iam put-role-policy \
    --role-name lambdarole_RDS_notification \
    --policy-name lambdapolicy_RDS_notification \
    --policy-document file://lambda_policy_RDS_notification.json
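Before wiring the role to a function, you can sanity-check the trust policy locally. This sketch inlines the same trust policy shown above and verifies that it names the Lambda service principal, which the execution role requires:

```python
import json

# Inline copy of the trust policy above; verifying it names the
# Lambda service principal that the execution role requires.
trust_policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sts:AssumeRole"],
            "Principal": {"Service": ["lambda.amazonaws.com"]}
        }
    ]
}
""")

services = trust_policy["Statement"][0]["Principal"]["Service"]
assert "lambda.amazonaws.com" in services
print("trust policy OK")
```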

Create your Lambda function and add a trigger

The Lambda function here uses a CloudWatch log subscription added as a function trigger that lets us match entries with a particular error pattern in our log. It also uses Amazon SNS to send an email with the match pattern along with the CloudWatch log group and log stream. To create your function and add a trigger, complete the following steps:

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose Create function.
  3. For Function name, enter RDS-Notification.
  4. For Runtime, choose Python 3.9 (or any Python runtime later than 3.7).
  5. For Execution role, select Use an existing role.
  6. For Existing role, choose the role you created (lambdarole_RDS_notification).
  7. Choose Create function.
  8. On the function details page, navigate to the function code.
  9. Delete the sample code and enter the following:
    The following example Lambda function parses the CloudWatch Logs payload, filters log events that contain specific keywords, and sends them to the SNS topic in batches defined by log_batch_size.

    # Parse the CloudWatch Logs payload delivered to the function, filter
    # log events for DDL/DCL keywords, and publish them to Amazon SNS in batches
    import json
    import boto3
    import os
    import base64
    import gzip
    
    SNSTopic = os.environ["SNSTopicARN"]
    
    def lambda_handler(event, context):
        # CloudWatch Logs delivers the payload base64-encoded and gzip-compressed
        outEvent = event['awslogs']['data']
        outEvent = base64.b64decode(outEvent)
        outEvent = gzip.decompress(outEvent)
        cleanEvent = json.loads(outEvent)
        log_events = cleanEvent['logEvents']
        keywords = ['drop', 'create', 'alter', 'grant', 'revoke']
        filtered_logevents = [log_event for log_event in log_events
                              if any(keyword in log_event['message'].lower() for keyword in keywords)]
        log_batch_size = 100  # Number of log messages per batch
        log_group = cleanEvent['logGroup']
        log_stream = cleanEvent['logStream']
    
        
        # Parse log events
        grouped_logs = group_logs(filtered_logevents, log_batch_size)
        
        # Send SNS notification for each log batch
        for batch in grouped_logs:
            send_sns_notification(log_group, log_stream, batch)
    
    def group_logs(logs, batch_size):
        grouped_logs = []
        current_batch = []
        
        for log in logs:
            current_batch.append(log)
            if len(current_batch) == batch_size:
                grouped_logs.append(current_batch)
                current_batch = []
        
        if current_batch:
            grouped_logs.append(current_batch)
        
        return grouped_logs
    
    def send_sns_notification(log_group, log_stream, log_batch):
        sns_client = boto3.client('sns')
        
        log_messages = []
        for log in log_batch:
            log_messages.append(log['message'])
        
        message = 'Recorded DDL/DCL changes in Database Logs:\n{}'.format('\n'.join(log_messages))
        subject = 'Database recorded a DDL/DCL change in {}'.format(log_group)
    
        sns_client.publish(
            TopicArn=SNSTopic,
            Message=message,
            Subject=subject
        )
    
  10. Choose Deploy.
  11. Navigate to the Configuration tab and choose Environment Variables.
  12. Choose Edit and add the following environment variables:
    1. For Key, enter SNSTopicARN.
    2. For Value, enter arn:aws:sns:[aws-region]:[your-account-number]:lambda-rds-notification.
  13. Choose Save.
  14. In the function overview, choose Add trigger.
  15. For Select a source, choose CloudWatch Logs.
  16. For Log group, choose the MySQL audit log group (arn:aws:logs:[aws-region]:[your-account-number]:log-group:/aws/rds/cluster/aurora-mysql01/audit:*).
  17. For Filter name, enter MySQL01-logstream.
  18. For Filter pattern, enter the following code based on your database service. Filter patterns are case sensitive and only return the log events that contain the terms you define. For example, the following filter pattern returns all log events where messages contain the word “DROP” or “drop” but not “dROP.” If that’s not acceptable, you can use a generic QUERY filter and let Lambda perform the needed filtering. For Amazon RDS for MySQL and Amazon RDS for MariaDB, use the following code:
    [col1, col2 = *DROP || col2 = *drop || col2 = *create || col2 = *CREATE || col2 = *alter || col2 = *ALTER || col2 = *grant || col2 = *GRANT || col2 = *revoke || col2 = *REVOKE, ...]

    Alternatively, use the generic filter:

    "QUERY"

    When using the generic QUERY filter, all queries recorded in the log will be processed by Lambda. This may be acceptable if the audit log is configured to only record DDL and DCL queries. Due to cost and efficiency reasons, we don’t recommend using the generic filter if your database uses a less restrictive logging configuration (for example, logging all DDLs and DCLs, but also DMLs and SELECTs).

    For Aurora MySQL-Compatible, use the following code:

    [col1, col2 = *DROP || col2 = *drop || col2 = *create || col2 = *CREATE || col2 = *alter || col2 = *ALTER || col2 = *grant || col2 = *GRANT || col2 = *revoke || col2 = *REVOKE, ...]

    Alternatively, use the generic filter:

    "QUERY"
  19. Choose Add.
  20. Navigate to the Code tab and choose Test to test the function.
  21. For Event name, enter TestEvent.
  22. For Template, choose cloudwatch-logs.
  23. Choose Save.
  24. Choose Test again to test the function.

The test simulates a notification using the SNS topic.
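You can also exercise the handler's decode-and-filter path entirely locally by building a payload the same way CloudWatch Logs does (JSON, gzipped, then base64-encoded). A minimal sketch; the log group name and log messages here are made up for illustration:

```python
import base64
import gzip
import json

# Build a payload shaped like the 'awslogs' field of a CloudWatch Logs
# trigger event: JSON -> gzip -> base64, the inverse of the handler's decode.
raw = {
    "logGroup": "/aws/rds/cluster/aurora-mysql-01/audit",
    "logStream": "aurora-mysql-01-instance-1",
    "logEvents": [
        {"message": "QUERY,demodb,DROP TABLE demodb.persons"},
        {"message": "QUERY,demodb,SELECT * FROM demodb.course"},
    ],
}
data = base64.b64encode(gzip.compress(json.dumps(raw).encode())).decode()
event = {"awslogs": {"data": data}}

# Decode exactly as the Lambda handler does
decoded = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
keywords = ["drop", "create", "alter", "grant", "revoke"]
matched = [e["message"] for e in decoded["logEvents"]
           if any(k in e["message"].lower() for k in keywords)]
print(matched)  # only the DROP statement matches
```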

Solution demonstration

In this section, we demonstrate the solution with some SQL commands: DDL statements like DROP, CREATE, and ALTER, and DCL statements like GRANT and REVOKE. If a batch operation runs hundreds of DDL statements, the solution consolidates those messages into a single notification; if a single statement is run, it is captured as its own notification. It also captures events from prepared statements. The Lambda function extracts the needed information from the log stream and sends an email notification via Amazon SNS.

The following code is for a batch SQL run:

/* Batch DDL changes on Grant,Revoke,
alter,drop 
in the demo DB tables:*/
GRANT SELECT ON demodb.persons TO 'testuser01'@'%';
GRANT SELECT ON demodb.course TO 'testuser01'@'%';
REVOKE SELECT ON demodb.course FROM 'testuser01'@'%';
GRaNt SELECT ON demodb.persons TO 'testuser01'@'%';
gRANT SELECT ON demodb.course TO 'testuser01'@'%';
rEVOKE SELECT ON demodb.course FROM 'testuser01'@'%';
alTER table course modify column course_title varchar(10) NOT NULL;
ALter table persons modify column LastName varchar(10) NOT NULL;
ReVoKe SELECT ON demodb.persons FROM 'testuser01'@'%';
ALter table course modify column course_title varchar(20) NOT NULL;
GRaNt SELECT ON demodb.persons TO 'testuser01'@'%';
select * from users where username like ('%drop%');
select * from users where username like ('%alter%');
dROP table demodb.persons;
DrOp table demodb.course;

The following code is for a prepared statement run:

SET @tableName = 'course';
SET @columnName = 'course_title';
SET @dataType = 'VARCHAR(50)';
SET @sql = CONCAT('ALTER TABLE ', @tableName, ' MODIFY COLUMN ', @columnName, ' ', @dataType);

PREPARE alter_statement FROM @sql;
EXECUTE alter_statement;
DEALLOCATE PREPARE alter_statement;
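The consolidation behavior comes from the group_logs function in the Lambda code: with log_batch_size set to 100, a run of several hundred matched statements produces only a handful of notifications. A quick local check of that batching logic (reproduced from the function above with made-up messages):

```python
def group_logs(logs, batch_size):
    # Same batching as in the Lambda function: fill batches of
    # batch_size, with any remainder as a final shorter batch.
    grouped, current = [], []
    for log in logs:
        current.append(log)
        if len(current) == batch_size:
            grouped.append(current)
            current = []
    if current:
        grouped.append(current)
    return grouped

batches = group_logs([{"message": f"stmt {i}"} for i in range(250)], 100)
print([len(b) for b in batches])  # [100, 100, 50]
```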

For more information on SNS endpoints and quotas, refer to Amazon Simple Notification Service endpoints and quotas.

For a custom message on the SNS notification, you can append any additional text or filter on a particular piece of text to display. You can use the filter() function within the Lambda function to meet the filtering criterion. For example:

filtered_events = filter(lambda log_event: "IP" in log_event['message'], cleanEvent['logEvents'])
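As a runnable illustration of that kind of custom filter (the audit log lines here are made up), the same predicate can be checked locally:

```python
# Illustrative: keep only log events whose message mentions an "IP" marker.
log_events = [
    {"message": "20240101 10:00:00,IP 10.0.0.5,admin,QUERY,DROP TABLE t1"},
    {"message": "20240101 10:00:01,host,app,QUERY,CREATE TABLE t2"},
]
filtered_events = [e for e in log_events if "IP" in e["message"]]
print(len(filtered_events))  # 1
```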

The following screenshots show examples of notification emails.

Clean up

To avoid incurring unexpected charges, delete the AWS resources that are no longer required. For more information about pricing, refer to the Amazon RDS for MySQL, Amazon Aurora, AWS Lambda, Amazon SNS, and Amazon CloudWatch pricing pages.

Conclusion

It is imperative to have proper alerting on your databases, specifically for unwanted actions, to help you detect anomalies and trigger notifications. In this post, we walked through how to use a Lambda function to parse CloudWatch logs and send notifications if a specified action has occurred on the database. We encourage you to try this solution and take advantage of all the benefits of using Lambda on Amazon RDS, and let us know what you think in the comments section. We’re always looking forward to your feedback.


About the Authors

Suman Kontham is a Database Consultant with the Professional Services team at Amazon Web Services (AWS). He helps AWS customers meet their business needs when migrating or transforming their databases to AWS cloud database solutions.

Shankar Padmanabhuni is a Senior Database Consultant with the Professional Services team at Amazon Web Services (AWS). He helps AWS customers migrate and modernize their databases to cloud-native databases on AWS.

Satish Nair is a Database Consultant with AWS Professional Services at Amazon Web Services. Satish specializes in databases, including Amazon RDS and Amazon Aurora. He helps AWS customers adopt the AWS Cloud by building scalable and secure solutions in their migration journey.