AWS Database Blog

Monitor custom database metrics in Amazon RDS for SQL Server using Amazon CloudWatch

In this post, we demonstrate how to create custom Amazon CloudWatch metrics for Amazon RDS for SQL Server using SQL Server Agent jobs and CloudWatch Logs integration. We walk through an example that monitors table size within a SQL Server database; however, you can adapt the same approach to track row counts, database size, job counts, user sessions, and other metrics.

This solution gives you visibility into your Amazon RDS for SQL Server databases without additional infrastructure or third-party tools, so you can make data-driven decisions based on the metrics you define. Because the T-SQL script is yours to modify, the approach scales to virtually any database-level metric.

Solution overview

The solution uses native SQL Server and AWS capabilities to create custom CloudWatch metrics for monitoring database-level factors in Amazon RDS for SQL Server. The process includes these steps:

  1. Create a SQL Server Agent job with a T-SQL script that queries the desired metric (such as table size).
  2. Configure the job to write the metric value to the SQL Server error log using the RAISERROR command.
  3. Schedule the SQL Server Agent job to run at regular intervals.
  4. Use the automatic publishing of RDS error logs to Amazon CloudWatch Logs.
  5. Create a CloudWatch metric filter to extract the metric value from the log entries.
  6. View and monitor the custom metric in CloudWatch, including setting alarms and creating dashboards.

The following diagram shows the flow from Amazon RDS for SQL Server to Amazon CloudWatch.

You gain comprehensive visibility into database-level metrics with this procedure. This solution bridges the gap between SQL Server’s internal metrics and CloudWatch’s monitoring capabilities, allowing for proactive database management.
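The mechanism behind step 2 is the RAISERROR command's WITH LOG option, which writes a low-severity informational message to the SQL Server error log rather than raising an actual error. A minimal sketch (the message text here is a placeholder):

-- Severity 10 marks the message as informational; WITH LOG writes it
-- to the SQL Server error log, which RDS publishes to CloudWatch Logs
RAISERROR ('size of table is 0.57 MB', 10, 1) WITH LOG;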

In this example, you monitor SQL Server table size, which can help with capacity and cost planning—especially during long-term commitment purchases or when understanding the scale to which a table can grow is useful.

Prerequisites

To implement this solution, you need the following prerequisites:

  1. An Amazon RDS for SQL Server DB instance with SQL Server Agent available.
  2. Error log publishing to Amazon CloudWatch Logs enabled on the DB instance.
  3. A client with SQL Server Management Studio (SSMS) installed and network connectivity to the DB instance.

Create a test database

In SQL Server Management Studio (SSMS), choose New Query. Run the following command to create a test database:

CREATE DATABASE TestDB;
GO
USE TestDB;
GO

Create a test table

In the same query window, run the following command:

CREATE TABLE TestTable (
    ID INT PRIMARY KEY IDENTITY(1,1),
    Name NVARCHAR(100),
    CreatedDate DATETIME DEFAULT GETDATE()
);
GO

Populate the test table

For this demo, you populate this test table with 10,000 rows. In the same query window, run the following command to populate the test table:

-- Insert 10,000 rows into the TestTable
DECLARE @Counter INT = 1;
WHILE @Counter <= 10000
BEGIN
    INSERT INTO TestTable (Name)
    VALUES ('Sample Row ' + CAST(@Counter AS NVARCHAR(10)));
    SET @Counter = @Counter + 1;
END
GO

You can confirm that the test table has been populated by running the following command to get the row count:

SELECT COUNT(*) FROM TestTable;
GO
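Before automating the measurement, you can check the table's current size interactively. As one illustration, the sp_spaceused system procedure reports the space a table consumes:

-- Report reserved, data, index, and unused space for the table
EXEC sp_spaceused N'dbo.TestTable';
GO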

You now have a database and table configured on your Amazon RDS for SQL Server DB instance. In the next section, you create a SQL Server Agent job to monitor the table size.

Create a SQL Server Agent job

Next, create the SQL Server Agent job that gathers the table size and writes it to the RDS for SQL Server error log.

  1. In SQL Server Management Studio (SSMS), create a new SQL Server Agent Job.
  2. Provide a Name for the SQL Server Agent Job.
  3. Select Steps.
  4. Provide a step name, set the database to the TestDB database you created previously, and paste the following script into the Command box:
    DECLARE @TableSize VARCHAR(100);
    DECLARE @Message VARCHAR(500);
    SELECT @TableSize = CAST((SUM(reserved_page_count) * 8.0) / 1024 AS VARCHAR(50))
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('dbo.TestTable');
    SET @Message = 'size of table is ' + @TableSize + ' MB';
    RAISERROR (@Message, 10, 1) WITH LOG;

  5. Select OK.
  6. Select Schedules and New. Provide a name for the job schedule and configure the schedule details. For this demo, set Frequency to Daily and configure it to recur every 1 minute.
  7. Select OK, and then OK again to save the new SQL Server Agent job.
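The same job-step pattern extends to other metrics. As an illustration (not part of the original walkthrough), the following variant logs the total data-file size of the current database instead of a single table's size. If you use it, adjust the metric filter pattern in the later section to match the new message text:

DECLARE @DbSize VARCHAR(50);
DECLARE @Message VARCHAR(500);
-- sys.database_files reports size in 8 KB pages; convert to MB
SELECT @DbSize = CAST(SUM(size) * 8.0 / 1024 AS VARCHAR(50))
FROM sys.database_files
WHERE type_desc = 'ROWS';
SET @Message = 'size of database is ' + @DbSize + ' MB';
RAISERROR (@Message, 10, 1) WITH LOG;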

Confirm table size is being published to the error logs

To ensure our script is running correctly, complete the following steps:

  1. Navigate to the Amazon RDS console, select your DB instance, and then select the Logs & events tab.
  2. Scroll down and select the log/ERROR log.
  3. You should see the table size being printed in the error log, for example:
2025-12-18 15:11:00.49 spid66      size of table is 0.570312 MB
2025-12-18 15:12:00.49 spid63      size of table is 0.570312 MB
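You can also read the error log directly from SSMS. Amazon RDS for SQL Server provides the rdsadmin.dbo.rds_read_error_log procedure for this purpose (verify the parameters against the RDS documentation for your engine version):

-- @index = 0 reads the most recent log; @type = 1 selects the error log
-- (use @type = 2 for the SQL Server Agent log)
EXEC rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;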

Create a CloudWatch Metric Filter

Next, we create a CloudWatch metric filter to convert the table size entries in the error logs into a CloudWatch metric. Review the CloudWatch pricing documentation to understand custom metrics costs.

  1. Navigate to the CloudWatch console, select Logs, and then select Log groups.
  2. Find your error log group, which should be in the format of /aws/rds/instance/<your-instance-name>/error. Select your log group.
  3. Then, select Metric Filters, and then select Create Metric Filter.
  4. For the Filter pattern, use:
    [timestamp, time, spid, msg="size", of, table, is, tablesize, unit="MB"]
  5. (Optional) If you are monitoring other metrics, adjust the filter pattern accordingly.
  6. Leave the Field select criteria empty.
  7. For Select log data to test, you can select your DB Instance node, which will show its log event messages, and then select Test Pattern to test with the filter pattern.
  8. Select Next.
  9. Provide a Filter name, Metric namespace, Metric name, and Metric value. For this demo, we use TableSizeTracker as the filter name, Database as the metric namespace, and TableSizeMB as the metric name. For the metric value, use $tablesize, which extracts the table size token matched by the filter pattern and publishes it to the metric.
  10. Select Next, and then save your changes.

View your Custom CloudWatch Metric

Now, confirm that CloudWatch publishes the table size metric. Wait a few minutes for CloudWatch to generate the namespace and metric.

  1. In the CloudWatch console, in the navigation pane, select Metrics, and then All metrics.
  2. Select your custom namespace. For this demo, it is Database.
  3. Select Metrics with no dimensions.
  4. Select your TableSizeMB metric. CloudWatch plots the table size.

You can also rerun the command from the Populate the test table section of this post, or truncate the table, to confirm that changes in table size are reflected in CloudWatch.
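For example, after rerunning the insert loop to grow the table, you can empty it and watch the metric drop on the next job run:

USE TestDB;
GO
-- Remove all rows; the next job run should log a smaller table size
TRUNCATE TABLE TestTable;
GO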

You can also create CloudWatch dashboards and CloudWatch alarms for additional monitoring of your table size metric. Review the CloudWatch pricing documentation for the costs associated with dashboards and alarms.

Clean up

To clean up the resources created during this demo:

  1. Delete the CloudWatch Metric Filter
  2. Delete the SQL Server Agent job (replace TestTable_TableSize with the name you gave your job):
    EXEC msdb.dbo.sp_delete_job @job_name = 'TestTable_TableSize';
  3. Delete the table. Confirm that the command targets the correct table before running it:
    USE TestDB;
    GO
    DROP TABLE dbo.TestTable;
    GO
  4. Delete the database:
    USE master;
    GO
    DROP DATABASE TestDB;
    GO
  5. Delete your RDS for SQL Server DB Instance

Conclusion

In this post, we demonstrated a practical approach to monitoring custom database metrics in Amazon RDS for SQL Server using CloudWatch. By combining SQL Server Agent jobs, error log integration, and CloudWatch metric filters, you can gain visibility into critical database factors that aren’t natively exposed by RDS.

This solution provides you with a flexible framework to monitor any database-level metric, whether it’s table size, row counts, database growth, or other custom indicators. The approach requires no additional infrastructure, leverages existing AWS and SQL Server capabilities, and scales across multiple databases and metrics.

We encourage you to adapt this solution to your specific monitoring needs and explore additional metrics that would benefit your database operations.


About the authors

Cade Kettner


Cade is a Cloud Support Engineer at AWS. In his role, he provides technical assistance with AWS Services including RDS MySQL, RDS MariaDB, RDS for SQL Server, Aurora MySQL, and AWS DMS to help troubleshoot technical issues and provide tailored solutions for customers.

Nirupam Datta


Nirupam is a Sr. Technical Account Manager at AWS. Nirupam is also a subject matter expert in the Amazon RDS core systems and Amazon RDS for SQL Server. He provides technical assistance to customers, guiding them to migrate, optimize, and navigate their journey in the AWS Cloud.

Ryan Moore


Ryan is a Technical Account Manager at AWS supporting ISV customers. He enables ISVs to build performant, scalable, and secure architectures in the AWS Cloud. Prior to his TAM role, he was a database engineer specializing in Aurora MySQL and RDS MySQL.