AWS Database Blog

Improving storage with additional storage volumes in Amazon RDS for SQL Server

If you’re looking to improve your Amazon RDS for SQL Server storage architecture, the additional storage volumes feature can transform how you manage database capacity, performance, and costs. As SQL Server workloads grow on Amazon Relational Database Service (Amazon RDS) for SQL Server, the 64 TiB limit of a single Amazon Elastic Block Store (Amazon EBS) volume can constrain business growth and create performance bottlenecks when transaction logs compete with data files for I/O resources. The additional storage volumes feature in Amazon RDS for SQL Server solves these challenges: you can attach storage volumes beyond the root volume, each with its own storage class and performance characteristics, and extend total capacity past the 64 TiB single-volume limit.

In this post, you will learn how to use the additional storage volumes feature in Amazon RDS for SQL Server to address these challenges through six key use cases:

  • Extending capacity beyond 64 TiB
  • Managing dynamic temporary storage
  • Improving performance with tailored IOPS settings
  • Reducing costs through storage class selection
  • Isolating transaction logs
  • Implementing multi-tenant storage isolation

Use cases

First, review the companion post that shows how to add or modify additional storage volumes in RDS for SQL Server. The following sections describe use cases for the multi-volume feature. Some of these use cases require application-level or database engine re-architecting; scripts for the necessary database changes are provided in the latter half of this post.

Use case 1: Log file isolation for write-intensive workloads

Transaction log performance is critical for database commit latency and overall application responsiveness. Previously, transaction logs shared the same volume as data files, potentially creating I/O contention that could impact both read and write operations. This means that heavy data file activity, such as large table scans or index rebuilds, can slow down transaction commits, affecting the user experience. With additional storage volumes, you can isolate transaction logs onto dedicated volumes designed specifically for sequential write operations by:

  • Placing transaction logs on a dedicated volume with improved throughput settings
  • Configuring independent IOPS and throughput specifically for log write patterns
  • Monitoring log I/O performance separately from data file I/O performance

This approach is recommended for high-transaction Online Transaction Processing (OLTP) systems, Extract, Transform, Load (ETL) workloads, and applications where transaction commit latency directly impacts user experience. By paying only for the performance characteristics that your logs need, you avoid over-provisioning the primary data volume while providing improved write performance. Review Script 3: Create a new database with specific file locations on H: and I: drives to understand how to implement this approach within your environment.
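To confirm that log isolation is paying off, you can monitor log I/O separately from data file I/O. The following query is a sketch built on the standard sys.dm_io_virtual_file_stats DMV; it groups latency by file type and drive letter so you can compare the dedicated log volume against the data volume:

```sql
-- Compare cumulative read/write latency for data (ROWS) files vs. LOG files,
-- broken out by the volume (drive letter) each file lives on.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.type_desc,                          -- ROWS (data) or LOG
    LEFT(mf.physical_name, 2) AS drive,    -- volume hosting the file
    SUM(vfs.num_of_reads)  AS reads,
    SUM(vfs.num_of_writes) AS writes,
    SUM(vfs.io_stall_read_ms)  * 1.0 / NULLIF(SUM(vfs.num_of_reads), 0)  AS avg_read_latency_ms,
    SUM(vfs.io_stall_write_ms) * 1.0 / NULLIF(SUM(vfs.num_of_writes), 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
GROUP BY DB_NAME(vfs.database_id), mf.type_desc, LEFT(mf.physical_name, 2)
ORDER BY database_name, mf.type_desc;
```

If average write latency on the LOG rows stays low while data file activity spikes, the isolation is working as intended.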

Use case 2: Extending storage capacity

Amazon RDS for SQL Server previously supported a maximum storage capacity of 64 TiB per instance. With the introduction of multi-volume support, you can now attach up to three additional volumes, enabling your database to scale to a total of 256 TiB across all volumes. This capability is especially useful for data warehouses with growing historical data, rapidly expanding datasets, and consolidating multiple databases on a single RDS for SQL Server instance. It also reduces the need for horizontal sharding. Additionally, databases that previously approached the 64 TiB capacity limit on RDS for SQL Server now have ample buffer room to grow without the risk of encountering full storage scenarios. By distributing your data across multiple volumes, you can achieve the total storage capacity your workload demands without migrating to a new instance.
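To see how your current file layout maps onto each volume before and after distributing data, you can aggregate sys.master_files by drive letter. This is a sketch; the drive letters you see depend on your volume configuration:

```sql
-- Summarize allocated file size per volume (drive letter).
-- size is reported in 8 KB pages: multiply by 8, divide by 1024 for MB.
SELECT
    LEFT(physical_name, 2) AS drive,
    COUNT(*) AS file_count,
    SUM(CAST(size AS BIGINT)) * 8 / 1024 AS allocated_mb
FROM sys.master_files
GROUP BY LEFT(physical_name, 2)
ORDER BY drive;
```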

Use case 3: Dynamic temporary storage management

One of the key benefits of multi-volume support is the ability to add and remove volumes as needed. Previously, if you needed more storage for temporary operations, you had to expand the default volume, a change you couldn’t reverse without migrating to a new instance. Now you can:

  • Attach a temporary volume for large batch operations
  • Use the additional space for data imports or transformations
  • Delete the storage volume when it’s no longer needed

This approach saves costs because you pay only for the storage that you need, when you need it. Lower environments benefit particularly from this flexibility, as storage can scale up for testing large datasets and then be removed immediately after testing is complete.
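The attach-use-remove cycle can be sketched in T-SQL. The T: drive letter, database name, and file names below are illustrative assumptions; after the batch work completes, you empty the file so SQL Server migrates any remaining pages off it, then remove it before deleting the volume:

```sql
-- 1. After attaching the temporary volume (T: in this sketch),
--    add a file on it to hold the staging data.
ALTER DATABASE demo_sql
ADD FILE (
    NAME = 'Staging_Data',
    FILENAME = 'T:\rdsdbdata\DATA\demo_sql_staging.ndf',
    SIZE = 10GB
);

-- 2. ... run the large import or transformation ...

-- 3. Move remaining pages off the file, then drop it.
DBCC SHRINKFILE ('Staging_Data', EMPTYFILE);
ALTER DATABASE demo_sql REMOVE FILE Staging_Data;
-- The temporary volume can now be removed from the instance.
```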

Use case 4: Performance optimization with storage specifications

Different workloads have different performance requirements. With multi-volume support, you can match storage performance to your data access patterns by using different IOPS and throughput settings across multiple volumes. For example, you might configure:

  • High-performance gp3 volumes with elevated IOPS for frequently accessed transactional data
  • Standard gp3 volumes with baseline performance for archival or reporting data

This granular control helps you improve performance where it matters most while avoiding over-provisioning resources for less critical data.

Use case 5: Cost optimization through storage class selection

Beyond performance tuning, you can optimize costs by selecting different storage classes for different data types. Amazon RDS for SQL Server supports multiple storage classes including gp3 and io2. With strategic storage class assignment, you can:

  • Use io2 for I/O intensive and mission-critical databases requiring consistent high performance
  • Use gp3 for general-purpose workloads with balanced price-performance
  • Migrate data between storage classes as access patterns change

To decide which storage class is right for you, review Choose the best Amazon EBS volume type for your self-managed database deployment.

By matching storage costs to data access patterns, you optimize both performance and spend.

Use case 6: Multi-tenant database consolidation with storage isolation

Managing multiple customer databases on a single instance has traditionally made it difficult to track per-tenant storage consumption and costs. When all databases share the same volume, you can’t accurately measure individual customer storage usage or implement fair chargeback models. This lack of visibility complicates capacity planning and makes it challenging to identify which customers are driving storage growth. With additional storage volumes, you can achieve true storage isolation for multi-tenant architectures because you can:

  • Assign each major customer or tenant group to a dedicated volume
  • Track exact storage consumption per tenant through volume-level metrics
  • Scale individual tenant storage independently without affecting others
  • Implement accurate chargeback models based on actual usage

This capability is particularly valuable for software as a service (SaaS) applications serving multiple customers, managed service providers hosting client databases, and enterprise shared services supporting different business units. By aligning storage costs directly with tenant consumption, you create transparent billing and remove cross-tenant storage subsidization.
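One way to implement this pattern is a filegroup per tenant, with each filegroup's file placed on that tenant's dedicated volume. The following sketch (database, tenant, and drive names are illustrative) creates a tenant filegroup and then reports allocated versus used space per filegroup for chargeback:

```sql
-- Place tenant A's data on a dedicated volume via its own filegroup.
ALTER DATABASE saas_db ADD FILEGROUP TenantA_FG;
ALTER DATABASE saas_db
ADD FILE (
    NAME = 'TenantA_Data',
    FILENAME = 'H:\rdsdbdata\DATA\saas_db_tenant_a.ndf',
    SIZE = 1GB
) TO FILEGROUP TenantA_FG;
GO

-- Report allocated vs. used space per filegroup (run inside saas_db).
USE saas_db;
GO
SELECT
    fg.name AS filegroup_name,
    SUM(df.size) * 8 / 1024 AS allocated_mb,
    SUM(FILEPROPERTY(df.name, 'SpaceUsed')) * 8 / 1024 AS used_mb
FROM sys.database_files AS df
JOIN sys.filegroups AS fg ON df.data_space_id = fg.data_space_id
GROUP BY fg.name;
```

Tables for each tenant are then created ON that tenant's filegroup, so volume-level CloudWatch metrics and the query above tell the same per-tenant story.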

Data migration approaches

After you’ve configured your additional volumes, you must move data to use the new storage architecture. The following sections provide scripts for four common scenarios.

Prerequisites

Before you begin, confirm that you have:

  • An AWS account with permissions to manage RDS instances
  • Basic understanding of SQL Server database administration

Script 1: Moving databases between volumes using Amazon S3

Amazon RDS for SQL Server supports native backup and restore to and from Amazon Simple Storage Service (Amazon S3) using the rds_backup_database and rds_restore_database stored procedures. When restoring a database from Amazon S3, the target database name must be unique within the RDS instance; you can’t restore directly over an existing database with the same name. Because of this constraint, the database restored from Amazon S3 initially has a different name and must then be renamed to the original database name. The application must cut over to the new database, which involves a minor outage. This method is suitable when a short application outage is acceptable and the whole database must be moved to new volumes. The original database can either be deleted or renamed; in this demo, we use the drop option.

-- Switch to the master database for administrative operations
USE master;
GO

-- Create a new test database for the S3 backup/restore demonstration
CREATE DATABASE test_of_s3;
GO

-- Create a native backup of the database and store it in S3
EXEC msdb.dbo.rds_backup_database
@source_db_name='test_of_s3',
@s3_arn_to_backup_to='arn:aws:s3:::amzn-s3-demo-bucket/test_of_s3.bak';

-- Restore database from S3 backup to specific volumes (H: for data, I: for log)
EXEC msdb.dbo.rds_restore_database
@restore_db_name='test_of_s3_restore_bucket_ok',
@s3_arn_to_restore_from='arn:aws:s3:::amzn-s3-demo-bucket/test_of_s3.bak',
@data_file_volume='H:',
@log_file_volume='I:';

-- Query to show physical file locations for databases matching the 'test_of%' pattern
SELECT DB_NAME(database_id) AS DatabaseName, name, type_desc, physical_name
FROM sys.master_files
WHERE name LIKE 'test_of%';

-- Results (abridged):
-- DatabaseName                  type_desc  physical_name
-- test_of_s3                    ROWS       D:\rdsdbdata\DATA\test_of_s3.mdf
-- test_of_s3                    LOG        D:\rdsdbdata\DATA\test_of_s3_log.ldf
-- test_of_s3_restore_bucket_ok  ROWS       H:\rdsdbdata\DATA\test_of_s3.mdf
-- test_of_s3_restore_bucket_ok  LOG        I:\rdsdbdata\DATA\test_of_s3.log.ldf

To drop the original database, run:

EXECUTE msdb.dbo.rds_drop_database N'test_of_s3'

To rename the newly restored database, run:

EXEC rdsadmin.dbo.rds_modify_db_name N'test_of_s3_restore_bucket_ok', N'test_of_s3'
GO

This approach is recommended when you want to relocate an entire database to a different volume, such as moving a high-traffic database to a higher-performance volume.

Script 2: Moving tables between volumes using a clustered index

This method is online but disrupts queries that are actively using the index. If you choose this option, we recommend running it outside business hours.

-- Switch to the master database before creating the demo database
USE master;
GO

-- Create a new database for demonstrating index-based table movement
CREATE DATABASE demo_sql;
GO
USE demo_sql;
GO

-- Create a test table
CREATE TABLE Test2 ( ID INT IDENTITY(1,1) PRIMARY KEY, RandomText VARCHAR(8000), RandomNumber INT, RandomDate DATETIME );

-- Insert approximately 100MB of random test data (13,000 rows)
DECLARE @i INT = 0;
WHILE @i < 13000
BEGIN
INSERT INTO Test2 (RandomText, RandomNumber, RandomDate) VALUES (
REPLICATE(CAST(NEWID() AS VARCHAR(36)), 220),
ABS(CHECKSUM(NEWID())),
DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 3650, '2020-01-01')
);
SET @i = @i + 1;
END;

-- Drop the primary key constraint from Test2 table to remove clustered index
DECLARE @TableName NVARCHAR(128) = 'Test2'
DECLARE @SQL NVARCHAR(MAX) = ' '
SELECT @SQL = @SQL + 'ALTER TABLE [' + @TableName + '] DROP CONSTRAINT [' + kc.name + '];'
FROM sys.key_constraints kc
WHERE kc.parent_object_id = OBJECT_ID(@TableName)
AND kc.type = 'PK'
IF @SQL != ' '
EXEC sp_executesql @SQL

-- Add a new filegroup to the database for archive data
USE demo_sql;
GO
ALTER DATABASE demo_sql
ADD FILEGROUP ArchiveFileGroupdemo_sql;

-- Add a physical file to the new filegroup on H: drive
ALTER DATABASE demo_sql
ADD FILE (
NAME = 'Archive_Data',
FILENAME = 'H:\rdsdbdata\data\demo_sql_Data.ndf',
SIZE = 1GB,
FILEGROWTH = 100MB
) TO FILEGROUP ArchiveFileGroupdemo_sql;

-- Create a clustered index on the new filegroup to move table data
USE demo_sql
GO
CREATE UNIQUE CLUSTERED INDEX test_CID ON Test2(ID) ON ArchiveFileGroupdemo_sql

-- Query tables and their filegroup locations; the table now appears in the new filegroup
SELECT
SCHEMA_NAME(t.schema_id) AS SchemaName,
t.name AS TableName,
CASE
WHEN i.name IS NULL THEN 'HEAP (No Clustered Index)'
ELSE i.name
END AS IndexName,
CASE
WHEN i.type = 0 THEN 'Heap'
WHEN i.type = 1 THEN 'Clustered'
ELSE 'Other'
END AS IndexType,
COALESCE(fg.name, 'N/A') AS FileGroupName
FROM sys.tables t
LEFT JOIN sys.indexes i ON t.object_id = i.object_id AND i.type IN (0, 1) -- Heap or Clustered
LEFT JOIN sys.filegroups fg ON i.data_space_id = fg.data_space_id
WHERE t.is_ms_shipped = 0 -- User tables only
ORDER BY SchemaName, TableName;

Script 3: Create a new database with specific file locations on H: and I: drives

The following T-SQL statement creates a new database with explicit file placement across separate EBS volumes. We place the data file (.mdf) in the H: drive and the transaction log file (.ldf) in the I: drive. With this configuration, you can improve I/O performance by isolating data and log operations on independent storage volumes.

CREATE DATABASE database_sql
ON (
NAME = 'database_sql_Data',
FILENAME = 'H:\rdsdbdata\data\database_sql.mdf',
SIZE = 100MB,
FILEGROWTH = 10MB
)
LOG ON (
NAME = 'database_sql_Log',
FILENAME = 'I:\rdsdbdata\data\database_sql_Log.ldf',
SIZE = 10MB,
FILEGROWTH = 10%
)
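You can confirm the placement with the same sys.master_files pattern used in Script 1:

```sql
-- Verify that the data file landed on H: and the log file on I:
SELECT DB_NAME(database_id) AS DatabaseName, type_desc, physical_name
FROM sys.master_files
WHERE DB_NAME(database_id) = 'database_sql';
```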

Script 4: Moving partitioned tables between volumes

For databases using table partitioning, you can distribute partitions across volumes based on access patterns. This keeps recent partitions on high-performance storage while moving earlier partitions to cost-effective volumes. This approach is well-suited for larger tables and we recommend running these operations during non-production hours to minimize impact. The following example demonstrates how to create a partitioned table with data distributed across multiple filegroups and volumes.

CREATE DATABASE Database_Amazon
GO

USE Database_Amazon
GO

ALTER DATABASE Database_Amazon
ADD FILEGROUP ArchiveFileGroup
GO

ALTER DATABASE Database_Amazon
ADD FILE (
  NAME = 'Archive_Data',
  FILENAME = 'H:\rdsdbdata\data\Archive_Data.ndf',
  SIZE = 1GB,
  FILEGROWTH = 100MB
) TO FILEGROUP ArchiveFileGroup
GO

CREATE PARTITION FUNCTION DatePartitionFunction (datetime2)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2025-01-01')
GO

CREATE PARTITION SCHEME DatePartitionScheme
AS PARTITION DatePartitionFunction
-- With RANGE RIGHT, partition 1 holds rows before 2024-01-01, so the
-- oldest data lands on the cost-effective archive volume and the more
-- recent partitions stay on PRIMARY (high-performance storage).
TO (ArchiveFileGroup, [PRIMARY], [PRIMARY])
GO

CREATE TABLE Orders (
  OrderDate datetime2 NOT NULL,
  OrderID int IDENTITY(1,1),
  CustomerID int,
  Amount decimal(10,2)
) ON DatePartitionScheme(OrderDate)
GO

INSERT INTO Orders (OrderDate, CustomerID, Amount) VALUES
('2023-06-15', 101, 100.00),
('2024-06-15', 102, 200.00),
('2025-06-15', 103, 300.00)
GO

SELECT
  p.partition_number,
  fg.name AS filegroup_name,
  p.rows AS row_count,
  rv.value AS boundary_value
FROM sys.partitions p
JOIN sys.allocation_units au ON p.partition_id = au.container_id
JOIN sys.filegroups fg ON au.data_space_id = fg.data_space_id
LEFT JOIN sys.partition_range_values rv ON p.partition_number = rv.boundary_id + 1
WHERE p.object_id = OBJECT_ID('Orders')
ORDER BY p.partition_number;
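As time passes, you will need boundary values for new periods. The following sketch extends the scheme and function so that the next year's partition is created on the PRIMARY filegroup; the boundary date is illustrative:

```sql
-- Designate the filegroup that will receive the next partition created.
ALTER PARTITION SCHEME DatePartitionScheme
NEXT USED [PRIMARY];
GO

-- Split the open-ended right-most range to add a partition for 2026.
ALTER PARTITION FUNCTION DatePartitionFunction()
SPLIT RANGE ('2026-01-01');
GO
```

Splitting an empty right-most range is a metadata-only operation; splitting a populated range moves rows, so plan it before the new period's data arrives.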

Clean up

If you created test resources while following this post, remember to delete any volumes that you don’t need to avoid ongoing storage charges.

  1. You can remove volumes through the Amazon RDS console or the AWS Command Line Interface (AWS CLI).
  2. You can delete the RDS DB instance through the Amazon RDS console, the AWS CLI, or the RDS API.

Conclusion

In this post, you learned how to use the additional storage volumes of Amazon RDS for SQL Server to extend storage capacity, manage temporary storage dynamically, improve performance, and reduce costs. By strategically distributing your data across multiple volumes with different specifications, you can build a more flexible and cost-effective database infrastructure.

To learn more about Amazon RDS for SQL Server storage options, visit the Amazon RDS documentation. For additional database improvement strategies, explore our Database Blog.



About the authors

Ryan Moore

Ryan is a Technical Account Manager at AWS. He is an Aurora MySQL and RDS for MySQL subject matter expert who specializes in helping customers build performant, scalable, and secure architectures in the AWS Cloud.

Nirupam Datta

Nirupam is a Senior Technical Account Manager at AWS and a subject matter expert in Amazon RDS core systems and Amazon RDS for SQL Server. He provides technical assistance to customers, guiding them as they migrate, optimize, and navigate their journey in the AWS Cloud.

Cade Kettner

Cade is a Cloud Support Engineer at AWS. In his role, he provides technical assistance for AWS services including RDS for MySQL, RDS for MariaDB, RDS for SQL Server, Aurora MySQL, and AWS DMS, helping customers troubleshoot issues and providing tailored solutions.