AWS Database Blog
Improving storage with additional storage volumes in Amazon RDS for SQL Server
If you’re looking to improve your Amazon RDS for SQL Server storage architecture, the additional storage volumes feature can transform how you manage database capacity, performance, and costs. As SQL Server workloads grow on Amazon Relational Database Service (Amazon RDS), the 64 TiB Amazon Elastic Block Store (Amazon EBS) storage limit can force architectural compromises that constrain business growth and create performance bottlenecks when transaction logs compete with data files for I/O resources. The additional storage volumes feature in Amazon RDS for SQL Server solves these challenges: you can attach additional storage volumes beyond the root volume, each with its own storage class and performance characteristics, and extend total capacity past the 64 TiB limit by provisioning multiple volumes.
In this post, you will learn how to use the additional storage volumes feature in Amazon RDS for SQL Server to address these common challenges through six key use cases:
- Extending capacity beyond 64 TiB
- Managing dynamic temporary storage
- Improving performance with tailored IOPS settings
- Reducing costs through storage class selection
- Isolating transaction logs
- Implementing multi-tenant storage isolation
Use cases
First, review this post, which highlights how to add or modify additional storage volumes within RDS for SQL Server. The following sections describe use cases for the multi-volume feature. Note that these use cases might require application-level or database engine re-architecting. Scripts to modify the database schema are provided in the latter half of this post.
Use case 1: Log file isolation for write-intensive workloads
Transaction log performance is critical for database commit latency and overall application responsiveness. Previously, transaction logs shared the same volume as data files, potentially creating I/O contention that could impact both read and write operations. This means that heavy data file activity, such as large table scans or index rebuilds, can slow down transaction commits, affecting the user experience. With additional storage volumes, you can isolate transaction logs onto dedicated volumes designed specifically for sequential write operations by:
- Placing transaction logs on a dedicated volume with improved throughput settings
- Configuring independent IOPS and throughput specifically for log write patterns
- Monitoring log I/O performance separately from data file I/O performance
This approach is recommended for high-transaction Online Transaction Processing (OLTP) systems, Extract, Transform, Load (ETL) workloads, and applications where transaction commit latency directly impacts user experience. By paying only for the performance characteristics that your logs need, you avoid over-provisioning the primary data volume while providing improved write performance. Review Script 3: Create a new database with specific file locations on H: and I: drives to understand how to implement this approach within your environment.
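To act on the monitoring point above, you can compare write latency for log files against data files using the standard `sys.dm_io_virtual_file_stats` dynamic management view, which is available on RDS for SQL Server. This is a sketch; consistently high average log write latency suggests the logs would benefit from a dedicated volume:

```sql
-- Compare cumulative write I/O statistics for data (ROWS) vs. log (LOG) files.
SELECT
    DB_NAME(vfs.database_id)  AS database_name,
    mf.type_desc              AS file_type,      -- ROWS (data) or LOG
    mf.physical_name,
    vfs.num_of_writes,
    vfs.io_stall_write_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id     = mf.file_id
ORDER BY avg_write_latency_ms DESC;
```

Because these counters are cumulative since instance start, sample the query twice and compare the deltas when you want latency for a specific time window.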
Use case 2: Extending storage capacity
Amazon RDS for SQL Server previously supported a maximum storage capacity of 64 TiB per instance. With the introduction of multi-volume support, you can now attach up to three additional volumes, enabling your database to scale to a total of 256 TiB across all volumes. This capability is especially useful for data warehouses with growing historical data, for rapidly expanding datasets, and for consolidating multiple databases on a single RDS for SQL Server instance. It also reduces the need for horizontal sharding. Additionally, databases that previously approached the 64 TiB capacity limit on RDS for SQL Server now have ample buffer room to grow without the risk of encountering full storage scenarios. By distributing your data across multiple volumes, you can achieve the total storage capacity your workload demands without migrating to a new instance.
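To see how your existing file allocations are distributed across volumes before and after adding storage, you can query the `sys.master_files` catalog view. This is a minimal sketch; the drive letters reported (such as D:, H:, and I:) depend on your instance:

```sql
-- Summarize allocated SQL Server file space per drive letter (volume).
-- The size column is reported in 8 KB pages.
SELECT
    UPPER(LEFT(physical_name, 3)) AS drive,
    COUNT(*)                      AS file_count,
    CAST(SUM(size) * 8.0 / 1024 / 1024 AS DECIMAL(12,2)) AS allocated_gb
FROM sys.master_files
GROUP BY UPPER(LEFT(physical_name, 3))
ORDER BY drive;
```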
Use case 3: Dynamic temporary storage management
One of the key benefits of multi-volume support is the ability to add and remove volumes as needed. Previously, if you needed to increase storage size for temporary operations, you had to expand the default volume, a change you couldn’t reverse without migrating to a new instance. Now you can:
- Attach a temporary volume for large batch operations
- Use the additional space for data imports or transformations
- Delete the storage volume when it’s no longer needed
This approach saves costs because you pay only for the storage that you need, when you need it. Lower environments benefit particularly from this flexibility, as storage can scale up for testing large datasets and then be removed immediately after testing is complete.
Use case 4: Performance optimization with storage specifications
Different workloads have different performance requirements. With multi-volume support, you can match storage performance to your data access patterns by using different IOPS and throughput settings across multiple volumes. For example, you might configure:
- High-performance gp3 volumes with elevated IOPS for frequently accessed transactional data
- Standard gp3 volumes with baseline performance for archival or reporting data
This granular control helps you improve performance where it matters most while avoiding over-provisioning resources for less critical data.
Use case 5: Cost optimization through storage class selection
Beyond performance tuning, you can optimize costs by selecting different storage classes for different data types. Amazon RDS for SQL Server supports multiple storage classes including gp3 and io2. With strategic storage class assignment, you can:
- Use io2 for I/O intensive and mission-critical databases requiring consistent high performance
- Use gp3 for general-purpose workloads with balanced price-performance
- Migrate data between storage classes as access patterns change
To decide which storage class is right for you, review Choose the best Amazon EBS volume type for your self-managed database deployment.
By matching storage costs to data access patterns, you improve performance while controlling expenses.
Use case 6: Multi-tenant database consolidation with storage isolation
Managing multiple customer databases on a single instance has traditionally made it difficult to track per-tenant storage consumption and costs. When all databases share the same volume, you can’t accurately measure individual customer storage usage or implement fair chargeback models. This lack of visibility complicates capacity planning and makes it challenging to identify which customers are driving storage growth. With additional storage volumes, you can achieve true storage isolation for multi-tenant architectures because you can:
- Assign each major customer or tenant group to a dedicated volume
- Track exact storage consumption per tenant through volume-level metrics
- Scale individual tenant storage independently without affecting others
- Implement accurate chargeback models based on actual usage
This capability is particularly valuable for software as a service (SaaS) applications serving multiple customers, managed service providers hosting client databases, and enterprise shared services supporting different business units. By aligning storage costs directly with tenant consumption, you create transparent billing and remove cross-tenant storage subsidization.
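A chargeback report along these lines can be built from `sys.master_files` by grouping allocated space per database and per drive. This is a sketch that assumes a one-database-per-tenant model, with each tenant group's files placed on a dedicated volume:

```sql
-- Per-tenant storage tracking: report allocated space per database and per
-- drive letter. Assumes one database per tenant; size is in 8 KB pages.
SELECT
    DB_NAME(database_id)          AS tenant_database,
    UPPER(LEFT(physical_name, 3)) AS drive,
    CAST(SUM(size) * 8.0 / 1024 AS DECIMAL(12,2)) AS allocated_mb
FROM sys.master_files
GROUP BY DB_NAME(database_id), UPPER(LEFT(physical_name, 3))
ORDER BY drive, tenant_database;
```

You can complement this database-level view with volume-level Amazon CloudWatch metrics for usage-based billing.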
Data migration approaches
After you’ve configured your additional volumes, you must move data to use the new storage architecture. The following sections provide scripts for three common migration scenarios.
Prerequisites
Before you begin, confirm that you have:
- An AWS account with permissions to manage RDS instances
- Basic understanding of SQL Server database administration
Script 1: Moving databases between volumes using Amazon S3
Amazon RDS for SQL Server supports native backup and restore to and from Amazon Simple Storage Service (Amazon S3) using the rds_backup_database and rds_restore_database stored procedures. When restoring a database from Amazon S3, the target database name must be unique within the RDS instance; you can’t restore directly over an existing database with the same name. Because of this constraint, the database restored from Amazon S3 has a different name and must then be renamed to the original database name, and the application must cut over to the new database during a short outage. This method is suitable when an application outage is acceptable and the whole database must be moved to new volumes. The previous database can be either deleted or renamed. In this demo, we use the drop option.
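The backup and restore calls might look like the following sketch. The bucket name and S3 key are placeholders; how the restored files land on the new volumes depends on your instance’s configured default file locations:

```sql
-- Back up the source database to Amazon S3 (bucket ARN is a placeholder).
EXEC msdb.dbo.rds_backup_database
    @source_db_name = N'test_of_s3',
    @s3_arn_to_backup_to = N'arn:aws:s3:::your-bucket/test_of_s3.bak',
    @overwrite_s3_backup_file = 1;

-- Poll the asynchronous task until the backup completes.
EXEC msdb.dbo.rds_task_status @db_name = N'test_of_s3';

-- Restore from Amazon S3 under a new, unique database name.
EXEC msdb.dbo.rds_restore_database
    @restore_db_name = N'test_of_s3_restore_bucket_ok',
    @s3_arn_to_restore_from = N'arn:aws:s3:::your-bucket/test_of_s3.bak';
```

After the restore task finishes, the file locations shown in the following table confirm that the restored database resides on the new volumes.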
| DatabaseName | type_desc | physical_name |
| --- | --- | --- |
| test_of_s3 | ROWS | D:\rdsdbdata\DATA\test_of_s3.mdf |
| test_of_s3 | LOG | D:\rdsdbdata\DATA\test_of_s3_log.ldf |
| test_of_s3_restore_bucket_ok | ROWS | H:\rdsdbdata\DATA\test_of_s3.mdf |
| test_of_s3_restore_bucket_ok | LOG | I:\rdsdbdata\DATA\test_of_s3.log.ldf |
To drop the original database, run:
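The following sketch uses standard T-SQL; setting the database to single-user mode first rolls back any open connections so the drop can proceed:

```sql
-- Disconnect active sessions, then drop the original database.
ALTER DATABASE [test_of_s3] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [test_of_s3];
```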
To rename the newly restored database, run:
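The rename uses the RDS-provided rds_modify_db_name stored procedure, as in this sketch:

```sql
-- Rename the restored database back to the original name.
EXEC rdsadmin.dbo.rds_modify_db_name
    N'test_of_s3_restore_bucket_ok',  -- current name
    N'test_of_s3';                    -- new name
```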
This approach is recommended when you want to relocate an entire database to a different volume, such as moving a high-traffic database to a higher-performance volume.
Script 2: Moving tables between volumes using a clustered index
This method can run online, but it is disruptive to queries that are actively using the index. If you choose this option, we recommend running it outside of business hours.
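A sketch of this technique follows. It assumes a hypothetical database, table, and index (`MyDatabase`, `dbo.Orders`, `PK_Orders`): first add a filegroup with a file on the new volume, then rebuild the clustered index onto that filegroup, which moves the table’s data. Note that `ONLINE = ON` requires an edition that supports online index operations:

```sql
-- 1. Add a filegroup and a data file on the new volume (H: is illustrative).
ALTER DATABASE [MyDatabase] ADD FILEGROUP [FG_H];
ALTER DATABASE [MyDatabase]
    ADD FILE (NAME = N'MyDatabase_H1',
              FILENAME = N'H:\rdsdbdata\DATA\MyDatabase_H1.ndf',
              SIZE = 1024MB, FILEGROWTH = 512MB)
    TO FILEGROUP [FG_H];

-- 2. Rebuild the clustered index onto the new filegroup. Because the clustered
--    index is the table, this relocates the table's data to the H: volume.
CREATE UNIQUE CLUSTERED INDEX [PK_Orders]
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON [FG_H];
```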
Script 3: Create a new database with specific file locations on H: and I: drives
The following T-SQL statement creates a new database with explicit file placement across separate EBS volumes. We place the data file (.mdf) in the H: drive and the transaction log file (.ldf) in the I: drive. With this configuration, you can improve I/O performance by isolating data and log operations on independent storage volumes.
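A sketch of such a statement follows; the database name, file names, and sizes are illustrative, and the `rdsdbdata` path layout matches the convention shown in the file-location table earlier in this post:

```sql
-- Create a database with the data file on H: and the transaction log on I:.
CREATE DATABASE [SalesDB]
ON PRIMARY
(
    NAME = N'SalesDB_data',
    FILENAME = N'H:\rdsdbdata\DATA\SalesDB_data.mdf',
    SIZE = 10240MB,
    FILEGROWTH = 1024MB
)
LOG ON
(
    NAME = N'SalesDB_log',
    FILENAME = N'I:\rdsdbdata\DATA\SalesDB_log.ldf',
    SIZE = 4096MB,
    FILEGROWTH = 512MB
);
```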
Script 4: Moving partitioned tables between volumes
For databases using table partitioning, you can distribute partitions across volumes based on access patterns. This keeps recent partitions on high-performance storage while moving earlier partitions to cost-effective volumes. This approach is well-suited for larger tables and we recommend running these operations during non-production hours to minimize impact. The following example demonstrates how to create a partitioned table with data distributed across multiple filegroups and volumes.
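The sketch below uses hypothetical filegroup and table names (`FG_Hot`, `FG_Cold`, `dbo.OrdersPartitioned`); each filegroup is assumed to contain a file placed on the desired volume, created with `ALTER DATABASE ... ADD FILE` as shown in Script 2. It partitions orders by year, keeping the newest range on the hot filegroup:

```sql
-- Partition function: RANGE RIGHT with two boundaries yields three partitions
-- (< 2024, 2024, >= 2025).
CREATE PARTITION FUNCTION pf_OrderYear (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2025-01-01');

-- Partition scheme: map older partitions to cold storage, the newest to hot.
CREATE PARTITION SCHEME ps_OrderYear
    AS PARTITION pf_OrderYear
    TO ([FG_Cold], [FG_Cold], [FG_Hot]);

-- Partitioned table: the partitioning column must be part of the clustered key.
CREATE TABLE dbo.OrdersPartitioned
(
    OrderID   bigint NOT NULL,
    OrderDate date   NOT NULL,
    Amount    money  NOT NULL,
    CONSTRAINT PK_OrdersPartitioned
        PRIMARY KEY CLUSTERED (OrderDate, OrderID)
)
ON ps_OrderYear (OrderDate);
```

Rows inserted with an `OrderDate` in 2025 or later land on the hot filegroup’s volume, while historical rows stay on the cold one.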
Clean up
If you created test resources while following this post, remember to delete any volumes that you don’t need to avoid ongoing storage charges.
- You can remove volumes through the Amazon RDS console or the AWS Command Line Interface (AWS CLI).
- You can delete the RDS DB instance through the Amazon RDS console, the AWS CLI, or the RDS API.
Conclusion
In this post, you learned how to use the additional storage volumes of Amazon RDS for SQL Server to extend storage capacity, manage temporary storage dynamically, improve performance, and reduce costs. By strategically distributing your data across multiple volumes with different specifications, you can build a more flexible and cost-effective database infrastructure.
To learn more about Amazon RDS for SQL Server storage options, visit the Amazon RDS documentation. For additional database improvement strategies, explore our Database Blog.