AWS Database Blog

Best practices for Amazon RDS for SQL Server with Amazon EBS io2 Block Express volumes up to 64 TiB

Amazon Relational Database Service (Amazon RDS) for SQL Server now supports Amazon Elastic Block Store (Amazon EBS) io2 Block Express volumes. These volumes are designed to support all your critical database workloads that demand high performance, high throughput, and consistently low latency. io2 Block Express volumes support 99.999% durability, up to 64 TiB storage, up to 4,000 MiB/s throughput, and up to 256,000 Provisioned IOPS for your most demanding database needs, at the same price as EBS io1 volumes.

In this post, we share best practices for using io2 Block Express volumes with RDS for SQL Server DB instances.

Choosing the right RDS DB instance class with io2 Block Express volumes

The performance of an RDS DB instance is a combination of the instance type and the storage type. Therefore, it's crucial to know the maximum IOPS and throughput limits that apply at both the RDS DB instance level and the EBS volume level. For high-performance workloads, we recommend pairing Provisioned IOPS SSD storage (io2) with a powerful DB instance class (such as X2iedn, R6i, or R5b). For less demanding workloads, a general-purpose instance class (M5/M6i) and storage type (gp2/gp3) might suffice.

To determine the optimal combination for your workload, do the following:

  1. Identify your application workload's IOPS and throughput requirements.
  2. Choose the appropriate RDS DB instance class that meets your CPU, memory, and network performance requirements.
  3. Choose the appropriate storage type based on the IOPS and throughput requirements, which could be either General Purpose or Provisioned IOPS storage.

The following diagram shows the RDS DB instance storage decision tree based on your IOPS, throughput, and capacity requirements.

You should choose io2 volumes if any of the following conditions are true:

  • Your application requires high IOPS (greater than 16,000 IOPS).
  • You need low latency and high throughput (above 1,000 MiB/s) for consistent performance.
  • Your storage capacity requirement exceeds 16 TiB.
  • Your application demands high durability and availability (with a 99.999% durability SLA).
  • You’re running mission-critical databases or applications that require sustained high performance.

Visit the documentation to check the availability of io2 Block Express volumes in your AWS Region.
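
For reference, the following is a minimal Python (boto3) sketch of creating an RDS for SQL Server instance backed by io2 Block Express storage. The instance identifier, credentials, instance class, storage size, and IOPS values are placeholders; size them based on the analysis above.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an RDS for SQL Server instance backed by io2 Block Express storage.
# All identifiers and sizing values below are placeholders for illustration.
rds.create_db_instance(
    DBInstanceIdentifier="sqlserver-io2-demo",
    Engine="sqlserver-ee",                 # SQL Server Enterprise Edition
    LicenseModel="license-included",
    DBInstanceClass="db.x2iedn.32xlarge",  # pair io2 with a high-bandwidth instance class
    MasterUsername="admin",
    MasterUserPassword="<your-password>",
    StorageType="io2",                     # Provisioned IOPS SSD (io2 Block Express)
    AllocatedStorage=32768,                # GiB (32 TiB); io2 supports up to 65536 GiB (64 TiB)
    Iops=100000,                           # provisioned IOPS for the volume
    MultiAZ=True,
    BackupRetentionPeriod=7,
)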

The performance of an io2 Block Express volume combination varies based on the selected RDS DB instance class and size. The following table shows the largest RDS DB instance size across different supported RDS instance classes and the corresponding io2 Block Express performance limits.

DB Instance Class | Max throughput limit (MiB/s) | Max IOPS limit (16 KiB I/O) | Max DB instance size
X2iedn | 10,000 | 260,000 | db.x2iedn.32xlarge
R5b | 7,500 | 260,000 | db.r5b.24xlarge
R6i/M6i | 5,000 | 160,000 | db.r6i.32xlarge
R5d/M5d/Z1d | 2,375 | 80,000 | db.r5d.24xlarge, db.m5d.24xlarge, db.z1d.12xlarge
R5/M5 | 2,375 | 80,000 | db.r5.24xlarge, db.m5.24xlarge

To find the supported DB instance classes for Amazon RDS for SQL Server, refer to DB instance class support for Microsoft SQL Server. To learn more about the supported instance-level baseline throughput, maximum IOPS, and maximum throughput, refer to Amazon EBS-optimized instances.

For any supported instances and volume combinations, always refer to the latest AWS documentation, because these limits may change with new AWS updates and enhancements.

Migrating your SQL Server database to Amazon RDS with the io2 Block Express volume

The migration process to RDS for SQL Server DB instances with io2 Block Express volumes is very similar to that for other volume types, such as gp2, gp3, and io1. The main difference is that with io2 volumes, you can now migrate large database workloads up to 64 TiB. SQL Server native backup and restore using Amazon Simple Storage Service (Amazon S3) is the most common method to migrate SQL Server databases running on premises or on Amazon Elastic Compute Cloud (Amazon EC2) instances to Amazon RDS for SQL Server. To learn more, refer to Backup and Restore Using Amazon S3.

Restoring large databases can be a time-consuming process. However, you can optimize the restore time by using a full/differential/log backup strategy, which can help reduce the cutover time. Additionally, you can use backup compression and multiple backup files to improve the backup and restore performance (for more details, see Improve native backup and restore performance in Amazon RDS for SQL Server). To improve the restore performance, make sure you monitor the network and EBS bandwidth usage of the RDS DB instance. If you notice a bottleneck, consider scaling up the instance size or provisioning additional storage IOPS and throughput to complete the restore faster.
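
As an illustration, the following Python sketch (using pyodbc) starts a multi-file native restore from Amazon S3 with the rds_restore_database stored procedure and checks progress with rds_task_status. The endpoint, credentials, database name, and S3 ARN are placeholders.

import pyodbc

# Connect to the RDS for SQL Server instance (endpoint and credentials are placeholders).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver-prod.abcdefg.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=master;UID=admin;PWD=<your-password>",
    autocommit=True,
)
cur = conn.cursor()

# Start a native restore from S3. The wildcard restores from multiple backup files
# (for example SalesDB1.bak ... SalesDB8.bak), which speeds up large restores.
cur.execute("""
    exec msdb.dbo.rds_restore_database
         @restore_db_name = 'SalesDB',
         @s3_arn_to_restore_from = 'arn:aws:s3:::my-backup-bucket/SalesDB*'
""")

# Check the task status; rerun until the restore reports SUCCESS.
cur.execute("exec msdb.dbo.rds_task_status @db_name = 'SalesDB'")
for row in cur.fetchall():
    print(row)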

Modifying the volume types on Amazon RDS for SQL Server to io2 Block Express

Amazon RDS for SQL Server gives you the flexibility to switch between General Purpose (gp2/gp3) and Provisioned IOPS (io1/io2) volumes so you can balance cost and performance as your application demands change. You can modify the storage type of an RDS for SQL Server instance from gp2, gp3, or io1 to io2 Block Express through the AWS Management Console or the AWS Command Line Interface (AWS CLI).
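
For example, the following boto3 sketch modifies an existing instance's storage type to io2; the identifier, storage size, and IOPS values are placeholders.

import boto3

rds = boto3.client("rds")

# Convert an existing instance's storage to io2 Block Express.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    StorageType="io2",
    AllocatedStorage=32768,   # GiB; storage beyond 16 TiB is supported only on io2
    Iops=80000,
    ApplyImmediately=True,    # or False to apply during the next maintenance window
)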

However, there is one important consideration when using io2 volumes: if your storage size exceeds 16 TiB, you won't be able to change back to gp2, gp3, or io1. In that scenario, to revert to gp2, gp3, or io1, you need to create a new instance with a gp2, gp3, or io1 volume and migrate your database using SQL Server native backup and restore. Refer to Working with storage for Amazon RDS DB instances for technical details and instructions on modifying an existing RDS for SQL Server instance's volume type.

Managing automated backups with large io2 volumes

The first RDS DB snapshot captures a complete backup of the storage volume for your DB instance. Subsequent backups of the same instance are incremental DB snapshots, meaning they only save the data that has changed since the most recent backup. In most cases, the daily backup of your DB instance completes quickly because these daily backups are incremental. However, in certain situations, if automatic backups have been disabled and there have been heavy workloads or multiple days without backups, the next backup can take an extended time to complete due to the accumulated data in the incremental DB snapshot. When the data changes reach 64 TiB, the backup process can take up to 4 days to finish. During this time, the instances remain fully functional and available; however, instance-level modifications, such as changing the instance type or converting instances from Single-AZ to Multi-AZ, will be restricted.

To reduce the chances of long-running DB snapshots for large databases, you can take the following steps:

  • Consider keeping automated backups enabled at all times on the RDS DB instance. This ensures that incremental data changes don't accumulate for too long since the last DB snapshot.
  • The daily automated backup snapshot on a large RDS DB instance can take longer to complete. Therefore, set the backup retention period to at least 7 days to provide point-in-time recovery (PITR). This ensures a DB instance backup is available even if you miss the next backup window because of a longer backup completion time.
  • For Multi-AZ instances, perform a scheduled manual failover every 1–2 months (see the sketch after this list). This proactive measure helps keep your Multi-AZ deployment healthy and minimizes the risk of extended backup times after an unexpected failover, when the new primary needs to catch up on changes. Performing regular failovers also validates your failover strategy and gives you confidence in your high availability setup.
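
The following boto3 sketch illustrates the last two recommendations; the instance identifier is a placeholder.

import boto3

rds = boto3.client("rds")

# Keep automated backups enabled with at least a 7-day retention period.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)

# Scheduled manual failover for a Multi-AZ instance: a reboot with failover
# promotes the standby to primary and validates the failover process.
rds.reboot_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    ForceFailover=True,
)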

Optimizing automated backup time and PITR with manual snapshots

Taking a manual snapshot immediately after a large data load helps ensure the next automated backup only needs to capture changes since the last snapshot. This proactive step can significantly reduce the time required for subsequent automated backups, because it minimizes the amount of data that needs to be backed up. By reducing the overall backup duration, you help maintain the performance and availability of your RDS instance during peak operations. For detailed instructions on creating manual snapshots, refer to Creating a DB snapshot for a Single-AZ DB instance.
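
For example, a minimal boto3 sketch of taking a manual snapshot after a large data load (the identifiers are placeholders):

import boto3

rds = boto3.client("rds")

# Take a manual snapshot right after a large data load so the next automated
# backup only has to capture changes made after this point.
rds.create_db_snapshot(
    DBInstanceIdentifier="sqlserver-prod",
    DBSnapshotIdentifier="sqlserver-prod-post-load-2024-06-01",
)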

Additionally, we have observed that this practice also improves PITR performance. You can restore your RDS for SQL Server instance with an EBS io2 Block Express volume to a specific time using the PITR feature.
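
The following boto3 sketch restores to the latest restorable time; the source and target identifiers are placeholders, and you can pass RestoreTime instead for a specific point in time.

import boto3

rds = boto3.client("rds")

# Point-in-time restore to a new DB instance.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="sqlserver-prod",
    TargetDBInstanceIdentifier="sqlserver-prod-pitr",
    UseLatestRestorableTime=True,
)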

Restoring from a DB snapshot

You can restore an RDS DB instance from a DB snapshot using the console, AWS CLI, or Amazon RDS API. When the status of the restored DB instance is available, you can start using it immediately. However, your restored DB instance continues to load data in the background. This process is known as lazy loading.
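
The following is a minimal boto3 sketch of the API-based restore; the identifiers and instance class are placeholders.

import boto3

rds = boto3.client("rds")

# Restore a new DB instance from an existing DB snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="sqlserver-restored",
    DBSnapshotIdentifier="sqlserver-prod-post-load-2024-06-01",
    DBInstanceClass="db.r6i.8xlarge",  # optionally override the instance class
)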

If you access data that hasn’t been loaded yet, the DB instance will immediately download the requested data from Amazon S3 and then continue loading the rest of the data in the background. During this lazy loading process, you may experience elevated latency and performance impacts until the volume is fully hydrated from Amazon S3.

For more information about lazy loading and workarounds like initiating full-table scans (such as SELECT *), refer to Restoring from a DB snapshot.
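
If you choose to script this workaround, the following Python (pyodbc) sketch is one possible approach: it enumerates user tables and forces a full scan of each so their data pages are read and hydrated. The endpoint, credentials, and scan query are illustrative assumptions, not a prescribed method.

import pyodbc

# Connect to the restored instance (endpoint and credentials are placeholders).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver-restored.abcdefg.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=SalesDB;UID=admin;PWD=<your-password>",
    autocommit=True,
)
cur = conn.cursor()

# Enumerate user tables in the database.
cur.execute(
    "SELECT s.name, t.name FROM sys.tables t "
    "JOIN sys.schemas s ON t.schema_id = s.schema_id"
)
tables = cur.fetchall()

for schema, table in tables:
    # INDEX(0) requests a full scan of the heap or clustered index without
    # streaming every row back to the client.
    cur.execute(f"SELECT COUNT_BIG(*) FROM [{schema}].[{table}] WITH (INDEX(0))")
    print(schema, table, cur.fetchone()[0])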

Planning a scheduled maintenance window to scale storage on large volumes

If you need to scale storage capacity, you can scale up the storage of an existing DB instance. This can be done using the Amazon RDS console, the Amazon RDS API, or the AWS CLI. However, during the storage scaling process, Amazon EBS performs storage optimization, which can take a long time depending on the volume size. Therefore, we recommend scheduling the storage scaling manually during a planned maintenance window and minimizing the frequency of auto scaling storage operations.
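
For example, the following boto3 sketch increases the allocated storage but defers the change to the next maintenance window by disabling ApplyImmediately; the identifier and target size are placeholders.

import boto3

rds = boto3.client("rds")

# Scale up storage, deferring the change (and the subsequent EBS storage
# optimization) to the next scheduled maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    AllocatedStorage=49152,   # GiB (48 TiB), placeholder target size
    ApplyImmediately=False,
)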

Optimizing large-sized read replicas for SQL Server on Amazon RDS

Depending on the workload on the primary instance, creating read replicas for SQL Server in Amazon RDS may take a long time to complete. If you anticipate a heavy workload, we suggest taking a manual snapshot prior to the creation of the read replica. This proactive measure can minimize the time required to create the read replica.
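
A minimal boto3 sketch of creating the read replica (identifiers are placeholders):

import boto3

rds = boto3.client("rds")

# Create a read replica of the primary instance; taking a manual snapshot of the
# source beforehand can shorten the replica creation time under heavy workloads.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-prod-replica",
    SourceDBInstanceIdentifier="sqlserver-prod",
)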

Converting a Single-AZ DB instance to a Multi-AZ DB instance

You can convert RDS for SQL Server instances with io2 Block Express from Single-AZ to Multi-AZ. The time required to convert a Single-AZ instance to a Multi-AZ instance varies depending on the load on the primary node of your RDS DB instance. If you expect a heavy workload, we recommend taking a manual snapshot after the workload is complete. This makes sure the incremental DB snapshot is up to date before converting to Multi-AZ, minimizing the conversion time.
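
A minimal boto3 sketch of the conversion (the identifier is a placeholder):

import boto3

rds = boto3.client("rds")

# Convert a Single-AZ instance to Multi-AZ. ApplyImmediately=True starts the
# conversion right away; otherwise it waits for the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    MultiAZ=True,
    ApplyImmediately=True,
)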

Conclusion

By understanding the IOPS to storage ratio limits and how IOPS and throughput scale, you can appropriately size RDS for SQL Server instances with io2 Block Express volumes for your performance needs. AWS Nitro System instances provide higher maximum IOPS and throughput capabilities.

For RDS for SQL Server instances with large io2 volumes, we recommend the following:

  • Keep automatic backups enabled, even during migrations, to prevent accumulated data changes from producing incremental DB snapshots that exceed 20 TiB.
  • Set the backup retention period to at least 7 days to make sure a sufficient DB snapshot history is available in case the backup takes multiple days to complete.
  • Take manual DB snapshots after heavy workloads to minimize the incremental DB snapshot size captured during automated daily backups.
  • For Multi-AZ configurations under heavy workloads, perform failovers every 1–2 months to avoid extremely large snapshots after unplanned failovers.
  • After restoring from a DB snapshot, run scripts such as full-table scans to expedite volume hydration; lazy loading applies to io2 just as it does to other volume types.
  • Perform storage scaling manually during a scheduled maintenance window, and minimize the frequency of auto scaling storage operations, because optimizing a 64 TiB volume can take a long time.

By following these recommendations, you can effectively manage and optimize the performance, backup, and maintenance of your RDS for SQL Server instances with large EBS io2 Block Express volumes. We highly recommend performing thorough performance testing and benchmarking of your SQL Server workload on RDS for SQL Server DB instances with io2 Block Express volumes before using them in production. Please feel free to share your feedback in the comments section.


About the authors

Sudhir Amin is a Sr. Solutions Architect at Amazon Web Services. In his role based out of New York, he provides architectural guidance and technical assistance to enterprise customers across different industry verticals, accelerating their cloud adoption. He is a big fan of snooker, combat sports such as boxing and UFC, and loves traveling to countries with rich wildlife reserves where he gets to see the world's most majestic animals up close.

Toby Xu is a Senior Product Manager on the Amazon Web Services RDS team. Before joining AWS, he led Solution Architect teams at Oracle and Informatica. With over 16 years of experience working with database and middleware technologies, Toby possesses a wealth of knowledge in this field. He holds a Master’s degree in Software Engineering. Toby is passionate about cloud technologies and dedicated to assisting customers in achieving their goals through leveraging these innovative solutions.

Swarndeep Singh is a Senior Solutions Architect at AWS. He works with the Amazon RDS team, focusing on commercial database engines and SQL Server. He enjoys taking on technical challenges in Amazon RDS and is passionate about working with AWS customers, building customized solutions, and sharing knowledge with his teammates.