AWS Database Blog

Migrate a multi-TB SQL Server database to Amazon RDS Custom for SQL Server using Amazon S3 and Amazon EBS

This is the first in a two-part series of posts on how to migrate a multi-TB database to Amazon Relational Database Service (Amazon RDS) Custom for SQL Server.

Amazon RDS Custom for SQL Server is a managed database service that automates setup, operation, backups, high availability, and scaling of databases while granting access to the database and underlying operating system (OS). The database administrator can use this access to enable native features such as SQL Common Language Runtime (CLR), configure OS settings, and install drivers, in order to migrate legacy, custom, and packaged applications.

Overview

Amazon RDS Custom for SQL Server has a default data drive (D:) that allows for up to 16 TB of storage. For information on Amazon RDS Custom for SQL Server storage constraints and how to adjust storage, refer to Modifying the storage for an RDS Custom for SQL Server DB instance.

A common challenge customers face when migrating to Amazon RDS Custom for SQL Server is that the database size plus the backup files can exceed 16 TB. In this post, we show you how to successfully migrate a multi-TB database using Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Block Store (Amazon EBS).

Avoid storage overprovisioning

You can also use this approach to migrate a multi-TB database even if the sum of the database and its backup files is less than 16 TB. It prevents storage overprovisioning and lowers costs, because you can reclaim the space used by the backup files after the database is restored.

For example, if you have a database that is 10 TB and backup files that add up to 5 TB, you could provision an Amazon RDS Custom for SQL Server instance with 15 TB of storage, copy the backup files to the D: drive, and restore the database. However, that approach incurs a higher cost because you cannot reclaim the 5 TB of storage allocated for the backup files, so you keep paying for 5 TB that is no longer used. You can estimate this cost using the AWS Pricing Calculator.

The following table shows the current difference in storage cost for an Amazon RDS Custom for SQL Server in the us-east-1 Region:

RDS Custom for SQL Server storage cost (Single-AZ)
Provisioned IOPS SSD (io1), 1,000 provisioned IOPS

Storage amount    Monthly cost
10 TB             $1,277.60
15 TB             $1,866.40

Solution overview

One way to migrate a multi-TB on-premises SQL Server database to Amazon RDS Custom for SQL Server is to use Amazon S3 File Gateway and an optional EBS volume. Amazon S3 File Gateway provides an easy way to store and back up data in the cloud. After it is configured, users can access data stored in Amazon S3 from their local file system. We use an S3 bucket to store the on-premises backup files and an EBS volume as staging storage for Amazon RDS Custom for SQL Server.

Don't host database files on the additionally attached EBS volume, to prevent RDS Custom for SQL Server from going into an unsupported configuration state. To understand more about the RDS Custom support perimeter and unsupported configuration monitoring, refer to RDS Custom support perimeter and unsupported configurations.

The following is a high-level architecture diagram of the migration.

The high-level workflow is:

  1. Back up the on-premises SQL Server database directly to the S3 File Gateway file share.
  2. Add an optional EBS volume to Amazon RDS Custom for SQL Server.
  3. Download the backup files from S3 File Gateway to the EBS volume.
  4. Restore the backup file on Amazon RDS Custom for SQL Server.
  5. Remove the EBS volume.

Prerequisites

We assume that you have the following prerequisites:

  • An on-premises SQL Server database to migrate.
  • An Amazon S3 File Gateway with a file share backed by an S3 bucket.
  • An Amazon RDS Custom for SQL Server instance.
  • The AWS CLI installed where you plan to run the commands in this post.

Because this solution involves setting up and using AWS resources, it will incur costs in your account. Refer to AWS Pricing for more information. We strongly recommend that you set this up in a non-production instance and run end-to-end validations before you implement this solution in a production environment.

Back up your on-premises SQL Server database to the S3 File Gateway file share

We back up our very large on-premises database to multiple backup files for better performance and to keep each file under 5 TB, the maximum S3 object size.

In our example, we back up to the following files (a sample backup command follows the list):

  • SampleTest_FullBackupCompressed01.bak
  • SampleTest_FullBackupCompressed02.bak
  • SampleTest_FullBackupCompressed03.bak
  • SampleTest_FullBackupCompressed04.bak
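The following is a minimal sketch of the striped, compressed backup command we used; \\filegateway\share is a hypothetical UNC path, so replace it with the path of your S3 File Gateway file share:

USE [master]
BACKUP DATABASE [SampleTest] TO
     DISK = N'\\filegateway\share\SampleTest_FullBackupCompressed01.bak',
     DISK = N'\\filegateway\share\SampleTest_FullBackupCompressed02.bak',
     DISK = N'\\filegateway\share\SampleTest_FullBackupCompressed03.bak',
     DISK = N'\\filegateway\share\SampleTest_FullBackupCompressed04.bak'
WITH COMPRESSION, -- produces the compressed files listed above
     STATS = 5    -- report progress every 5 percent
GO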

Add an optional EBS volume to Amazon RDS Custom for SQL Server

Create a storage volume large enough to hold the backup files and make sure it’s in the same Availability Zone (AZ) as the RDS Custom for SQL Server database.

If your RDS Custom for SQL Server is deployed in a Single-AZ configuration, create the EBS volume in the same Availability Zone as the RDS Custom instance.

If your RDS Custom for SQL Server is deployed in a Multi-AZ configuration, follow these steps to determine which Availability Zone the instance is running in (the same steps also work for Single-AZ deployments):

  1. Retrieve the RDS Custom for SQL Server endpoint using the AWS CLI. The endpoint is returned under "Endpoint" / "Address":
aws rds describe-db-instances --db-instance-identifier <INSTANCE_NAME> --region <REGION>

In our example:

aws rds describe-db-instances --db-instance-identifier rdscustommultitb --region us-east-1

The endpoint can also be found on the Amazon RDS console, on the "Connectivity & security" tab, after selecting your database instance.
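You can also extract just the endpoint address with the --query option of the AWS CLI. For example:

aws rds describe-db-instances --db-instance-identifier rdscustommultitb --region us-east-1 --query "DBInstances[0].Endpoint.Address" --output text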

  2. Retrieve the IP address for the RDS Custom for SQL Server. From a machine that has network access to the RDS Custom for SQL Server (you might need a bastion host if the instance was created in a private subnet, as best practices suggest), run the following command:
nslookup <RDSEndpoint>

In our example:

nslookup rdscustommultitb.<address>.us-east-1.rds.amazonaws.com

  3. Compare the retrieved IP address with the addresses of the two EC2 instances created for the RDS Custom for SQL Server deployment to determine which one is active, and therefore which Availability Zone it is running in. This post assumes that the RDS Custom for SQL Server database was created in the us-east-1a Availability Zone.
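Alternatively, instead of comparing addresses manually, you can look up the active instance and its Availability Zone directly from the IP address that nslookup returned. The following is a sketch; replace <IPAddress> with the retrieved address:

aws ec2 describe-instances --filters "Name=private-ip-address,Values=<IPAddress>" --query "Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]" --output text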

You can use the AWS Management Console or the AWS CLI to create the EBS volume. For step-by-step instructions, see Create an empty volume.

  1. On the Amazon EBS console, create your volume. For this post, we added the name tag rdscustombackupstorage to our EBS volume.

To create an EBS volume using the AWS CLI, issue a command in PowerShell similar to the following:

aws ec2 create-volume --volume-type gp3 --size 6000 --iops 3000 --throughput 125 --availability-zone us-east-1a --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=rdscustombackupstorage}]'

When the storage volume is available, you need to attach it to the RDS Custom for SQL Server instance.
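If you are scripting these steps, you can block until the volume becomes available before attaching it. For example:

aws ec2 wait volume-available --volume-ids vol-<volumeid>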

  2. Select the EBS volume that you created, and on the Actions menu, choose Attach volume.

  3. Choose the RDS Custom for SQL Server instance.
  4. Choose Attach volume.

To attach an EBS volume to the instance using the AWS CLI, issue a command in PowerShell similar to the following:

aws ec2 attach-volume --volume-id vol-<volumeid> --instance-id i-<instanceid> --device /dev/sdf
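You can likewise wait until the attachment completes before proceeding. For example:

aws ec2 wait volume-in-use --volume-ids vol-<volumeid>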

After you attach an EBS volume to your instance, it is exposed as a block device, and appears as a removable disk in Windows.

  5. You can format the volume with any file system and then mount it.

Because the EBS volume is greater than 2 TiB, you must use a GPT partitioning scheme to access the entire volume. Refer to About partition styles – GPT and MBR for information concerning GPT and MBR partitions.

  6. To make the EBS volume available to use, you need to RDP into the RDS Custom database instance. Refer to Connecting to your RDS Custom DB instance using RDP for detailed steps.
  7. After you RDP into your RDS Custom for SQL Server instance, you can bring the disk online and initialize it by using PowerShell, the DiskPart command line tool, or the Disk Management utility. For more information, refer to Make an Amazon EBS volume available for use on Windows.

In our example, we use the Disk Management utility.

After the disk is initialized, you must create a volume and assign a drive letter to it.

  1. Right-click on the unallocated space and choose New Simple Volume.

  2. Choose Next on the welcome page.
  3. Define the new volume size, then choose Next.

  4. Assign a drive letter, then choose Next.

  5. Format the partition, then choose Next.

  6. Review your settings and choose Finish.

Make sure the E: drive is available, either through File Explorer or through the Disk Management utility.

To make an EBS volume with a raw partition available for use with Windows using PowerShell, issue commands similar to the following:

# Stop the Shell Hardware Detection service to suppress the format-disk pop-up
Stop-Service -Name ShellHWDetection

# Initialize every raw disk with a GPT partition table, create a partition that
# uses all available space, assign the next free drive letter, and format it as NTFS
Get-Disk | Where PartitionStyle -eq 'raw' | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "E" -Confirm:$false

# Restart the Shell Hardware Detection service
Start-Service -Name ShellHWDetection
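To confirm from PowerShell that the volume was created and formatted (assuming the "E" label used above), you can run:

Get-Volume | Where-Object FileSystemLabel -eq 'E'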

Download the backup files from S3 File Gateway to the EBS volume

When the disk is online and available to use, install the AWS CLI and transfer the backup files from the S3 bucket to the new EBS volume. Refer to Migrate on-premises SQL Server to Amazon RDS Custom for SQL Server using native backup and restore and Amazon S3 for the permissions needed to download backup files from Amazon S3 to Amazon RDS Custom for SQL Server.

Run the following commands on the target RDS Custom for SQL Server instance to download all the backup files to the Backup folder created on the E: drive:

aws s3 cp s3://multitbrdscustom/SampleTest_FullBackupCompressed01.bak E:\BACKUP 
aws s3 cp s3://multitbrdscustom/SampleTest_FullBackupCompressed02.bak E:\BACKUP 
aws s3 cp s3://multitbrdscustom/SampleTest_FullBackupCompressed03.bak E:\BACKUP 
aws s3 cp s3://multitbrdscustom/SampleTest_FullBackupCompressed04.bak E:\BACKUP
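Alternatively, assuming the bucket holds only these backup objects, a single sync command downloads all four files at once:

aws s3 sync s3://multitbrdscustom/ E:\BACKUP --exclude "*" --include "SampleTest_FullBackupCompressed*.bak"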

Restore the backup file on Amazon RDS Custom for SQL Server

After you download all the backup files, use the following native SQL Server command to restore the database, pointing to the backup file location on the EBS volume you added (in our case, E:\Backup). The database files must reside in D:\rdsdbdata\DATA.

USE [master]
RESTORE DATABASE [SampleTest] FROM 
     DISK = N'E:\Backup\SampleTest_FullBackupCompressed01.bak', 
     DISK = N'E:\Backup\SampleTest_FullBackupCompressed02.bak', 
     DISK = N'E:\Backup\SampleTest_FullBackupCompressed03.bak', 
     DISK = N'E:\Backup\SampleTest_FullBackupCompressed04.bak' 
WITH FILE = 1, 
     MOVE N'SampleTest_Data001' TO N'D:\rdsdbdata\DATA\SampleTest_Data001.mdf', 
     MOVE N'SampleTest_Data002' TO N'D:\rdsdbdata\DATA\SampleTest_Data002.ndf',  
     MOVE N'SampleTest_Data003' TO N'D:\rdsdbdata\DATA\SampleTest_Data003.ndf', 
     MOVE N'SampleTest_Data004' TO N'D:\rdsdbdata\DATA\SampleTest_Data004.ndf', 
     MOVE N'SampleTest_Data005' TO N'D:\rdsdbdata\DATA\SampleTest_Data005.ndf', 
     MOVE N'SampleTest_Data006' TO N'D:\rdsdbdata\DATA\SampleTest_Data006.ndf',  
     MOVE N'SampleTest_Data007' TO N'D:\rdsdbdata\DATA\SampleTest_Data007.ndf', 
     MOVE N'SampleTest_Data008' TO N'D:\rdsdbdata\DATA\SampleTest_Data008.ndf', 
     MOVE N'SampleTest_Data009' TO N'D:\rdsdbdata\DATA\SampleTest_Data009.ndf', 
     MOVE N'SampleTest_Data010' TO N'D:\rdsdbdata\DATA\SampleTest_Data010.ndf', 
     MOVE N'SampleTest_Data011' TO N'D:\rdsdbdata\DATA\SampleTest_Data011.ndf',  
     MOVE N'SampleTest_Data012' TO N'D:\rdsdbdata\DATA\SampleTest_Data012.ndf',  
     MOVE N'SampleTest_Data013' TO N'D:\rdsdbdata\DATA\SampleTest_Data013.ndf', 
     MOVE N'SampleTest_Data014' TO N'D:\rdsdbdata\DATA\SampleTest_Data014.ndf', 
     MOVE N'SampleTest_Data015' TO N'D:\rdsdbdata\DATA\SampleTest_Data015.ndf', 
     MOVE N'SampleTest_Data016' TO N'D:\rdsdbdata\DATA\SampleTest_Data016.ndf', 
     MOVE N'SampleTest_Log01' TO N'D:\rdsdbdata\DATA\SampleTest_Log01.ldf',  
     NOUNLOAD, 
     STATS = 5
GO

Furthermore, using advanced backup options such as BLOCKSIZE, MAXTRANSFERSIZE, and BUFFERCOUNT can considerably boost backup and restore throughput.
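The following sketch shows where these options go on the BACKUP command; the values are illustrative starting points, not recommendations, so test them against your own workload (the UNC path is again a placeholder):

BACKUP DATABASE [SampleTest] TO
     DISK = N'\\filegateway\share\SampleTest_FullBackupCompressed01.bak'
WITH COMPRESSION,
     BLOCKSIZE = 65536,         -- physical block size in bytes
     MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer request
     BUFFERCOUNT = 50,          -- number of I/O buffers
     STATS = 5
GO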

Remove the EBS volume

After the database is restored, if you don't intend to use the volume anymore, first unmount and detach it from the EC2 instance, and then delete it to avoid additional costs.

To unmount the volume:

  1. Start the Disk Management utility.
    • (Windows Server 2012 and later) On the taskbar, right-click the Windows logo and choose Disk Management.
    • (Windows Server 2008) Choose Start, Administrative Tools, Computer Management, Disk Management.
  2. Right-click the disk and then choose Offline. Wait for the disk status to change to Offline before opening the Amazon EC2 console.
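If you prefer PowerShell, you can take the disk offline with Set-Disk. This sketch assumes the EBS volume is disk number 1; confirm the number with Get-Disk first:

# List disks and note the number of the attached EBS volume
Get-Disk

# Take that disk offline (replace 1 with your disk number)
Set-Disk -Number 1 -IsOffline $true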

Detach the volume

You can use the AWS Management Console to detach the volume. Select the volume, choose Actions, and then choose Detach volume.

To detach an EBS volume using the AWS CLI, issue a command similar to the following in PowerShell:

aws ec2 detach-volume --volume-id vol-<volumeid>
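Before deleting the volume, you can confirm that the detachment has finished. For example:

aws ec2 wait volume-available --volume-ids vol-<volumeid>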

Delete the volume

You can use the AWS Management Console to delete the volume. Select the volume, choose Actions, and then choose Delete volume.

To delete an EBS volume using the AWS CLI, issue a command similar to the following in PowerShell:

aws ec2 delete-volume --volume-id vol-<volumeid>

Summary

In this post, we demonstrated how to successfully migrate a database when the total size of the database and its backup files exceeds 16 TB. This method also lets you avoid overprovisioning storage for Amazon RDS Custom for SQL Server while transferring very large databases, thereby reducing costs.

In part 2 of this series, we'll show you how to migrate a multi-TB SQL Server database to Amazon RDS Custom for SQL Server using Amazon FSx for Windows File Server to store your backup files.


About the Authors

Jose Amado-Blanco is a Sr. Consultant on Database Migration with over 25 years of experience working with AWS Professional Services. He helps customers on their journey to migrate and modernize their database solutions from on-premises to AWS.

Priya Nair is a Database Consultant at AWS. She has over 18 years of experience working with different database technologies. She works as a database migration specialist, helping Amazon customers move their on-premises database environments to AWS Cloud database solutions.

Suprith Krishnappa C is a Database Consultant with the Professional Services team at Amazon Web Services. He works with enterprise customers, offering technical support and designing customer solutions on database projects, as well as assisting them in migrating and modernizing their existing databases to the AWS cloud.