AWS Database Blog

Use native SQL Server log shipping and PowerShell scripts to synchronize data to Amazon RDS for SQL Server

July 2023: This post was reviewed for accuracy.

Enterprises are migrating their Microsoft SQL Server workloads to AWS fully managed database services like Amazon Relational Database Service (Amazon RDS) for SQL Server, which makes it easy to set up, operate, and scale SQL Server deployments in the cloud. A few native Microsoft SQL Server tools are available if you want to migrate your SQL Server databases from on-premises or Amazon Elastic Compute Cloud (Amazon EC2) SQL Server to Amazon RDS for SQL Server. SQL Server native backup and restore is one of the easiest. If your migration requires a low Recovery Time Objective (RTO) and Recovery Point Objective (RPO), you can use SQL Server replication with Amazon RDS for SQL Server as a subscriber. If you want to migrate the SQL Server databases along with their users and minimize the outage window during cutover, consider using log shipping.

In this post, we walk you through the process of migrating your SQL Server databases to Amazon RDS for SQL Server using native log shipping and PowerShell scripts.

Solution overview

This solution involves running custom scripts and setting up log shipping to send transaction log backups from your on-premises or Amazon EC2 SQL Server primary instance to an Amazon RDS for SQL Server standby instance. The transaction log backups are then applied to the databases on the standby instance. After that, you can perform the cutover and redirect application connections to the new databases.

This solution can be deployed on any SQL Server version that is supported by Amazon RDS for SQL Server.

The following diagram illustrates our solution architecture.

To set up log synchronization between the primary instance and the standby instance, you must complete the following steps:

  1. Set up log shipping on your on-premises or Amazon EC2 SQL Server instance.
  2. Configure the Amazon RDS for SQL Server instance for native backup and restore.
  3. Set up a restore job on the Amazon RDS for SQL Server instance.

Prerequisites

Before you get started, make sure you have the following prerequisites:

The cost components associated with deploying this solution include the Amazon RDS for SQL Server instance, database storage, an Amazon S3 bucket, AWS Secrets Manager, and AWS Storage Gateway. You can estimate the cost using the AWS Pricing Calculator.

Set up log shipping

First, connect to your on-premises or Amazon EC2 SQL Server primary instance. Then use SQL Server Management Studio (SSMS) to check the recovery model of every database that you want to include in log shipping.
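
If you have many databases, you can also check the recovery model from PowerShell. The following is a minimal sketch, assuming the SqlServer module is installed and Windows authentication against the primary instance (EC2AMA,1433 is the sample server name used later in the config file):

    # List the recovery model of every user database on the primary instance
    Import-Module SqlServer
    Invoke-Sqlcmd -ServerInstance 'EC2AMA,1433' `
        -Query 'SELECT name, recovery_model_desc FROM sys.databases WHERE database_id > 4;' |
        Format-Table -AutoSize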

  1. Make sure the database is set to full recovery model.
    USE [master]
    GO
    ALTER DATABASE [mydatabase] SET RECOVERY FULL WITH NO_WAIT
    GO
  2. Download the scripts for this solution from the GitHub repository and copy them to a local disk location in the primary SQL Server.
    The repo contains the following files:

    • ConfigFile.txt
    • Logshipping_Full_Backup.ps1
    • Logshipping_Restore.ps1
    • Full_Backup_Job.sql
    • Restore_Job.sql
  3. Create the folder C:\LogshippingScripts\ and save all these files with the same names in that location.
    Note: If you change the file names or file locations, you must also update the file locations in the PowerShell and SQL scripts.
  4. Provide input parameters in the config file.
    The full backup and restore method uses PowerShell scripts, which use the input parameters from the config file. Open ConfigFile.txt and update the details of your environment as follows and save it.

    • primarySqlServer – The source SQL Server name with its port number.
    • primaryServerBackupFolder – The Storage Gateway shared drive location for primary backup files.
    • secondarySqlServer – The RDS for SQL Server endpoint with port number.
    • databases – The databases for this configuration with comma-separated values.
    • s3BucketName – The S3 bucket that has the backup files.
    • s3BackupFolder – The S3 folder that stores the backups.
    • awsSecretName – The Secrets Manager secret name that has the account information to connect to Amazon RDS.
    • awsRegion – The AWS Region of the secrets.
    • restoreJobLogfile – The log file location to save the restore job run details.
    • backupLogFile – The log file location to save the backup job run details.
    • timeoutinSeconds – The timeout interval in seconds. If the restore doesn’t complete within this interval, the scripts time out. You can check the restore status in the logs, and the task status in Amazon RDS. The default value is 600. If you’re restoring larger files, set this to a higher value.
    • verifyIntervalSeconds – The interval in seconds at which the script verifies the status of each task, up to the timeout. The default value is 30.

    The following is a sample config file:

    {
        "primarySqlServer":"EC2AMA,1433",
        "primaryServerBackupFolder":"X:\LogShippingBackup\",
        "secondarySqlServer":"test.cbfnzqwozz4n.us-east-1.rds.amazonaws.com,1433",
        "databases": ["LogshipTest_1","LogshipTest_2"],
        "s3BucketName": "migratesqltest",
        "s3BackupFolder": "DB_Logship",
        "awsSecretName": "LogshipCred",
        "awsRegion": "us-east-2",
        "restoreJobLogfile": "C:\Restore\Logship_Log",
        "backupLogFile": "C:\Backup\Logship_Log",
        "timeoutinSeconds": 600,
        "verifyIntervalSeconds": 30
    }

    If the files are saved in C:\LogshippingScripts\ with the same names, you can skip the next two steps. Otherwise, update the file locations as shown.

  5. To update Logshipping_Full_Backup.ps1, edit the PowerShell script and provide the config file location that you used earlier (line 4) and save the file.
  6. Open Full_Backup_Job.sql and provide the full backup PowerShell file location (line 11) and save the file.

    You are now ready to create your full backup job. For this post, we take a full backup of the database to Amazon S3 using Storage Gateway. For more information, refer to Store SQL Server backups in Amazon S3 using AWS Storage Gateway. To automate the full backup of all databases using PowerShell, you run the Full_Backup_Job.sql script on your source SQL Server instance as in the following step.
  7. Connect to the primary SQL server instance in SSMS, open a new query window, and run the Full_Backup_Job.sql file.
    The script creates a full backup job called Logshipping_FullBackup_Primary.
  8. Run the job to create a full backup for all the databases provided in the config file.
  9. Right-click the database in SSMS, open the database properties, and on the Transaction Log Shipping page, make sure Enable this as a primary database in a log shipping configuration is selected.
  10. Configure your backup settings: enter the Storage Gateway file share path as the network path to the backup folder, along with the local path to the folder. For this post, testlogshipping is the S3 bucket used by Storage Gateway and LogShippingBackup is the folder within the S3 bucket that stores the log files.
  11. Set the interval of the SQL Server log backup schedule. For this post, we set it to 15 minutes.
  12. Choose OK to save the log shipping backup configuration settings.
  13. Repeat steps 9-12 to enable log shipping for all user databases that you want to restore to Amazon RDS for SQL Server.
  14. Verify that the SQL Server jobs are created for log shipping by checking under SQL Server Agent jobs in SSMS.
  15. Open the folder location for the Storage Gateway file share and verify the files are created successfully.
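
You can also verify the backups from PowerShell. The following sketch lists the most recent log backup files on the Storage Gateway file share and confirms they have landed in the S3 bucket. It assumes the file share is mounted at X:\LogShippingBackup\ and uses the testlogshipping bucket and LogShippingBackup folder from this walkthrough, with the AWS.Tools.S3 module and AWS credentials available:

    # Newest transaction log backups on the Storage Gateway file share
    Get-ChildItem -Path 'X:\LogShippingBackup\' -Filter '*.trn' |
        Sort-Object LastWriteTime -Descending |
        Select-Object -First 10 Name, LastWriteTime, Length

    # Confirm the same files are in the S3 bucket behind the file share
    Import-Module AWS.Tools.S3
    Get-S3Object -BucketName 'testlogshipping' -KeyPrefix 'LogShippingBackup/' |
        Sort-Object LastModified -Descending |
        Select-Object -First 10 Key, LastModified, Size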

Configure Amazon RDS for SQL Server for native backup and restore

To perform native restores on Amazon RDS, you must add the SQLSERVER_BACKUP_RESTORE option to an option group and associate that option group with the RDS DB instance.

  1. On the Amazon RDS console, choose Option Groups in the navigation pane.
  2. Choose Create group.
  3. Enter the name, description, engine, and engine version of your server.
  4. Choose Create.
  5. Select the option group that you created, then choose Add Option.
  6. Choose SQLSERVER_BACKUP_RESTORE.
  7. Associate the option group with the DB instance by choosing Databases in the navigation pane, then choose the instance to back up.
  8. Choose Actions, then choose Modify.
  9. Under Database Options, choose the option group that you created, then choose Apply Immediately and Continue.
  10. Review the information, then choose Modify DB Instance.

This option group modification has no downtime because it doesn’t require an instance reboot.
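
If you prefer to script these console steps, the following is a minimal sketch using the AWS Tools for PowerShell. The option group name, engine edition and version (sqlserver-se 15.00), DB instance identifier, and IAM role ARN are placeholders for illustration; substitute the values for your environment:

    Import-Module AWS.Tools.RDS

    # Create an option group for the target engine and version
    New-RDSOptionGroup -OptionGroupName 'logship-backup-restore' `
        -EngineName 'sqlserver-se' -MajorEngineVersion '15.00' `
        -OptionGroupDescription 'Native backup and restore for log shipping'

    # Add SQLSERVER_BACKUP_RESTORE, pointing at an IAM role that can access the backup bucket
    $setting = New-Object Amazon.RDS.Model.OptionSetting
    $setting.Name  = 'IAM_ROLE_ARN'
    $setting.Value = 'arn:aws:iam::123456789012:role/rds-backup-restore-role'   # placeholder ARN

    $option = New-Object Amazon.RDS.Model.OptionConfiguration
    $option.OptionName = 'SQLSERVER_BACKUP_RESTORE'
    $option.OptionSettings = New-Object 'System.Collections.Generic.List[Amazon.RDS.Model.OptionSetting]'
    $option.OptionSettings.Add($setting)

    Edit-RDSOptionGroup -OptionGroupName 'logship-backup-restore' `
        -OptionsToInclude $option -ApplyImmediately $true

    # Associate the option group with the RDS for SQL Server instance
    Edit-RDSDBInstance -DBInstanceIdentifier 'test' `
        -OptionGroupName 'logship-backup-restore' -ApplyImmediately $true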

Set up an Amazon RDS for SQL Server restore job

So far, you have configured log shipping to take continuous transaction log backups to Amazon S3, and the Amazon RDS instance is configured for native backup and restore. Now you can set up the restore process to initialize the databases and apply log backups continuously.

The restore method uses a PowerShell script on the primary instance to verify whether the database already exists on the secondary DB instance. If it is not present, the script scans through the backup files in Amazon S3 and restores the full backup. It then scans the backup history system views in the primary database to identify the transaction log backup files that must be restored, ordered by log sequence number and backup time. Finally, it checks whether the files are available in Amazon S3 and restores them to the database on the Amazon RDS instance with NORECOVERY.
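
The GitHub scripts implement this logic for you; the following simplified sketch only shows the underlying Amazon RDS stored procedure calls that the restore relies on, using the sample names from the config file. It assumes the SqlServer module and a SQL login on the RDS instance (in the scripts, the credential comes from Secrets Manager):

    $rds           = 'test.cbfnzqwozz4n.us-east-1.rds.amazonaws.com,1433'
    $rdsCredential = Get-Credential   # the scripts build this from the Secrets Manager secret

    # Initial run: seed the standby database from the full backup in Amazon S3
    Invoke-Sqlcmd -ServerInstance $rds -Credential $rdsCredential -Database msdb -Query "
        exec msdb.dbo.rds_restore_database
            @restore_db_name = 'LogshipTest_1',
            @s3_arn_to_restore_from = 'arn:aws:s3:::migratesqltest/DB_Logship/LogshipTest_1.bak',
            @with_norecovery = 1;"

    # Subsequent runs: apply each transaction log backup with NORECOVERY, oldest first
    Invoke-Sqlcmd -ServerInstance $rds -Credential $rdsCredential -Database msdb -Query "
        exec msdb.dbo.rds_restore_log
            @restore_db_name = 'LogshipTest_1',
            @s3_arn_to_restore_from = 'arn:aws:s3:::migratesqltest/DB_Logship/LogshipTest_1_xxxxxx.trn',
            @with_norecovery = 1;"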

To set up the restore job, you must have the following prerequisites:

  • Network connectivity established between the source instances and the AWS environment.
  • A login with minimum database restore permissions in Amazon RDS for SQL Server and credentials stored in Secrets Manager (see the sketch after this list). For more information, refer to Create and manage secrets with AWS Secrets Manager.
  • An IAM role with permissions to access the S3 bucket where the backups are stored and to retrieve the credentials of the secret stored in Secrets Manager. For a sample policy statement, see Bucket policy examples.
    • The role must be attached to the EC2 instance by modifying the instance. For your on-premises SQL Server, you can associate an IAM service role using AWS Systems Manager.
  • Ensure the account that you use to run the SQL Server Agent job has permissions to assume the IAM role configured on the on-premises or Amazon EC2 SQL Server instance.
  • The same account should also have read/write access to the backup folder and the log file location.
  • The full backup needs to be created with the database name. For example, if the database name is TestLogshipping, the full backup needs to be TestLogshipping.Bak.
  • The transactional log backup file should be prefixed with the database name. For example, if the database name is TestLogshipping, the transaction log backup file should be TestLogshipping_xxxxxx.trn.
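
The following minimal sketch shows how the restore script can retrieve that login from Secrets Manager, assuming the AWS.Tools.SecretsManager module and a secret named LogshipCred (from the sample config) whose SecretString contains username and password keys:

    Import-Module AWS.Tools.SecretsManager

    # Retrieve the SQL login stored in Secrets Manager (names from the sample config)
    $secret = (Get-SECSecretValue -SecretId 'LogshipCred' -Region 'us-east-2').SecretString |
        ConvertFrom-Json

    # Build a credential object that Invoke-Sqlcmd can use against the RDS endpoint
    $password      = ConvertTo-SecureString $secret.password -AsPlainText -Force
    $rdsCredential = New-Object System.Management.Automation.PSCredential ($secret.username, $password)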

Perform the following steps to set up the restore job for the secondary (Amazon RDS for SQL Server) instance:

  1. If the scripts are not saved in the C:\LogshippingScripts\ location, update the file location details in the restore scripts.
  2. Update the Logshipping_Restore.ps1 script with the config file location (line 4).
  3. Update the Restore_Job.sql script with the Logshipping_Restore.ps1 file location (line 11).
  4. To create the restore job, connect to the primary SQL Server instance, open a new query window, and run the Restore_Job.sql file.

The script creates a restore job called Logshipping_RDS_Secondary_Restore, which is configured to run every 15 minutes. The job runs the PowerShell script Logshipping_Restore.ps1 with the input parameters from the config file. In the initial run, it restores the full backup with NORECOVERY to the RDS instance for the databases listed in the config file, and then it applies successive log backups every 15 minutes. You can edit the schedule under the SQL Server Agent job properties and run it at intervals that best suit your RTO and RPO requirements.
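
For example, the following sketch changes the restore job to run every 5 minutes by updating its SQL Server Agent schedule on the primary instance. The schedule name created by Restore_Job.sql isn't assumed; it is looked up from the job name, so adjust the names if yours differ:

    # Run the restore job every 5 minutes instead of 15 (run on the primary instance)
    Invoke-Sqlcmd -ServerInstance 'EC2AMA,1433' -Database msdb -Query "
        DECLARE @schedule_id INT;
        SELECT @schedule_id = js.schedule_id
        FROM msdb.dbo.sysjobs AS j
        JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
        WHERE j.name = N'Logshipping_RDS_Secondary_Restore';

        EXEC msdb.dbo.sp_update_schedule
            @schedule_id = @schedule_id,
            @freq_subday_type = 4,       -- units of minutes
            @freq_subday_interval = 5;   -- every 5 minutes"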

Validation and troubleshooting

The job generates a log file in the location restoreJobLogfile, which is specified in the config file. The log file has details about the restore, any runtime errors, and references that you can check in the Amazon RDS instance.

For any further troubleshooting on the restore, run the following command on the RDS instance:

exec msdb.dbo.rds_task_status @db_name='DatabaseName'

The output includes details of the restore tasks for a specific database.
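
The timeoutinSeconds and verifyIntervalSeconds settings in the config file drive a polling loop similar to the following simplified sketch, which checks the most recent task for a database until it succeeds, fails, or times out (the endpoint, credential, and database name are the sample values used earlier):

    $rds             = 'test.cbfnzqwozz4n.us-east-1.rds.amazonaws.com,1433'
    $rdsCredential   = Get-Credential   # the scripts build this from the Secrets Manager secret
    $timeoutSeconds  = 600
    $intervalSeconds = 30
    $elapsed         = 0

    while ($elapsed -lt $timeoutSeconds) {
        # Most recent task for the database
        $task = Invoke-Sqlcmd -ServerInstance $rds -Credential $rdsCredential -Database msdb `
            -Query "exec msdb.dbo.rds_task_status @db_name = 'LogshipTest_1';" |
            Sort-Object task_id -Descending | Select-Object -First 1

        if ($task.lifecycle -in 'SUCCESS', 'ERROR') {
            Write-Output "Task $($task.task_id) finished with status $($task.lifecycle)"
            break
        }
        Start-Sleep -Seconds $intervalSeconds
        $elapsed += $intervalSeconds
    }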

Cutover

To complete the cutover, follow these steps:

  1. Find an approved outage window for the application and stop the application.
  2. Open SSMS and connect to the primary SQL Server instance.
  3. Wait for the last transaction log backup to complete (or run it manually on the primary SQL Server instance), and then disable the log shipping backup job.
  4. Open SSMS and connect to the secondary SQL Server instance.
  5. Open a new query window and run the following commands on the secondary SQL Server instance, after replacing the database name and the name of the last transaction log backup file:
    exec msdb.dbo.rds_restore_log
    @restore_db_name='TestLogShipping_1',
    @s3_arn_to_restore_from='arn:aws:s3:::testlogshipping/LogShippingBackup/TestLogShipping_xxxx.trn',
    @with_norecovery=0

    exec msdb.dbo.rds_restore_log
    @restore_db_name='TestLogShipping_2',
    @s3_arn_to_restore_from='arn:aws:s3:::testlogshipping/LogShippingBackup/TestLogShipping_xxxx.trn',
    @with_norecovery=0
  6. Check the status again using the same stored procedure:
    exec msdb.dbo.rds_task_status @db_name='DatabaseName'
  7. Run the following command to finalize the restore and bring the databases online:
    exec msdb.dbo.rds_finish_restore @db_name='database_name';
  8. Transfer the logins and passwords from the primary SQL Server instance to the secondary SQL Server instance.
  9. Enable Multi-AZ for your production workload to meet your high availability requirements.
  10. Start your application after updating the connection string in your application with the RDS for SQL Server instance endpoint.

If you aren't ready to recover the database yet, you can apply the last transaction log backup using the script from step 5 with the @with_norecovery=1 parameter, which keeps the database in a restoring state until you finish the restore as shown in step 7.
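
If you are cutting over several databases, you can script steps 5-7 in the same way. The following sketch applies the last transaction log backup with recovery for each database in this walkthrough; the xxxx placeholder stands for the actual last log backup name, and the credential is the SQL login described in the prerequisites:

    $rds           = 'test.cbfnzqwozz4n.us-east-1.rds.amazonaws.com,1433'
    $rdsCredential = Get-Credential   # or build it from the Secrets Manager secret
    $databases     = 'TestLogShipping_1', 'TestLogShipping_2'

    foreach ($db in $databases) {
        # Replace xxxx with the name of the last transaction log backup for this database
        Invoke-Sqlcmd -ServerInstance $rds -Credential $rdsCredential -Database msdb -Query "
            exec msdb.dbo.rds_restore_log
                @restore_db_name = '$db',
                @s3_arn_to_restore_from = 'arn:aws:s3:::testlogshipping/LogShippingBackup/${db}_xxxx.trn',
                @with_norecovery = 0;"
    }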

Cleanup

To remove all the AWS resources created while testing this solution and avoid future charges, complete the following steps:

  1. Sign in to the AWS Management Console.
  2. Choose the Region where your EC2 instance and RDS for SQL Server instances reside.
  3. On the Amazon RDS console, choose Databases.
  4. Select the RDS for SQL Server instances.
  5. On the Actions menu, choose Delete.
  6. On the Amazon EC2 console, choose Instances.
  7. Select the EC2 instance.
  8. On the Instance state menu, choose Terminate instance.
  9. On the Amazon S3 console, select the bucket.
  10. On the menu, choose Delete.
  11. On the AWS Storage Gateway console, delete the file share and the storage gateway.

If you deployed this solution on existing primary and secondary SQL Server instances and want to remove only the custom log shipping, disable the log shipping jobs on the primary SQL Server instance and then delete the backup files from the S3 bucket.

Summary

In this post, you learned how to deploy and configure native SQL Server log shipping to synchronize data from on-premises or Amazon EC2 SQL Server to Amazon RDS for SQL Server.

In this solution, Storage Gateway is used as a backup destination for the primary SQL Server instance. With Storage Gateway, you can take advantage of the virtually unlimited, cost-effective, and durable storage that Amazon S3 provides, making it available to SQL Server for backups, without having to change your backup and restore workflows. Finally, a PowerShell script is used to automate your log shipping and restore SQL Server logs from Amazon S3.

To learn more about using a file gateway for SQL Server backups, check out the webinar Storage Gateway Deep Dive – New SMB Support & SQL Server Backup Demo (24:07–32:42).

To learn more about the solution in this post, see Migrating SQL Server to Amazon RDS using native backup and restore.

To learn more about automating your SQL Server migration to Amazon RDS for SQL Server using custom log shipping with T-SQL, check out Automate on-premises or Amazon EC2 SQL Server to Amazon RDS for SQL Server migration using custom log shipping.


About the Authors

Yogi Barot is a Microsoft Specialist Senior Solutions Architect at AWS. She has 22 years of experience working with different Microsoft technologies, and her specialty is SQL Server and other database technologies. Yogi has in-depth AWS knowledge and expertise in running Microsoft workloads on AWS.

Gopakumar Gopalakrishna Pillai is a Database Consultant at AWS. He has 15 years of experience working with different database technologies. He helps customers in their cloud adoption journey by providing optimal database solutions in the cloud.