AWS Partner Network (APN) Blog

How Zen Saved 75% on Licensing By Migrating MS SQL Server Workloads to Amazon RDS

By Chris Howarth, Sr. Database Administrator at Zen
By Katherine Pugh, Propositions Manager – Cloud & Hosting at Zen
By Prasad Rao, Sr. Partner Solution Architect at AWS


Many enterprises running applications on Microsoft SQL Server feel constrained by its high licensing costs. These enterprises are looking to reduce their MS SQL Server license commitments, as well as their operational overhead and complexity.

One of Zen's ISV customers faced a similar situation. This customer provides a fully managed warehouse stock management and fulfilment system to support ecommerce logistics, predominantly for customers in Europe, but also globally.

Their warehouse management platform relied heavily on MS SQL Server workloads, their customer environments were hosted within a third-party public cloud, and their workloads spanned data centers in the United Kingdom, France, and Canada. The customer environments varied from small to very large, and they shared database resources.

In other words, multiple customer environments were running on a common set of database servers.

Unreliable infrastructure and the limited scalability options of the third-party cloud provider, plus escalating MS SQL licensing costs, led the customer to evaluate and migrate to Amazon Web Services (AWS). Their objective was to reduce licensing costs by moving 80+ databases from MS SQL Enterprise Edition to Amazon RDS for SQL Server Standard Edition.

A key challenge was how to migrate those databases while retaining the required high availability. The customer was also looking to reduce the operational overhead of managing the MS SQL workloads.

To address this challenge, the ISV partnered with Zen, an AWS Advanced Consulting Partner with the AWS Microsoft Workloads Competency. Zen is also a member of the AWS Well-Architected Partner Program and holds the AWS service validation for Amazon EC2 for Windows Server.

In this post, we'll explore how Zen helped their customer save 75 percent on licensing costs by migrating SQL Server workloads to Amazon Relational Database Service (Amazon RDS). We'll walk you through the technical challenges, solution architecture, key processes, and individual steps of the migration.

Technical Challenges During Migration

High availability was a must-have for the customer's business-critical applications. However, after migrating several of their SQL Server databases to Amazon RDS for SQL Server Standard Edition, they encountered errors when enabling Multi-Availability Zone (Multi-AZ) deployments.

One workaround was to switch to Amazon RDS for SQL Server Enterprise Edition, which would have allowed high availability in their situation. But it meant a four-fold increase in their licensing cost.

Troubleshooting by Zen revealed that the errors were caused by the In-Memory OLTP feature of SQL Server Enterprise Edition. This feature relies on memory-optimized tables and natively compiled stored procedures.

The main benefit of memory-optimized tables is that rows in the table are read from and written to memory, which results in fast, non-blocking transactions. Natively compiled stored procedures are Transact-SQL stored procedures compiled to machine code, rather than interpreted by the query execution engine.

After discussions with the customer's development team, Zen determined that this feature had been enabled in each of their 80+ databases for a historical requirement that was no longer relevant. An in-memory table had been used in an attempt to improve the performance of a custom record-locking process. The change didn't deliver the desired performance improvement, however, so the overall process was refactored. This made the in-memory objects redundant.

In Amazon RDS for SQL Server, Standard Edition implements high availability with native database mirroring, which doesn't support the file structures created by In-Memory OLTP usage. By contrast, Enterprise Edition uses native Always On Availability Groups technology, which does support the required file structures.

As explained in the AWS documentation for Multi-AZ Deployments for Microsoft SQL Server, if the database instance is running Database Mirroring (DBM)—instead of Always On Availability Groups—you may need to disable in-memory optimization before you add Multi-AZ.

In other words, because the customer's database instances ran SQL Server Enterprise Edition (2014, 2016, or 2017) with DBM and had in-memory optimization enabled, they would have to disable in-memory optimization before adding Multi-AZ. Instances running Availability Groups don't require this step.

On further investigation, Zen found a limitation of the In-Memory OLTP feature: once it's enabled and used, it changes the database's file structure, and that change cannot be reversed even if the database objects that use In-Memory OLTP are removed.
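One way to see whether a database carries this irreversible change is to look for a memory-optimized data filegroup (type 'FX') in its metadata. Below is a minimal sketch assuming the SqlServer PowerShell module; the server and database names are placeholders:

```powershell
# Check for a MEMORY_OPTIMIZED_DATA filegroup (type 'FX'); once present,
# it cannot be removed, even after all in-memory objects are dropped.
$rows = Invoke-Sqlcmd -ServerInstance 'SourceSqlServer' -Database 'CustomerDb' -Query @"
SELECT name FROM sys.filegroups WHERE type = 'FX';
"@
if ($rows) { Write-Output "In-Memory OLTP filegroup present: $($rows.name)" }
```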

The solution to this limitation was to find a way to manipulate each of the 80+ databases to allow the downgrade of their multiple database instances to Standard Edition, and to therefore allow Multi-AZ high availability to be used. Also, the solution had to be repeatable and robust so each database could be migrated inside its specific maintenance window by the customer’s team in a way that suited their own customers.

Solution Architecture

To migrate their databases to AWS in a way that avoided the preceding limitations, the customer had to perform these high-level steps in sequence for each database:

  • Create a new target database in Amazon RDS Standard Edition.
  • Copy the source database’s table structure (excluding the redundant in-memory OLTP table) to the new Amazon RDS database.
  • Copy the source database’s data to the new Amazon RDS database.
  • Copy all other objects (such as stored procedures, views, foreign key constraints, and users) from the source database to the new Amazon RDS database.
  • Perform validation to ensure a complete copy.

Each database had to be independently evaluated and migrated, as databases varied in schema and programmable objects. Though it was accepted that a few hours of outage for each database would be necessary during the migration, it was imperative to script out and automate the process as much as possible.
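The repeatable, per-database process described in the rest of this post can be sketched as a simple orchestration loop. The script and parameter names below are illustrative, not Zen's actual scripts:

```powershell
# Run each migration step in order against one database; halt on any
# failure so discrepancies cannot be silently carried forward.
$steps = 'CreateDatabase','CopySchema','ExportData','ImportData',
         'AddKeysAndIndexes','ReseedTables','SetRecoveryModel','ApplyPermissions'
foreach ($step in $steps) {
    Write-Output "Starting step: $step"
    & ".\$step.ps1" -ConfigFile '.\migration-config.xml'
    if (-not $?) { throw "Step '$step' failed; stopping this database's migration." }
}
```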

The following diagram shows the high-level architecture of the process Zen designed to migrate each database.


Figure 1 – High-level architecture of migration process.

Key Processes

Before walking you through each step in the migration, here is the summary of two of the key processes:

  • Process to copy and restore native backup data.
  • Custom migration process.

Process to Copy and Restore Native Backup Data

Production data was not allowed to leave the customer’s environment. Given the volume of data travelling between the SQL Server instances, it was considered necessary to keep the data flow within the AWS infrastructure during the migration.

To do this, the migration was performed from a dedicated Amazon Elastic Compute Cloud (Amazon EC2) instance running a self-managed installation of SQL Server.

The data restore into Amazon EC2 was performed by uploading a full database backup to Amazon Simple Storage Service (Amazon S3) in advance of the migration. Zen then took frequent transaction log backups, uploaded them to S3, and restored them onto the EC2 instance. A final transaction log backup was taken at cut-off, and the original database was taken offline to prevent further changes.
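On the EC2 staging instance, a restore chain of this kind follows the standard full-plus-log pattern. This is an illustrative sketch (paths, names, and the SqlServer PowerShell module are assumptions):

```powershell
# Restore the full backup WITH NORECOVERY, apply each transaction log
# backup in sequence, then bring the database online at cut-off.
Invoke-Sqlcmd -ServerInstance 'Ec2SqlServer' -Query @"
RESTORE DATABASE [CustomerDb]
    FROM DISK = N'D:\staging\CustomerDb_full.bak' WITH NORECOVERY;
RESTORE LOG [CustomerDb]
    FROM DISK = N'D:\staging\CustomerDb_log_001.trn' WITH NORECOVERY;
-- ...repeat for each subsequent log backup, then:
RESTORE DATABASE [CustomerDb] WITH RECOVERY;
"@
```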

Custom Migration Process

For the custom migration process, Zen developed PowerShell scripts for each step, using Redgate SQL Compare and native BCP (Bulk Copy Program).

Zen had used the Redgate tools in multiple projects, so they had proven their worth and were a natural fit for rapid development. Zen used SQL Compare to copy the tables and programmable objects to the target database, and used BCP to copy the data.

PowerShell was used to orchestrate and perform the steps and the sub-tasks within them. PowerShell is flexible, robust, and efficient; it was well-suited to this use case and is available by default on the migration Amazon EC2 instance.

Each migration step had its own separate PowerShell script so it could be easily launched and re-run, if necessary, as the migration progressed.

Zen defined an XML configuration file to contain migration-specific parameters that were common to each of the PowerShell scripts. These included source and target server and database names, user credentials, and other shared parameters.
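A shared configuration file of this kind might be read as follows. The XML layout shown in the comments is a hypothetical example, not Zen's actual schema:

```powershell
# migration-config.xml (hypothetical layout):
# <Migration>
#   <Source Server="SourceSqlServer" Database="CustomerDb" />
#   <Target Server="rds-endpoint"    Database="CustomerDb" />
# </Migration>
[xml]$config = Get-Content '.\migration-config.xml'

# PowerShell exposes XML elements and attributes via dot notation,
# so every step script can share the same parameters.
$sourceServer = $config.Migration.Source.Server
$sourceDb     = $config.Migration.Source.Database
$targetServer = $config.Migration.Target.Server
$targetDb     = $config.Migration.Target.Database
```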

Comprehensive logging, error handling, and validation were included in the PowerShell scripts to ensure complete migration and to minimize time spent on troubleshooting errors. It was critically important to ensure the data and structure were migrated in their entirety. Any discrepancies should prevent the process from continuing without corrective action being taken.
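One validation check of the kind described can be sketched as a per-table row count comparison between source and target. This is an assumed approach, not Zen's actual script; note that `sys.dm_db_partition_stats` row counts are metadata and can occasionally be inaccurate, as the team later discovered during cutover:

```powershell
# Compare per-table row counts between the source and target databases;
# any mismatch halts the migration for corrective action.
$countQuery = @"
SELECT s.name + '.' + t.name AS TableName, SUM(p.row_count) AS TableRows
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.dm_db_partition_stats p ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY s.name, t.name;
"@
$src  = Invoke-Sqlcmd -ServerInstance 'Ec2SqlServer' -Database 'CustomerDb' -Query $countQuery
$tgt  = Invoke-Sqlcmd -ServerInstance 'RdsEndpoint'  -Database 'CustomerDb' -Query $countQuery
$diff = Compare-Object $src $tgt -Property TableName, TableRows
if ($diff) { throw 'Row count mismatch between source and target; halting.' }
```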

Migration Steps

The migration was broken down into the following detailed steps, each marking a milestone in the process.

Create Database

  • Create a new database on the target server.
  • Copy all database settings from source to target.
  • Examine the source database, and set suitable data and log file sizes, and growth rates, in the target database.
  • Set the target Recovery Model to SIMPLE to minimize transaction log growth during the migration.
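A minimal sketch of this step, assuming the SqlServer PowerShell module; the database name, logical file name, and sizes are placeholders (the real process derived sizes and growth rates from the source database):

```powershell
# Create the target database, set data file size and growth to match the
# source, and use SIMPLE recovery to limit log growth during migration.
Invoke-Sqlcmd -ServerInstance 'RdsEndpoint' -Query @"
CREATE DATABASE [CustomerDb];
ALTER DATABASE [CustomerDb]
    MODIFY FILE (NAME = 'CustomerDb', SIZE = 50GB, FILEGROWTH = 1GB);
ALTER DATABASE [CustomerDb] SET RECOVERY SIMPLE;
"@
```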

Copy Schema

  • Generate a new SQL Compare configuration file based on a predefined template specific to this step.
  • Populate the configuration file with the names of the source and target servers, and databases specific to the particular migration.
  • Launch SQL Compare with the configuration file, and synchronize the source and target schemas in accordance with the template. Synchronize all objects apart from users, table indexes, foreign key constraints, and in-memory OLTP objects.
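Invoking SQL Compare's command line with a prepared argument file might look like the following; the installation path and argument file name are illustrative:

```powershell
# Launch Redgate SQL Compare with the step-specific argument file;
# a non-zero exit code indicates the synchronization did not succeed.
$sqlCompare = 'C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe'
& $sqlCompare /Argfile:'.\CopySchema-args.xml'
if ($LASTEXITCODE -ne 0) { throw "SQL Compare failed with exit code $LASTEXITCODE." }
```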

Export Data

  • Examine the source database and create an internal list of tables that have >0 data rows.
  • Loop around the list of tables and use the native BCP utility to export the entire contents of each table to its own file on the Amazon EC2 instance.
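These two sub-steps can be sketched as follows, assuming the SqlServer PowerShell module and placeholder names and paths:

```powershell
# Build the list of non-empty tables, then export each one in native
# format with bcp (-n = native format, -T = trusted connection).
$tables = Invoke-Sqlcmd -ServerInstance 'Ec2SqlServer' -Database 'CustomerDb' -Query @"
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.dm_db_partition_stats p ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)
GROUP BY s.name, t.name
HAVING SUM(p.row_count) > 0;
"@
foreach ($t in $tables) {
    $file = "D:\export\$($t.SchemaName).$($t.TableName).dat"
    & bcp "CustomerDb.$($t.SchemaName).$($t.TableName)" out $file -n -S 'Ec2SqlServer' -T
}
```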

Import Data

  • Examine the source database and create an internal list of tables with >0 data rows.
  • Loop around the list of tables and use the native BCP utility to import the data from the relevant file on the Amazon EC2 instance into the existing empty table in the target database.
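The import side is the mirror image of the export, again with placeholder names, paths, and credentials:

```powershell
# Import each exported native-format file into the matching (empty) target
# table; -E preserves identity values captured at export time.
$dbUser     = 'migration_user'      # placeholder credentials
$dbPassword = 'example-password'
foreach ($file in Get-ChildItem 'D:\export\*.dat') {
    $table = $file.BaseName          # files are named Schema.Table.dat
    & bcp "CustomerDb.$table" in $file.FullName -n -E `
        -S 'RdsEndpoint' -U $dbUser -P $dbPassword
}
```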

Add Foreign Keys and Indexes

  • Generate a new SQL Compare configuration file based on a predefined template specific to this step.
  • Populate the configuration file with the names of the source and target servers and databases specific to the particular migration.
  • Launch SQL Compare with the configuration file and synchronize the source and target schemas. Synchronize all objects apart from database users and in-memory OLTP objects; foreign key constraints and table indexes are synchronized only at this stage.

Re-Seed Tables

  • Examine the source database and create an internal list of tables that have a column with the IDENTITY property set. This includes populated and empty tables.
  • Copy the current identity value from each source table to its equivalent target.
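This re-seeding can be sketched as reading each source table's current identity value and applying it on the target with `DBCC CHECKIDENT`; server and database names are placeholders:

```powershell
# Read the current identity value for every table that has an IDENTITY
# column (empty tables included), then reseed the matching target table.
$idTables = Invoke-Sqlcmd -ServerInstance 'Ec2SqlServer' -Database 'CustomerDb' -Query @"
SELECT s.name AS SchemaName, t.name AS TableName,
       IDENT_CURRENT(s.name + '.' + t.name) AS Seed
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE OBJECTPROPERTY(t.object_id, 'TableHasIdentity') = 1;
"@
foreach ($t in $idTables) {
    Invoke-Sqlcmd -ServerInstance 'RdsEndpoint' -Database 'CustomerDb' -Query `
        "DBCC CHECKIDENT ('[$($t.SchemaName)].[$($t.TableName)]', RESEED, $($t.Seed));"
}
```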

Set Recovery Model

  • Change the recovery model of the target database to FULL to ensure that high availability can be enabled in Amazon RDS.
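In T-SQL, via the same PowerShell tooling (database name is a placeholder):

```powershell
# Multi-AZ (mirroring-based) high availability requires the FULL recovery model.
Invoke-Sqlcmd -ServerInstance 'RdsEndpoint' -Query `
    "ALTER DATABASE [CustomerDb] SET RECOVERY FULL;"
```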

Apply Permissions

Apply user permissions that have been predefined in the shared configuration file.

Final Cutover

Zen provided the scripts along with the documentation and remote training sessions to their customer, who in turn migrated all of their 80+ databases over a period of 12 weeks, fitting in with their own schedule and their customers’ schedules.

Issues were reported in migrating two of the databases. Both of these issues were raised by the validation checks and error handling that were part of the PowerShell scripts:

  1. A Service Broker activation procedure had been dropped from a source database prior to the migration, but the Service Broker components had been left behind. This was resolved by removing the redundant Service Broker objects from the source database and re-running the ‘Copy Schema’ step. After that, the migration continued.
  2. A table’s row count differed between the source and target databases following migration, which was due to a SQL Server metadata inaccuracy in the source database. This was resolved by rebuilding the table’s clustered index before re-running the ‘Import Data’ step.

After Zen resolved these issues, the customer decommissioned the on-premises SQL Servers.


Summary

Zen used their technical expertise to help their ISV customer resolve the challenges they faced in migrating their SQL Server workloads to AWS. The results of the engagement were:

  • Reduced costs – Zen optimized the database infrastructure to the point where Amazon RDS running MS SQL Standard Edition met the customer's needs better than Enterprise Edition did. The customer also freed up resources previously spent maintaining database infrastructure, further reducing costs, and saved 75 percent on Microsoft licensing alone.
  • Rapid migration – Once this approach and methodology was proven, Zen was able to provide scripts that enabled the customer to migrate the remainder of their customer base smoothly and efficiently.
  • High availability – The customer no longer suffered service outages caused by the failure of single infrastructure components. Failover was tested and proven during the course of the project.

If you are paying more for Microsoft SQL licensing than you think you should be, contact Zen for help optimizing your license commitments.


Zen – AWS Partner Spotlight

Zen is an AWS Microsoft Workloads Competency Partner and UK-based technology service provider that is focused on delivering high-quality professional and managed service cloud solutions.

Contact Zen | Partner Overview
