AWS Database Blog

Migrate from SQL Server to Amazon DynamoDB with data transformation using a staging table

With the evolution of microservice-based architectures, organizations are increasingly adopting purpose-built databases. Occasionally, businesses need guidance on which cloud services and solutions are best for them, as well as a plan for the migration. When performing heterogeneous database migrations, you might encounter challenges mapping attribute patterns from a traditional relational database management system (RDBMS) to NoSQL, such as restructuring the tables and columns of your self-managed SQL Server database into Amazon DynamoDB access patterns.

The key motivation for migrating to DynamoDB is to reduce costs while simplifying operations and improving performance at scale. DynamoDB enables you to free up resources to contribute to growing top-line revenue and value for your customers. Its serverless architecture enables you to save costs with capabilities like pay-per-request, scale-to-zero, and no up-front costs. On the operational side, not having to deal with scaling resources, maintenance windows, or major version upgrades saves significant operational hours and removes undifferentiated heavy lifting. Overall, DynamoDB provides a cost-optimized approach to build innovative and disruptive solutions that provide differentiated customer experiences.

Every application has a specific requirement pattern to fulfill the business use case, whether it’s in NoSQL, traditional SQL (RDBMS), or both. Workloads from applications communicating with database servers are typically either read-heavy or write-heavy. In this post, we discuss access patterns that emphasize writes over reads. We walk through the design of the access patterns and migrate a conventional SQL Server table (relational) to DynamoDB (non-relational) for a microservice application using AWS Database Migration Service (AWS DMS).

Any microservice application that wants to go from a small or large monolithic table in an RDBMS to DynamoDB (NoSQL) can use this solution.

Solution overview

Our solution involves using a reference table created on a self-managed SQL Server on Amazon Elastic Compute Cloud (Amazon EC2). The reference table is built by analyzing the data access patterns and performing attribute mapping in DynamoDB. After the design pattern is established, we provision the DynamoDB table with the proper primary key and sort key to map the reference table. We then use AWS DMS to migrate data and set up continuous replication from the source endpoint mapped to this reference table in SQL Server to the target DynamoDB table.

The following diagram illustrates the architecture.

The solution workflow consists of the following steps:

  1. Connect to your self-managed SQL Server instance and view the relational source tables that will be moved to a DynamoDB table.
  2. Because the SQL source table adheres to traditional SQL standards and doesn’t match the DynamoDB attributes, we use a custom script to convert the source table into a staging table that matches the DynamoDB attribute structure. A new staging table is created in SQL Server and data is added using customized scripts as follows:
    1. One-time load (full load) – You can run the ad hoc script to populate the staging table if you only want to do a one-time data transfer from the source table.
    2. Change data capture (CDC) – When tracking real-time changes on the source table, a SQL native trigger (INSERT) is created on the source table to insert the changes into the staging table. This post covers only the INSERT operation on the source table. If your use case also needs to address UPDATE or DELETE, create the corresponding triggers accordingly.
  3. The staging table is mapped as the source from SQL Server.
  4. Create a table in DynamoDB with a partition key and sort key that correspond to the attributes on the SQL staging table.
  5. Use AWS DMS to migrate data from the source SQL Server to the target DynamoDB. To migrate the data, map the SQL Server staging table as the source and the DynamoDB table as the target. With the aid of SQL native triggers, we use both full load and CDC in our scenario to obtain real-time data from the SQL Server source staging table.

We use AWS Secrets Manager for storing the SQL credentials in AWS DMS and use Amazon CloudWatch for monitoring the AWS DMS task.

The steps to implement the solution are as follows:

  1. Configure an AWS Identity and Access Management (IAM) policy and roles.
  2. Configure DynamoDB access patterns and a global secondary index (GSI).
  3. Create a DynamoDB table.
  4. Configure SQL Server table mapping.
  5. Migrate data using AWS DMS.
  6. Verify the migrated data in DynamoDB.
  7. Monitor the AWS DMS migration task.

Prerequisites

To follow along with the post, you must have the following prerequisites:

Because this post explores access patterns and attribute mapping to DynamoDB (NoSQL) from SQL Server (RDBMS), a basic understanding of DynamoDB core concepts and its design patterns is needed. Additionally, knowledge of AWS DMS with reference to configuring the SQL Server source and DynamoDB target is recommended.

Configure IAM policy and roles

Complete the following steps to configure an IAM policy and roles for DynamoDB:

  1. On the IAM console, choose Policies in the navigation pane.
  2. Choose Create policy.
  3. Select JSON and use the following policy as a template, replacing the DynamoDB ARN with the correct Region, account number, and DynamoDB table name:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
    "dynamodb:PutItem",
                "dynamodb:CreateTable",
                "dynamodb:DescribeTable",
                "dynamodb:DeleteTable",
                "dynamodb:DeleteItem",
                "dynamodb:UpdateItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:<region>:<accountnumber>:table/<DDB Table name>",
                "arn:aws:dynamodb:<region>:<accountnumber>:table/awsdms_apply_exceptions",
                "arn:aws:dynamodb:<region>:<accountnumber>:table/awsdms_full_load_exceptions"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "dynamodb:ListTables",
            "Resource": "arn:aws:dynamodb:<region>:<accountnumber>:table/<DDB Table name>"
        }
    ]
}
  4. In the navigation pane, choose Roles.
  5. Choose Create role.
  6. Select AWS DMS, then choose Next: Permissions.
  7. Select the policy you created.
  8. Assign a role name and then choose Create role.
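If you prefer to script this step, the following AWS CLI sketch creates an equivalent role. The role name, policy name, and trust policy file name are placeholders for illustration; the trust policy must allow dms.amazonaws.com to assume the role.

# dms-trust-policy.json (assumed file) contains a trust policy that allows
# the dms.amazonaws.com service principal to perform sts:AssumeRole
aws iam create-role \
    --role-name dms-dynamodb-target-role \
    --assume-role-policy-document file://dms-trust-policy.json

# Attach the DynamoDB access policy you created in the previous steps
aws iam attach-role-policy \
    --role-name dms-dynamodb-target-role \
    --policy-arn arn:aws:iam::<accountnumber>:policy/<policy-name>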

Configure DynamoDB access patterns and a GSI

DynamoDB tables are often created based on the primary key’s uniqueness to satisfy business or application access patterns. The partition key and sort key are combined to form a primary key.

Access patterns

In the current use case, we use the SQL table (tbl_StatusHistory_log) as the source to be moved to DynamoDB. Because we emphasize the write (INSERT) workload, the partition key should be designed around the write access pattern rather than the read patterns. The following screenshot shows the source table columns with RowID as the primary key.

We must develop a pattern that satisfies the microservice application because RowID won’t offer a distinctive pattern for applications on DynamoDB. The partition key has a combination of columns (Usr_ID, account_id, and silo_id) to give more meaningful access patterns on DynamoDB, but the combination may not be unique because the table (tbl_StatusHistory_log) can have multiple account_id and silo_id values for the same user.

To make the DynamoDB primary key unique, it is preferable to combine the partition key with a sort key. Because RowID is already unique in the source table, we use it as the sort key. The following table shows how we can expand the access patterns based on the application’s query arguments as long as the table’s uniqueness is preserved. Additional columns (Status_Code, Reason_Code, and Status_Date) are appended to the partition key so application queries can filter on those properties.

Input Arguments | Returned Attributes | Frequency | Operation | Table
Usr_ID, Account_ID, Silo_ID | All attributes | More | Insert/Query | Base
Usr_ID, Account_ID, Silo_ID, Status_Code | All attributes | More | Insert/Query | Base
Usr_ID, Account_ID, Silo_ID, Reason_Code | All attributes | More | Insert/Query | Base
Usr_ID, Account_ID, Silo_ID, Status_Date | All attributes | More | Insert/Query | Base
Usr_ID, Silo_ID, Status_Code | All attributes | Less | Query | GSI
Usr_ID, Silo_ID, Reason_Code | All attributes | Less | Query | GSI
Usr_ID, Silo_ID, Status_Date | All attributes | Less | Query | GSI

The table summarizes the following information:

  • Input arguments – Typical ways an application provides input arguments to DynamoDB for querying
  • Returned attributes – Whether the DynamoDB table returns all attributes or only specific columns (based on the application query)
  • Frequency – How often application requests come in for the arguments
  • Operation – The DynamoDB operations the application most commonly performs for these arguments
  • Table – The table option that serves the request (base table or GSI)

The following screenshot provides a visual depiction of the partition key and sort key, which make up the primary key. Additionally, the Other attributes section lists all columns from the original source that will be part of the results output in DynamoDB.

Global secondary index

The GSI-PK attribute was added to the base table to improve read query performance, because a GSI can span all of the data in the base table across all partitions. GSI-PK is a combination of columns (Usr_Id and silo_Id) that is comparable to the partition key of the base table but doesn’t include account_Id. Another benefit of GSIs is that you can choose to project only a subset of the base table’s attributes into the index. In the following design model, we provide the GSI partition key and sort key along with attributes as an example.
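After you create the base table in the next section, you can add this GSI from the AWS CLI. The following is a minimal sketch; the index name GSI-PK-index, the choice of SK as the index sort key, the ALL projection, and the capacity values are assumptions you should adjust for your workload.

aws dynamodb update-table \
    --table-name DDB_StatusHistory_Logs \
    --attribute-definitions AttributeName=GSI-PK,AttributeType=S AttributeName=SK,AttributeType=S \
    --global-secondary-index-updates \
        '[{"Create": {"IndexName": "GSI-PK-index", "KeySchema": [{"AttributeName": "GSI-PK", "KeyType": "HASH"}, {"AttributeName": "SK", "KeyType": "RANGE"}], "Projection": {"ProjectionType": "ALL"}, "ProvisionedThroughput": {"ReadCapacityUnits": 25, "WriteCapacityUnits": 25}}}]'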

Create a DynamoDB table

Use the following code to create a DynamoDB table using the AWS CLI. We call the table DDB_StatusHistory_Logs. PK and SK are defined as attribute names for the partition key and sort key, respectively.

aws dynamodb create-table \
    --table-name DDB_StatusHistory_Logs \
    --attribute-definitions \
        AttributeName=PK,AttributeType=S \
        AttributeName=SK,AttributeType=S \
    --key-schema \
        AttributeName=PK,KeyType=HASH \
        AttributeName=SK,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=25,WriteCapacityUnits=25 \
    --table-class STANDARD

After you run the command, wait for the table to be created. You can verify that the DynamoDB table has been created using the AWS CLI.
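For example, the following calls wait for the table to become active and then return its status; the table name matches the one created above.

aws dynamodb wait table-exists --table-name DDB_StatusHistory_Logs
aws dynamodb describe-table --table-name DDB_StatusHistory_Logs --query 'Table.TableStatus'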

Configure SQL Server table mapping

In this section, we walk through the steps to configure SQL Server table mapping. We create the source and staging SQL tables, populate the source table for testing, create a script for a one-time load into the staging table, and create a SQL table INSERT trigger.

Create the source and staging SQL tables

For our reference example, we have two SQL Server tables to perform the migration. In the following code, we create a new database called dmsload. We create the source table tbl_StatusHistory_log, which acts as the base data for the AWS DMS load, and the staging table DDB_StatusHistory_DMSLoad, which we use as a reference table for AWS DMS to map the key attributes defined in DynamoDB.

create database dmsload
go

USE dmsload
GO
CREATE TABLE [dbo].[tbl_StatusHistory_log](
	[RowID] [int] IDENTITY (1,1) NOT NULL,	
	[Usr_ID] [int] NOT NULL,	
	[account_id] [varchar](40) NOT NULL,
	[silo_id] [int] NOT NULL,
	[status_code] [varchar](12) NOT NULL,
	[status_desc] [varchar](250) NOT NULL,
	[status_date] [datetime] NOT NULL,
	[reason_code] [varchar](12) NULL,
	[reason_desc] [varchar](250) NULL,
	[reason_date] [datetime] NULL,
	[update_date] [datetime] NOT NULL,
	[username] [varchar](15) NOT NULL,
 CONSTRAINT [PK_tbl_StatusHistory_log] PRIMARY KEY CLUSTERED 
(
	[RowID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[DDB_StatusHistory_DMSLoad](
	[PK] [varchar](50) NOT NULL,
	[SK] [varchar](50) NOT NULL,
	[GSI-PK] [varchar](50) NOT NULL,
	[SRC] [varchar](5) NOT NULL,
	[account_id] [varchar](40) NOT NULL,
	[status_code] [varchar](12) NOT NULL,
	[status-desc] [varchar](250) NOT NULL,
	[status-date] [datetime] NOT NULL,
	[reason-note] [varchar](12)  NULL,
	[reason-desc] [varchar](250)  NULL,
	[reason-date] [datetime]  NULL,
	[update-date] [datetime] NOT NULL,
	[username] [varchar](15) NOT NULL,
 CONSTRAINT [PK_DDB_StatusHistory_DMSLoad] PRIMARY KEY CLUSTERED 
(
	[PK] ASC,
	[SK] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

Because the staging table acts as the data feed to DynamoDB, normal attributes are copied from the base table to the staging table. PK, SK, and GSI-PK are the key columns that form a unique access pattern on DynamoDB. The following scripts illustrate the staging table data load.

Populate the source table for testing

Use the following code to populate the source table (tbl_StatusHistory_log) with test data on SQL Server:

use dmsload
go
Insert into dbo.[tbl_StatusHistory_log]
(Usr_ID,account_id,silo_id,status_code,status_desc,status_date,reason_code,reason_desc,reason_date,update_date,username)
values (101,'123321456',1,'200','claim initiated',getdate()-1,'RC200','claim from the stockbacklog',getdate()-1,getdate(),'KJ')
,(101,'123321456',1,'300','claim processed',getdate()-1,'RC300','claim initiated to process from backlog',getdate()-1,getdate(),'KJ')
,(104,'789654321',1,'100','New Remit initiated',getdate()-1,'RC100','Remit logs',getdate()-1,getdate(),'lpr')

Create a script for a one-time load into the staging table

We use an ad hoc script to populate the staging table (DDB_StatusHistory_DMSLoad) with data that matches the DynamoDB access patterns. The following script builds each access pattern with a UNION clause, referencing the source table (tbl_StatusHistory_log).

use dmsload
go
-- PK Data population
Insert into dbo.DDB_StatusHistory_DMSLoad ([PK],[SK],[GSI-PK],[account_id],[status_code],[status-desc],[status-date],[update-date],[username],[SRC],[reason-note],[reason-desc],[reason-date])    
-- PK with base
select convert(varchar(5),Usr_ID)+'#'+convert(varchar(25),account_id)+'#'+convert(varchar(5),silo_id) as PK
,RowID,convert(varchar(5),Usr_ID)+'#'+convert(varchar(5),silo_id)
,[account_id],[status_code],status_desc,status_date,update_date,username,'DMS',reason_code,reason_desc,reason_date
from dbo.[tbl_StatusHistory_log] with (nolock)
union
-- PK with StatusCode
select convert(varchar(5),Usr_ID)+'#'+convert(varchar(25),account_id)+'#'+convert(varchar(5),silo_id)+'#'+'SC-'+isnull(nullif(status_code,''),'SCNULL') as PK
,RowID,convert(varchar(5),Usr_ID)+'#'+convert(varchar(5),silo_id)+'#'+'SC-'+isnull(nullif(status_code,''),'SCNULL')
,[account_id],[status_code],status_desc,status_date,update_date,username,'DMS',reason_code,reason_desc,reason_date
from dbo.[tbl_StatusHistory_log] with (nolock)
union
-- PK with ReasonCode
select convert(varchar(5),Usr_ID)+'#'+convert(varchar(25),account_id)+'#'+convert(varchar(5),silo_id)+'#'+'RC-'+isnull(nullif(reason_code,''),'RCNULL') as PK
,RowID,convert(varchar(5),Usr_ID)+'#'+convert(varchar(5),silo_id)+'#'+'RC-'+isnull(nullif(reason_code,''),'RCNULL')
,[account_id],[status_code],status_desc,status_date,update_date,username,'DMS',reason_code,reason_desc,reason_date
from dbo.[tbl_StatusHistory_log] with (nolock)
union
-- PK with StatusDate
select convert(varchar(5),Usr_ID)+'#'+convert(varchar(25),account_id)+'#'+convert(varchar(5),silo_id)+'#'+'SD-'+isnull(convert(varchar(150),convert(date,[status_date])),'SDNULL') as PK
,RowID,convert(varchar(5),Usr_ID)+'#'+convert(varchar(5),silo_id)+'#'+'SD-'+isnull(convert(varchar(150),convert(date,[status_date])),'SDNULL')
,[account_id],[status_code],status_desc,status_date,update_date,username,'DMS',reason_code,reason_desc,reason_date
from dbo.[tbl_StatusHistory_log] with (nolock)

The output from the staging table (DDB_StatusHistory_DMSLoad) shows that the data has been transformed to match the DynamoDB access patterns. You can observe the PK and GSI-PK pattern formation.

PK | GSI-PK
Usr_ID#Account_ID#Silo_id | Usr_ID#Silo_id
Usr_ID#Account_ID#Silo_id#SC-200 | Usr_ID#Silo_id#SC-200
Usr_ID#Account_ID#Silo_id#RC-RC300 | Usr_ID#Silo_id#RC-RC300
Usr_ID#Account_ID#Silo_id#SD-2023-06-02 | Usr_ID#Silo_id#SD-2023-06-02

Create a SQL table (INSERT) trigger

If your applications can’t tolerate a long downtime and you need to track real-time changes on the source table, you can create a SQL native trigger (AFTER INSERT) on the source table (tbl_StatusHistory_log) to insert the changes into the staging table (DDB_StatusHistory_DMSLoad). The trigger follows the same access pattern approach as the full load (ad hoc) script. After you complete the entire load through AWS DMS, you can drop the triggers on the source table in SQL Server.

The following code illustrates the table trigger:

USE dmsload
GO
CREATE or ALTER TRIGGER [dbo].[trii_StatusHistory_DMSInsert] ON [dbo].[tbl_StatusHistory_log]
    after INSERT
AS
BEGIN
    DECLARE @silo int,@RowID int,@Usr_ID int,@accid varchar(40),@StatusCode varchar(12),@statusdesc varchar(250),@StatusDate datetime,@reasoncode varchar(12),@reasondesc varchar(250),@reasondate datetime,@updatedate datetime,@usrname varchar(15)
    
    Declare dms_cursor cursor for
    Select RowID,Usr_ID,account_id,silo_id,[status_code],[status_desc],[status_date],[reason_code],[reason_desc],[reason_date],[update_date],[username]
    from inserted i 
 
    SET NOCOUNT ON;
    Open dms_cursor
    FETCH NEXT FROM dms_cursor into @RowID,@Usr_ID,@accid,@silo,@StatusCode,@statusdesc,@StatusDate,@reasoncode,@reasondesc,@reasondate,@updatedate,@usrname
    While @@FETCH_STATUS = 0
    BEGIN TRY
        BEGIN TRAN
            -- PK Data population
            Insert into dbo.DDB_StatusHistory_DMSLoad ([PK],[SK],[GSI-PK],[account_id],[status_code],[status-desc],[status-date],[update-date],[username],[SRC],[reason-note],[reason-desc],[reason-date])    
			-- PK with base
			select convert(varchar(5),@Usr_ID)+'#'+convert(varchar(25),@accid)+'#'+convert(varchar(5),@silo) as PK
			,@RowID,convert(varchar(5),@Usr_ID)+'#'+convert(varchar(5),@silo)
			,@accid,@StatusCode,@statusdesc,@StatusDate,@updatedate,@usrname,'DMS',@reasoncode,@reasondesc,@reasondate
			from dbo.[tbl_StatusHistory_log] with (nolock)
			union
			-- PK with StatusCode
			select convert(varchar(5),@Usr_ID)+'#'+convert(varchar(25),@accid)+'#'+convert(varchar(5),@silo)+'#'+'SC-'+isnull(nullif(@StatusCode,''),'SCNULL') as PK
			,@RowID,convert(varchar(5),@Usr_ID)+'#'+convert(varchar(5),@silo)+'#'+'SC-'+isnull(nullif(@StatusCode,''),'SCNULL')
			,@accid,@StatusCode,@statusdesc,@StatusDate,@updatedate,@usrname,'DMS',@reasoncode,@reasondesc,@reasondate
			from dbo.[tbl_StatusHistory_log] with (nolock)
			union
			-- PK with ReasonCode
			select convert(varchar(5),@Usr_ID)+'#'+convert(varchar(25),@accid)+'#'+convert(varchar(5),@silo)+'#'+'RC-'+isnull(nullif(@reasoncode,''),'RCNULL') as PK
			,@RowID,convert(varchar(5),@Usr_ID)+'#'+convert(varchar(5),@silo)+'#'+'RC-'+isnull(nullif(@reasoncode,''),'RCNULL')
			,@accid,@StatusCode,@statusdesc,@StatusDate,@updatedate,@usrname,'DMS',@reasoncode,@reasondesc,@reasondate
			from dbo.[tbl_StatusHistory_log] with (nolock)
			union
			-- PK with StatusDate
			select convert(varchar(5),@Usr_ID)+'#'+convert(varchar(25),@accid)+'#'+convert(varchar(5),@silo)+'#'+'SD-'+isnull(convert(varchar(150),convert(date,@StatusDate)),'SDNULL') as PK
			,@RowID,convert(varchar(5),@Usr_ID)+'#'+convert(varchar(5),@silo)+'#'+'SD-'+isnull(convert(varchar(150),convert(date,@StatusDate)),'SDNULL')
			,@accid,@StatusCode,@statusdesc,@StatusDate,@updatedate,@usrname,'DMS',@reasoncode,@reasondesc,@reasondate
			from dbo.[tbl_StatusHistory_log] with (nolock)

            FETCH NEXT FROM dms_cursor into  @RowID,@Usr_ID,@accid,@silo,@StatusCode,@statusdesc,@StatusDate,@reasoncode,@reasondesc,@reasondate,@updatedate,@usrname
        COMMIT TRAN
    END TRY
    BEGIN CATCH
        -- XACT_STATE() = -1 means the transaction is uncommittable and must be rolled back
        if (XACT_STATE()) = -1
            ROLLBACK TRAN
 
        -- XACT_STATE() = 1 means the transaction is still active and committable
        if (XACT_STATE()) = 1
            COMMIT TRAN
    END CATCH
CLOSE dms_cursor
DEALLOCATE dms_cursor     
END
GO

To illustrate the real-time capture, run the following sample INSERT statement on the source table (tbl_StatusHistory_log); the trigger then populates the corresponding access patterns in the staging table (DDB_StatusHistory_DMSLoad):

use dmsload
go
Insert into [tbl_StatusHistory_log] (Usr_ID,account_id,silo_id,status_code,status_desc,status_date,update_date,username)
select 110,'289654323',1,'211','claim initiated',getdate()-1,getdate(),'kJ'

You can find the output on the staging table from the insert on the source table, as shown in the following screenshot.

Migrate the data using AWS DMS

Now that we have the SQL Server staging table set up, you can migrate the data from the source SQL Server staging table to the target DynamoDB table.

To start working with AWS DMS, create the AWS DMS replication instance. This instance must have enough memory and processing power to migrate data from the source database to the target database. For information on choosing an appropriate instance, refer to Working with an AWS DMS replication instance.
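If you prefer the AWS CLI, a replication instance can be created as in the following sketch. The identifier, instance class, storage size, and subnet group are illustrative assumptions; size the instance for your workload.

aws dms create-replication-instance \
    --replication-instance-identifier sqlserver-to-dynamodb \
    --replication-instance-class dms.t3.medium \
    --allocated-storage 50 \
    --replication-subnet-group-identifier <your-subnet-group> \
    --no-publicly-accessible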

Create the AWS DMS endpoints

Next, create AWS DMS endpoints for the source and target database. AWS DMS endpoints provide the connection, data store type, and location information about your data store.

Source endpoint

Complete the following steps to create your source endpoint:

  1. On the AWS DMS console, create a source endpoint.
  2. For Endpoint identifier, enter a name.
  3. For Source engine, choose Microsoft SQL Server.
  4. For Access to endpoint, select Provide access information manually.
  5. For Server name, enter the SQL Server hostname.
  6. For Port, enter the port number being used.
  7. Enter the appropriate user name and password.
  8. Enter the appropriate database name.
  9. Test the endpoint, then complete endpoint creation.

Target endpoint

Repeat the previous steps with the following parameters for the target endpoint:

  1. For Endpoint identifier, enter a name.
  2. For Target engine, choose Amazon DynamoDB.
  3. For Amazon Resource Name (ARN) for service access role, enter the IAM role.
  4. Test the endpoint, then complete endpoint creation.
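For reference, the following AWS CLI sketch creates equivalent source and target endpoints. The identifiers, host name, credentials, and role ARN are placeholders; in production, consider referencing the credentials stored in AWS Secrets Manager rather than a plaintext password.

# Source endpoint: self-managed SQL Server
aws dms create-endpoint \
    --endpoint-identifier sqlserver-source \
    --endpoint-type source \
    --engine-name sqlserver \
    --server-name <sqlserver-host> \
    --port 1433 \
    --username <user> \
    --password <password> \
    --database-name dmsload

# Target endpoint: DynamoDB, using the IAM role created earlier
aws dms create-endpoint \
    --endpoint-identifier dynamodb-target \
    --endpoint-type target \
    --engine-name dynamodb \
    --service-access-role-arn arn:aws:iam::<accountnumber>:role/<dms-dynamodb-role>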

Create the AWS DMS migration task

An AWS DMS task is where all the work happens. This is where you configure what database objects to migrate, logging requirements, error handling, and so on. Complete the following steps to create your task:

  1. On the AWS DMS console, create a new migration task.
  2. For Task identifier, enter an identifiable name.
  3. For Replication instance, choose the replication instance you created.
  4. For Source database endpoint, choose the SQL Server endpoint you created.
  5. For Target database endpoint, choose the DynamoDB endpoint you created.
  6. For Migration type, choose Migrate existing data and replicate ongoing changes.
  7. Enable CloudWatch Logs under the task settings so you can debug issues.
  8. Under Table mappings, for Editing mode, select JSON editor.
  9. Use the following JSON to create the table mappings:
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "Load",
            "object-locator": {
                "schema-name": "dbo",
                "table-name": "DDB_StatusHistory_DMSLoad"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "object-mapping",
            "rule-id": "2",
            "rule-name": "DynamoDBDMSLoad",
            "rule-action": "map-record-to-record",
            "object-locator": {
                "schema-name": "dbo",
                "table-name": "DDB_StatusHistory_DMSLoad"
            },
            "target-table-name": "DDB_StatusHistory_Logs",
            "mapping-parameters": {
                "partition-key-name": "PK",
                "sort-key-name": "SK",
                "attribute-mappings": [
                    {
                        "target-attribute-name": "PK",
                        "attribute-type": "scalar",
                        "attribute-sub-type": "string",
                        "value": "${PK}"
                    },
                    {
                        "target-attribute-name": "SK",
                        "attribute-type": "scalar",
                        "attribute-sub-type": "string",
                        "value": "${SK}"
                    }
                ]
            }
        }
    ]
}

The JSON has two sections: selection and object mapping. The SQL staging table (DDB_StatusHistory_DMSLoad) is the source selection, and the object-mapping rule maps it to the target DynamoDB table (DDB_StatusHistory_Logs) with PK (partition key) and SK (sort key) as the key mapping parameters. In our scenario, the key attribute values are taken from the matching SQL source table columns through ${PK} and ${SK}; because the rule action is map-record-to-record, the remaining source columns are migrated as regular DynamoDB attributes.

  10. Leave everything else as default and choose Create task.

As soon as the task is created, it’s in Creating status. After a few seconds, it changes to Ready status. The migration (and CDC if enabled) starts automatically.
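You can also create the same task from the AWS CLI. The following sketch assumes you saved the table mapping JSON above as table-mappings.json and substitutes your own endpoint and replication instance ARNs.

aws dms create-replication-task \
    --replication-task-identifier sqlserver-to-dynamodb-task \
    --source-endpoint-arn <source-endpoint-arn> \
    --target-endpoint-arn <target-endpoint-arn> \
    --replication-instance-arn <replication-instance-arn> \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json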

Verify the migrated data in DynamoDB

After the load is complete, navigate to the Table statistics tab. The following screenshot shows the full load was completed successfully.

You can also check the data for CDC by inserting records in the base table dbo.tbl_StatusHistory_log on the SQL Server database:

Insert into dbo.tbl_StatusHistory_log (Usr_ID,account_id,silo_id,status_code,status_desc,status_date,update_date,username)
values (111,'1994454320',1,'510','claim cancelled',getdate()-1,getdate(),'doe');

The reference data was inserted successfully on the base table tbl_StatusHistory_log and staging table DDB_StatusHistory_DMSLoad.

Navigate to the Table statistics tab, as shown in the following screenshot. Here we can see that the total row count and insert count have increased.

Now go to DynamoDB and check if the records were inserted successfully and correctly.

Using the AWS CLI, you can confirm that the item records have been inserted into the DynamoDB table.
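For example, the following query retrieves the items written for the row inserted above; the key value 111#1994454320#1 is an assumption based on the sample data and the base PK pattern Usr_ID#account_id#silo_id.

aws dynamodb query \
    --table-name DDB_StatusHistory_Logs \
    --key-condition-expression "PK = :pk" \
    --expression-attribute-values '{":pk": {"S": "111#1994454320#1"}}'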

Monitor the AWS DMS replication tasks

Monitoring is crucial, particularly for large migrations, to preserve the consistency, usability, and performance of your AWS DMS tasks.

AWS offers different types of tools to monitor your AWS DMS tasks. For example, you can use CloudWatch to collect, track, and monitor AWS resources using metrics. It’s also crucial to ensure that the tasks are created with the necessary mapping rules across the schemas and database objects. Refer to the AWS DMS monitoring documentation for further details on monitoring techniques and things to watch out for when monitoring AWS DMS tasks.
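As a sketch, you can pull DMS task metrics from CloudWatch with the CLI. The AWS/DMS namespace and the CDCLatencySource metric are standard DMS metrics; the dimension values and time range are placeholders you replace with your own identifiers.

aws cloudwatch get-metric-statistics \
    --namespace "AWS/DMS" \
    --metric-name CDCLatencySource \
    --dimensions Name=ReplicationInstanceIdentifier,Value=<replication-instance-id> Name=ReplicationTaskIdentifier,Value=<replication-task-id> \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-02T00:00:00Z \
    --period 300 \
    --statistics Average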

Clean up

To avoid unnecessary charges, clean up any resources that you built as part of this architecture that are no longer in use. This includes stopping the EC2 instance, deleting the IAM policy, and deleting DynamoDB tables and AWS DMS resources like the replication instance you created as a part of this post. Additionally, after the migration is complete, if you don’t need to retain the source SQL Server database or tables, you can drop them.

Conclusion

In this post, you learned how to migrate data from SQL Server to DynamoDB using AWS DMS, with data transformation through a staging table. The approach we described to create the SQL Server staging table is specific to the data access patterns for our use case example. You can use the same approach for any RDBMS table, defining the staging attributes according to your application’s required data access patterns on DynamoDB. You can then use AWS DMS to migrate data from the source SQL Server staging table to the target DynamoDB table and set up continuous replication to minimize downtime during the migration.

If you have any questions or suggestions, leave your feedback in the comments section.


About the Authors

Karthick Jayavelu is a Senior Database Consultant at Amazon Web Services. He works with AWS customers offering technical support and designing customer solutions on database projects, as well as helping them on their journey to migrate and modernize their database solutions from on premises to AWS.

Poulami Maity is a Database Specialist Solutions Architect at Amazon Web Services. She works with AWS customers to help them migrate and modernize their existing databases to the AWS Cloud.

Vanshika Nigam is a Solutions Architect with the Database Migration Accelerator team at Amazon Web Services and has over 5 years of Amazon RDS experience. She works as an Amazon DMA Advisor to help AWS customers accelerate migrations of their on-premises, commercial databases to AWS Cloud database solutions.