AWS Database Blog

Migrating Oracle databases with near-zero downtime using AWS DMS

Do you have critical Oracle OLTP databases in your organization that can’t afford downtime? Do you want to migrate your Oracle databases to AWS with minimal or no downtime? With applications and databases expected to be available 24/7, some of your applications can’t afford significant downtime while you migrate on-premises databases to the cloud.

This post discusses a solution for migrating your on-premises Oracle databases to Amazon Relational Database Service (Amazon RDS) for Oracle using AWS Database Migration Service (AWS DMS) and its change data capture (CDC) feature to minimize downtime.

Overview of AWS DMS

AWS DMS is a cloud service that helps you migrate databases to AWS. AWS DMS can migrate relational databases, data warehouses, NoSQL databases, and other types of data stores into the AWS Cloud. AWS DMS supports homogeneous and heterogeneous migrations between different database platforms. You can perform one-time migrations and also replicate ongoing changes to keep source and target databases in sync. To use AWS DMS, at least one of your endpoints, either the source or the target database, must be on an AWS service.

When you replicate data changes only using AWS DMS, you must specify a time or system change number (SCN) from which AWS DMS begins to read changes from the database logs. Keep these logs available on the server long enough to make sure that AWS DMS has access to all the changes from that point forward.
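
Because CDC reads from the redo and archived logs, it’s worth confirming up front that the source database is in ARCHIVELOG mode and that log retention covers your migration window. The following queries are a minimal sketch using standard Oracle dictionary views:

-- Confirm the source database is in ARCHIVELOG mode (required for CDC)
SELECT log_mode FROM v$database;

-- Find the oldest archived log still on disk; retention must cover the
-- window between the export SCN and the moment CDC catches up
SELECT MIN(first_time) FROM v$archived_log WHERE deleted = 'NO';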

Migrating LOBs

If your source database has large objects (LOBs) and you have to migrate them over to the target database, AWS DMS offers the following options:

  • Full LOB mode – AWS DMS migrates all the LOBs from the source to the target database regardless of their size. Though the migration is slower, the advantage is that data isn’t truncated. For better performance, you can create a separate task on a new replication instance to migrate the tables that have LOBs larger than a few megabytes.
  • Limited LOB mode – You specify the maximum size of LOB column data, which allows AWS DMS to pre-allocate resources and apply the LOBs in bulk. If the size of the LOB columns exceeds the size that is specified in the task, AWS DMS truncates the data and sends warnings to the AWS DMS log file. You can improve performance by using Limited LOB mode if your LOB data size is within the limited LOB size.
  • Inline LOB mode – You can migrate LOBs without truncating the data or slowing the performance of your task by replicating both small and large LOBs. First, specify a value for the InlineLobMaxSize parameter, which is available only when Full LOB mode is set to true. The AWS DMS task transfers the small LOBs inline, which is more efficient, and migrates the large LOBs by performing a lookup from the source table. However, Inline LOB mode works only during the full load phase.
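
If you define the task through its JSON task settings rather than the console, these modes map to fields in the TargetMetadata section. The following fragment is a sketch of Limited LOB mode with a 32 KB cap; the values are illustrative, not recommendations:

{
  "TargetMetadata": {
    "SupportLobs": true,
    "FullLobMode": false,
    "LimitedSizeLobMode": true,
    "LobMaxSize": 32,
    "LobChunkSize": 64,
    "InlineLobMaxSize": 0
  }
}

LobMaxSize and LobChunkSize are in kilobytes, and InlineLobMaxSize takes effect only when FullLobMode is true.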

Solution overview

This post uses an Oracle database on an Amazon EC2 instance as the source database, standing in for your on-premises database, and an Amazon RDS for Oracle database as the target. It uses Oracle Data Pump to export the data from the source Oracle database and import it into the target Amazon RDS for Oracle database, and it uses AWS DMS to replicate the CDC changes from the source Oracle database to the Amazon RDS for Oracle database. This post assumes that you’ve already provisioned the Amazon RDS for Oracle database in your AWS Cloud environment as your target database.

The following diagram illustrates the architecture of this solution.

The solution includes the following steps:

  • Provision an AWS DMS replication instance with the source and target endpoints
  • Export the schema using Oracle Data Pump from the on-premises Oracle database
  • Import the schema using Oracle Data Pump into the Amazon RDS for Oracle database
  • Create an AWS DMS replication task using CDC to perform live replication
  • Validate the database schema on the target Amazon RDS for Oracle database

Prerequisites

After you determine, based on the application, which Oracle database schema to migrate to the Amazon RDS for Oracle database, gather a few schema details before initiating the migration, such as the schema size, the total number of objects by object type, and any invalid objects.

To use the AWS DMS CDC feature, enable database-level and table-level supplemental logging at the source Oracle database, as shown in the sketch that follows. After you complete the prerequisites, you can provision the AWS DMS instances.
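
The following statements are a minimal sketch of enabling supplemental logging. The table-level statement is shown for one hypothetical table from the sample schema; repeat (or generate) it for each table that you migrate:

-- Database-level supplemental logging, plus primary key logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Table-level supplemental logging for a hypothetical table
ALTER TABLE dms_sample.sporting_event ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;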

Provisioning the AWS DMS instances

Use the DMS_instance.yaml AWS CloudFormation template to provision the AWS DMS replication instance and its source and target endpoints. Complete the following steps:

  1. On the AWS Management Console, under Services, choose CloudFormation.
  2. Choose Create Stack.
  3. For Specify template, choose Upload a template file.
  4. Select Choose File.
  5. Choose the DMS_instance.yaml file.
  6. Choose Next.
  7. On the Specify stack details page, edit the predefined parameters as needed:
  • For stack name, enter your stack name.
  • Under AWS DMS Instance Parameters, enter the following parameters:
    • DMSInstanceType – Choose the required instance for AWS DMS replication instance.
    • DMSStorageSize – Enter the storage size for the AWS DMS instance.
  • Under source Oracle database configuration, enter the following parameters:
    • SourceOracleEndpointID – The source database server name for your Oracle database
    • SourceOracleDatabaseName – The source database service name or SID as applicable
    • SourceOracleUserName – The source database username. The default is system
    • SourceOracleDBPassword – The source database username’s password
    • SourceOracleDBPort – The source database port
  • Under Target RDS for Oracle database configuration, enter the following parameters:
    • TargetRDSOracleEndpointID – The target RDS database endpoint
    • TargetRDSOracleDatabaseName – The target RDS database name
    • TargetRDSOracleUserName – The target RDS username
    • TargetRDSOracleDBPassword – The target RDS password
    • TargetOracleDBPort – The target RDS database port
  • Under VPC, subnet, and security group configuration, enter the following parameters:
    • VPCID – The VPC for the replication instance
    • VPCSecurityGroupId – The VPC Security Group for the replication instance
    • DMSSubnet1 – The subnet for Availability Zone 1
    • DMSSubnet2 – The subnet for Availability Zone 2
  8. Choose Next.
  9. On the Configure stack options page, for Tags, enter any optional values.
  10. Choose Next.
  11. On the Review page, select the check box for I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  12. Choose Create stack.
    The provisioning should complete in approximately 5 to 10 minutes. It is complete when the AWS CloudFormation Stacks console shows Create Complete.
  13. From the AWS Management Console, choose Services.
  14. Choose Database Migration Service.
  15. Under Resource management, choose Replication instances.
    The following screenshot shows the Replication instances page, which you can use to check the output.
  16. Under Resource management, choose Endpoints.
    The following screenshot shows the Endpoints page, in which you can see both the source and target endpoints.
    After the source and target endpoints show a status of Active, test the connectivity. Choose Run test for each endpoint to make sure that the status shows as successful.
    You have now created the AWS DMS replication instance along with its source and target endpoints, and tested the endpoints to confirm that they can make successful connections.

Migrating the source database schema to the target database

You can now migrate the Oracle database schema to the Amazon RDS for Oracle database by using Oracle Data Pump. Oracle Data Pump provides a server-side infrastructure for fast data and metadata movement between Oracle databases. It is ideal for large databases where high-performance data movement offers significant time savings to database administrators. Data Pump automatically manages multiple parallel streams of unload and load for maximum throughput.

Exporting the data

When the source database is online and actively used by the application, start the data export with Oracle Data Pump from the source on-premises Oracle database. You must also capture the current SCN from your source database, to use in the Data Pump export for data consistency and in AWS DMS as the starting point for change data capture.

To export the database schema, complete the following steps:

  1. Enter the following SQL statement to generate the current SCN from your source database:
    SQL> SELECT current_scn FROM v$database;
    			
    CURRENT_SCN
    -----------
     7097405
  2. Record the generated SCN to use when you export the data and for AWS DMS.
  3. Create a parameter file to export the schema. The parameter file contains the following:
    # Use the SCN generated in step 1 for the flashback_scn parameter, and create the required database directory if the default DATA_PUMP_DIR database directory is not being used.
    
    
    $ cat export_sample_user.par 
    userid=dms_sample/dms_sample
    directory=DATA_PUMP_DIR
    logfile=export_dms_sample_user.log
    dumpfile=export_dms_sample_data_%U.dmp
    schemas=DMS_SAMPLE
    flashback_scn=7097405
  4. Execute the export using the expdp utility. See the following code:
    $ expdp parfile=export_sample_user.par
    Export: Release 12.2.0.1.0 - Production on Wed Oct 2 01:46:05 2019
             Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
             Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
    FLASHBACK automatically enabled to preserve database integrity.
    Starting "DMS_SAMPLE"."SYS_EXPORT_SCHEMA_01":  dms_sample/******** parfile=export_sample_user.par 
    .
    .
    .
    .
    Master table "DMS_SAMPLE"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for DMS_SAMPLE.SYS_EXPORT_SCHEMA_01 is:
      /u03/app/backup/expdp_dump/export_dms_sample_data_01.dmp
    Job "DMS_SAMPLE"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Oct 2 01:47:27 2019 elapsed 0 00:01:20

Transferring the dump file to the target instance

There are multiple ways to transfer your Oracle Data Pump export files to your Amazon RDS for Oracle instance. You can transfer your files using either the Oracle DBMS_FILE_TRANSFER utility or the Amazon S3 integration feature.

Transferring the dump file with DBMS_FILE_TRANSFER

You can transfer your Data Pump files directly to the RDS instance by using the DBMS_FILE_TRANSFER utility. You must create a database link between the on-premises database and the Amazon RDS for Oracle DB instance.

The following code creates a database link ORARDSDB that connects to the RDS master user at the target DB instance:

$ sqlplus / as sysdba

SQL> create database link orardsdb connect to admin identified by "xxxxxx" using '(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = database-1.xxxxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))';

Database link created.

Test the database link to make sure you can connect using sqlplus. See the following code:

SQL> select name from v$database@orardsdb;

NAME
---------
ORCL

To copy the dump file over to Amazon RDS for Oracle database, you can either use the default DATA_PUMP_DIR directory or you can create your own directory using the following code:

exec rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'TARGET_PUMP_DIR');

The following script copies a dump file named export_dms_sample_data_01.dmp from the source instance to a target Amazon RDS for Oracle database using the database link named orardsdb.

[oracle@ip-172-31-45-39 ~]$ sqlplus / as sysdba
SQL> 
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object       => 'DATA_PUMP_DIR',
source_file_name              => 'export_dms_sample_data_01.dmp',
destination_directory_object  => 'TARGET_PUMP_DIR',
destination_file_name         => 'export_dms_sample_data_01.dmp', 
destination_database          => 'orardsdb' 
);
END;
/ 

PL/SQL procedure successfully completed.

After the PL/SQL procedure completes, you can list the data dump file in the Amazon RDS for Oracle database directory with the following code:

SQL> select * from table (rdsadmin.rds_file_util.listdir(p_directory => 'TARGET_PUMP_DIR'));

Transferring the dump file with S3 integration

With S3 integration, you can transfer your Oracle Data Pump dump files directly to your Amazon RDS for Oracle instance. After you export your data from your source DB instance, upload your Data Pump files to your S3 bucket, download the files from your S3 bucket to the Amazon RDS for Oracle instance, and perform the import. You can also use this integration feature to transfer data dump files from your Amazon RDS for Oracle DB instance to your on-premises database server.

To work with Amazon RDS for Oracle S3 integration, the Amazon RDS for Oracle instance must have access to an S3 bucket. Create an IAM policy that grants the s3:GetObject, s3:ListBucket, and s3:PutObject actions on the bucket, then create an IAM role and attach the policy to the role. A minimal policy sketch follows.
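
The following policy is a sketch scoped to the bucket used later in this post (mydbs3bucket); substitute your own bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::mydbs3bucket",
        "arn:aws:s3:::mydbs3bucket/*"
      ]
    }
  ]
}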

To use Amazon RDS for Oracle integration with S3, your Amazon RDS for Oracle instance must be associated with an option group that includes the S3_INTEGRATION option.

To create the Amazon RDS option group, complete the following steps:

  1. On the Amazon RDS console, under Option groups, choose Create group.
  2. Under Option group details, for Name, enter the name of your option group.
    This post enters rds-oracle12r2-option-group.
  3. For Description, enter a description of your group.
  4. For Engine, choose the engine for the target Amazon RDS for Oracle database to migrate.
    This post chooses oracle-ee.
  5. For Major engine version, choose the engine version.
    This post chooses 12.2.
  6. Choose Create.

After the option group is created, you must add the S3_INTEGRATION option to the option group. Complete the following steps:

  1. On the RDS console, choose Option groups.
  2. Choose the group that you created.
  3. Choose Add option.
  4. For Option, choose S3_INTEGRATION.
  5. For Version, choose 1.0.
  6. For Apply Immediately, select Yes.
  7. Choose Add Option.

After you add S3_INTEGRATION to the option group, you must modify your target Amazon RDS for Oracle database to use the new option group. Complete the following steps to add the option group to your existing Amazon RDS for Oracle database:

  1. On the RDS console, under Databases, choose the DB instance that you want to modify.
  2. Choose Modify.
    The Modify DB Instance page appears.
  3. Under Database options, for Option Group, select the option group that you created.
  4. Choose Continue.
  5. Under Scheduling of modifications, choose Apply immediately.
  6. Choose Modify DB Instance.

When the Amazon RDS for Oracle database reflects the new option group, you must associate your IAM role and the S3_INTEGRATION feature with your DB instance. Complete the following steps:

  1. On the RDS console, choose your DB instance.
  2. Under the Connectivity and Security tab, choose Manage IAM roles.
  3. For Add IAM role to this instance, choose RDS_S3_Integration_Role (the role that you created).
  4. For Features, choose S3_INTEGRATION.
  5. Choose Add role.

After the IAM role and the S3_INTEGRATION feature are associated with your Amazon RDS for Oracle database, the S3 integration is complete. You can now upload the data dump files from your on-premises Oracle database instance to S3 with the following code:

$ aws s3 cp export_dms_sample_data_01.dmp s3://mydbs3bucket/dmssample/
	
upload: ./export_dms_sample_data_01.dmp to s3://mydbs3bucket/dmssample/export_dms_sample_data_01.dmp

After you upload the data dump files to the S3 bucket, connect to your target database instance and download the data pump files from S3 to the DATA_PUMP_DIR of your target instance. See the following code:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'mydbs3bucket',
      p_s3_prefix      =>  'dmssample/export_dms_sample_data_01',
      p_directory_name =>  'DATA_PUMP_DIR') 
AS TASK_ID FROM DUAL;  

This returns a task ID, for example, 1572302364019-3676. Verify the status of the download by reading the task’s log file (substitute your own task ID) with the following SQL query:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1572302364019-3676.log'));

After the SQL query output shows that the file downloaded successfully, you can list the Data Pump file in the Amazon RDS for Oracle database with the following code:

SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

Starting the import

After the data dump file is available, create the roles, schemas, and tablespaces on the target Amazon RDS for Oracle database before you initiate the import.

Connect to the target Amazon RDS for Oracle database with the RDS master user account to perform the import. Add a TNS entry for the Amazon RDS for Oracle database to the tnsnames.ora file on the host from which you run the import, and use that connection string name to perform the import; a sample entry follows.
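
The following tnsnames.ora entry is a minimal sketch; the host matches the illustrative RDS endpoint used earlier in this post, and orcl is the assumed database service name:

ORARDSDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = database-1.xxxxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )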

If you want to import into a different tablespace or under a different schema name, you can remap them with the Data Pump REMAP_TABLESPACE and REMAP_SCHEMA parameters, as in the sketch below.
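
For example, the following command is a sketch that remaps the schema and tablespace during import; the names DMS_SAMPLE_NEW, DMS_SAMPLE_TBS, and USERS are hypothetical:

$ impdp admin@orardsdb directory=DATA_PUMP_DIR logfile=import_remap.log \
    dumpfile=export_dms_sample_data_01.dmp \
    remap_schema=DMS_SAMPLE:DMS_SAMPLE_NEW \
    remap_tablespace=DMS_SAMPLE_TBS:USERS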

Start the import into Amazon RDS for Oracle from the on-premises server using the connection string name, as shown in the following code:

$ impdp admin@orardsdb directory=DATA_PUMP_DIR logfile=import.log dumpfile=export_dms_sample_data_01.dmp

Import: Release 12.2.0.1.0 - Production on Wed Oct 2 01:52:01 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
     Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_01":  admin/********@orardsdb directory=DATA_PUMP_DIR logfile=import.log dumpfile=export_dms_sample_data_01.dmp 
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.
.
.
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Post-import checks and validation

To validate that the import completed successfully, review the import log file for any errors. Also compare details such as object counts and row counts between the source and target databases, check for invalid objects, and recompile any that you find.
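
For example, you can compare object counts by type (run the first query on both databases) and recompile invalid objects with queries like the following sketch:

-- Compare object counts by type between source and target
SELECT object_type, COUNT(*) FROM dba_objects
WHERE owner = 'DMS_SAMPLE' GROUP BY object_type ORDER BY object_type;

-- List invalid objects, then recompile the schema
SELECT object_name, object_type FROM dba_objects
WHERE owner = 'DMS_SAMPLE' AND status = 'INVALID';

EXEC UTL_RECOMP.RECOMP_SERIAL('DMS_SAMPLE');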

After the import completes successfully, disable the triggers and foreign keys on the target Amazon RDS for Oracle database for the relevant schema to avoid data inconsistency during replication and to prepare the target database for AWS DMS; the queries below generate the required statements.
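
The following queries generate the disable statements; they mirror the enable statements that you run later in this post after cutover:

-- Generate statements to disable triggers in the migrated schema
select 'alter trigger '||owner||'.'||trigger_name||' disable;' from dba_triggers where owner='DMS_SAMPLE';

-- Generate statements to disable referential (foreign key) constraints
select 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||';' from dba_constraints where owner='DMS_SAMPLE' and constraint_type='R';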

Creating the AWS DMS migration task

Create the AWS DMS migration task with the following steps:

  1. On the AWS DMS console, under Conversion & migration, choose Database migration tasks.
  2. Under Task configuration, for Task identifier, enter your task identifier.
  3. For Replication Instance, choose the DMS replication instance that you created.
  4. For Source database endpoint, choose your source endpoint.
  5. For Target database endpoint, choose your target Amazon RDS for Oracle database.
  6. For Migration type, choose Replicate data changes only.
  7. Under Task settings, select Specify log sequence number.
  8. For System change number, enter the Oracle database SCN that you generated from the source Oracle database.
  9. Select Enable validation.
  10. Select Enable CloudWatch Logs.
    These settings allow AWS DMS to validate the data and send logs to Amazon CloudWatch Logs so that you can review the replication instance logs.
  11. Under Selection rules, complete the following:
  • For Schema, choose Enter a schema.
  • For Schema name, enter DMS_SAMPLE.
  • For Table name, enter %.
  • For Action, choose Include.
  12. Under Transformation rules, complete the following:
  • For Target, choose Table.
  • For Schema name, choose Enter a schema.
  • For Schema name, enter DMS_SAMPLE.
  • For Action, choose Rename to.
  13. Choose Create task.

After you create the task, it replicates the data changes to the Amazon RDS for Oracle DB instance starting from the SCN that you provided under CDC start mode. You can also verify progress by reviewing the CloudWatch Logs. The following screenshot shows the log details of your migration.

Data validation

AWS DMS performs data validation to confirm that your data successfully migrated from the source database to the target. You can check the Table statistics page to determine the DML changes that occurred after the AWS DMS task started. During data validation, AWS DMS compares each row in the source with its corresponding row at the target and verifies that those rows contain the same data. To accomplish this, AWS DMS issues the appropriate queries to retrieve the data.

The following screenshot shows the Table statistics page and its relevant entries.

You can also count and compare the number of records in the source and target databases to confirm that the CDC data is replicated from the source to the target database.
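
For a quick row-count comparison, you can generate per-table count statements and run the output on both databases; the following query is a sketch (for very large tables, consider sampling or comparing key ranges instead):

select 'select count(*) from '||owner||'.'||table_name||';' from dba_tables where owner='DMS_SAMPLE';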

During the planned maintenance window, you can turn off all the applications pointing to the source database and enable the triggers and foreign key constraints using the following code:

-- Run the below statement to generate the list of triggers to be enabled
select 'alter trigger '||owner||'.'||trigger_name||' enable;' from dba_triggers where owner='DMS_SAMPLE';

-- Run the below statement to generate the list of constraints to be enabled
select 'alter table '||owner||'.'||table_name||' enable constraint '||constraint_name||';' from dba_constraints where owner='DMS_SAMPLE' and constraint_type='R';

Because AWS DMS doesn’t replicate sequence values during CDC from the source database, you must capture the latest value of each sequence on the source and apply it to the corresponding sequences on the target Amazon RDS for Oracle database to avoid sequence value inconsistencies; the sketch below shows one approach.
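
As a sketch, the following query on the source generates ALTER SEQUENCE ... RESTART START WITH statements to run on the target. The RESTART clause is documented from Oracle Database 18c; on earlier releases, drop and re-create each sequence with the new START WITH value instead:

-- Generate statements to advance target sequences past the source values
select 'alter sequence DMS_SAMPLE.'||sequence_name||' restart start with '||(last_number + 1)||';'
from dba_sequences where sequence_owner = 'DMS_SAMPLE';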

Now, point the application to the target Amazon RDS for Oracle database by modifying the connection details. After you bring up the application, you should see that all application connections are now established on the target Amazon RDS for Oracle database. After you confirm that connections no longer exist on the source database, you can stop the source database.

Summary

This post demonstrated how to migrate an on-premises Oracle database to an Amazon RDS for Oracle database by using Oracle Data Pump and AWS DMS with minimal to no downtime. You can migrate and replicate your critical databases seamlessly to Amazon RDS by using AWS DMS and its CDC feature.

We encourage you to try this solution and take advantage of all the benefits of using AWS DMS with Oracle databases. For more information, see Getting started with AWS Database Migration Service and Best Practices for AWS Database Migration Service. For more information on Oracle database migration, see the guide Migrating Oracle Databases to the AWS Cloud.

Please feel free to reach out with questions or requests in the comments. Happy migrating!

About the Authors

Sagar Patel is a Database Specialty Architect with the Professional Services team at Amazon Web Services. He works as a database migration specialist to provide technical guidance and help Amazon customers migrate their on-premises databases to AWS.

Sharath Lingareddy is a Database Architect with the Professional Services team at Amazon Web Services. He has provided solutions using Oracle, PostgreSQL, and Amazon RDS. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.

Jeevith Anumalla is an Oracle Database Cloud Architect with the Professional Services team at Amazon Web Services. He works as a database migration specialist to help internal and external Amazon customers move their on-premises database environments to AWS data stores.