AWS Database Blog

How to Migrate from Oracle ASM to AWS using AWS DMS

The Oracle Automatic Storage Management (ASM) database framework provides a volume manager and a file system for Oracle database files. It supports single-instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations. ASM has tools to manage file systems and volumes directly inside a database. These tools enable database administrators (DBAs) to control volumes and disks with familiar SQL statements in standard Oracle environments.
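
For example, a DBA can inspect ASM disk groups with an ordinary SQL query against the ASM instance. The following is a minimal illustration:

-- Run on the ASM instance: list disk groups with their capacity and redundancy type
SELECT name, state, type, total_mb, free_mb
FROM v$asm_diskgroup;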

Managing database systems is a fundamental part of running an effective business. Until recently, DBAs had to do significant work to acquire the required hardware and licenses before building a database system. With cloud computing and innovation in the open-source world, many companies are moving from proprietary online transaction processing (OLTP) database engines to open-source alternatives.

Migrating from one database engine to another requires proper planning and effort. Even though you're moving known OLTP workloads, the move to another database engine can be challenging, and each step in the process requires time-consuming testing and validation.

The AWS Schema Conversion Tool (AWS SCT) helps convert Oracle schemas and secondary objects to any supported database target with minimal work. AWS SCT creates a migration assessment report that identifies objects and code that require manual conversion or rework.

In this blog post, we talk about using change data capture (CDC) in AWS Database Migration Service (AWS DMS) to work with an Oracle source endpoint when the storage infrastructure is Oracle ASM.

AWS DMS methods for reading Oracle source logs
When you use Oracle as a source for replicating ongoing changes, AWS DMS offers two methods for reading the logs: Oracle LogMiner and Oracle Binary Reader.

By default, AWS DMS uses Oracle LogMiner for change data capture (CDC). Alternatively, you can use Oracle Binary Reader, which bypasses LogMiner and reads the logs directly. The advantages of using Binary Reader with AWS DMS, instead of LogMiner, include the following:

  • For migrations with a high volume of changes, LogMiner might have some I/O or CPU impact on the computer hosting the Oracle source database. Binary Reader has less chance of having I/O or CPU impact.
  • For migrations with a high volume of changes, CDC performance is usually much better when using Binary Reader than when using Oracle LogMiner.
  • Binary Reader supports CDC for LOBs in Oracle version 12c. LogMiner does not.
  • Binary Reader can be used with Oracle sources that use Oracle transparent data encryption (TDE).
  • Binary Reader supports the following HCC compression types for both full load and continuous replication (CDC):
    • QUERY HIGH
    • ARCHIVE HIGH
    • ARCHIVE LOW
  • Binary Reader supports the QUERY LOW compression type only for full load migrations.

Binary Reader provides greatly improved performance and reduces the load on the Oracle server when compared with LogMiner.

AWS DMS uses Binary Reader by creating a temporary directory in the file system where the archive logs are generated and stored. In most situations, this is the fast recovery area indicated by the DB_RECOVERY_FILE_DEST parameter (referenced as USE_DB_RECOVERY_FILE_DEST in LOG_ARCHIVE_DEST_n settings). This directory is deleted when the migration task is stopped.
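
To confirm where your archived redo logs are written before you configure the task, you can query the database directly. The following is a quick check, assuming your user has SELECT access to the relevant v$ views:

-- List valid archived redo log destinations
SELECT dest_id, dest_name, destination, status
FROM v$archive_dest
WHERE status = 'VALID';

-- Show the fast recovery area location, if USE_DB_RECOVERY_FILE_DEST is in use
SELECT name, value
FROM v$parameter
WHERE name = 'db_recovery_file_dest';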

For more information about ASM architecture, see Overview of Oracle Automatic Storage Management (ASM) on the Oracle website.

Setting up AWS DMS
In contrast to a non-ASM Oracle setup, creating a temporary directory to configure Binary Reader for change processing isn't the only requirement. Because the disk groups and the extents of the data are managed by a separate ASM instance, AWS DMS also needs to connect to that instance, and the ASM instance must accept traffic from the DMS replication instance. AWS DMS creates a related directory for the migration.

As we discuss in our documentation on setting up Oracle as a source for AWS DMS, you use the following extra connection attributes to configure Binary Reader for change processing:

useLogminerReader=N;useBfile=Y

When you use the preceding attributes with an Oracle ASM source, the test connection succeeds, but the migration doesn't. In this case, you see errors like the following in the migration task logs:

00031609: 2016-12-05T22:21:05 [SOURCE_CAPTURE ]E: OCI error 'ORA-12162: TNS:net service name is incorrectly specified' [122307] OCI error. (oradcdc_redoutil.c:362)
00031609: 2016-12-05T22:21:05 [SOURCE_CAPTURE ]E: Cannot attach OCI server [120414] Failed to connect to database. (oradcdc_redoutil.c:363)
00031609: 2016-12-05T22:21:05 [SOURCE_CAPTURE ]E: Cannot connect to server ''(user '', thread 1) [120414] Failed to connect to database. (oradcdc_io.c:380)

The extra connection attributes field for an Oracle ASM source should be the following:

useLogminerReader=N;useBfile=Y;asm_user=<asm_username>;asm_server=<server_address>/+ASM

The server_address value is the same as the one provided for the server name when creating the related endpoint.

Additionally, when you're working with an ASM instance, in the password field you provide both the password of the Oracle instance to migrate and the password to connect to your ASM instance. You separate the two passwords in the field with a comma:

my_oracle_pw,my_asm_pw

Getting the best CDC processing speed
When the redo logs are stored in Oracle ASM, the redo log access method is often slow, because the responsible API operation reads blocks of at most 32 KB in a single call. Additionally, when you set up ASM with AWS DMS using LogMiner for ongoing replication, performance degradation issues can create high latency. Also, the migration sometimes can't recover properly if there are connectivity issues or the source instance goes down.

To increase the speed and make the migration more robust for an Oracle ASM source with AWS DMS, we recommend using Binary Reader with the DMS copy functionality. In this method, DMS copies the archived logs from ASM to a local directory on the Oracle server and then reads the copied logs using Binary Reader.

The only requirement here is that the local directory must be located on the underlying Linux file system, not on the ASM instance.

Setting up Binary Reader to work with DMS copy
Following, you can find detailed steps to set up Binary Reader access with DMS copy functionality:

  • Choose a temporary folder on the Oracle machine. If Oracle is set up as Oracle RAC, all RAC nodes should be able to access that folder. Also, that folder should give the Oracle user and Oracle group (for example, dba) permissions to read, write, and delete.
  • Grant the following permissions to the user that DMS uses to connect to Oracle. The last permission listed enables the archived redo logs to be copied to the temporary folder using the COPY_FILE method (see the sketch following the list).
    CREATE SESSION
    SELECT ON v_$transportable_platform
    EXECUTE ON DBMS_FILE_TRANSFER
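
    Put together, the grants might look like the following sketch. The user name dms_user here is hypothetical; replace it with the user from your source endpoint:

    -- dms_user is a hypothetical name; replace it with your DMS endpoint user
    GRANT CREATE SESSION TO dms_user;
    GRANT SELECT ON v_$transportable_platform TO dms_user;
    GRANT EXECUTE ON DBMS_FILE_TRANSFER TO dms_user;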

    If you want AWS DMS to create and manage the Oracle directories, you need to grant the following privilege. As mentioned preceding, AWS DMS needs to use directory objects for copying redo logs to and deleting them from the temporary folder. If you grant the following privilege, AWS DMS creates the Oracle directories with the DMSREP_ prefix. If you don’t grant this privilege, you need to create the corresponding directories manually. The directories should point to the archived redo logs and temporary folder paths. If you create the directories manually, you don’t need to append the DMSREP_ prefix to the directory names.

    CREATE ANY DIRECTORY
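
    If you create the directories manually instead, the statements might look like the following sketch. The directory names and paths here are hypothetical, so point them at your actual ASM archived redo log location and temporary folder:

    -- Hypothetical directory names and paths
    CREATE DIRECTORY asm_arch_dir AS '+DATA/ORCL/ARCHIVELOG';
    CREATE DIRECTORY dms_temp_dir AS '/dms_temp';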

    If the DMS user specified while creating the Oracle source endpoint isn’t the user that created the Oracle directories, the following additional permissions are required:

    • READ on the Oracle directory object specified as the source directory (that is, the ASM archived redo logs path and the temporary folder if you use the BFILE method to read from the temporary folder).
    • WRITE on the directory object specified as the destination directory in the copy process (that is, the temporary folder).

      GRANT READ, WRITE ON DIRECTORY bfile_dir TO userName;
      GRANT SELECT ON DBA_FILE_GROUPS TO DMS_user;
      GRANT EXECUTE ON SYS.DBMS_FILE_GROUP TO DMS_user;
      EXECUTE DBMS_FILE_GROUP.GRANT_SYSTEM_PRIVILEGE (DBMS_FILE_GROUP.MANAGE_ANY_FILE_GROUP, 'DMS_user', FALSE);

    To prevent old redo logs from accumulating in the temporary folder, AWS DMS should be configured to delete the redo log files from the temporary folder when they have been processed. The delete operation is performed using Oracle file groups and the Oracle DBMS_FILE_GROUP package.

    Note: We assume that the DMS_user specified preceding is the same user specified in your source endpoint.

  • AWS DMS requires ASM access permissions to read the online redo logs from ASM (that is, the SYSASM or SYSDBA privilege). This permission is required because reading the online redo logs from ASM is performed using a function from the Oracle DBMS_DISKGROUP package, which requires the SYSASM or SYSDBA privilege. You can also validate ASM account access by opening a command prompt and issuing the following statements:
    sqlplus asmuser/asmpassword@+asmserver as sysasm
    
    or
    sqlplus asmuser/asmpassword@+asmserver as sysdba

    Note: Starting with Oracle 11g release 2 (11.2.0.2), the AWS DMS user must be granted the SYSASM privilege to access the ASM account. For older supported versions, granting the SYSDBA privilege should be sufficient.
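
    For example, you might grant the privilege on the ASM instance as follows, where asmuser is a hypothetical user name:

    -- Run on the ASM instance as a privileged user; asmuser is hypothetical
    GRANT SYSASM TO asmuser;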

  • Choose an archived redo logs destination identifier. This number is the destination ID of the archived redo logs that replication should read. The value should be the same as the DEST_ID number in the v$archived_log view. Then add the following attribute on your source endpoint:
    archivedLogDestId=nn

    where nn is the destination ID for the connection string in DMS.
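
    You can find the destination ID with a query like the following, which lists the destination IDs of archived logs that are still available:

    -- List destination IDs of archived redo logs that haven't been deleted
    SELECT DISTINCT dest_id
    FROM v$archived_log
    WHERE deleted = 'NO';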

  • Set up the Oracle ASM source endpoint so that the extra connection attribute field looks like this:
    useLogminerReader=N;copyToTempFolder=/dms_temp;archivedLogDestId=nn;accessTempFolderDirectly=N;useBfile=Y;asm_user=asmuser;asm_server=asmserver.company.com/+asmserver;deleteProccessedArchiveLogs=Y

    Example:

    useLogminerReader=N;copyToTempFolder=/backups/dms;archivedLogDestId=1;accessTempFolderDirectly=N;useBfile=Y;asm_user=bjanshego;asm_server=10.61.4.41/+ASM;deleteProccessedArchiveLogs=Y;archivedLogsOnly=Y

A few other important pointers
Your Oracle database must be able to write to the temporary folder and delete from it. If the Oracle source database is part of a RAC cluster, the temporary folder must be located on a file share that is accessible to all of the nodes in the Oracle RAC cluster. If you choose the BFILE method for accessing the temporary folder, the folder doesn't need to be a shared network folder.

If multiple tasks use the same temporary folder, conflicts can arise, such as one task needing to access a redo log that another task has already deleted. To prevent such conflicts, only one DMS task should access a given temporary folder at any time. However, you can create a different subfolder under the same root folder for each task. For example, instead of specifying /temp/dms as the temporary folder for both tasks, you can specify /temp/dms/task1 for one task and /temp/dms/task2 for the other.

When working with the temporary folder, make sure that the archived redo log files processed by DMS are deleted, so that the redo logs don't accumulate in the temporary folder. Also make sure that the file system where the temporary folder is created has enough space. If multiple tasks are running, multiply the required disk space by the number of tasks, because each task has its own temporary folder.
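
One way to gauge the space you might need is to look at how much archived redo the database generates per day, for example:

-- Approximate daily archived redo generation in GB
SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) AS gb_generated
FROM v$archived_log
GROUP BY TRUNC(completion_time)
ORDER BY day;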

Summary
In this post, we highlighted how to configure and use Oracle ASM as a source with AWS DMS. We also discussed a way of improving the change processing speed when migrating this Oracle infrastructure.

If you have questions or suggestions, please leave a comment below.

Good luck and happy migrating!


About the Author

Abhinav Singh is a database engineer in Database Migration Service at Amazon Web Services. He works with our customers to provide guidance and technical assistance on database migration projects, helping them improve the value of their solutions when using AWS.