AWS Database Blog

Amazon RDS for Oracle Transportable Tablespaces using RMAN

Customers are migrating their on-premises Oracle databases to Amazon Relational Database Service (Amazon RDS) for Oracle using logical replication tools such as Oracle GoldenGate or AWS Database Migration Service (AWS DMS), or using Oracle Data Pump. Logical replication can be complicated to set up, leading to potential configuration errors, data integrity issues, or even data corruption. This is especially true for large database workloads.

Amazon RDS for Oracle now offers physical data migration via Oracle Recovery Manager cross-platform transportable tablespaces (RMAN XTTS). This feature reduces the migration burden for large databases and for datasets, such as large object (LOB) data, that are difficult to migrate with logical tools. It allows you to ingest data into RDS for Oracle databases faster, with less effort, and with less application downtime than the native Oracle Data Pump export/import functionality. Oracle Database Enterprise Edition (EE) is required to use the RMAN XTTS feature in Amazon RDS.

The downtime required for migration can be significantly reduced when you need to migrate one or more large tablespaces. With the use of RMAN incremental backups, the downtime can be further reduced. There are several advantages of using this method over logical migration methods:

  • Reduced downtime – Transportable tablespaces offer lower downtime compared to Oracle Data Pump by using RMAN incremental backup.
  • Data integrity – Because the transportable tablespace feature only copies physical files, it reduces the risk of data integrity errors and logical corruption that can occur during logical migration.
  • Platform flexibility – You can migrate tablespaces across different platforms and endianness types, such as migrating from an Oracle Solaris platform to Linux. However, note that transporting tablespaces to and from Windows servers is not supported.
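To see which platforms Oracle supports for cross-platform transport, and whether an endianness conversion will be needed, you can query the V$TRANSPORTABLE_PLATFORM view on the source database:

```sql
-- List the platforms supported for cross-platform transport and their endianness
SELECT platform_id, platform_name, endian_format
  FROM v$transportable_platform
 ORDER BY platform_id;
```

The source platform's PLATFORM_ID (from V$DATABASE) must appear in this list for the transport to be possible.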

In this post, we show you how you can use the RMAN XTTS functionality to migrate from an Oracle database hosted on Amazon Elastic Compute Cloud (Amazon EC2) to Amazon RDS for Oracle. Combined with Amazon Elastic File System (Amazon EFS) integration, XTTS can help reduce the complexity of your migration strategy, reduce the number of copies of data and backups required (as well as the associated storage space consumption), and reduce the application downtime needed to complete the migration of your data.

Solution overview

This post provides the detailed steps involved in transporting tablespaces from an Oracle database hosted on Amazon EC2 to Amazon RDS for Oracle.

In order to perform the RMAN XTTS migration using incremental backups, several phases are involved:

  1. Initial setup phase:
    • Copy the transport tablespaces script provided in Oracle Support note 2471245.1 and set up the configuration files.
  2. Prepare phase:
    • Mount the EFS file system on the source.
    • Perform the full backup (level 0) of tablespaces to be transported on the source system.
    • Mount the EFS file system to the target (via Amazon EFS integration) and restore the first full (level 0) backup.
  3. Import tablespace and roll forward phase:
    • Perform the incremental backup of tablespaces on the source system.
    • Mount the EFS file system to the target (via Amazon EFS integration) and restore the incremental backup. This phase can be run multiple times as needed and allows you to perform the cutover during your preferred application maintenance window.
  4. Transport the tablespace phase:
    • Perform a final incremental backup on the source system, mount the EFS file system to the target (via Amazon EFS integration), and perform the last incremental backup restore with or without endianness migration on the target.
    • Export the metadata of objects inside the transported tablespaces from the source system via Oracle Data Pump.
    • Transfer the backup and export files to your target DB instance.
    • Import the metadata on the target database. After completion of this phase, the tablespaces on the target system will be available in read-only mode.
  5. Validation phase:
    • Check the integrity of the tablespaces and put the tablespace in read/write mode.
  6. Cleanup phase:
    • Clean up all backup and configuration files.

In this post, we migrate an Oracle 19c database hosted on Amazon EC2 to RDS for Oracle 19c. The following diagram illustrates our solution workflow.


Transportable tablespace architecture diagram

Prerequisites

The following prerequisites must be completed before you start:

  1. Review the requirements for transportable tablespaces described in the relevant My Oracle Support documents.
  2. You can mount Amazon EFS on both Amazon EC2 instances and your on-premises Oracle database servers, provided they can connect to your target VPC via a VPN connection or AWS Direct Connect.
  3. Download and set up the required script from Oracle Support note 2471245.1 on your source database server.
  4. Review the limitations for transportable tablespaces.
  5. Create an RDS for Oracle DB instance in the AWS Cloud running the same or higher Oracle version as the source. For details, refer to Compatibility Considerations for Transporting Data. The data transport operation fails if the compatibility level of the source database is higher than the compatibility level of the target database.
  6. Verify that the transportable tablespace feature is enabled on your target RDS for Oracle instance. The feature is enabled only if the following query runs without error. Log in to the database using SQL*Plus or SQL Developer and run the following command:
SQL>SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);
no rows selected

If the preceding command runs successfully, the transportable tablespace feature is enabled. If you receive an error ORA-00904, then the transportable tablespace feature isn’t enabled. In this case, reboot your DB instance to enable it. For more information, see Rebooting a DB instance.

  7. Mount the EFS file system on the source and the target.

If you are using transportable tablespaces, you can move the data between source and target using either Amazon Simple Storage Service (Amazon S3) or Amazon EFS:

  • When you use Amazon S3, upload the data to your S3 bucket and then download it to the Amazon Elastic Block Store (Amazon EBS) storage attached to your DB instance. The files remain in your Amazon EBS storage during the import. After the import, you can free up this space, although it remains allocated to your DB instance. For more information, see Amazon S3 integration.
  • When you use Amazon EFS, your backups remain in the EFS file system for the duration of the import. You mount the same Amazon EFS on the source and the target to share the EFS file system. Amazon EFS makes sharing the files easy and you don’t need to add additional EBS storage on the DB instance to hold these temporary files. For this reason, we recommend using Amazon EFS instead of Amazon S3. For more information, see Amazon EFS integration.
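As a reference, the following commands show one way to mount an EFS file system over NFSv4.1 on the source EC2 host; the file system ID and AWS Region are placeholders, and the mount point matches the examples used later in this post:

```shell
# Create the mount point used in this post and mount the EFS file system over NFSv4.1
# (file system ID and Region are examples; replace them with your own)
sudo mkdir -p /home/ssm-user/efs-mount-point
sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
    fs-1234567890abcdef0.efs.us-east-1.amazonaws.com:/ \
    /home/ssm-user/efs-mount-point
```

Mount the same file system on both the source host and (via Amazon EFS integration) the target DB instance so that backups written by the source are immediately visible to the target.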

Initial setup

For the initial setup, complete the following steps:

  1. Log in to your source database host as the owner of your Oracle home. Make sure that your ORACLE_HOME and ORACLE_SID environment variables point to your source database.
  2. Copy the transport tablespaces script provided in Oracle Support note 2471245.1 to the source system.
  3. Make sure the source tablespaces are self-contained. To determine whether a set of tablespaces is self-contained, log in to your source database and invoke the TRANSPORT_SET_CHECK procedure in the Oracle-supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially granted to SYS user) to run this procedure.
  4. After invoking this PL/SQL procedure, you can see any violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. Refer to the Oracle documentation for more details.
    SQL>EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('HR,CO', TRUE);
    SQL>SELECT * FROM TRANSPORT_SET_VIOLATIONS;
    no rows selected
  5. Verify that the time zone version, DB character set, and national character set are the same or a subset of the target. For conditions that need to be met for the character set, refer to General Limitations on Transporting Data.
    SQL>SELECT * FROM V$TIMEZONE_FILE;
    SQL>SELECT * FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET'); 
  6. Find the platform ID using the following SQL (this is needed in later steps when running XTTS):
    SQL>SELECT PLATFORM_ID FROM V$DATABASE;
    
    PLATFORM_ID
    -----------
             13

Prepare the full tablespace backup on the source

In this phase, we perform the full backup (level 0) of the tablespaces to be transported on the source system. We copy the backups and the configuration files directly to the EFS file system.

Configure the xtt.properties file

We first mount Amazon EFS to transfer the files between the source and target. We are using /home/ssm-user/efs-mount-point as the EFS mount point.

In the following step, we unzip the tablespaces script provided by Oracle to the EFS mount point and create the directories required for src_scratch_location and TMPDIR.

We recommend backing up directly to Amazon EFS; you can modify the following paths to point to your EFS mount point:

  • Backup path – src_scratch_location=/home/ssm-user/efs-mount-point/xtts/xtts_scratch
  • Temp path – TMPDIR=/home/ssm-user/efs-mount-point/xtts/xtts_tmp

Log in to your source database host as the owner of your Oracle home:

$sudo su - oracle

[oracle@~]$mkdir /home/ssm-user/efs-mount-point/xtts
[oracle@~]$cd /home/ssm-user/efs-mount-point/xtts
[oracle@~]$unzip rman_xttconvert_VER4.3.zip
[oracle ~]$mkdir xtts_scratch xtts_tmp

Edit the properties file with your source database settings:

  • platformid – Use the source database platform ID, obtained from V$DATABASE.PLATFORM_ID.
  • tablespaces – Use a comma-separated list of tablespaces to transport from the source database to the destination database. This must be a single line; any subsequent lines will not be read.
  • src_scratch_location – Specify the location where the backup will be generated.
  • usermantransport – Set this to 1 if using Oracle Database 12c or higher.

For a complete list of properties that you can set, refer to the Oracle document Description of Parameters in Configuration File xtt.properties.

[oracle ~]$cat xtt.properties
#linux system
platformid=13
#list of tablespaces to transport
tablespaces=HR,CO
#location where backup will be generated
src_scratch_location=/home/oracle/xtts/xtts_scratch
#RMAN command for performing backup
#This should be set if using 12c or higher.
usermantransport=1

Back up the tablespaces on your source host

In this step, you use the xttdriver.pl script to make a full backup of your tablespaces. The output of xttdriver.pl is stored in the directory specified by the TMPDIR environment variable.

The tablespaces that you want to transport must be in read/write mode. Log in to your source database as a user with the ALTER TABLESPACE privilege and check their status first; there is no need to alter them if they're already in read/write mode.

  1. Check the current status of tablespaces HR and CO:
    SQL> select tablespace_name,status from dba_tablespaces where tablespace_name in ('HR','CO');
    TABLESPACE_NAME                STATUS
    ------------------------------ ---------
    CO                             READ ONLY
    HR                             READ ONLY
    
  2. If the STATUS column in the preceding query shows ONLINE, the tablespaces are already in read/write mode and the following commands can be skipped. Because the example output shows READ ONLY, place the tablespaces in read/write mode:
    SQL>ALTER TABLESPACE HR READ WRITE;
    SQL>ALTER TABLESPACE CO READ WRITE;
  3. Back up your tablespaces using the xttdriver.pl script. Optionally, you can specify --debug to run the script in debug mode:
    [oracle@~]$export TMPDIR=/home/oracle/xtts/xtts_tmp
    [oracle@~]$cd /home/oracle/xtts
    [oracle@~]$$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

When it’s complete, you will see the backup files created in the scratch location, and res.txt in the tmp location:

[oracle@~]$ls -ltr /home/oracle/xtts/xtts_scratch
total 3257688
-rw-r-----. 1 oracle dba 3331391488 Jun 20 16:47 HR_8_171v6mmk_1_1.bkp
-rw-r-----. 1 oracle dba 4481024 Jun 20 16:50 CO_9_181v6mqt_1_1.bkp

[oracle@~]$ls -ltr /home/oracle/xtts/xtts_tmp/res*
-rw-r--r--. 1 oracle dba 122 Jun 20 16:50 res.txt

Transfer the backup files to your target DB instance

Because we’re using EFS mount points for the data transfer, all we need to do is mount the same EFS mount point that we used on the source and mount that to the RDS for Oracle DB instance on the target. For instructions, refer to Amazon EFS integration.

If you need to stage your backups in an S3 bucket instead, upload the files to the bucket and then download them to your DB instance. For more information, see Uploading objects.
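If you stage the files in Amazon S3 and have Amazon S3 integration configured on the instance, a download task along the following lines pulls them into DATA_PUMP_DIR; the bucket name and prefix are examples:

```sql
-- Download the staged backup and dump files from the S3 bucket into DATA_PUMP_DIR
-- (bucket name and prefix are examples; replace them with your own)
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
         p_bucket_name    => 'my-xtts-staging-bucket',
         p_s3_prefix      => 'xtts/',
         p_directory_name => 'DATA_PUMP_DIR') AS task_id
  FROM dual;
```

The function returns a task ID that you can use to check the download log in the BDUMP directory.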

After you have mounted Amazon EFS on the RDS for Oracle DB instance, to transfer files between an RDS for Oracle instance and an EFS file system, create at least one Oracle directory and configure EFS file system permissions to control DB instance access. For instructions, refer to Transferring files between RDS for Oracle and an Amazon EFS file system.

In the following example, we create an Oracle directory DATA_PUMP_DIR that points to the /xtts_scratch directory on the EFS file system. The file system path value for the p_path_on_efs parameter is prefixed with the string /rdsefs-.

BEGIN
rdsadmin.rdsadmin_util.create_directory_efs(
p_directory_name => 'DATA_PUMP_DIR',
p_path_on_efs => '/rdsefs-fs-1234567890abcdef0/xtts_scratch');
END;
/

Import the tablespace and roll forward

In this step, you import the transportable tablespace into your RDS for Oracle DB instance using the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces. For cross-platform migration, make sure you provide the p_platform_id value that you collected earlier in the import_xtts_tablespaces procedure. During the operation, the status of the import is shown in the table rdsadmin.rds_xtts_operation_info.

Import the tablespaces on your target DB instance

To import the tablespaces on your target DB instance, log in to your target RDS for Oracle DB instance as ADMIN using SQL Plus or SQL Developer:

SQL> var x clob;
SQL> exec :x :=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('HR,CO','DATA_PUMP_DIR',13);

SQL> print x

1687189621613-1268

--View the log of the completed import by using the task ID from the previous step.
SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-1687189621613-1268.log'));

--Sample output from the log file
RMAN-08624: channel ORA_DISK_1: restoring foreign file 00009
RMAN-08003: channel ORA_DISK_1: reading from backup piece /rdsdbdata/datapump/CO_9_0u1v424f_1_1.bkp
RMAN-08626: channel ORA_DISK_1: restoring foreign file 9 to /rdsdbdata/db/RDSS3_A/datafile/CO_9.dbf
RMAN-08627: channel ORA_DISK_1: foreign piece handle=/rdsdbdata/datapump/CO_9_0u1v424f_1_1.bkp
RMAN-08023: channel ORA_DISK_1: restored backup piece 1
RMAN-08180: channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
RMAN-03091: Finished restore at 19-JUN-23

RMAN-08624: channel ORA_DISK_1: restoring foreign file 00008
RMAN-08003: channel ORA_DISK_1: reading from backup piece /rdsdbdata/datapump/HR_8_0t1v4239_1_1.bkp
RMAN-08626: channel ORA_DISK_1: restoring foreign file 8 to /rdsdbdata/db/RDSS3_A/datafile/HR_8.dbf
RMAN-08627: channel ORA_DISK_1: foreign piece handle=/rdsdbdata/datapump/HR_8_0t1v4239_1_1.bkp
RMAN-08023: channel ORA_DISK_1: restored backup piece 1
RMAN-08180: channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
RMAN-03091: Finished restore at 19-JUN-23

[2023-06-20 17:17:55] [RDSS3] [import_xtts] stderr:
[2023-06-20 17:17:55] [RDSS3] [import_xtts] exitcode: 0
[2023-06-20 17:17:55] [RDSS3] [import_xtts] Completed import_xtts

2023-06-20 17:17:55.492 UTC [INFO ] The task finished successfully.

Optionally, monitor progress by querying the table rdsadmin.rds_xtts_operation_info. The xtts_operation_state column shows the value EXECUTING, COMPLETED, or FAILED.

SQL> SELECT * FROM rdsadmin.rds_xtts_operation_info;

19-JUN-23 19-JUN-23 1687189621613-1268 COMPLETED XTTS_RESTORE

Apply incremental changes

In this step, you make and transfer incremental backups periodically while the source database is active. This helps reduce the downtime by reducing the size of your final tablespace backup. If you take multiple incremental backups, you must copy the res.txt file after the last incremental backup before you can apply it on the target instance.

If you activate the BLOCK CHANGE TRACKING feature on the source system, you can greatly reduce the time of the incremental backup.
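For example, you can enable and verify block change tracking on the source database as follows; the tracking file location is an example:

```sql
-- Enable block change tracking on the source database (Enterprise Edition feature)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/VCLOUD19C/bct.chg';

-- Confirm that change tracking is enabled
SELECT status, filename FROM v$block_change_tracking;
```

With change tracking enabled, RMAN reads only the blocks changed since the last backup instead of scanning every data file.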

Repeat the backup steps from the prepare phase to back up the tablespaces on your source host.

Transport the tablespace

During this phase, you make the source data read-only and make the destination data files consistent with the source database by creating and applying a final incremental backup. After the destination data files are consistent, you perform the normal transportable tablespace steps to export object metadata from the source database and import it into the destination database. The data being transported is accessible only in read-only mode until the end of this procedure.

Perform a final incremental backup

Put the transportable tablespaces in read-only mode on the source and perform the last incremental backup on the source:

  1. Log in to your source Oracle DB instance using SQL*Plus or SQL Developer and place the tablespaces in read-only mode:
    SQL> alter tablespace hr read only;
    SQL> alter tablespace co read only;
  2. Repeat the steps to back up the tablespaces in the source.

There is one key difference: you place your tablespaces in read-only mode before backing up your tablespaces for the final incremental backup.

  1. Log in to your source host as the owner of your Oracle home.
  2. Make sure that your ORACLE_HOME and ORACLE_SID environment variables point to your source database:
    [oracle@~]$$ORACLE_HOME/perl/bin/perl /home/oracle/xtts/xttdriver.pl --backup

NOTE: Because the tablespaces are in READ ONLY mode, the following warning can be ignored:

Prepare newscn for Tablespaces: 'HR'

DECLARE*
ERROR at line 1:
ORA-20001: TABLESPACE(S) IS READONLY OR,
OFFLINE JUST CONVERT, COPY
ORA-06512: at line 284

####################################################################
Warning:
Warnings found in executing /home/oracle/xtts/xtts_tmp/backup_Jun20_Tue_17_42_22_377//xttpreparenextiter.sql
####################################################################
Prepare newscn for Tablespaces: 'CO'
DECLARE*
ERROR at line 1:
ORA-20001: TABLESPACE(S) IS READONLY OR,
OFFLINE JUST CONVERT, COPY
ORA-06512: at line 284

The file incrbackups.txt in the tmp location provides the file names of the incremental backup files that are required as the last backup set:

[oracle@~]$ cat /home/oracle/xtts/xtts_tmp/incrbackups.txt
/home/oracle/xtts/xtts_scratch/191v6q07_1_1
/home/oracle/xtts/xtts_scratch/1a1v6q3c_1_1

Export your tablespace metadata from the source Oracle DB instance

Perform the tablespace transport by running transportable mode Data Pump export on the source database to export the object metadata being transported into a dump file:

[oracle@~]$expdp system \
dumpfile=xttdump.dmp \
directory=DATA_PUMP_DIR \
statistics=NONE \
transport_tablespaces=HR,CO \
transport_full_check=y \
logfile=tts_export.log
--sample output
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/app/oracle/admin/vcloud19c/dpdump/xttdump.dmp
Datafiles required for transportable tablespace CO:
/u01/oradata/VCLOUD19C/co01.dbf
Datafiles required for transportable tablespace HR:
/u01/oradata/VCLOUD19C/hr01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Tue Jun 20 17:51:18 2023 elapsed 0 00:00:25

Transfer the backup and export files to your target DB instance

The following files are required for this final step. They can be written directly to Amazon EFS, or you can follow the steps for Amazon S3.

/u01/app/oracle/admin/vcloud19c/dpdump/xttdump.dmp
/home/oracle/xtts/xtts_scratch/191v6q07_1_1
/home/oracle/xtts/xtts_scratch/1a1v6q3c_1_1
/home/oracle/xtts/xtts_tmp/res.txt

If you use Amazon S3 integration, download the files from Amazon S3 to the data_pump_dir location on your RDS for Oracle instance.

Import the final backup and tablespace metadata to your target DB instance

Before you import the metadata, we strongly recommend that you confirm that a manual DB snapshot was successfully created after you imported your tablespaces. The snapshot contains all imported tablespaces, so if the metadata import fails, you don't need to repeat the backup and import process.
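For example, you can take a manual snapshot and wait for it to complete using the AWS CLI; the instance and snapshot identifiers here are placeholders:

```shell
# Create a manual snapshot of the target DB instance before the metadata import
# (identifiers are examples; replace them with your own)
aws rds create-db-snapshot \
    --db-instance-identifier my-rds-oracle-instance \
    --db-snapshot-identifier pre-xtts-metadata-import

# Block until the snapshot reaches the "available" state
aws rds wait db-snapshot-available \
    --db-snapshot-identifier pre-xtts-metadata-import
```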

  1. To import the Data Pump metadata into your RDS for Oracle DB instance, first apply the last incremental backup to the destination data files. Verify that all the required files are present in the DATA_PUMP_DIR location on your target Oracle instance. You can use the following command to list the files in the data_pump_dir location:
    SQL> select * from table(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'));
    --Confirm the files are present and then you can proceed with the next steps.
    
    SQL> exec UTL_FILE.FREMOVE ('DATA_PUMP_DIR', 'res.txt');
    SQL> var x clob;
    SQL> exec :x :=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('HR,CO','DATA_PUMP_DIR',13);

Now you can run Data Pump import using the export file on the destination to plug in the tablespaces.

  2. Start SQL*Plus or Oracle SQL Developer and log in to your target Oracle DB instance.
  3. Create the users that own schemas in your transported tablespaces, if these users don't already exist:
    SQL> CREATE USER HR IDENTIFIED BY password;
    SQL> CREATE USER CO IDENTIFIED BY password;
  4. Import the metadata from your source database:
    SQL> exec rdsadmin.rdsadmin_transport_util.import_xtts_metadata('xttdump.dmp','DATA_PUMP_DIR');
    
    SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'BDUMP')) order by mtime desc;

--Check the name of the log file with the format import_xtts_metadata*log

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file(p_directory => 'BDUMP',p_filename => 'rds-xtts-import_xtts_metadata-2023-06-20.18-06-52.597051000.log'));

--sample output

input datafile file number=00007 name=/rdsdbdata/db/RDSS3_A/datafile/HR_8.dbf
channel d1: validation complete, elapsed time: 00:00:07
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
7 OK 0 3073 435201 49441992
File Name: /rdsdbdata/db/RDSS3_A/datafile/HR_8.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 405644
Index 0 0
Other 0 26483

Finished validate at 2023-06-19 17:55:00

released channel: d1

Recovery Manager complete.

input datafile file number=00006 name=/rdsdbdata/db/RDSS3_A/datafile/CO_9.dbf
channel d1: validation complete, elapsed time: 00:00:07
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6 OK 0 179 262145 49441986
File Name: /rdsdbdata/db/RDSS3_A/datafile/CO_9.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 74
Index 0 148
Other 0 261743

Finished validate at 2023-06-19 17:55:08

released channel: d1

Recovery Manager complete.

PL/SQL procedure successfully completed.


Validate the transported data

At this point, the transported tablespaces are still read-only in the destination database. Logically validate the transported tablespaces using the following Amazon RDS procedure:

SQL>SET SERVEROUTPUT ON
BEGIN
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'HR',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'CO',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
END;
/

--Sample output

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Jun 21 16:49:56 2023
Version 19.19.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: RDSS3 (DBID=4239462957)

RMAN> run {
2> ALLOCATE CHANNEL d1 DEVICE TYPE DISK ;
3> VALIDATE TABLESPACE 'HR' CHECK LOGICAL;
4> RELEASE CHANNEL d1;
5> }
6> 

using target database control file instead of recovery catalog

allocated channel: d1
channel d1: SID=1901 device type=DISK

Starting validate at 2023-06-21 16:49:57
channel d1: starting validation of datafile
channel d1: specifying datafile(s) for validation
input datafile file number=00007 name=/rdsdbdata/db/RDSS3_A/datafile/HR_8.dbf
channel d1: validation complete, elapsed time: 00:00:07
List of Datafiles
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
7 OK 0 3073 435201 49677487
File Name: /rdsdbdata/db/RDSS3_A/datafile/HR_8.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 405644
Index 0 0
Other 0 26483

Finished validate at 2023-06-21 16:50:04

In this phase, you pre-create in the target database the remaining metadata objects from the source database, such as procedures, functions, and sequences.

  1. Run Data Pump export on the source database to export the object metadata into a dump file:
    [oracle@~]$expdp system \
    dumpfile=fullmeta_dump.dmp \
    content=metadata_only \
    exclude=table,index \
    directory=DATA_PUMP_DIR \
    full=y \
    logfile=fullmeta_export.log
  2. Transfer the dump file created on the source database to the target database and import it. For instructions, refer to Integrate Amazon RDS for Oracle with Amazon EFS.
  3. In addition, you can compare the counts of objects, such as tables, indexes, synonyms, views, and packages, between the source and target databases.
  4. Place your tablespaces in read/write mode:
SQL> alter tablespace hr read write;
SQL> alter tablespace co read write;
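To compare object counts, you can run a query like the following on both the source and target databases and compare the results; the schema names follow this post's example:

```sql
-- Count objects per schema and type; run on both source and target and compare
SELECT owner, object_type, COUNT(*) AS object_count
  FROM dba_objects
 WHERE owner IN ('HR', 'CO')
 GROUP BY owner, object_type
 ORDER BY owner, object_type;
```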

Clean up leftover files

In this step, you remove any unneeded files.

  1. Clean up old backups from DATA_PUMP_DIR using UTL_FILE.FREMOVE.
  2. Drop any unused tablespaces and the data files associated with the tablespaces.
  3. Remove the log files that are not required.
  4. Get a list of orphaned data files by running the procedure rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files and delete the orphaned files by running the procedure rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import.
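The following statements sketch parts of the cleanup using procedures shown earlier in this post; the backup piece name is an example:

```sql
-- Remove a backup piece that is no longer needed from DATA_PUMP_DIR
-- (file name is an example from this post)
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'HR_8_171v6mmk_1_1.bkp');

-- List any orphaned data files left behind by an incomplete import
SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);
```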

Limitations

The use of transportable tablespaces in Amazon RDS for Oracle is subject to certain limitations imposed by Oracle Database. To obtain more detailed information, refer to the Limitations on Transportable Tablespaces and General Limitations on Transporting Data sections in Transporting Data. In addition to these limitations, take note of the following restrictions specific to transportable tablespaces in Amazon RDS for Oracle:

  • Edition limitation – Only Enterprise Edition (EE) is supported; Standard Edition 2 (SE2) can’t be used as either the source or target database.
  • Source database restriction – Migration of data from an RDS for Oracle DB instance using transportable tablespaces is not supported. Transportable tablespaces can only be used to migrate data to an RDS for Oracle DB instance.
  • Release level compatibility – Transporting tablespaces into a database with a lower release level is not allowed. The target database must be at the same or later release level compared to the source database.
  • Supported file size – If you transfer files using Amazon S3, the maximum supported file size is 5 TiB.
  • Oracle option compatibility – If the source database employs Oracle options such as Spatial, transportable tablespaces can’t be utilized unless the same options are configured on the target database.
  • Amazon RDS for Oracle replica configuration – Transporting tablespaces into an RDS for Oracle DB instance in an Oracle replica configuration is not possible. As a workaround, you can delete all replicas, transport the tablespaces, and then recreate the replicas.

Conclusion

In this post, we demonstrated how you can use the Oracle transportable tablespaces feature to copy a set of tablespaces from an Oracle database hosted on Amazon EC2 or an on-premises Oracle database to an RDS for Oracle DB instance. The migration downtime can be significantly reduced with the use of RMAN incremental backups.

If you have questions or suggestions, leave a comment.


About the authors

Vineet Agarwal is a Senior Database Specialist Solutions Architect with AWS. Prior to AWS, Vineet worked for large enterprises in financial, retail, and healthcare verticals, helping them with database and solution architectures. In his spare time, you’ll find him playing poker, trying a new activity, or working on a DIY project.

Manash Kalita is an AWS Senior Database Specialist Solutions Architect for APJ, with extensive experience in enterprise cloud architecture.