Category: RDS Oracle


Cross-Region Automatic Disaster Recovery on Amazon RDS for Oracle Database Using DB Snapshots and AWS Lambda

Sameer Malik is a specialist solutions architect and Christian Williams is an enterprise solutions architect at Amazon Web Services.

Many AWS users take advantage of the managed service offerings in the AWS portfolio to remove much of the undifferentiated heavy lifting from their day-to-day activities. Amazon Relational Database Service (Amazon RDS) is one such managed service for relational database deployments. With Amazon RDS, you can significantly reduce the administrative overhead of managing and maintaining a relational database.

There are multiple ways to implement disaster recovery (DR) solutions, depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO), cost, and the administrative effort involved in setting up and maintaining the DR site. This post covers setting up cross-Region DR with the “backup and restore” method, which is a cost-effective DR strategy that meets less stringent RTO/RPO requirements. (For more information, see the RTO and RPO considerations at the end of this post.)

Backup and restore DR solution overview
In this post, we describe how to automate the cross-Region DR process for Amazon RDS for Oracle Database using the backup and restore DR method. This strategy uses the underlying Amazon EBS snapshot mechanism (both Amazon RDS system-generated automated snapshots and manual snapshots). It also uses a small set of AWS tools: Amazon RDS events, Amazon SNS topics, and AWS Lambda functions. The final state is an automated architecture that does the following:

  1. Takes snapshots based on a user-defined schedule.
  2. Copies the snapshots to a second (DR) Region; the copy frequency is determined by your RPO requirement.
  3. Restores the latest snapshot to spin up another Amazon RDS for Oracle database instance in the DR Region in the event of a failure in the primary Region.

The process is shown in the following diagram:

The following are the necessary steps described in this post:

  1. Create Amazon SNS topics to subscribe to the Amazon RDS event notifications (in the next step).
  2. Enable Amazon RDS event notifications on the primary database.
  3. Create Lambda functions that do the following:
     a. Initiate snapshot creation of the primary database.
     b. Copy the newly created snapshot to the DR Region.
     c. Restore the copied snapshot in the DR Region in the event of a failure.
  4. Enable Amazon CloudWatch Events to schedule the initial snapshot creation based on your RPO requirements.
  5. Optional: Create an additional Lambda function to delete old snapshots.


Oracle Migration Automated – How to Automate Generation of Selection and Transformation Rules for AWS Database Migration Service

Akm Raziul Islam is a database architect at Amazon Web Services.

You can use AWS Database Migration Service (AWS DMS) to copy data from multiple schemas in an Oracle source to a single Amazon RDS for Oracle target. You can also migrate data to different schemas on the target. But to do this, you need to use schema transformation rules in the AWS DMS table mappings.

If you are migrating multiple Oracle schemas, it can be challenging to manually create and track multiple selection and transformation rules. This post describes how you can use simple dynamic SQL in Oracle to generate custom JSON selection and transformation rules for all your user schemas.

Creating selection and transformation JSON rules
For this example, the Oracle target endpoint uses the RDSMASTER user, and the source endpoint uses DMS_USER, so you must create selection and transformation rules as follows. The rules ensure that the data lands in the DMS_USER schema on the target and that an RDSMASTER schema is not created.

{
  "rule-type": "selection",
  "rule-id": "1",
  "rule-name": "1",
  "object-locator": {
    "schema-name": "DMS_USER",
    "table-name": "%"
  },
  "rule-action": "include"
},
{
  "rule-type": "transformation",
  "rule-id": "2",
  "rule-name": "2",
  "rule-action": "rename",
  "rule-target": "schema",
  "object-locator": {
    "schema-name": "DMS_USER"
  },
  "value": "DMS_USER"
}

You can create the same rules in the AWS DMS console:

If these transformation rules are not created, then by default the DMS_USER schema from the source, and all other schemas selected for migration, are copied under the target endpoint user RDSMASTER. In AWS DMS, you can create selection and transformation rules using the console, the AWS CLI, or JSON. But it can be challenging to create and track multiple selection and transformation rules across multiple Oracle schema migrations.

For example, if you have to migrate 300 or more Oracle schemas from each database, you would need more than 600 rules (300 selection rules plus 300 transformation rules). It’s pretty challenging to enter every single schema by hand for each JSON selection and transformation rule. A better approach is to use simple dynamic SQL in Oracle to generate the JSON selection and transformation rules for all the user schemas.

Generating JSON rules for multiple schemas
Here is the dynamic SQL that you can run on the Oracle source to generate the JSON selection and transformation rules:

-- Drop any earlier copies of the helper sequences
-- (errors on the first run can be ignored).
drop sequence my_number1;
drop sequence my_number2;
-- The sequences generate the rule IDs: odd values for
-- selection rules, even values for transformation rules.
create sequence my_number1 start with 1 increment by 2;
create sequence my_number2 start with 2 increment by 2;
-- Emit one selection rule and one transformation rule per user schema.
select '
        {
            "rule-type": "selection",
            "rule-id": "'||my_number1.nextval||'",
            "rule-name": "'||my_number1.currval||'",
            "object-locator": {
                "schema-name": "'||username||'",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "'||my_number2.nextval||'",
            "rule-name": "'||my_number2.currval||'",
            "rule-action": "rename",
            "rule-target": "schema",
            "object-locator": {
                "schema-name": "'||username||'"
            },
            "value": "'||username||'"
        },'
from DBA_USERS
where default_tablespace = 'USERS' and ORACLE_MAINTAINED = 'N';

A few things to note about this code: I ran it using SQL Developer connected to an Oracle 12c instance. Oracle 12c comes with an ORACLE_MAINTAINED column in DBA_USERS that isn’t available in earlier versions. This column ensures that you consider only non-system schemas. If the source Oracle version is earlier than 12c, simply skip the following part of the code:

and ORACLE_MAINTAINED = 'N';

And replace the final line of the query with something like this:

FROM DBA_USERS where default_tablespace='USERS' and USERNAME NOT in ('DIP','AUDSYS','GSMUSER','XS$NULL','GSMCATUSER');

You can add to the USERNAME list any other schemas that you don’t want included in the generated JSON.
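For a single hypothetical user schema named HR (a placeholder, not a schema from the original post), the query emits output along the following lines. Note the trailing comma after the last curly brace, which you remove in step 2 below:

        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "rename",
            "rule-target": "schema",
            "object-locator": {
                "schema-name": "HR"
            },
            "value": "HR"
        },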

To generate the selection and transformation rules:

  1. Run the code from SQL Developer (press F5 or choose Run Script).
  2. Scroll down to the last line of the generated code. It ends with a comma (,) after the closing curly brace (}). Delete the comma and keep the curly brace.
  3. Copy the rest of the JSON code (from the curly brace before the first "rule-type" to the last curly brace in the output window), and paste it in place of the "----- code goes here" line in the following wrapper:
    {
        "rules": [
            ----- code goes here
        ]
    }
    
  4. Assemble the code in any text editor (for example, TextWrangler or Notepad++), and paste it into the AWS DMS task’s Table mappings section on the JSON tab (making sure that Enable JSON editing is selected).
    After you paste the JSON code, if you choose the Guided tab, you can see that all the schemas are auto-populated for both selection and transformation rules.

Summary
You can use simple dynamic SQL in Oracle to generate AWS DMS custom selection and transformation rules for filtering and table mappings on the target. For more information about AWS DMS selection and transformation table mapping, see the AWS DMS documentation.

Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle

Marc Teichtahl is a solutions architect at Amazon Web Services.

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Amazon RDS offers the benefits of high availability, security, and scalability, while eliminating the need to manage infrastructure, apply upgrades and updates, and handle the complexity of a highly available environment.

Recently, many enterprises have asked us for guidance on how to install Oracle JD Edwards EnterpriseOne on Amazon RDS. This blog post provides an overview of the installation, highlighting the modifications required to ensure success. Accompanying this post is a detailed whitepaper that provides a step-by-step guide to the process.

Key elements of the installation process for JD Edwards EnterpriseOne on Amazon RDS for Oracle include creating an Amazon RDS for Oracle instance, configuring the SQL*Plus client, installing the platform pack, and modifying the original installation scripts. Much of this process is straightforward. However, significant changes to the original installation scripts are required. This blog post describes those changes.

You can follow the standard JD Edwards installation process found in the JD Edwards EnterpriseOne Applications Installation Guide until section 6.8, “Understanding the Oracle Installation.” At this point, the installation process varies when using Amazon RDS for Oracle.

The standard installation process creates a series of installation scripts. However, the process assumes these scripts will be executed on the database server. In this case, we are targeting Amazon RDS, and we can’t execute installation scripts on the Amazon RDS instance itself.

For the functionality we require, we need to modify the standard installation scripts in four key ways:

  1. Move the data pump files from the various directories on the deployment server installation media to the DATA_PUMP_DIR directory on the RDS instance using DBMS_FILE_TRANSFER.PUT_FILE (the first sketch after this list shows an example call). For more information, see the Amazon RDS documentation.
     [Image: DataFiles]
  2. Amazon RDS only supports Oracle Managed Files (OMF) for data files, log files, and control files. Thus, when creating data files and log files, you can’t specify physical file names (for more information, see the Amazon RDS documentation). Make syntax changes to the CREATE TABLESPACE statements accordingly (the second sketch after this list shows a comparable change). Take the following example.
     [Image: CreateTableSpace]
     This code must be modified as shown following.
     [Image: BigFileTableSpace]
  3. Change the name of the pristine data dump file, and also the import data script, for the TEST and pristine environments. The standard scripts change the import directory; we instead change the file name. Make these changes to ORCL\InstallOracleDatabase.BAT, as shown following.
     [Image: Bat1]
     [Image: Bat2]
  4. Change the database grants to remove “create any directory” because this grant doesn’t work on Amazon RDS for Oracle; Amazon RDS for Oracle doesn’t support creating directories on the RDS instance. (The third sketch after this list shows a comparable change.)
     For example, take the following Create_dir.sql statement.
     [Image: CreateSEssion]
     Here, you change this statement to the following.
     [Image: CreateSEssion2]
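To illustrate the first change, here is a minimal sketch of moving a dump file to the RDS instance. It assumes a database link named to_rds has already been created from the source database to the RDS instance; the link name and the file name e1_install.dmp are hypothetical placeholders, not values from the installation guide.

-- Run on the source database: copy a dump file from the local
-- DATA_PUMP_DIR to the RDS instance's DATA_PUMP_DIR over the
-- "to_rds" database link (link and file names are hypothetical).
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'e1_install.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'e1_install.dmp',
    destination_database         => 'to_rds');
END;
/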
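The second change looks roughly like the following sketch. The “before” statement is a hypothetical example of the kind of statement the standard scripts generate, not the exact JD Edwards script contents; the “after” form relies on OMF to name the data file and uses a bigfile tablespace.

-- Before (on-premises script, hypothetical): names a physical
-- data file, which Amazon RDS doesn't allow.
CREATE TABLESPACE prodt
  DATAFILE '/u01/oradata/prodt01.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;

-- After (Amazon RDS): omit the file name and let Oracle Managed
-- Files generate it; a bigfile tablespace keeps a single data file.
CREATE BIGFILE TABLESPACE prodt
  DATAFILE SIZE 1G
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;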
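Finally, a sketch of the grant change. The JDE user name and the exact privilege list are hypothetical; the point is that the create any directory grant is removed while the remaining grants are kept.

-- Before (Create_dir.sql, hypothetical): includes a grant that
-- fails on Amazon RDS for Oracle.
GRANT CREATE SESSION, CREATE ANY DIRECTORY TO JDE;

-- After: the same grant with "create any directory" removed.
GRANT CREATE SESSION TO JDE;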

The preceding guidelines should give you enough insight into adapting the JD Edwards EnterpriseOne installation to take advantage of the benefits of Amazon RDS for Oracle. For more granular step-by-step instructions, see the whitepaper.

Using Amazon RDS for Oracle as the Oracle SOA Suite Database

This post is written by Fabio Douek, an architect at AWS Partner Rubicon Red.

When we started to provision Oracle Fusion Middleware platforms on AWS, we took advantage of the ability to create AWS infrastructure within minutes. We can also use MyST to provision complex Oracle Fusion Middleware platforms that are EDG-compliant (that is, they follow Oracle's Enterprise Deployment Guide) in less than an hour.

One challenge we faced was that we couldn’t use Amazon RDS for Oracle as the database for our Oracle Fusion Middleware installations, primarily because the RDS master user didn’t have the database privileges required to run the Oracle Repository Creation Utility (RCU). As a result, we implemented our own automation for provisioning Oracle Database on Amazon EC2 instances.

Although this approach works for running development and test workloads on AWS, when it comes to implementing production workloads, Amazon RDS for Oracle provides a number of additional benefits. These include simplified administration tasks such as backups, software patching, monitoring, and hardware scaling.

In addition, Multi-AZ deployment with Amazon RDS for Oracle simplifies the implementation of a highly available architecture, because it provides built-in support for automated failover from your primary database to a synchronously replicated secondary database in another Availability Zone in case of a failure.

Our approach to this challenge started to change late last year, when a number of our customers began looking at running Oracle SOA workloads in production on AWS. As an AWS Technology Partner, we provided this feedback to AWS, which in turn invited us to collaborate with the Amazon RDS team.

We spent the last four months working with the Amazon RDS team to test the RCU capability within MyST. This process went extremely well, and the Amazon RDS team worked closely with us to support the go-live of our first customer on Oracle SOA 12.2.1 on AWS using Amazon RDS for Oracle, which we believe to be a world first!

The following diagram illustrates the type of architecture required.

More recently, Amazon announced that RCU is officially supported on Amazon RDS for Oracle. That’s great news, because it means that we can provision an EDG-compliant, highly available Oracle Fusion Middleware environment within minutes, and take advantage of Amazon RDS to simplify ongoing operations.

To find out more detail on the steps required to enable Oracle SOA with RDS Oracle, see our blog post on the Rubicon Red website.