A common technique for transferring files between a database and various application servers is to use the Oracle feature UTL_FILE. This approach allows client servers to copy files into and then read files from a POSIX-compliant directory. It also allows the database to write to and read from those same files by using PL/SQL routines.
However, this type of file transfer presents a challenge when moving Oracle databases into Amazon RDS. After the move, the database can still read and write files in a POSIX-compliant directory, but those directories are not exposed to external systems, so external systems can't access them. For many organizations, this issue blocks adoption of Amazon RDS for Oracle.
This blog post takes you through the process of solving this problem by integrating an Amazon RDS for Oracle instance with an Amazon S3 bucket using Oracle PL/SQL routines. This provides an external file transfer mechanism similar to the functionality currently provided by UTL_FILE.
This blog post deals only with the process of modifying an Amazon RDS for Oracle instance. You can find additional reference information in the following documentation topics, which take you through creating an RDS for Oracle instance and gaining access to that instance:
You can also find reference information in the following documentation topic, which takes you through the process of creating an S3 bucket with which to test this process:
In all cases, the S3 bucket and Oracle instances are private to the Amazon VPC. Public access is not required for this solution.
Prerequisites
Before you can create an S3 interface into RDS for Oracle, you need to fulfill the five prerequisites listed following.
This blog post does not describe how to perform these steps.
- You need an S3 environment with a suitable bucket. This should be a private bucket, not a public bucket.
- You need a mechanism for copying files into and out of an S3 bucket. AWS CLI is the easiest way to do this (see the example after this list).
- You need an Amazon RDS for Oracle instance:
- This instance must be at least Oracle 12cR1.
- The instance must be associated with an option group that contains the APEX options.
- The database can be Standard Edition, which allows the license-inclusive model to be used.
- You need an external environment that can connect to the Amazon RDS for Oracle instance.
- Oracle Instant Client is the easiest way to achieve this. If you are using Oracle Instant Client, you need the following components:
- basic
- sqlplus
- tools
- jdbc
- odbc
- sdk
- Optionally, you need a full Oracle 12c client environment so that you can create the necessary Oracle Wallet. If this is not possible, the code included with this blog post includes a pre-created wallet that you can use for this purpose.
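As mentioned in the list above, the AWS CLI is the easiest way to copy files into and out of the bucket. A minimal sketch of that prerequisite follows; the bucket name, AWS Region, and profile are placeholders, so substitute your own.
# create a private bucket (public access is not required for this solution)
$ aws s3 mb s3://file-test.example.com --region eu-west-2 --profile mike
# confirm that you can copy files into and out of it
$ echo 'hello' > hello.txt
$ aws s3 cp hello.txt s3://file-test.example.com/ --profile mike
$ aws s3 cp s3://file-test.example.com/hello.txt . --profile mike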
Start with a simple file copy in a traditional database
This section is here for completeness only and demonstrates a simple UTL_FILE routine running in a traditional database. It is this routine that we plan to replicate in RDS for Oracle.
Feel free to carry out these commands yourself, but doing so is not necessary to complete the solution detailed in this blog post.
In all of these examples I use the database user mike, and for simplicity’s sake I have granted DBA to this user.
The first step is to create a directory to use.
$ sqlplus mike/<password>
CREATE OR REPLACE DIRECTORY FILE_COPY AS '/data/file-copy';
EXIT;
Then run some simple code to populate a file in that directory.
$ touch /data/file-copy/test-file
$ sqlplus mike/<password>
DECLARE
fHandle UTL_FILE.FILE_TYPE;
BEGIN
fHandle := UTL_FILE.FOPEN('FILE_COPY', 'test-file', 'w');
UTL_FILE.PUT_LINE(fHandle, 'This is the first line' );
UTL_FILE.PUT_LINE(fHandle, 'This is the second line');
UTL_FILE.PUT_LINE(fHandle, 'This is the third line' );
UTL_FILE.FCLOSE(fHandle);
END;
/
EXIT;
$ cat /data/file-copy/test-file
This is the first line
This is the second line
This is the third line
This is a simple version of the process that many customers use. It is difficult to replicate in RDS for Oracle because the security of RDS environments prevents access to the underlying infrastructure.
This is the example code we are going to use to perform a file transfer in RDS.
Create an APEX option group
Before creating a database, you need to create the option group that we are going to use. I have found that it helps to include the VPC, option, and database version in your naming convention.

Add APEX options
The only option that is required is the APEX option; you don’t need APEX-DEV. However, I recommend that you select the most recent version, currently 5.1.4 v1.

Create an RDS for Oracle instance
For the purposes of this blog post, I am creating an Oracle Standard Edition Two instance, which is available through the RDS Free Usage Tier.

You must create an Oracle 12.1 database, because the solution in this blog post requires SHA2 encryption.

Give your database a name. I used the name file-test for this blog post.
Note: The master username is akin to the SYS user in a traditional database. My recommendation is to set a standard for this and stick to it.

Set your database options. I accepted the defaults for this, except for the option group, where you need to select the group you created earlier that contains the APEX options.

After your database has been created, record the endpoint and use it to connect to your database from a bastion host. Alternatively, you can create your database with public access, but I don’t recommend this.
Edit your tnsnames.ora file to add a suitable entry for the database you have just created. My database is called file-test, as is the tnsnames entry.
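As a rough sketch, the tnsnames.ora entry might look like the following. The host, port, and SID shown here are placeholders; use the endpoint you recorded and the database name you chose when you created the instance.
file-test =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <your-rds-endpoint>)(PORT = 1521))
    (CONNECT_DATA = (SID = <your-database-name>))
  )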
Test your UTL_FILE code with your RDS for Oracle database
The first step is to create a directory to use. In all tests, I am using the admin user that was specified when the database was created.
Note: Be aware of the change in syntax for RDS for Oracle.
$ sqlplus mike/<password>@file-test
Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
BEGIN
RDSADMIN.RDSADMIN_UTIL.CREATE_DIRECTORY('FILE_COPY');
END;
/
EXIT;
Then run some simple code to populate a file in that directory. This is the same code that we used previously in the traditional database.
$ sqlplus mike/<password>@file-test
DECLARE
fHandle UTL_FILE.FILE_TYPE;
BEGIN
fHandle := UTL_FILE.FOPEN('FILE_COPY', 'test-file', 'w');
UTL_FILE.PUT_LINE(fHandle, 'This is the first line' );
UTL_FILE.PUT_LINE(fHandle, 'This is the second line');
UTL_FILE.PUT_LINE(fHandle, 'This is the third line' );
UTL_FILE.FCLOSE(fHandle);
END;
/
PL/SQL procedure successfully completed.
EXIT;
The PL/SQL code has run perfectly, but the problem now is: how do I see the contents of the file I have just created?
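From inside the database, you can at least confirm that the file exists by using the rdsadmin.rds_file_util.listdir function that Amazon RDS for Oracle provides (assuming your engine version includes it), but that still doesn't make the file visible to any external system.
$ sqlplus mike/<password>@file-test
SELECT filename, filesize, mtime
FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'FILE_COPY'))
/
EXIT;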
Write the files to an S3 bucket
The solution to this conundrum is to copy the files created by the UTL_FILE routines to an S3 bucket, which can then be accessed from any external system.
Create an Oracle directory
First create an Oracle directory to store the Oracle Wallet that is required for S3 access. Then extract the directory name that has been created for later use.
$ sqlplus mike/<password>@file-test
BEGIN
RDSADMIN.RDSADMIN_UTIL.CREATE_DIRECTORY('SSL_WALLET_DIR');
END;
/
SELECT directory_path
FROM all_directories
WHERE directory_name = 'SSL_WALLET_DIR'
/
DIRECTORY_PATH
--------------------------------------------------------------------------
/rdsdbdata/userdirs/02
EXIT;
Download and update the Amazon S3 package
Then you need to download the necessary AWS_RDS_TO_S3_PKG code, located in this AWS GitHub repository.
After it’s downloaded, you need to modify the environment-specific variables in the Amazon S3 package to work against your environment.
In an editor of your choice, edit the package body script AWS_RDS_TO_S3_PKG.pkb and search for the following strings.
Note: With the exception of the wallet path, you can also set all of these values at runtime by using the AWS_RDS_TO_S3_PKG package if necessary.
$ vi AWS_RDS_TO_S3_PKG.pkb
AWS_ACCESS_KEY VARCHAR2(20) := 'AKIAIOSFODNN7EXAMPLE';
AWS_REGION VARCHAR2(16) := 'eu-west-2';
AWS_SECRET_KEY VARCHAR2(64) := 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY';
TIME_ZONE VARCHAR2(64) := 'UTC';
WALLET_PASSWORD VARCHAR2(32) := 'S3-Oracle';
WALLET_PATH CONSTANT VARCHAR2(32) := 'file:/rdsdbdata/userdirs/02/';
- The access key and secret key are the security credentials for an AWS Identity and Access Management (IAM) user. This IAM user must have programmatic access to the S3 bucket into which we are going to write the files (a sample policy sketch follows this list).
- The AWS Region is where the S3 bucket is located.
- The time zone shouldn’t need to be changed, but you can adjust it to your AWS Region if required.
- The wallet password is the password for the wallet in GitHub. If you plan to use your own wallet, then this password also needs to be changed.
- The wallet path is created when you create the Oracle directory. Use the directory path that was returned by the query you ran after creating the directory previously.
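The blog post doesn't prescribe an exact IAM policy, but based on the operations demonstrated later (listing buckets, listing objects, and copying objects in both directions), a minimal sketch for the IAM user might look like the following. The user name, policy name, and bucket name are placeholders; adjust the actions to your own security requirements.
$ cat > s3-file-transfer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" },
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::<your-bucket>" },
    { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": "arn:aws:s3:::<your-bucket>/*" }
  ]
}
EOF
$ aws iam put-user-policy --user-name <iam-user> --policy-name s3-file-transfer --policy-document file://s3-file-transfer-policy.json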
Create database access control lists (ACLs)
Before your database can successfully communicate with an S3 bucket, you have to open up the database firewall to allow access. You can do this by using the database ACL administration command.
Note: Access is granted to the owner of the APEX package rather than the owner of the database package that we are creating here.
Note: You also have to create the necessary corresponding AWS network security groups and network ACLs in your VPC environment.
$ sqlplus mike/<password>@file-test
BEGIN
DBMS_NETWORK_ACL_ADMIN.append_host_ace
(
host => '*.amazonaws.com',
lower_port => 443,
upper_port => 443,
ace => xs$ace_type
(
privilege_list => xs$name_list('http'),
principal_name => 'APEX_050100',
principal_type => xs_acl.ptype_db
)
);
END;
/
Create an Oracle Wallet
To allow the database to successfully connect to an S3 bucket, we store the SSL certificates in an Oracle Wallet. Later, we copy this wallet into the Oracle directory that we just created.
Issue the following command to identify which certificates are required to access an Amazon S3 bucket. We are interested in the information at the beginning of the stack that is returned.
$ openssl s_client -connect s3.eu-west-2.amazonaws.com:443
CONNECTED(00000003)
depth=2 C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Baltimore CA-2 G2
verify return:1
depth=0 C = US, ST = Washington, L = Seattle, O = Amazon.com Inc., CN = *.s3.eu-west-2.amazonaws.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=*.s3.eu-west-2.amazonaws.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Baltimore CA-2 G2
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Baltimore CA-2 G2
i:/C=IE/O=Baltimore/OU=CyberTrust/CN=Baltimore CyberTrust Root
---
From this output, we can see that we require two certificates:
- DigiCert Baltimore CA-2 G2
- Baltimore CyberTrust Root
These can be downloaded from the Digicert website at the following addresses:
After you download these certificates, run the Oracle utility for managing Public Key Infrastructure (PKI) elements, orapki. One of the challenges here is that this utility is only available in a full installation of Oracle, so the wallet needs to be created in the source database environment.
Alternatively, the Oracle JDeveloper Studio edition also contains the orapki utility. This is a free-to-use product, although a rather large download.
Finally, if neither of these options are suitable, I have included a copy of the necessary wallet in the GitHub repository.
These are the steps that you need to run if you plan to create your own wallet.
$ orapki wallet create -wallet Wallet/ -pwd S3-Oracle -auto_login
$ orapki wallet add -wallet Wallet/ -trusted_cert -cert BaltimoreCyberTrustRoot.crt -pwd S3-Oracle
$ orapki wallet add -wallet Wallet/ -trusted_cert -cert DigiCertBaltimoreCA-2G2.crt -pwd S3-Oracle
$ orapki wallet display -wallet Wallet/
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.
Requested Certificates:
User Certificates:
Trusted Certificates:
Subject: CN=DigiCert Baltimore CA-2 G2,OU=www.digicert.com,O=DigiCert Inc,C=US
Subject: CN=Baltimore CyberTrust Root,OU=CyberTrust,O=Baltimore,C=IE
Copy the wallet to the RDS for Oracle wallet directory
The next challenge is copying the wallet into the Oracle Wallet directory. As you know, there is no access to the underlying file structure of the RDS environment.
The recommended method is to use S3 and download the file by using HTTP. However, the recent security upgrade on S3 to SHA-2 has rendered this method inoperable.
An alternative is to use a Perl script, and I have included a copy of a suitable Perl script in the GitHub directory for this blog post.
This script has two variables that might require modification before you can use the file in your environment. Using an editor of your choice, make the following changes to the script.
$ vi Copy-eWallet.pl
...
my $dirname="SSL_WALLET_DIR";
...
my $filename="ewallet.p12";
...
- SSL_WALLET_DIR is the name of the Oracle directory that we previously created.
- ewallet.p12 is the name of the wallet file that we created with the orapki utility.
The biggest challenge with this approach is modifying the environment that you are using to connect to RDS for Oracle so that it can run Perl against an Oracle database.
These are the steps that I have used for this purpose. They use the Oracle Instant Client and have been run on RHEL 6.9 and the latest Amazon Machine Image (AMI).
# install necessary packages
$ su -
$ yum install -y perl-DBI
$ yum install -y cpan
# configure environment variables for root
$ export ORACLE_BASE=/usr/lib/oracle
$ export ORACLE_HOME=$ORACLE_BASE/12.2/client64
$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
$ export TNS_ADMIN=$ORACLE_HOME/lib/network/admin
$ export PERL_MM_USE_DEFAULT=1
$ export DYLD_LIBRARY_PATH=$LD_LIBRARY_PATH
# copy a make file into the client area
$ cp /usr/share/oracle/12.2/client64/demo/demo.mk /usr/share/oracle/12.2/client64/demo.mk
#Install Necessary cpan components
$ cpan IO::Tee
$ cpan DBD::Oracle
- The environment variables need to point to your Oracle environment; for me, these are the locations of the Oracle Instant Client.
- When you run cpan, you usually get lots of configuration questions, especially if this is the first time you have used cpan. In all cases, accept the defaults and ignore the warnings; there will most likely be a lot of them. A quick check that the modules installed correctly follows this list.
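After the cpan installs complete, you can quickly confirm that the required modules are usable (with the same environment variables still set). This is an optional sanity check, not part of the original procedure.
$ perl -MIO::Tee -e 'print "IO::Tee OK\n"'
$ perl -MDBD::Oracle -e 'print "DBD::Oracle $DBD::Oracle::VERSION OK\n"'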
The next step is to use the Perl script to copy your wallet into the RDS for Oracle directory.
$ ./Copy-eWallet.pl \
--USER mike \
--PASSWD <password> \
--TNS file-test \
--LDIR /home/oracle/Wallet
Uploading file ewallet.p12
Files uploaded successfully
02/ size: 4096 bytes
ewallet.p12 size: 2360 bytes
- USER is the master username.
- PASSWD is the password for the master username.
- TNS is the tnsnames.ora entry that points to the RDS for Oracle database.
- LDIR is the full path of the local directory where the wallet is located.
Load the Amazon S3 database package
The final step to this process is to install the Amazon S3 database package that we modified earlier.
$ sqlplus mike/<password>@file-test @AWS_RDS_TO_S3_PKG.pks
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
Package created.
SQL> @AWS_RDS_TO_S3_PKG.pkb
Package body created.
SQL> exit;
Test the process
Now that the environment is set up, we can perform a simple test listing all of the S3 buckets in our account.
$ sqlplus mike/<password>@file-test
SET SERVEROUT ON SIZE 20000
SET LINESIZE 132
SET PAGESIZE 64
DECLARE
blBucketList AWS_RDS_TO_S3_PKG.BUCKET_LIST;
BEGIN
blBucketList := AWS_RDS_TO_S3_PKG.getBucketList;
FOR i IN 1 .. blBucketList.count
LOOP
DBMS_OUTPUT.PUT('Bucket: ');
DBMS_OUTPUT.PUT( blBucketList(i).bucket_name );
DBMS_OUTPUT.PUT(' Created on: ');
DBMS_OUTPUT.PUT_LINE( blBucketList(i).creation_date );
END LOOP;
END;
/
Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
Bucket: db-freedom Created on: 16-JUL-18
Bucket: demos.revittmk.aws.co.uk Created on: 05-JUN-18
Bucket: ec2-config.revittmk.aws.co.uk Created on: 29-JUN-18
Bucket: file-test.revittmk.aws.co.uk Created on: 27-JUL-18
Bucket: labs.revittmk.aws.co.uk Created on: 31-MAY-18
PL/SQL procedure successfully completed.
EXIT;
And now let’s check the contents of the S3 bucket we created for this test.
$ sqlplus mike/<password>@file-test
SET SERVEROUT ON SIZE 20000
SET LINESIZE 132
SET PAGESIZE 64
DECLARE
olObjectList AWS_RDS_TO_S3_PKG.OBJECT_LIST;
bAllFiles BOOLEAN;
BEGIN
AWS_RDS_TO_S3_PKG.getObjectList
(
pBucket => 'file-test.revittmk.aws.co.uk',
pFilesRemaining => bAllFiles,
pObjectList => olObjectList
);
for i in 1 .. olObjectList.count loop
dbms_output.put('Object: ');
dbms_output.put( olObjectList(i).key);
dbms_output.put(' Size Bytes: ');
dbms_output.put( olObjectList(i).size_bytes);
dbms_output.put(' Modified on: ');
dbms_output.put_line( olObjectList(i).last_modified);
end loop;
END;
/
PL/SQL procedure successfully completed.
EXIT;
This should return no data, because we have not copied anything to this bucket yet.
Now for the real test. Let’s copy our file into the S3 bucket so that we can see the contents and then rerun the bucket list check.
$ sqlplus mike/<password>@file-test
BEGIN
AWS_RDS_TO_S3_PKG.copyFile
(
'FILE_COPY',
'test-file',
'file-test.revittmk.aws.co.uk'
);
END;
/
REM re-run AWS_RDS_TO_S3_PKG.getObjectList, you should now see the file
Object: test-file Size Bytes: 70 Modified on: 27-JUL-18
PL/SQL procedure successfully completed.
There are lots of methods for making API calls into S3. I am going to use AWS CLI to have a look at the file.
$ aws s3 cp s3://file-test.revittmk.aws.co.uk/test-file . --profile mike
download: s3://file-test.revittmk.aws.co.uk/test-file to ./test-file
$ cat test-file
This is the first line
This is the second line
This is the third line
So, there we have it: I have populated a file in Oracle and accessed it from my PC.
Just for fun, let’s modify the file and copy it back into the Oracle database.
$ echo 'This is the fourth line' >> test-file
$ echo 'This is the fifth line' >> test-file
$ echo 'This is the sixth line' >> test-file
$ cat test-file
This is the first line
This is the second line
This is the third line
This is the fourth line
This is the fifth line
This is the sixth line
$ aws s3 cp test-file s3://file-test.revittmk.aws.co.uk/ --profile mike
upload: ./test-file to s3://file-test.revittmk.aws.co.uk/test-file
$ sqlplus mike/<password>@file-test
BEGIN
AWS_RDS_TO_S3_PKG.copyFile
(
'file-test.revittmk.aws.co.uk',
'test-file',
'FILE_COPY'
);
END;
/
A simple PL/SQL block enables us to read the contents of the file in the Oracle directory.
$ sqlplus mike/<password>@file-test
SET SERVEROUT ON SIZE 20000
SET LINESIZE 132
SET PAGESIZE 64
DECLARE
fHandle UTL_FILE.FILE_TYPE;
vcText VARCHAR2(132);
BEGIN
fHandle := UTL_FILE.FOPEN('FILE_COPY', 'test-file', 'r');
LOOP
UTL_FILE.GET_LINE(fHandle, vcText );
DBMS_OUTPUT.PUT_LINE( vcText );
END LOOP;
UTL_FILE.FCLOSE(fHandle);
EXCEPTION
WHEN NO_DATA_FOUND
THEN
UTL_FILE.FCLOSE(fHandle);
END;
/
This is the first line
This is the second line
This is the third line
This is the fourth line
This is the fifth line
This is the sixth line
PL/SQL procedure successfully completed.
EXIT;
There you have it: all of the functionality of UTL_FILE for creating, modifying, and deleting files, plus the ability to copy those files between Oracle directories and S3 buckets.
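To tie the pieces together, here is a minimal sketch of a helper procedure of my own (it is not part of the GitHub package) that writes a set of lines to a file in an Oracle directory and immediately pushes that file to an S3 bucket, using the same UTL_FILE calls and the AWS_RDS_TO_S3_PKG.copyFile argument order demonstrated earlier.
$ sqlplus mike/<password>@file-test
CREATE OR REPLACE PROCEDURE write_and_push
(
    pDirectory  IN VARCHAR2,               -- Oracle directory, for example 'FILE_COPY'
    pFileName   IN VARCHAR2,               -- name of the file to create
    pBucket     IN VARCHAR2,               -- target S3 bucket
    pLines      IN SYS.ODCIVARCHAR2LIST    -- lines to write to the file
)
AS
    fHandle UTL_FILE.FILE_TYPE;
BEGIN
    -- write the lines to a file in the Oracle directory
    fHandle := UTL_FILE.FOPEN(pDirectory, pFileName, 'w');
    FOR i IN 1 .. pLines.COUNT
    LOOP
        UTL_FILE.PUT_LINE(fHandle, pLines(i));
    END LOOP;
    UTL_FILE.FCLOSE(fHandle);
    -- copy the file to the S3 bucket, using the (directory, file, bucket) order shown earlier
    AWS_RDS_TO_S3_PKG.copyFile(pDirectory, pFileName, pBucket);
EXCEPTION
    WHEN OTHERS
    THEN
        IF UTL_FILE.IS_OPEN(fHandle) THEN UTL_FILE.FCLOSE(fHandle); END IF;
        RAISE;
END;
/
EXEC write_and_push('FILE_COPY', 'test-file-2', 'file-test.revittmk.aws.co.uk', SYS.ODCIVARCHAR2LIST('line one', 'line two'))
EXIT;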
Troubleshoot
Occasionally, things don’t work when implementing a solution like this. This section addresses the most common issues that have been identified during testing.
Debug
The code is supplied with embedded debug statements that can be turned on to identify issues.
Note: Only the owner of the package can run in debug modes CONN and VERBOSE.
A sample debug command follows.
$ sqlplus mike/<password>@file-test
SET SERVEROUT ON SIZE 20000
SET LINESIZE 132
SET PAGESIZE 64
DECLARE
aBucketList AWS_RDS_TO_S3_PKG.BUCKET_LIST;
BEGIN
-- AWS_RDS_TO_S3_PKG.setDebugOn( AWS_RDS_TO_S3_PKG.DEBUG_VERBOSE );
-- AWS_RDS_TO_S3_PKG.setDebugOn( AWS_RDS_TO_S3_PKG.DEBUG_CONN );
AWS_RDS_TO_S3_PKG.setDebugOn( AWS_RDS_TO_S3_PKG.ON );
aBucketList := AWS_RDS_TO_S3_PKG.getBucketList;
FOR i IN 1 .. aBucketList.count loop
dbms_output.put('Bucket: ');
dbms_output.put( aBucketList(i).bucket_name);
dbms_output.put(' Created: ');
dbms_output.put_line( aBucketList(i).creation_date);
END LOOP;
END;
/
Wallet issues
Suppose that you get the following errors.
ORA-29273: HTTP request failed
ORA-28759: failure to open file
ORA-06512: at "SYS.UTL_HTTP", line 368
ORA-06512: at "SYS.UTL_HTTP", line 1118
If so, it’s likely that the Oracle Wallet cannot be read, usually because one of the following variables is set incorrectly:
- WALLET_PASSWORD
- WALLET_PATH
Ensure that the password is correct. Also, ensure that the wallet path matches the value that was returned when you created the directory in the earlier section, Create an Oracle directory.
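A quick way to re-check the path is to run the same query again and compare the result with the WALLET_PATH value in the package body (remember that the package value is prefixed with file: and ends with a trailing slash).
$ sqlplus mike/<password>@file-test
SELECT directory_path
FROM all_directories
WHERE directory_name = 'SSL_WALLET_DIR'
/
EXIT;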
About the author
Mike Revitt is a global database specialist at AWS with over 30 years of experience in the IT industry. He spent 3 years working on CAD-CAM in Adelaide before moving to the UK, where he has worked with database and Oracle technologies for nearly 30 years. Mike has worked with grid and later cloud technologies since the mid-1990s and with converged infrastructures since 2008.
Before joining AWS, Mike was the Global Head of Oracle Managed PaaS for one of AWS’ partners. As the owner of that hybrid cloud offering, he was responsible for all technical and commercial aspects of the PaaS and IaaS solutions, which were based around cloud services and converged infrastructures.