AWS Database Blog

Migrating Oracle Autonomous Transactions to PostgreSQL

July 2023: This post was reviewed for accuracy.

Migrating and operating a database involves layers of complexity and requires thorough planning. Amazon RDS for PostgreSQL and Amazon Aurora with PostgreSQL compatibility can help you manage a variety of use cases. While migrating from Oracle to PostgreSQL, it’s very common to come across Oracle Autonomous Transactions. This post explains Autonomous Transactions and how you can achieve similar functionality in PostgreSQL.

Persistent data use case

Database transactions provide a mechanism to bundle a group of SQL statements such that either all of them are committed or all of them are rolled back. For example, assume that while moving funds from Account A to Account B you need to perform the following steps:

  • Subtract the required amount from Account A
  • Add the required amount to Account B

In this scenario, you need to make sure that either both steps complete successfully or neither does. You can achieve this by running both SQL statements in a single database transaction. This all-or-nothing behavior is exactly what the majority of use cases need. However, there are cases where your application needs to persist some data even if the surrounding transaction fails.
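The funds transfer above can be sketched as a single transaction; the table and column names here are illustrative:

```sql
-- Hypothetical accounts table; both updates succeed or neither does.
BEGIN;

UPDATE accounts SET balance = balance - 100 WHERE account_id = 'A';
UPDATE accounts SET balance = balance + 100 WHERE account_id = 'B';

COMMIT;
-- If anything fails before COMMIT, issue ROLLBACK instead and
-- both balances remain unchanged.
```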

PostgreSQL and Oracle both follow the same transaction principle, but to handle this use case, Oracle provides a feature called Autonomous Transactions. PostgreSQL does not have a direct equivalent for Autonomous Transactions, but you can use dblink to get a similar result.

What is an Autonomous Transaction?

Oracle’s Autonomous Transaction feature enables a subprogram to execute SQL operations and commit or roll back those operations without committing or rolling back the main transaction.

Consider the following scenario:

  1. A row with some critical information must be added to a table according to the business logic in an insert trigger.
  2. An insert SQL query is executed to add the required row.
  3. If the trigger fails for any reason, all operations including this insert will be rolled back.

If you require the insert to succeed irrespective of whether the main transaction completes successfully, you can run the insert SQL statement in an Autonomous Transaction. Here is an example Oracle procedure that uses an autonomous transaction:

CREATE OR REPLACE PROCEDURE insert_critical_info(v_critical_info varchar2)
IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO critical_info_table (critical_info) VALUES (v_critical_info);
    commit;
END;
/

With this change, the insert is committed independently and does not rely on the successful completion of the main transaction.

In addition to the preceding scenario, Autonomous Transactions are commonly used for auditing access. For example, an Autonomous Transaction is called from within a trigger or a stored procedure with the audit information as a parameter. The Autonomous Transaction stores and commits this information irrespective of whether the original transaction completes successfully.
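As a sketch of the auditing pattern, an Oracle trigger can call an autonomous procedure that commits the audit row independently; the table, trigger, and procedure names here are hypothetical:

```sql
-- Hypothetical autonomous audit procedure: the audit row is committed
-- even if the triggering transaction later rolls back.
CREATE OR REPLACE PROCEDURE log_audit(v_audit_info VARCHAR2)
IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO audit_log (audit_info, logged_at)
    VALUES (v_audit_info, SYSDATE);
    COMMIT;
END;
/

CREATE OR REPLACE TRIGGER orders_audit_trg
BEFORE UPDATE ON orders
FOR EACH ROW
BEGIN
    -- The audit record persists even if this update is rolled back.
    log_audit('Order ' || :OLD.order_id || ' updated by ' || USER);
END;
/
```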

Autonomous Transactions in PostgreSQL

PostgreSQL does not natively support Autonomous Transactions. We recommend refactoring your code to eliminate the need for Autonomous Transactions. If that is not possible, the workaround is to use the PostgreSQL dblink extension, which is supported by both Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL.

dblink opens a new database connection and runs queries over that connection. Any SQL operation executed via dblink is committed immediately, regardless of what happens to the main transaction. The example function below inserts a row into a table; this insert is committed even if the calling transaction is rolled back.

CREATE OR REPLACE FUNCTION insert_critical_info(v_critical_info TEXT)
RETURNS void
AS
$BODY$
DECLARE
    v_sql text;
BEGIN
    PERFORM dblink_connect('myconn', 
         'dbname=mydbname user=myuser password=mypassword host=127.0.0.1 port=5432');
    v_sql := format('INSERT INTO critical_info_table (critical_info) 
         VALUES (%L)', v_critical_info);
    PERFORM dblink_exec('myconn', v_sql);
    PERFORM dblink_disconnect('myconn');
END;
$BODY$
LANGUAGE plpgsql;

The function above calls INSERT via a dblink and the operation is committed immediately.
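You can verify the behavior from a psql session; this sketch assumes critical_info_table already exists and the dblink extension is installed:

```sql
BEGIN;
SELECT insert_critical_info('row that must survive');
ROLLBACK;

-- The main transaction was rolled back, but the row is still there,
-- because the insert was committed over the separate dblink connection.
SELECT critical_info FROM critical_info_table;
```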

The problem with the above function is that the database credentials are hardcoded into every dblink_connect call. The workaround is to use the dblink_fdw foreign data wrapper to create a foreign server and user mapping, as follows:

CREATE SERVER loopback_dblink FOREIGN DATA WRAPPER dblink_fdw 
       OPTIONS (hostaddr '127.0.0.1', dbname 'mydbname');
CREATE USER MAPPING FOR my_app_user SERVER loopback_dblink 
       OPTIONS (user 'myuser', password 'mypassword');
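If my_app_user is not a superuser, it also needs permission to connect through the foreign server; for example:

```sql
-- Allow the application user to connect via the foreign server.
GRANT USAGE ON FOREIGN SERVER loopback_dblink TO my_app_user;
```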

With the foreign server and user mapping in place, you can now call the dblink_connect function as follows:

CREATE OR REPLACE FUNCTION insert_critical_info(v_critical_info TEXT)
RETURNS void
AS
$BODY$
DECLARE
    v_sql text;
BEGIN
    PERFORM dblink_connect('myconn', 'loopback_dblink');
    v_sql := format('INSERT INTO critical_info_table (critical_info) 
                         VALUES (%L)', v_critical_info);
    PERFORM dblink_exec('myconn', v_sql);
    PERFORM dblink_disconnect('myconn');
END;
$BODY$
LANGUAGE plpgsql;

Drawbacks

This workaround enables you to emulate Autonomous Transactions in PostgreSQL, but it comes at a cost. A dblink connection is session-specific: a dblink opened in one session cannot be used from a different session. This means each session has to open its own new database connection, which increases response time.

The function above also opens a new connection each time it is called, even within the same session. Because a dblink connection persists for the life of the session, one possible optimization is to not close the dblink explicitly. Instead, the function checks whether the dblink is already open; if it is, the existing connection is reused rather than opening a new one. Here is the modified version of the function with this check:

CREATE OR REPLACE FUNCTION insert_critical_info(v_critical_info TEXT)
RETURNS void
AS
$BODY$
DECLARE
    v_sql text;
    v_con_count int;
BEGIN
    SELECT count(1) INTO v_con_count FROM dblink_get_connections()
         WHERE dblink_get_connections @> '{myconn}';
    IF v_con_count = 0 THEN
        PERFORM dblink_connect('myconn', 'loopback_dblink');
    END IF;
    
    v_sql := format('INSERT INTO critical_info_table (critical_info) 
                         VALUES (%L)', v_critical_info);
    PERFORM dblink_exec('myconn', v_sql);
END;
$BODY$
LANGUAGE plpgsql;

The dblink connection closes automatically at the end of the session, even if it’s not closed explicitly with the dblink_disconnect function.

In addition, you can evaluate using a connection pooler such as PgBouncer to reduce the number of database connections.
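As a sketch, a minimal PgBouncer configuration along these lines pools connections to the database; all names and paths here are illustrative, and for the dblink connections themselves to be pooled, the foreign server’s host and port would need to point at PgBouncer rather than directly at PostgreSQL:

```ini
; pgbouncer.ini (illustrative)
[databases]
mydbname = host=127.0.0.1 port=5432 dbname=mydbname

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 200
default_pool_size = 20
```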

Summary

Autonomous Transactions are a useful feature, but their drawbacks must be considered. If your code can be refactored to eliminate the requirement, that is the recommended path. If it cannot, this post demonstrates one way to achieve the same result in PostgreSQL.

You can find more information about PostgreSQL dblink at the PostgreSQL documentation web site.

If you have any questions or comments about this blog post, feel free to use the comments section here to post your thoughts.


About the Author

Yaser Raja is a Senior Consultant with the Professional Services team at Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.