AWS Database Blog

Best practices for migrating an Oracle database to Amazon RDS PostgreSQL or Amazon Aurora PostgreSQL: Target database considerations for the PostgreSQL environment

An Oracle to PostgreSQL migration in the AWS Cloud can be a complex multistage process with different technologies and skills involved, starting from the assessment stage to the cutover stage. This blog post is the third in a series that discusses high-level aspects about the components to consider for a database migration. The series doesn’t address the complexities and intricacies of the application components and different scenarios, because these depend on the use case. To better understand these complexities, see the AWS Database Blog post Database Migration—What Do You Need to Know Before You Start?

In the previous blog posts, Migration process and infrastructure considerations and Source database considerations for the Oracle and AWS DMS CDC environment, we discussed Oracle database configuration setup. This discussion included setup for AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT). After these are set up, and before we start data migration, we must have the target PostgreSQL database up and running with all of the relevant schemas and parameters.

This final blog post in the series gives an overview of how you can set up the PostgreSQL environment to assist with migrating from an Oracle database using AWS DMS and AWS SCT. This post provides you with configuration settings for PostgreSQL database parameters that are useful for the migration setup.

In a migration environment, it’s a good idea also to employ strategies for high availability, scalability, archiving, backup, load balancing, and rollback. These strategies aren’t covered in this post. In addition, this post doesn’t discuss the manual aspects of database migration. It also doesn’t include step-by-step instructions that you can adapt for your own requirements or the intricacies of application dependencies. For more of these details, see the Oracle Database 11g/12c To Amazon Aurora with PostgreSQL Migration Playbook. The playbook describes how to manually migrate specific Oracle database objects and features to Amazon Aurora with PostgreSQL compatibility equivalents.

DMS and migration of secondary objects

AWS DMS migrates only the objects required to efficiently move data from source to target. These include primary keys and, in some cases, unique indexes. DMS doesn't migrate secondary objects, such as foreign key constraints, triggers, and secondary indexes.

Because of this functionality, make sure that foreign key constraints and secondary indexes are disabled during the full load, because DMS performs migrations table by table. If you retain foreign key constraints, the full load fails. Enable the foreign key constraints and secondary indexes after the full load phase. You can apply indexes before applying cached changes, but foreign keys require the cached changes to be applied before you enable them.

Recommendations for key Amazon RDS for PostgreSQL parameters

Following are some recommendations for key Amazon RDS for PostgreSQL parameters. For your data load only, modify your database parameter group to include the following settings. You should test the parameter settings to find the most efficient settings for your DB instance size. You also need to revert to production values for these parameters after the data migration is complete.

Set a few of the key DB instance parameter settings before the data load:

  • Disable DB instance backups (set backup_retention to 0). Doing this disables automated backups during load.
  • Disable Multi-AZ. We suggest this because Multi-AZ deployments can increase write and commit latency compared to a Single-AZ deployment, due to the synchronous data replication that occurs to standby.
  • Increase wal_buffers, maintenance_work_mem, checkpoint_segments, and checkpoint_timeout (for an explanation, see following).
  • Disable synchronous_commit (for an explanation, see following).
  • Don’t disable AUTOVACUUM. When enabled, AUTOVACUUM checks for tables that have had a large number of inserted, updated, or deleted tuples. Keeping it enabled until the loads are completed prevents transaction ID wraparound during large data loads.
  • Don’t modify vacuum_freeze_min_age. Decreasing this setting increases the number of transactions that can elapse before the table must be vacuumed again and can cause transaction ID wraparound.
  • Vacuum analyze after initial load to keep latest statistics:
    • It’s important to have reasonably accurate statistics; otherwise poor plan choices might degrade database performance.
    • Perform VACUUM FULL on tables with LOBs. DMS loads the data in chunks by updating rows, which creates bloat in PostgreSQL. VACUUM FULL can reclaim more disk space, but it runs much slower than the standard form of VACUUM, because the standard form can run in parallel with production database operations.
  • Adjust the logging default to be more verbose.


Determine how large you want to set the shared_buffers parameter. Start at 25 percent of system memory as a default setting in the RDS PostgreSQL parameter (the unit is 8 KB). In Aurora PostgreSQL, the default is 75 percent of memory.

To check this parameter, run the following command:

pg=> select setting from pg_settings where name like 'shared_buffers';   (in 8 KB units)
 setting 
 62500
(1 row)

To show the current value of this parameter, run the following command:

pg=> show shared_buffers;   # show actual value
 shared_buffers 
 500000kB
(1 row)

[oracle@ip-x-x-x-x ~]$ echo "62500*8"|bc
500000

This confirms that 62,500 pages of 8 KB each equal 500,000 KB (about 488 MB).
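As a quick cross-check of the 25 percent guideline, the arithmetic can be sketched in Python. The 16 GiB instance size here is a hypothetical example, not a value from this post:

```python
# Convert a shared_buffers target (a fraction of instance memory)
# into the 8 KB pages that the pg_settings "setting" column reports.
PAGE_KB = 8

def shared_buffers_pages(instance_memory_gib, fraction=0.25):
    """shared_buffers in 8 KB pages for the given memory size."""
    target_kb = instance_memory_gib * 1024 * 1024 * fraction
    return int(target_kb // PAGE_KB)

pages = shared_buffers_pages(16)      # hypothetical 16 GiB instance
print(pages)                          # 524288 pages
print(pages * PAGE_KB // 1024)        # 4096 MB, one-fourth of 16 GiB
```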


Estimate your maximum connections generously, because this is a hard limit. After the limit is reached, clients are refused connections, which can hang the system.

In RDS PostgreSQL 9 and 10, this value is: LEAST({DBInstanceClassMemory/9531392},5000)

For example, consider RDS PostgreSQL 9 or 10 on the db.t2.micro DB instance class. The db.t2.micro instance class has 1 GB of memory, which equates to 1024 * 1024 * 1024 bytes / 9531392, in other words 112.65. Because we can’t have 0.65 of a connection, the maximum connections value rounds to 112 or 113. You can check the max_connections value on your instance using the following command:

postgres=> show max_connections;
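The default formula can be sketched in Python; the 9531392 divisor and the 5,000 cap come from the LEAST expression above:

```python
# RDS PostgreSQL default max_connections:
# LEAST({DBInstanceClassMemory/9531392}, 5000)
def default_max_connections(instance_memory_bytes):
    return min(instance_memory_bytes // 9531392, 5000)

# db.t2.micro has 1 GiB of memory:
print(default_max_connections(1 * 1024**3))  # 112
```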

In Aurora, the default parameter group contains database engine defaults and Amazon RDS system defaults based on the engine, compute class, and allocated storage of the instance. You can’t modify the parameter settings of a default DB parameter group. To change parameter settings from their default value, you must create your own DB parameter group as described in the RDS documentation. You do this by modifying the max_connections parameter in your custom parameter group. After doing so, you need to modify your DB instance to use the new parameter group.

When you change the DB parameter group associated with a DB instance, you must manually reboot the instance before the new DB parameter group is used by the DB instance.

Write-ahead logging (WAL)

You use WAL files to log information about any changes to your database. These changes are stored in segments that are called WAL segments. WAL segments represent permanent storage, in contrast to WAL buffers, which are ephemeral. WAL segments help ensure that the contents—the WAL records—are available for recovery and replication. The wal_buffers parameter sets the number of disk-page buffers in the shared memory for WAL. This shared memory is used for WAL data that has not yet been written to disk. The value is in 8-KB units. Setting this parameter helps your WAL generation speed.

You use the max_wal_size parameter, measured in WAL segments, to control WAL segments. You can adjust this parameter only in RDS PostgreSQL, not in Aurora PostgreSQL. max_wal_size also affects checkpointing, because a checkpoint is begun every checkpoint_timeout seconds, or when max_wal_size is about to be exceeded, whichever comes first. For WAL segments, this parameter determines how many WAL files are kept on disk in case a standby needs them for recovery. Each file is 16 MB in size. RDS defaults the setting to 32, but you can vary this according to your specific needs. For example, with 16 MB files, setting max_wal_size to 32 means 512 MB of space is reserved for WAL records on disk.

In PostgreSQL 10, the units of this parameter have changed from files to megabytes. This change means setting max_wal_size to 32 in PostgreSQL 10 sets it to 32 MB.
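To make the unit change concrete, here is the segment arithmetic from the two paragraphs above, sketched in Python:

```python
# Each WAL segment file is 16 MB.
SEGMENT_MB = 16

# Before PostgreSQL 10, max_wal_size was measured in segments:
print(32 * SEGMENT_MB)   # 512 MB of disk reserved when max_wal_size = 32

# In PostgreSQL 10, the unit is megabytes, so the same numeric
# setting of 32 now means only 32 MB, that is, two 16 MB segments:
print(32 // SEGMENT_MB)  # 2
```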

Running the command pg=> show wal_buffers; gives you output that looks something like the following:

 wal_buffers 
 14432kB
(1 row)


Parameter   | Default Value | Max Value  | Status | Current        | Unit           | Description
wal_buffers | -1            | 2147483647 | True   | Engine default | Integer (8 KB) | Sets the number of disk-page buffers in shared memory for WAL

The default setting of -1 selects a size equal to 1/32 (about 3 percent) of shared_buffers, as described in the PostgreSQL documentation.

When working with WAL parameters for migration, do the following:

  • Keep work_mem at its 1 MB default unless your database is a data warehouse. This parameter is allocated per sort operation, per client, so size it so that concurrent sorts don’t run the system out of memory.
  • The memory-based tunable parameters, primarily shared_buffers and work_mem, need to be adjusted carefully and in unison to make sure that your system doesn’t run out of memory altogether. Set maintenance_work_mem to around 50 MB per GB of RAM.
  • Increase wal_buffers to 16 MB; to do so, set it to 2048, because the unit for this parameter is 8 KB.
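The 8 KB unit conversion in the last item can be double-checked in Python:

```python
# wal_buffers is set in 8 KB units, so a 16 MB target becomes:
target_mb = 16
wal_buffers_setting = target_mb * 1024 // 8  # MB -> KB -> 8 KB pages
print(wal_buffers_setting)  # 2048
```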

Checkpoint parameters

At times, users attribute bad performance to a nonstandard setup of checkpoint parameters, leading to I/O issues with storage. The handling of IOPS in Aurora PostgreSQL is different from that in the standard version of the PostgreSQL engine. Aurora uses a log-based storage system, as opposed to the data block-based system of traditional PostgreSQL.

This being the case, the writes are very different between the two engines. In Aurora PostgreSQL, the Write IOPS metric reports the number of Aurora storage write records generated per second. This number is more or less the same as the number of log records generated by the database. These writes don’t correspond to 8-KB page writes and also don’t correspond to network packets sent. Also, this value isn’t what we use to meter your I/O for billing purposes.

The system can be slow if checkpoints aren’t set appropriately based on the workload. Checkpoints are caused by the following:

  • checkpoint_segments worth of WAL files have been written.
  • checkpoint_timeout seconds have passed since the last checkpoint.
  • You manually force a checkpoint by using the SQL command CHECKPOINT.

Certain settings help avoid excessive database checkpoints. In RDS PostgreSQL, we recommend that you increase checkpoint_segments from 16 (256 MB) to at least 256 (256 * 16 MB = 4,096 MB, or 4 GB), or even to 1,024. We also recommend that you increase checkpoint_timeout to 1,800 (the default is 300). You can adjust these parameters only in RDS PostgreSQL, not in Aurora PostgreSQL.
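The sizing arithmetic behind these recommendations can be sketched in Python (each WAL segment is 16 MB):

```python
SEGMENT_MB = 16  # size of one WAL segment file

default_mb = 16 * SEGMENT_MB       # checkpoint_segments = 16  -> 256 MB
recommended_mb = 256 * SEGMENT_MB  # checkpoint_segments = 256 -> 4096 MB (4 GB)
print(default_mb, recommended_mb)
```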

Similarly, for PostgreSQL 9.5 and later, where max_wal_size replaces checkpoint_segments, set max_wal_size to 256 (the default is 128). The default max_wal_size value for PostgreSQL 9.6 and 10 is 1 GB.

As with Oracle, certain settings delay the writing of dirty buffer cache pages into the database data files on disk. Such a delay increases point-in-time recovery time but improves data migration performance. For this improvement, set the following:

  • checkpoint_segments – Maximum number of log file segments between automatic WAL checkpoints (each segment is normally 16 megabytes). This value is 16 by default, which is 16 * 16 MB = 256 MB. This maximum can easily be filled if the server is busy or is doing a massive data migration.
  • checkpoint_timeout – Maximum time between automatic WAL checkpoints, in seconds. This value is 300 seconds (5 minutes) by default. In RDS PostgreSQL, we recommend that you change it to 1800 (30 minutes). You can’t modify this value in Aurora PostgreSQL, where the checkpoint_timeout parameter value is set to 60 seconds. This setting is based on the development team’s performance testing, dependencies of RDS recovery services, and other factors.

Check the current settings of these checkpoint parameters by running the following command:

pg=> select * from pg_settings where name like '%checkpoint%';

For more information on runtime configuration for WAL parameters in PostgreSQL, see Write Ahead Log in the PostgreSQL documentation.


Do not disable fsync. Instead, disable synchronous_commit to speed up writes to disk. Disabling fsync isn’t worth the few percentage points of performance improvement it brings, because it puts the database at risk of corruption.


Modify the maintenance_work_mem parameter to 16 MB, 512 MB, 1,024 MB, or 4,096 MB, depending on your application, to see how much improvement you get. We recommend this approach:

  1. Start by setting it to 2 GB, and then work backward.
  2. This parameter specifies the maximum amount of memory to be used for maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.
  3. Be careful not to set the default value too high. When AUTOVACUUM runs, up to autovacuum_max_workers times this amount of memory can be allocated. The default autovacuum_max_workers value is 3 in RDS PostgreSQL.
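To see why a high maintenance_work_mem default is risky, consider the worst-case autovacuum allocation, sketched here with the 2 GB starting point and the RDS default of 3 workers:

```python
maintenance_work_mem_mb = 2048  # the suggested 2 GB starting point
autovacuum_max_workers = 3      # RDS PostgreSQL default

# Worst case: every autovacuum worker allocates the full amount at once.
worst_case_mb = maintenance_work_mem_mb * autovacuum_max_workers
print(worst_case_mb)  # 6144 MB that autovacuum alone might claim
```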

The following command shows the current value of maintenance_work_mem.

postgres=> show maintenance_work_mem;
50961 kB
(1 row)

Another related parameter, work_mem, works per sort and per client. Typically, there are not that many sorts that go on in a query. Usually, only one or two occur. In addition, not every client that is active sorts simultaneously.

The normal guidance for work_mem is to consider how much free RAM remains after shared_buffers is allocated; this is the same OS cache size figure used to compute effective_cache_size. Divide that value by max_connections, and then take a fraction of the result. Half of that figure is an aggressive work_mem value: it assumes that not every client performs two active sorts simultaneously, because if that happened, the server could run out of memory.
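This heuristic can be sketched in Python; all instance figures below are hypothetical examples, not recommendations:

```python
def aggressive_work_mem_mb(total_ram_mb, shared_buffers_mb, max_connections):
    """Half of (OS cache / max_connections): an aggressive work_mem ceiling."""
    os_cache_mb = total_ram_mb - shared_buffers_mb  # RAM left after shared_buffers
    return os_cache_mb // max_connections // 2

# Hypothetical 16 GB instance, 4 GB shared_buffers, 100 connections:
print(aggressive_work_mem_mb(16384, 4096, 100))  # 61
```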

The following commands show the current value of work_mem.

postgres=> select name,setting,unit from pg_settings where name = 'work_mem';
   name   | setting | unit
 work_mem | 4076    | kB
(1 row)

postgres=> show work_mem;
 work_mem 
 4076kB
(1 row)

For more information on tuning work_mem and using log_temp_files, see the Tuning Your PostgreSQL Server post on the PostgreSQL wiki. You can do this type of tuning to log sorts, hashes, and temp files. You can use this logging to figure out if sorts are spilling to disk instead of fitting in memory.

Another related option is to set max_connections to a higher value, such as 1,000, instead of the PostgreSQL default of 100.

Vacuum parameters

By default, the autovacuum parameter is enabled on RDS PostgreSQL. You can verify this by running the following command:

postgres=> show autovacuum;
 autovacuum 
 on
(1 row)

To set up for migration, first perform a manual vacuum before loading data, because doing so can improve migration performance. For another useful approach, see the AWS Database blog post Implement an Early Warning System for Transaction ID Wraparound in Amazon RDS for PostgreSQL.

We recommend the following for your vacuum process before data load:

pg=> vacuum verbose;
pg=> vacuum full verbose;
pg=> select relname,last_vacuum, last_autovacuum, last_analyze, last_autoanalyze from pg_stat_user_tables;
pg=> select relname,last_vacuum, last_autovacuum, last_analyze, last_autoanalyze from pg_stat_all_tables where schemaname = 'public';

Check for tables where n_dead_tup > 0:

pg=> select relname, n_dead_tup, last_vacuum, last_autovacuum from pg_catalog.pg_stat_all_tables where n_dead_tup > 0 [ and relname = 'pg_authid' ] order by n_dead_tup desc;
  relname  | n_dead_tup | last_vacuum | last_autovacuum 
 pg_authid |          2 |             | 

Shared_buffers and pg_buffercache

PostgreSQL has both its own dedicated memory (indicated by the shared_buffers parameter) and also uses the file system cache. Initially, you can start with the RDS default settings for shared_buffers and effective_cache_size. Adjust effective_cache_size based on shared_buffers plus the OS cache. In RDS PostgreSQL 9, the effective_cache_size default is {DBInstanceClassMemory/16384}, which is half of the instance memory, because the unit is 8 KB.

Divide the OS cache size by max_connections, and then by two. This gives an idea of a maximum reasonable setting for work_mem, which is used per client for sorting. The default is 1 MB; for a data warehouse system, try 4 MB, 8 MB, and so on, up to 1 GB, to understand how it performs. The work_mem unit is KB.

If you have a system with 1 GB or more of RAM, a reasonable starting value for shared_buffers is one-fourth of the memory in your system. If you have less RAM, you must account more carefully for how much RAM the OS is taking up; closer to 15 percent is more typical there. For some workloads, even larger settings for shared_buffers are effective. However, given that PostgreSQL also relies on the operating system cache, it’s unlikely that you need to use more than 40 percent of RAM.

The RDS PostgreSQL default setting for shared_buffers is {DBInstanceClassMemory/32768}. Because the unit is 8 KB (8192 bytes), this works out to DBInstanceClassMemory * 8192 / 32768, or one-fourth of the instance memory.

The following shows the shared_buffers parameter definition, which indicates the number of shared memory buffers used by the server: default {DBInstanceClassMemory/32768}, allowed range 16–1073741823, modifiable (true), source system, apply type static, data type integer, unit 8 kB.

The shared_buffers parameter is the major component of the shared memory used by the server. It indicates a large block allocated for caching blocks, read from and written to the database.

pg=> select name,setting,unit,current_setting(name) from pg_settings where name='shared_buffers';
      name      | setting | unit | current_setting 
shared_buffers | 65000   | 8kB  | 500MB
(1 row)

If shared_buffers is set too low, you might get the following error when starting up:

FATAL: could not create shared memory segment: Invalid argument
DETAIL: Failed system call was shmget(key=5440001, size=4011376640, 03600)
Starting with RDS PostgreSQL 9.4, the pg_buffercache extension is available (RDS PostgreSQL 9.3 doesn't support it). You can list the supported extensions with the following command:

postgres=> SHOW rds.extensions;

postgres=> select * from pg_available_extensions where name = 'pg_buffercache'; 
      name      | default_version | installed_version |             comment             
 pg_buffercache | 1.0             | 1.0               | examine the shared buffer cache
(1 row)
postgres=> create extension pg_buffercache;


postgres=> select name,setting,unit,current_setting(name) FROM pg_settings where name='shared_buffers';
      name      | setting | unit | current_setting 
 shared_buffers | 25480   | 8kB  | 203840kB
(1 row)
postgres=> select count(*) from pg_buffercache;
 count 
 25480
(1 row)

You can confirm that the utility is working as expected by looking at how large your system shared_buffers is, and noting that the count of entries returned by pg_buffercache matches.

To help understand the workings of relevant database internals, you can look inside the current contents of the PostgreSQL shared_buffers database cache using the pg_buffercache module. Doing so provides visibility on how blocks in shared memory change as you perform various activities. Using pg_buffercache can show you useful information related to the database you’re currently connected to.

For more information on pg_buffercache, see pg_buffercache in the  PostgreSQL documentation and Tuning Your PostgreSQL Server on the PostgreSQL wiki.

To see the cache using pg_buffercache, run the following command.

pg=> create extension pg_buffercache;

pg=> SELECT c.relname,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(100.0 * count(*) / (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')::integer, 1) AS buffers_percent,
       round(100.0 * count(*) * 8192 / pg_relation_size(c.oid), 1) AS percent_of_relation
  FROM pg_class c
 INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
 INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
 WHERE pg_relation_size(c.oid) > 0
 GROUP BY c.oid, c.relname
 ORDER BY 3 DESC
 LIMIT 10;

            relname           |  buffered  | buffers_percent | percent_of_relation 
 pg_depend                    | 520 kB     |             0.1 |               106.6
 pg_transform_type_lang_index | 8192 bytes |             0.0 |               100.0
 TEST9                        | 8192 bytes |             0.0 |               100.0
 awsdms_status                | 8192 bytes |             0.0 |               100.0
 t1                           | 8192 bytes |             0.0 |               100.0
 TEST10                       | 8192 bytes |             0.0 |               100.0
 pg_namespace_oid_index       | 16 kB      |             0.0 |               100.0
 pg_constraint_oid_index      | 16 kB      |             0.0 |               100.0
 pg_extension_oid_index       | 16 kB      |             0.0 |               100.0
 TEST8_pkey                   | 16 kB      |             0.0 |               100.0
(10 rows)

A page in memory actually holds a block plus a small amount of overhead to identify what block it is—what’s referred to as the buffer header.


The effective_cache_size parameter is a guideline for how much memory you can expect to be available in the OS and PostgreSQL buffer caches, not an allocation. This value is used only by the PostgreSQL query planner to figure out whether the plans it’s considering would fit in RAM. If this parameter is set too low, indexes might not be used for executing queries the way you expect. The setting for shared_buffers is not taken into account here. Only the effective_cache_size value is, so it should include memory dedicated to the database too.

Setting effective_cache_size to half of total memory is a normal conservative setting.
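As a sketch, the conservative setting amounts to the following; the 16 GB figure is a hypothetical example:

```python
def effective_cache_size_mb(total_ram_mb):
    """Conservative planner hint: half of total memory (not an allocation)."""
    return total_ram_mb // 2

print(effective_cache_size_mb(16384))  # 8192 MB for a 16 GB instance
```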

Foreign keys

If foreign keys are enabled on some tables, loads fail during the full load phase, because DMS migrates tables one at a time and foreign key conflicts occur. During the full load step, you must disable foreign key dependencies, or use the extra connection attribute discussed following to avoid load failures. You can disable foreign keys for the full load, stop the migration task, and then re-enable the foreign keys. For details, see Full Load Task Settings in the DMS documentation.

In addition, you can set an extra connection attribute on the target endpoint to disable foreign keys during the full load. For more details, see Extra Connection Attributes When Using PostgreSQL as a Target for AWS DMS in the DMS documentation. The following extra connection attribute configures PostgreSQL as a target for AWS DMS:

afterConnectScript=SET session_replication_role='replica'


The load process for tables with foreign keys is as follows:

  1. Migrate the schema using AWS SCT.
  2. Disable the foreign key dependency on the target, in case you have not used the attribute mentioned preceding to have AWS DMS bypass all foreign keys and user triggers. Using this attribute greatly reduces the time it takes to bulk load data when using full load mode.
  3. Create the task with target prep mode as DO NOTHING and also ensure that the task stops right before ongoing replication starts. For more details, see Full Load Task Settings in the DMS documentation.
  4. When full load is done and the task stops, enable the foreign keys back on the target.
  5. Start the task. When the source and target are in sync, stop application access to the source, wait for a few minutes, and confirm that the open transactions have completed on the target.
  6. Point the application to the target, and the whole migration process is good to go.
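Steps 3 and 4 correspond to settings in the DMS task-settings JSON. The following Python fragment is a sketch only; the key names follow the FullLoadSettings described in Full Load Task Settings, but verify them against the current DMS documentation before use:

```python
import json

# Leave the pre-created target tables alone (DO_NOTHING prep mode), and
# stop the task after cached changes are applied so that foreign keys
# can be re-enabled before ongoing replication starts.
task_settings = {
    "FullLoadSettings": {
        "TargetTablePrepMode": "DO_NOTHING",
        "StopTaskCachedChangesNotApplied": False,
        "StopTaskCachedChangesApplied": True,
    }
}
print(json.dumps(task_settings, indent=2))
```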


An Oracle to PostgreSQL migration can be a complex process requiring specialized skills through the different stages of data migration. This blog series provides a high-level discussion about the components to consider for a database migration. The series doesn’t address the complexities and intricacies of the application components and different scenarios as they depend on use case. To better understand the complexities involved, see the AWS Database Blog post Database Migration—What Do You Need to Know Before You Start?

In the migration process, you can use the migration methodologies available through the AWS DB Freedom program. We provide innovative modernization services, enabling organizations to create new AWS Cloud solutions by applying the latest technologies to existing software applications. For example, this might mean migrating legacy Oracle applications to the AWS Cloud, gaining real-time access to multiple data sources, or using legacy applications with Aurora or RDS PostgreSQL databases. In addition, you can use the process to allow applications in older languages to connect directly to newer applications. You can do so using the same standards to migrate and integrate legacy data by working with AWS Database Migration Service (AWS DMS), and also by working with AWS Consulting Service Partners (AWS CSP) and our AWS ISV partners. By doing so, you can help maximize the potential of existing applications. Organizations can increase their return on investment, improve productivity and efficiency, and gain the ability to manage operational risk.

Our discussion of the different stages of database migration environment setup is divided into three separate blog posts. If you haven’t already, read Migration process and infrastructure considerations, which goes into detail about the early migration process and infrastructure considerations. You can also look at Source database considerations for the Oracle and AWS DMS CDC environment, which helps you set up the source database environment for either a one-time migration or one-way continued replication using change data capture (CDC).

Together, the three blog posts provide a brief overview of the setup of the components for an Oracle to PostgreSQL database migration. These posts cover basic steps, which you must test before building a more comprehensive database migration strategy. The series in no way can be considered the complete solution. For more details about how to manually migrate specific Oracle database objects and features to Amazon Aurora with PostgreSQL Compatibility equivalents, see Oracle Database 11g/12c To Amazon Aurora with PostgreSQL Migration Playbook.


This blog post would not have been possible without the thorough and persistent review and feedback of Melanie Henry, Wendy Neu, Eran Schitzer, Mitchell Gurspan, Ilia Gilderman, Kevin Jernigan, Jim Mlodgenski, Chris Brownlee, Ed Murray, and Michael Russo.

About the Author

Mahesh Pakala has worked with Amazon since April 2014. Prior to joining Amazon, he worked with companies such as Ingres, Oracle Corporation, and Dell Inc., advising strategic marquee customers on designing highly available, scalable applications, guiding heterogeneous cloud application migrations, and assisting with system performance.