AWS Database Blog
Resources consumed by idle PostgreSQL connections
July 2023: This post was reviewed for accuracy.
PostgreSQL is one of the most popular open-source relational database systems. With more than 30 years of development work, PostgreSQL has proven to be a highly reliable and robust database that can handle a large number of complex data workloads. AWS provides two managed PostgreSQL options: Amazon Relational Database Service (Amazon RDS) for PostgreSQL and Amazon Aurora PostgreSQL-Compatible Edition.
This is a two-part series. In this post, I talk about how PostgreSQL manages connections and the impact of idle connections on memory and CPU resources. In the second post, Performance impact of idle PostgreSQL connections, I discuss how idle connections impact PostgreSQL performance.
Connections in PostgreSQL
When the PostgreSQL server is started, the main process forks to start background maintenance processes. With default configurations, the process tree looks like the following on a self-managed PostgreSQL instance:
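The following is an illustrative reconstruction of such a tree (process names and paths vary by version and configuration), not output captured from a test instance:

```
postmaster -D /var/lib/pgsql/data
 ├─ postgres: logger
 ├─ postgres: checkpointer
 ├─ postgres: background writer
 ├─ postgres: walwriter
 ├─ postgres: autovacuum launcher
 ├─ postgres: stats collector
 └─ postgres: logical replication launcher
```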
You can see this process tree in Amazon RDS and Aurora PostgreSQL by enabling Enhanced Monitoring and looking at the OS Process List page (see the following screenshot). For more information, see Enhanced Monitoring.
These child processes take care of activities such as logging, checkpointing, stats collection, and vacuuming. The process list in Enhanced Monitoring limits the total number of processes that are shown on the console. If you need to view the complete list of processes, consider using the pg_proctab extension to query system statistics. This extension is available in the latest RDS PostgreSQL minor versions.
After initializing the maintenance child processes, the main PostgreSQL process starts waiting for new client connections. When a new connection is received, the main process forks to create a child process to handle this new connection. The main process goes back to wait for the next connection, and the newly forked child process takes care of all activities related to this new client connection. A new child process is started for each new connection received by the database.
The following screenshot shows that a user app_user is connected to the database mydb from a remote host (10.0.0.123).
The max_connections parameter controls the total number of connections that can be opened simultaneously.
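To verify this limit and see how close you are to it, you can run the following from any SQL session:

```sql
-- Maximum number of concurrent connections allowed
SHOW max_connections;

-- Server processes currently running (includes background workers)
SELECT count(*) FROM pg_stat_activity;
```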
Memory used by connections
PostgreSQL uses shared memory and process memory. Shared memory is a chunk of memory used primarily for the data page cache, and the shared_buffers parameter configures its size. This shared memory is used by all the PostgreSQL processes.
The process memory contains process-specific data. This memory is used for maintaining the process state, sorting data, storing temporary table data, and caching catalog data.
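If you want to see how these memory areas are sized on a particular instance, you can inspect the corresponding parameters; for example:

```sql
-- Size of the shared data page cache, used by all processes
SHOW shared_buffers;

-- Per-session limit for temporary table buffers (discussed later in this post)
SHOW temp_buffers;
```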
On Linux, when a new process is forked, a copy of the parent process is created. As an optimization, the Linux kernel uses the copy-on-write method, which means that the new process initially points to the same physical memory as the parent process. This continues until the parent or the child process actually changes the data; at that point, a copy of the changed data is created. This method reduces some of the memory overhead when PostgreSQL forks a child process on receiving a new connection. For more information about fork functionality, see the entry in the Linux manual.
Idle connections
Idle connections are one of the common causes of poor database performance. It's very common to see a huge number of connections against the database, with the explanation that they're just idle connections and not actually doing anything. However, this is incorrect: idle connections still consume server resources.
To determine the impact of idle PostgreSQL connections, I performed a few tests using a Python script that uses the Psycopg 2 adapter for PostgreSQL connectivity. The tests include the following parameters:
- Each test consists of two runs.
- Each test run opens 100 PostgreSQL connections.
- Depending on the test case, some activity is performed on each connection before leaving it idle.
- The connections are left idle for 10 minutes before being closed.
- The second run executes DISCARD ALL on each connection before leaving it idle.
- The tests are performed using Amazon RDS for PostgreSQL 11.6.
Although this post shows the results for Amazon RDS for PostgreSQL 11.6 only, these tests were also performed with Aurora PostgreSQL 11.6, PostgreSQL on Amazon EC2, and Amazon RDS for PostgreSQL 12 to confirm a similar resource utilization trend. I used the db.m5.large RDS instance type for the test runs, which provides 2 vCPUs and 8 GB of memory. For storage, I used a Provisioned IOPS (io1) Amazon EBS volume with 3,000 IOPS.
The DISCARD ALL statement discards the session state. It discards all temporary tables, plans, and sequences, along with any session-specific configurations. This statement is often used by connection poolers before reusing the connection for the next client. For each test, a run with DISCARD ALL has been added to see if there is any change in the memory utilization.
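According to the PostgreSQL documentation, DISCARD ALL is equivalent to running the following sequence of statements:

```sql
CLOSE ALL;
SET SESSION AUTHORIZATION DEFAULT;
RESET ALL;
DEALLOCATE ALL;
UNLISTEN *;
SELECT pg_advisory_unlock_all();
DISCARD PLANS;
DISCARD SEQUENCES;
DISCARD TEMP;
```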
Connections test #1: Connections with no activity
This basic test determines the memory impact of newly opened connections. This test performs the following steps:
- Open 100 connections.
- Leave the connections idle for 10 minutes.
- Close the connections.
During the 10-minute wait, check the connection state in the pg_stat_activity view. A query along the following lines (the app_user filter matches the user shown earlier) reports the state of the test connections:
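```sql
-- Illustrative check; the exact query used in the tests may differ
SELECT datname, usename, state, count(*)
FROM pg_stat_activity
WHERE usename = 'app_user'
GROUP BY datname, usename, state;
```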
The second run repeats the same steps but runs DISCARD ALL before leaving the connections idle. In both runs, the preceding query reports all 100 connections in the idle state.
The following Amazon CloudWatch metrics show the connections count (DatabaseConnections) and memory utilization (FreeableMemory) on an RDS PostgreSQL instance.
The free memory chart shows no significant difference between the runs with and without DISCARD ALL. As the connections were opened, free memory dropped from approximately 5.27 GB to 5.12 GB. The 100 test connections used around 150 MB, which means that, on average, each idle connection used around 1.5 MB.
Connections test #2: Temporary tables
This test determines the memory impact of creating temporary tables. In this test, the connections create and drop a temporary table in the following steps:
- Open a connection.
- Create a temporary table and insert 1 million rows (a sample SQL sequence follows this list).
- Drop the temporary table.
- Commit the changes.
- Repeat these steps for all 100 connections.
- Leave the connections idle for 10 minutes.
- Close the connections.
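The exact statements aren't reproduced here; a sequence along the following lines (the table and column names are illustrative) implements steps 2 through 4:

```sql
-- Step 2: create a temporary table and insert 1 million rows
CREATE TEMPORARY TABLE tmp_table (id int, val text);

INSERT INTO tmp_table
SELECT g, 'row ' || g::text
FROM generate_series(1, 1000000) AS g;

-- Step 3: drop the temporary table
DROP TABLE tmp_table;

-- Step 4: commit the changes
COMMIT;
```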
During the 10-minute wait, check the connections state with the same pg_stat_activity query used in the first test.
The second run repeats the same steps but runs DISCARD ALL before leaving the connections idle. The query again reports all 100 connections as idle.
The following chart shows the connections count and the memory utilization on an RDS PostgreSQL instance.
The free memory chart shows no significant difference between the runs with and without DISCARD ALL. As the connections were opened, free memory dropped from approximately 5.26 GB to 4.22 GB. The 100 test connections used around 1,004 MB, which means that, on average, each idle connection used around 10.04 MB.
This additional memory is consumed by the buffers allocated for temporary table storage. The temp_buffers parameter controls the maximum memory that can be allocated for temporary tables in each session, with a default value of 8 MB. Once allocated in a session, this memory is not freed until the connection is closed.
Connections test #3: SELECT queries
This test determines the memory impact of running some SELECT queries. In this test, each connection fetches one row from each of the tables in the PostgreSQL internal schema information_schema. In PostgreSQL 11, this schema has 60 tables and views. The test includes the following steps:
- Open a connection.
- Fetch the names of all the tables and views in information_schema (a sample query follows this list).
- In a loop, run SELECT on each of these tables with LIMIT 1 (an example query also appears after the list).
- Repeat these steps for all 100 connections.
- Leave the connections idle for 10 minutes.
- Close the connections.
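The following SQL is illustrative of steps 2 and 3 (the exact queries from the tests aren't shown):

```sql
-- Step 2: fetch the names of all the tables and views in information_schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'information_schema';

-- Step 3: one example of a query from the loop
SELECT * FROM information_schema.tables LIMIT 1;
```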
During the 10-minute wait, check the connections state with the same pg_stat_activity query as before.
The second run repeats the same steps but runs DISCARD ALL before leaving the connections idle. As before, the query reports all 100 connections as idle.
The following chart shows the connections count and the memory utilization on an RDS PostgreSQL instance.
The free memory chart shows no significant difference between the runs with and without DISCARD ALL. As the connections were opened, free memory dropped from approximately 5.25 GB to 4.17 GB. The 100 test connections used around 1,080 MB, which means that, on average, each idle connection used around 10.8 MB.
Connections test #4: Temporary table and SELECT queries
This test is a combination of tests 2 and 3 to determine the memory impact of creating a temporary table and running some SELECT queries on the same connection. The test includes the following steps:
- Open a connection.
- Fetch the names of all the tables and views in information_schema, as in test #3.
- In a loop, run SELECT on each of these tables with LIMIT 1, as in test #3.
- Create a temporary table and insert 1 million rows, as in test #2.
- Drop the temporary table.
- Commit the changes.
- Repeat these steps for all 100 connections.
- Leave the connections idle for 10 minutes.
- Close the connections.
During the 10-minute wait, check the connection state with the same pg_stat_activity query as in the previous tests.
The second run repeats the same steps but runs DISCARD ALL before leaving the connections idle. The query once again reports all 100 connections as idle.
The following chart shows the connections count and the memory utilization on an RDS PostgreSQL instance.
The free memory chart shows no significant difference between the runs with and without DISCARD ALL. As the connections were opened, free memory dropped from approximately 5.24 GB to 3.79 GB. The 100 test connections used around 1,450 MB, which means that, on average, each idle connection used around 14.5 MB.
CPU impact
So far, we have focused on the memory impact only, but the metrics show that CPU utilization also rises as the number of idle connections goes up. Idle connections have a minimal impact on the CPU, but this overhead can become an issue if CPU utilization is already high.
The following figure shows the connection counts and CPU utilization with different numbers of idle connections. In this test, the connections were opened and left idle for 10 minutes, then closed, followed by another 10-minute wait before opening the next batch of connections. The figure shows that CPU utilization was around 1%, with small spikes to 2%, when there were no connections. Utilization rose to about 2% with 100 idle connections, 3% with 500, 5% with 1,000, 6% with 1,500, and 8% with 2,000. Note that this utilization is for an instance with 2 vCPUs.
CPU utilization goes up with the number of connections because PostgreSQL needs to examine each process to check the status. This is required irrespective of whether the connection is active or idle.
Summary
PostgreSQL connections consume memory and CPU resources even when idle. As queries are run on a connection, memory gets allocated, and this memory isn't completely freed even when the connection goes idle. In all the scenarios described in this post, idle connections resulted in memory consumption regardless of whether DISCARD ALL was run.
The amount of memory consumed by each connection varies based on factors such as the type and count of queries run on the connection and the use of temporary tables. In the test results shown in this post, memory utilization ranged from around 1.5 MB to 14.5 MB per connection.
If your application is configured in a way that results in idle connections, it's recommended to use a connection pooler so that memory and CPU resources aren't wasted just managing idle connections. The second post in this series shows how these idle connections impact your database performance.
About the Author
Yaser Raja is a Senior Consultant with the Professional Services team at Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.