As we continue to innovate and add new features on behalf of our customers, you can now maintain and manage query plans in Amazon Aurora PostgreSQL-Compatible Edition. This feature enables you to identify the query plans contributing to the current database load and track the performance statistics of query plans over time.
In this post, we demonstrate how you can monitor query plans to maintain optimal database performance, and discuss some key use cases of the monitoring query plan feature.
Efficient query planning is a crucial part of database performance. To help, we have added features such as Amazon RDS Performance Insights support for Amazon Aurora PostgreSQL, which provides database monitoring capabilities, and Aurora PostgreSQL query plan management, which provides plan stability and plan adaptability to maintain optimal database performance.
Solution overview
Monitoring query plans is currently available with Amazon Aurora PostgreSQL versions 14.10 and 15.5 and higher. On supported versions, the feature is enabled by default and is controlled by the aurora_compute_plan_id parameter, which defaults to 1 (ON). You can disable it by setting aurora_compute_plan_id to 0 (OFF) in the parameter group. Additionally, pg_stat_statements must be in shared_preload_libraries for plans to be tracked.
Before we explore the new functions, it’s important to understand what a plan identifier is. Separate from a query identifier, a plan identifier is made up of the sql_hash and a hash of the plan, so together they uniquely identify how a query is run.
You can use the following functions to access and monitor query plans:
- aurora_stat_activity – Shows information related to the current activity of a process, including a plan identifier of the running query
- aurora_stat_plans – Returns query plans and other important statistics for the query identifier
For a comprehensive understanding of these functions and their usage, refer to Aurora PostgreSQL functions reference.
You can monitor the query plans using the different parameters in a DB parameter group. For more information, see Parameter reference for Aurora PostgreSQL query execution plans.
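For example, because the same query can be captured with more than one plan over time, you can list the queries that have produced multiple plans. The following is a minimal sketch that assumes the aurora_stat_plans columns (queryid, planid, calls) used later in this post:
SELECT queryid,
       count(DISTINCT planid) AS plan_count,
       sum(calls) AS total_calls
FROM aurora_stat_plans(true)
GROUP BY queryid
HAVING count(DISTINCT planid) > 1
ORDER BY plan_count DESC;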
Prerequisites
To follow along with this post, complete the following prerequisites:
- Create an Aurora PostgreSQL cluster if you don’t already have one. For instructions, refer to Create an Aurora PostgreSQL DB cluster.
- Create an Amazon Elastic Compute Cloud (Amazon EC2) instance to install the PostgreSQL client to access the Aurora PostgreSQL instance. For instructions, refer to Create your EC2 resources and launch your EC2 instance. Or you can set up connectivity between your RDS database and EC2 compute instance in one click.
- On Amazon Linux 2023, use the following command to install the psql command line tool, which you use to run all the SQL statements:
sudo dnf install postgresql15-client
Monitor query plans
Let’s dive deeper into the process of monitoring query plans with Aurora PostgreSQL. The following is a comprehensive, step-by-step guide to help you understand it better:
- Make sure pg_stat_statements is in shared_preload_libraries for the plans to be tracked. To verify that pg_stat_statements is initialized, run the following command:
pg155db=> SELECT setting ~ 'pg_stat_statements'
FROM pg_catalog.pg_settings
WHERE name = 'shared_preload_libraries';
If pg_stat_statements is in shared_preload_libraries, the query returns t (true).
- Check that the aurora_compute_plan_id parameter is set to ON, which is the default:
show aurora_compute_plan_id;
The output should show on:
pg155db=> show aurora_compute_plan_id;
aurora_compute_plan_id
------------------------
on
(1 row)
- Verify that the aurora_stat_plans.with_costs and aurora_stat_plans.with_timing parameters are set to ON with the following query:
pg155db=> select name, setting from pg_settings where name='aurora_compute_plan_id' or name like 'aurora_stat_plans%';
name | setting
-------------------------------------------+---------
aurora_compute_plan_id | on
aurora_stat_plans.calls_until_recapture | 0
aurora_stat_plans.minutes_until_recapture | 0
aurora_stat_plans.with_analyze | off
aurora_stat_plans.with_buffers | off
aurora_stat_plans.with_costs | on
aurora_stat_plans.with_timing | on
aurora_stat_plans.with_triggers | off
aurora_stat_plans.with_wal | off
(9 rows)
- Turn on the required aurora_stat_plans.with* parameters in your cluster parameter group.
- Use pgbench to seed some sample data. For this post, initialize pgbench with a scale factor of 10. You can use the following command to create the pgbench tables and load the data:
pgbench -i -s 10 -h <cluster endpoint> <database>
- Run a TPC-B-like test with pgbench for 120 seconds across 64 parallel sessions:
pgbench -h aurora-xxxxx.us-east-1.rds.amazonaws.com --protocol=simple -P 30 --time=120 --client=64 --jobs=16 -b tpcb-like pg155db > results_apg_plans.out
- During the pgbench run, check aurora_stat_activity in another session using the following query:
SELECT
count(*),
query_id,
plan_id,
substr(query, 1, 20)
FROM
aurora_stat_activity ()
WHERE
state = 'active'
AND pid <> pg_backend_pid()
GROUP BY
query_id,
plan_id,
substr(query, 1, 20)
ORDER BY
4 DESC;
It gives the following result:
count | query_id | plan_id | substr
-------+----------------------+-------------+-----------------------------------------------
16 | -1978808372860228080 | -1987991358 | UPDATE pgbench_tellers SET tbalance = tbalanc
21 | -3571162383692102920 | 1506018018 | UPDATE pgbench_branches SET bbalance = bbalan
1 | 1171103359827119145 | 300482084 | INSERT INTO pgbench_history (tid, bid, aid, d
1 | -7810315603562552972 | | END;
(4 rows)
You can see the session count for each query and the plan_id assigned to each query. This information helps you find the query plans of currently running queries.
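Because the plan_id returned by aurora_stat_activity corresponds to the planid column in aurora_stat_plans, you can also join the two to pull the stored plan text and statistics for the statements that are currently active. The following is a minimal sketch based on the columns used in this post:
SELECT a.pid,
       a.query_id,
       a.plan_id,
       p.mean_exec_time,
       substr(p.explain_plan, 1, 80) AS plan_snippet
FROM aurora_stat_activity() a
JOIN aurora_stat_plans(true) p
  ON a.query_id = p.queryid
 AND a.plan_id = p.planid
WHERE a.state = 'active'
  AND a.pid <> pg_backend_pid();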
- Use the aurora_stat_plans function for the same plan_id to check the explain plan:
pg155db=> select calls, query, planid, plan_captured_time, explain_plan,
       total_exec_time, min_exec_time, max_exec_time, mean_exec_time
from aurora_stat_plans(true) where planid = 1506018018;
-[ RECORD 1 ]------+----------------------------------------------------------------------------------------------------------------
calls | 13179
query | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2
planid | 1506018018
plan_captured_time | 2024-01-30 18:12:02.760864+00
explain_plan | Update on pgbench_branches (cost=0.00..4.13 rows=0 width=0) (actual time=0.068..0.068 rows=0 loops=1) +
| Buffers: shared hit=7 +
| -> Seq Scan on pgbench_branches (cost=0.00..4.13 rows=1 width=10) (actual time=0.008..0.011 rows=1 loops=1)+
| Filter: (bid = 10) +
| Rows Removed by Filter: 9 +
| Buffers: shared hit=4
total_exec_time | 3828216.9385649827
min_exec_time | 0.040024000000000004
max_exec_time | 7476.700546
mean_exec_time | 290.4785597211489
Time: 38.443 ms
The output shows the query plan along with the number of times the query ran, the total runtime across all runs, and the minimum, maximum, and mean runtimes.
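You can use the same statistics to find the most expensive plans across the whole workload rather than a single planid. The following sketch ranks the captured plans by total execution time:
SELECT queryid,
       planid,
       calls,
       round(total_exec_time::numeric, 2) AS total_exec_ms,
       round(mean_exec_time::numeric, 2)  AS mean_exec_ms
FROM aurora_stat_plans(true)
ORDER BY total_exec_time DESC
LIMIT 10;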
Use case 1: Integration with QPM auto capture
In most scenarios, the Query Plan Management (QPM) feature is set up for automatic plan capture, which captures plans for all statements that run more than once. However, you can also manually capture plans for a specific set of statements. This requires setting apg_plan_mgmt.capture_plan_baselines to off in the cluster parameter group and setting it to manual in the session where you want to capture the plans. If QPM identifies a different plan due to changes, that plan must be approved for use. After approval, the new plan is automatically incorporated into aurora_stat_plans. To test this use case, complete the following steps:
- Open the custom DB cluster parameter group and set the rds.enable_plan_management parameter to 1.
- Create the apg_plan_mgmt extension for your DB instance. Refer to Turning on Aurora PostgreSQL query plan management for more information.
pg155db=> CREATE EXTENSION apg_plan_mgmt;
CREATE EXTENSION
pg155db=> select extname,extversion from pg_extension where extname='apg_plan_mgmt';
extname | extversion
---------------+------------
apg_plan_mgmt | 2.6
(1 row)
- Create a simple table and insert some data:
pg155db=> create table test_table ( id int );
CREATE TABLE
pg155db=> insert into test_table select n from generate_series(1, 100000) as n;
INSERT 0 100000
- Check the query plan and verify that it uses a Seq Scan:
pg155db=> explain verbose select * from test_table where id = 1;
QUERY PLAN
---------------------------------------------------------------
Seq Scan on public.test_table (cost=0.00..1693.00 rows=1 width=4)
Output: id
Filter: (test_table.id = 1)
Query Identifier: 334499705516714165
(4 rows)
- Check the apg_plan_mgmt.dba_plans view for any approved plans:
pg155db=> select sql_hash, plan_hash, status, plan_outline from apg_plan_mgmt.dba_plans;
sql_hash | plan_hash | status | plan_outline
----------+-----------+--------+--------------
(0 rows)
- Set the following parameters to capture and use the plans:
pg155db=> set apg_plan_mgmt.capture_plan_baselines = ON;
SET
pg155db=> set apg_plan_mgmt.use_plan_baselines = ON;
SET
- Run the query and verify the plan is captured:
pg155db=> select * from test_table where id = 1;
id
----
1
(1 row)
pg155db=> select sql_hash, queryid, plan_hash, status, plan_outline from apg_plan_mgmt.dba_plans;
sql_hash | queryid | plan_hash | status | plan_outline
------------+--------------------+-----------+----------+---------------------
1511466288 | 334499705516714165 | 443796055 | Approved | { +
| | | | "Fmt": "03.00", +
| | | | "Outl": { +
| | | | "Op": "SScan", +
| | | | "QB": 1, +
| | | | "S": "public", +
| | | | "Plel": "F", +
| | | | "Tbl": "test_table",+
| | | | "Rid": 1 +
| | | | } +
| | | | }
(1 row)
- Check aurora_stat_plans to confirm that the plan is picked up:
pg155db=> select queryid, planid, calls, explain_plan from aurora_stat_plans(true) where queryid = 334499705516714165;
queryid | planid | calls | explain_plan
--------------------+-----------+-------+--------------------------------------------------------------------------------------------------
334499705516714165 | 443796055 | 1 | Seq Scan on test_table (cost=0.00..1693.00 rows=1 width=4) (actual time=0.010..6.925 rows=1 loops=1)+
| | | Filter: (id = 1) +
| | | Rows Removed by Filter: 99999 +
| | | Buffers: shared hit=443
(1 row)
- Create an index to speed up the query:
pg155db=> create index test_table_idx1 on test_table(id);
CREATE INDEX
- Run the query again and verify that the new plan is captured in the Unapproved state:
pg155db=> select * from test_table where id = 1;
id
----
1
(1 row)
pg155db=> select sql_hash, queryid, plan_hash, status, plan_outline from apg_plan_mgmt.dba_plans where queryid = 334499705516714165;
sql_hash | queryid | plan_hash | status | plan_outline
------------+--------------------+------------+------------+--------------------------
1511466288 | 334499705516714165 | 443796055 | Approved | { +
| | | | "Fmt": "03.00", +
| | | | "Outl": { +
| | | | "Op": "SScan", +
| | | | "QB": 1, +
| | | | "S": "public", +
| | | | "Plel": "F", +
| | | | "Tbl": "test_table", +
| | | | "Rid": 1 +
| | | | } +
| | | | }
1511466288 | 334499705516714165 | 1850754382 | Unapproved | { +
| | | | "Fmt": "03.00", +
| | | | "Outl": { +
| | | | "Op": "IOScan", +
| | | | "QB": 1, +
| | | | "S": "public", +
| | | | "Plel": "F", +
| | | | "Idx": "test_table_idx1",+
| | | | "Tbl": "test_table", +
| | | | "Rid": 1 +
| | | | } +
| | | | }
(2 rows)
- Check that aurora_stat_plans is still using the Approved plan:
pg155db=> select queryid, planid, calls, explain_plan from aurora_stat_plans(true) where queryid = 334499705516714165;
queryid | planid | calls | explain_plan
--------------------+-----------+-------+--------------------------------------------------------------------------------------------------
334499705516714165 | 443796055 | 2 | Seq Scan on test_table (cost=0.00..1693.00 rows=1 width=4) (actual time=0.010..6.925 rows=1 loops=1)+
| | | Filter: (id = 1) +
| | | Rows Removed by Filter: 99999 +
| | | Buffers: shared hit=443
(1 row)
- Approve the new plan and reject the old plan:
pg155db=> select apg_plan_mgmt.set_plan_status(1511466288, 1850754382, 'Approved');
set_plan_status
-----------------
0
(1 row)
pg155db=> SELECT apg_plan_mgmt.set_plan_status (1511466288, 443796055, 'Rejected');
set_plan_status
-----------------
0
(1 row)
pg155db=> select * from test_table where id = 1;
id
----
1
(1 row)
pg155db=> select sql_hash, queryid, plan_hash, status, plan_outline from apg_plan_mgmt.dba_plans where queryid = 334499705516714165;
sql_hash | queryid | plan_hash | status | plan_outline
------------+--------------------+------------+----------+--------------------------
1511466288 | 334499705516714165 | 443796055 | Rejected | { +
| | | | "Fmt": "03.00", +
| | | | "Outl": { +
| | | | "Op": "SScan", +
| | | | "QB": 1, +
| | | | "S": "public", +
| | | | "Plel": "F", +
| | | | "Tbl": "test_table", +
| | | | "Rid": 1 +
| | | | } +
| | | | }
1511466288 | 334499705516714165 | 1850754382 | Approved | { +
| | | | "Fmt": "03.00", +
| | | | "Outl": { +
| | | | "Op": "IOScan", +
| | | | "QB": 1, +
| | | | "S": "public", +
| | | | "Plel": "F", +
| | | | "Idx": "test_table_idx1",+
| | | | "Tbl": "test_table", +
| | | | "Rid": 1 +
| | | | } +
| | | | }
(2 rows)
- Check if aurora_stat_plans picked up the new approved plan:
pg155db=> select queryid, planid, calls, explain_plan from aurora_stat_plans(true) where queryid = 334499705516714165 and planid=1850754382;
queryid | planid | calls | explain_plan
--------------------+------------+-------+---------------------------------------------------
334499705516714165 | 1850754382 | 1 | Index Only Scan using test_table_idx1 on test_table (cost=0.29..4.31 rows=1 width=4) (actual time=0.019..0.020 rows=1 loops=1)+
| | | Index Cond: (id = 1) +
| | | Heap Fetches: 0 +
| | | Buffers: shared hit=3
(1 row)
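Optionally, if you don’t want to keep the rejected sequential scan plan in the baseline, you can remove it. The following is a sketch that assumes the apg_plan_mgmt.delete_plan(sql_hash, plan_hash) function provided by the extension:
-- Remove the rejected Seq Scan plan, identified by its sql_hash and plan_hash from dba_plans
SELECT apg_plan_mgmt.delete_plan(1511466288, 443796055);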
Use case 2: Integration with QPM manual capture
Sometimes, the query optimizer doesn’t generate the best query plan. Although it’s possible to use query hints in your application code to work around this, doing so is often discouraged due to maintenance issues. Instead, you can use the pg_hint_plan extension in PostgreSQL. This approach saves directives like the scan method, join method, and join order in QPM, allowing the optimizer to enforce the desired plan without hints in the application code.
In this section, we demonstrate how to use the aurora_stat_activity and aurora_stat_plans functions effectively. We explore how to identify the plans of current queries and cross-reference them with QPM to find potentially better plans.
- Create the following tables and insert data:
CREATE TABLE join_tab1 (
id int,
name varchar
);
INSERT INTO join_tab1
VALUES (generate_series(1, 10), 'test' || generate_series(1, 10));
CREATE TABLE join_tab2 (
id int,
name varchar
);
INSERT INTO join_tab2
VALUES (generate_series(1, 100000), 'test' || generate_series(1, 100000));
- Create the pgbench_join_simple_query.sql file with the following content:
SELECT * FROM
join_tab1 b
JOIN
join_tab2 a
ON b.id=a.id
ORDER BY
a.id;
- Run the following pgbench command:
pgbench -h aurora-xxxx.us-east-1.rds.amazonaws.com --time=120 --client=16 --jobs=4 -f pgbench_join_simple_query.sql pg155db > results_apg_plans.out
- Check the aurora_stat_plans and aurora_stat_activity functions for query runs and plans:
pg155db=> select count(*), query_id, plan_id, query
from aurora_stat_activity() where state = 'active'
and pid <> pg_backend_pid()
group by query_id, plan_id, query
order by 1 desc;
count | query_id | plan_id | query
-------+---------------------+------------+----------------------
2 | 1931676889986960790 | 1066209958 | SELECT * FROM +
| | | join_tab1 b +
| | | JOIN +
| | | join_tab2 a +
| | | ON b.id=a.id+
| | | ORDER BY +
| | | a.id;
(1 row)
select queryid, calls, query, planid, plan_captured_time,
       substr(explain_plan, 1, 350) as explain_plan,
       total_exec_time, min_exec_time, max_exec_time, mean_exec_time
from aurora_stat_plans(true)
where queryid = 1931676889986960790;
-[ RECORD 1 ]------+-------------------------------------------------------
queryid | 1931676889986960790
calls | 13832
query | SELECT
*
FROM
join_tab1 b
JOIN join_tab2 a ON b.id = a.id
ORDER BY
a.id;
planid | 1066209958
plan_captured_time | 2024-02-08 23:31:13.190271+00
explain_plan | Sort (cost=2896.63..2899.81 rows=1270 width=49) (actual time=56.697..56.699 rows=10 loops=1)
| Sort Key: b.id
| Sort Method: quicksort Memory: 25kB
| Buffers: shared hit=545
| -> Hash Join (cost=2791.00..2831.16 rows=1270 width=49) (actual time=56.664..56.672 rows=10 loops=1)
| Hash Cond: (b.id = a.id)
| Buffers: shared hit=542 +
|
total_exec_time | 1070447.3642510038
min_exec_time | 25.936728
max_exec_time | 225.35296599999998
mean_exec_time | 77.3891963744216
- Check the approved plan in dba_plans for this query; it should match aurora_stat_plans and aurora_stat_activity:
SELECT sql_hash,
plan_hash,
status,
enabled,
estimated_total_cost "cost",
sql_text
FROM apg_plan_mgmt.dba_plans
WHERE queryid=1931676889986960790;
sql_hash | plan_hash | status | enabled | cost | sql_text
------------+------------+----------+---------+------------------+----------------------------------------
1230395705 | 1066209958 | Approved | t | 2899.80989116354 | SELECT
| | | | | *
| | | | | FROM
| | | | | join_tab1 b
| | | | | JOIN join_tab2 a ON b.id = a.id+
| | | | | ORDER BY +
| | | | | a.id;
(1 row)
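Notice that planid in aurora_stat_plans matches plan_hash in apg_plan_mgmt.dba_plans, and the query identifiers match as well, so you can cross-reference runtime statistics with the managed plan baselines directly. The following is a minimal sketch using the columns shown in this post:
SELECT d.sql_hash,
       d.plan_hash,
       d.status,
       s.calls,
       round(s.mean_exec_time::numeric, 2) AS mean_exec_ms
FROM apg_plan_mgmt.dba_plans d
JOIN aurora_stat_plans(true) s
  ON d.queryid = s.queryid
 AND d.plan_hash = s.planid
WHERE d.queryid = 1931676889986960790;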
- Check the query plan, and use pg_hint_plan hints to generate a better plan:
pg155db=> EXPLAIN (hashes true)
SELECT
*
FROM
join_tab1 b
JOIN join_tab2 a ON b.id = a.id
ORDER BY
a.id;
QUERY PLAN
--------------------------------------------------------------------------------------
Sort (cost=2896.63..2899.81 rows=1270 width=49)
Sort Key: b.id
-> Hash Join (cost=2791.00..2831.16 rows=1270 width=49)
Hash Cond: (b.id = a.id)
-> Seq Scan on join_tab1 b (cost=0.00..22.70 rows=1270 width=36)
-> Hash (cost=1541.00..1541.00 rows=100000 width=13)
-> Seq Scan on join_tab2 a (cost=0.00..1541.00 rows=100000 width=13)
SQL Hash: 1230395705, Plan Hash: 1066209958
(8 rows)
- Enable the pg_hint_plan.enable_hint parameter with the following statements so that you can check the plan using hints:
pg155db=> SET pg_hint_plan.enable_hint = true;
SET
pg155db=> show pg_hint_plan.enable_hint;
pg_hint_plan.enable_hint
--------------------------
true
(1 row)
- Use EXPLAIN for the query with the hints you want to apply. The following example uses the NestLoop(a b) hint, which directs the optimizer to use a nested loop to join table a and table b:
pg155db=> /*+ NestLoop(a b) */ EXPLAIN (hashes true)
SELECT
*
FROM
join_tab1 b
JOIN join_tab2 a ON b.id = a.id
ORDER BY
a.id;
QUERY PLAN
--------------------------------------------------------------------------------------
Sort (cost=2896.63..2899.81 rows=1270 width=49)
Sort Key: b.id
-> Hash Join (cost=2791.00..2831.16 rows=1270 width=49)
Hash Cond: (b.id = a.id)
-> Seq Scan on join_tab1 b (cost=0.00..22.70 rows=1270 width=36)
-> Hash (cost=1541.00..1541.00 rows=100000 width=13)
-> Seq Scan on join_tab2 a (cost=0.00..1541.00 rows=100000 width=13)
Note: An Approved plan was used instead of the minimum cost plan.
SQL Hash: 1230395705, Plan Hash: 1066209958, Minimum Cost Plan Hash: 1283399430
(9 rows)
The output shows that, with the hint applied, the optimizer found another candidate plan (plan hash 1283399430), but the approved plan was still used.
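If you want to confirm which plan the optimizer would choose without plan enforcement, you can temporarily turn off apg_plan_mgmt.use_plan_baselines in your session and rerun the hinted EXPLAIN. The following is a sketch based on the parameters already used in this post:
SET apg_plan_mgmt.use_plan_baselines = off;
/*+ NestLoop(a b) */ EXPLAIN (hashes true)
SELECT * FROM join_tab1 b JOIN join_tab2 a ON b.id = a.id ORDER BY a.id;
-- Re-enable plan enforcement when you are done comparing plans
SET apg_plan_mgmt.use_plan_baselines = on;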
- Verify that plan 1283399430 was captured and note its status:
pg155db=> SELECT sql_hash,
plan_hash,
status,
enabled,
estimated_total_cost "cost",
sql_text
FROM apg_plan_mgmt.dba_plans
WHERE sql_hash=1230395705;
sql_hash | plan_hash | status | enabled | cost | sql_text
------------+------------+------------+---------+------------------+----------------------------------------
1230395705 | 1066209958 | Approved | t | 2899.80989116354 | SELECT +
| | | | | *
| | | | | FROM
| | | | | join_tab1 b
| | | | | JOIN join_tab2 a ON b.id = a.id+
| | | | | ORDER BY
| | | | | a.id;
1230395705 | 1283399430 | Unapproved | t | 10453.0426283819 | SELECT +
| | | | | *
| | | | | FROM
| | | | | join_tab1 b
| | | | | JOIN join_tab2 a ON b.id = a.id+
| | | | | ORDER BY
| | | | | a.id;
(2 rows)
- Approve the new plan and reject the old plan:
pg155db=> SELECT apg_plan_mgmt.set_plan_status (1230395705, 1283399430, 'Approved');
set_plan_status
-----------------
0
(1 row)
pg155db=> SELECT apg_plan_mgmt.set_plan_status (1230395705, 1066209958, 'Rejected');
set_plan_status
-----------------
0
(1 row)
pg155db=> SELECT sql_hash,
plan_hash,
status,
enabled,
sql_text
FROM apg_plan_mgmt.dba_plans
Where queryid=1931676889986960790;
sql_hash | plan_hash | status | enabled | sql_text
------------+------------+----------+---------+----------------------------------------
1230395705 | 1283399430 | Approved | t | SELECT +
| | | | * +
| | | | FROM +
| | | | join_tab1 b +
| | | | JOIN join_tab2 a ON b.id = a.id+
| | | | ORDER BY +
| | | | a.id;
1230395705 | 1066209958 | Rejected | t | SELECT +
| | | | * +
| | | | FROM +
| | | | join_tab1 b +
| | | | JOIN join_tab2 a ON b.id = a.id+
| | | | ORDER BY +
| | | | a.id;
(2 rows)
- Run the following query and check that aurora_stat_plans picked up the plan:
pg155db=> SELECT
*
FROM
join_tab1 b
JOIN join_tab2 a ON b.id = a.id
ORDER BY
a.id;
id | name | id | name
----+--------+----+--------
1 | test1 | 1 | test1
2 | test2 | 2 | test2
3 | test3 | 3 | test3
[output truncated]
pg155db => select queryid, planid, calls, explain_plan from aurora_stat_plans(true) where planid=1283399430;
queryid | planid | calls | explain_plan
---------------------+------------+-------+-------------------------------------------------------------------------------------------------------------------------------
1931676889986960790 | 1283399430 | 2 | Merge Join (cost=9933.99..10453.04 rows=1270 width=49) (actual time=37.240..37.250 rows=10 loops=1) +
| | | Merge Cond: (b.id = a.id)
| | | Buffers: shared hit=543, temp read=343 written=685
| | | I/O Timings: temp read=0.642 write=2.748
| | | -> Sort (cost=88.17..91.35 rows=1270 width=36) (actual time=0.015..0.018 rows=10 loops=1)
| | | Sort Key: b.id
| | | Sort Method: quicksort Memory: 25kB
| | | Buffers: shared hit=1
| | | -> Seq Scan on join_tab1 b (cost=0.00..22.70 rows=1270 width=36) (actual time=0.008..0.010 rows=10 loops=1) +
| | | Buffers: shared hit=1
| | | -> Sort (cost=9845.82..10095.82 rows=100000 width=13) (actual time=37.216..37.218 rows=11 loops=1) +
| | | Sort Key: a.id
| | | Sort Method: external sort Disk: 2736kB
| | | Buffers: shared hit=542, temp read=343 written=685 +
| | | I/O Timings: temp read=0.642 write=2.748
| | | -> Seq Scan on join_tab2 a (cost=0.00..1541.00 rows=100000 width=13) (actual time=0.005..8.676 rows=100000 loops=1)+
| | | Buffers: shared hit=541
(1 row)
- Run pgbench again and verify in aurora_stat_plans whether the new plan is picked up:
pg155db=> select queryid, planid, calls from aurora_stat_plans(true) where queryid=1931676889986960790 ;
queryid | planid | calls
---------------------+------------+-------
1931676889986960790 | 1283399430 | 22625
1931676889986960790 | 1066209958 | 29125
(2 rows)
pg155db=> select queryid, planid, calls from aurora_stat_plans(true) where queryid=1931676889986960790 ;
queryid | planid | calls
---------------------+------------+-------
1931676889986960790 | 1283399430 | 23030
1931676889986960790 | 1066209958 | 29125
(2 rows)
You can see the calls count increase for the new plan with ID 1283399430.
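Because aurora_stat_plans keeps statistics per plan, you can also compare the two plans side by side to confirm that the newly approved plan performs better, for example:
SELECT planid,
       calls,
       round(mean_exec_time::numeric, 2) AS mean_exec_ms,
       round(max_exec_time::numeric, 2)  AS max_exec_ms
FROM aurora_stat_plans(true)
WHERE queryid = 1931676889986960790
ORDER BY mean_exec_time;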
Clean up
Complete the following steps after your tests to prevent unnecessary expenses (an optional SQL cleanup of the test objects follows this list):
- Take any database snapshots you want to retain.
- Stop your EC2 instance.
- Delete the Aurora cluster.
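If you prefer to keep the cluster, you can instead drop just the objects created for this post. The following is a sketch; adjust it to the objects you actually created (the pgbench_* tables are the standard tables created by pgbench -i):
DROP TABLE IF EXISTS test_table, join_tab1, join_tab2;
DROP TABLE IF EXISTS pgbench_accounts, pgbench_branches, pgbench_history, pgbench_tellers;
-- Dropping the extension also removes the managed plan baselines
DROP EXTENSION IF EXISTS apg_plan_mgmt;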
Summary
In this post, we showed you how to monitor query plans in Amazon Aurora PostgreSQL with the new query plan monitoring feature. This feature provides valuable insights and optimizations, promoting improved database performance. It allows you to analyze query plans in real time, swiftly pinpointing and addressing performance issues. This proactive strategy reduces latency, improves resource utilization, and minimizes downtime, thereby enhancing productivity.
If you have any questions or suggestions, leave a comment.
About the authors
Sameer Malik is a Principal Database Solutions Architect with Amazon Web Services.
Baji Shaik is a Sr. Lead Consultant with AWS Professional Services, Global Competency Centre. His background spans a wide depth and breadth of expertise and experience in SQL and NoSQL database technologies. He is a Database Migration Expert and has developed many successful database solutions addressing challenging business requirements for moving databases from on premises to Amazon RDS and Amazon Aurora PostgreSQL/MySQL. He is an eminent author, having written several books on PostgreSQL. A few of his recent works include “PostgreSQL Configuration,” “Beginning PostgreSQL on the Cloud,” and “PostgreSQL Development Essentials.” Furthermore, he has delivered several conference and workshop sessions.