How can I troubleshoot local storage issues in Aurora PostgreSQL instances?

Last updated: 2021-09-30

I want to troubleshoot local storage issues in my Amazon Aurora PostgreSQL-Compatible Edition instances.

Short description

DB instances that are in Amazon Aurora clusters have two types of storage:

  • Storage used for persistent data (shared cluster volume). For more information, see What the cluster volume contains.
  • Local storage for each Aurora instance in the cluster. The size of this storage is determined by the instance class and can be changed only by moving to a larger DB instance class. Aurora PostgreSQL-Compatible uses local storage to store error logs and temporary files.

Resolution

You can monitor the local storage space that's associated with the Aurora DB instance or node by using the Amazon CloudWatch metric for FreeLocalStorage. This metric reports the amount of storage available to each DB instance for temporary tables and logs. For more information, see Monitoring Amazon Aurora metrics with Amazon CloudWatch.

If your Aurora local storage is full, then see the following error messages and troubleshooting steps:

Local storage space is used by temporary tables or files

"ERROR: could not write block XXXXXXXX of temporary file: No space left on device."

This error can occur when temporary storage is exhausted on the DB instance. Operations that alter large tables, add indexes to large tables, or run large SELECT queries with complex JOIN, GROUP BY, or ORDER BY clauses can lead to this error.
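To confirm whether a specific query spills to disk, you can run it with EXPLAIN (ANALYZE, BUFFERS). The following is a minimal sketch that uses a hypothetical orders table; in the output, look for a line such as "Sort Method: external merge  Disk: ... kB" or nonzero "temp read/written" buffer counts, which indicate temporary file usage:

-- Hypothetical example: check whether a large ORDER BY spills to temporary files
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, order_total
FROM orders                    -- example table; replace with your own
ORDER BY order_total DESC;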

You can use the following methods to check the size of temporary tables and temporary files:

1.    For temporary files, turn on the log_temp_files parameter on the Aurora PostgreSQL-Compatible DB instance. This parameter logs the use of temporary files that are larger than the specified number of kilobytes. After this parameter is turned on, a log entry is made for each temporary file when the file is deleted. A value of 0 logs all temporary file information. A positive value logs only the files that are larger than or equal to the specified number of kilobytes. The default value is -1, which turns off temporary file logging. You can use this parameter to identify temporary file details, and then relate these temporary files to the FreeLocalStorage metric.

Note: Turning on the log_temp_files parameter can cause excessive logging on the Aurora PostgreSQL-Compatible DB instance. For this reason, it's a best practice to check the size of the Aurora PostgreSQL-Compatible log files before you turn on log_temp_files. If the log files are consuming most of the local storage, then you can reclaim space by reducing the value of the rds.log_retention parameter. The default value for rds.log_retention is three days.
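After log_temp_files is turned on, each temporary file produces a log entry similar to the following when the file is deleted. The path, size, and statement shown here are illustrative only:

LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 401031168
STATEMENT:  SELECT customer_id, order_total FROM orders ORDER BY order_total DESC;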

You can also review temporary files by using the delta of subsequent runs of the following command:

maxiops=> SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_file_size FROM pg_stat_database ORDER BY temp_bytes DESC;

Note: In the temp_files column, all temporary files are counted, regardless of why the temporary file was created (for example, by sorting or hashing). The temp_files and temp_bytes columns in the pg_stat_database view are cumulative counters. You can reset them by using the pg_stat_reset() function or by restarting the DB instance. For more information, see the PostgreSQL documentation for Additional statistics functions.
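If you want to compare runs from a clean baseline, you can reset the cumulative statistics for the current database. Keep in mind that pg_stat_reset() clears all statistics counters for that database, not only the temporary file counters:

SELECT pg_stat_reset();

-- Confirm when the counters were last reset
SELECT datname, stats_reset
FROM pg_stat_database
WHERE datname = current_database();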

Note: If you use Aurora PostgreSQL-Compatible or Amazon RDS for PostgreSQL 10 or later, then you can monitor temp_bytes and temp_files by using Performance Insights. Performance Insights provides native counters for your DB engine's internal metrics, in addition to wait events. For more information, see Native counters for Amazon RDS for PostgreSQL.

You can also increase maintenance_work_mem and work_mem to allocate more memory to the processes that are performing the operation. This allocates more memory to the operation, which can reduce the amount of temporary disk storage that it needs. For more information about these parameters, see the PostgreSQL documentation for maintenance_work_mem and work_mem. It's a best practice to set the values for maintenance_work_mem and work_mem at a query or session level to avoid running out of memory. For more information, see Amazon Aurora PostgreSQL reference.
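For example, you can raise these parameters only for the session or transaction that runs the memory-intensive operation. The values shown here are illustrative; size them to your workload and the memory available on your instance class:

-- Session-level override (applies only to the current connection)
SET work_mem = '256MB';
SET maintenance_work_mem = '1GB';

-- Or limit the change to a single transaction
BEGIN;
SET LOCAL work_mem = '256MB';
-- run the large sort, join, or index build here
COMMIT;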

2.    For temporary tables, run a query similar to the following:

maxiops=> SELECT
n.nspname as SchemaName
,c.relname as RelationName
,CASE c.relkind
WHEN 'r' THEN 'table'
WHEN 'v' THEN 'view'
WHEN 'i' THEN 'index'
WHEN 'S' THEN 'sequence'
WHEN 's' THEN 'special'
END as RelationType
,pg_catalog.pg_get_userbyid(c.relowner) as RelationOwner
,pg_size_pretty(pg_relation_size(n.nspname ||'.'|| c.relname)) as RelationSize
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n
    ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','s')
AND (n.nspname !~ '^pg_toast' AND n.nspname LIKE 'pg_temp%')
ORDER BY pg_relation_size(n.nspname ||'.'|| c.relname) DESC;

It's a best practice to closely monitor your application and see which transactions create temporary tables. By doing so, you can manage the usage of the available local storage capacity. You can also move to a higher instance class for your Aurora instance so that the instance has more available local storage.
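If your instance runs PostgreSQL 12 or later and your role has the required privileges (for example, membership in the pg_monitor role), you can also list the temporary files that are currently on disk. This is a sketch; whether pg_ls_tmpdir is available to your role depends on your engine version and permissions:

-- List temporary files currently on disk, largest first
SELECT name, pg_size_pretty(size) AS size, modification
FROM pg_ls_tmpdir()
ORDER BY size DESC;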

Local storage used by log files

Excessive logging can also cause your DB instance to run out of local storage. The following are examples of logging parameters that can consume local storage space, either through excessive logging or by retaining the error logs for a long time:

rds.log_retention_period
auto_explain.log_min_duration
log_connections
log_disconnections
log_lock_waits
log_min_duration_statement
log_statement
log_statement_stats

To identify which parameter is causing excessive logging, analyze the PostgreSQL logs to find the largest logs, and then identify which parameter is responsible for the majority of the entries in those logs. You can then modify that parameter. If you repeatedly run a query that fails with an error, then PostgreSQL logs the errors to the PostgreSQL error log by default. Review the logged errors, and then fix the failing query to prevent the logs from using excessive storage. You can also reduce the default value of rds.log_retention (three days) to reclaim space.
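On PostgreSQL 10 or later, one way to find the largest log files from within the database is the pg_ls_logdir() function, assuming that your role has the required privileges (for example, rds_superuser or membership in the pg_monitor role):

-- List PostgreSQL log files, largest first
SELECT name, pg_size_pretty(size) AS size, modification
FROM pg_ls_logdir()
ORDER BY size DESC
LIMIT 10;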

You can also increase the size of your Aurora DB instance so that the DB instance has more available local storage.

