How can I troubleshoot local storage issues in Aurora PostgreSQL-Compatible instances?


I am experiencing issues with local storage in my Amazon Aurora PostgreSQL-Compatible Edition DB instances.

Short description

DB instances that are in Amazon Aurora clusters have two types of storage:

  • Storage used for persistent data (shared cluster volume). For more information, see What the cluster volume contains.
  • Local storage for each Aurora instance in the cluster. The amount of local storage is determined by the DB instance class and can be increased only by moving to a larger DB instance class. Aurora PostgreSQL-Compatible uses local storage for storing error logs and temporary files. For more information, see Temporary storage limits for Aurora PostgreSQL.

Resolution

You can monitor the local storage space that's associated with the Aurora DB instance or node by using the Amazon CloudWatch metric for FreeLocalStorage. This metric reports the amount of storage available to each DB instance for temporary tables and logs. For more information, see Monitoring Amazon Aurora metrics with Amazon CloudWatch.

If your Aurora local storage is full, then use these troubleshooting steps depending on the error you receive.

Local storage space is used by temporary tables or files

"ERROR: could not write block XXXXXXXX of temporary file: No space left on device."

This error occurs when temporary storage is exhausted on the DB instance. This can have a number of causes, including operations that:

  • Alter large tables
  • Add indexes on large tables
  • Perform large SELECT queries with complex JOINs, GROUP BY, or ORDER BY clauses (see the example after this list)
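
You can confirm whether a specific query spills to local storage by running it with EXPLAIN (ANALYZE, BUFFERS) and checking the plan for disk-based sorts or hashes. The following is a minimal sketch; the table and column names are placeholders for your own workload, and note that EXPLAIN ANALYZE runs the query.

EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, sum(order_total) AS spend  -- placeholder columns
FROM orders                                    -- placeholder table
GROUP BY customer_id
ORDER BY spend DESC;

-- In the plan output, lines such as
--   Sort Method: external merge  Disk: 204800kB
-- indicate that the operation didn't fit in work_mem and wrote temporary
-- files to local storage.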

Use the following methods to check the size of temporary tables and temporary files:

1.    For temporary files, turn on the log_temp_files parameter on the Aurora PostgreSQL-Compatible DB instance. This parameter logs the use of temporary files that are larger than the specified number of kilobytes. After this parameter is turned on, a log entry is made for each temporary file when the file is deleted. A value of 0 logs all temporary file information. A positive value logs only the files that are larger than or equal to the specified number of kilobytes. The default value is -1, which turns off temporary file logging. Use this parameter to identify the temporary file details, and then correlate these temporary files with the FreeLocalStorage metric.

Note: Turning on the log_temp_files parameter can cause excessive logging on the Aurora PostgreSQL-Compatible DB instance. For this reason, it's a best practice to check the size of the Aurora PostgreSQL-Compatible log files before turning on log_temp_files. If log files are consuming the maximum space for the local storage, then reduce the value of rds.log_retention_period to reclaim space. The default value for rds.log_retention_period is three days (4320 minutes).
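
In Aurora, you change log_temp_files through the DB parameter group that's associated with the instance. After the change is applied, you can confirm the active value from a psql session, for example:

-- Shows the current threshold in kB (0 logs every temporary file,
-- -1 turns temporary file logging off)
SHOW log_temp_files;

SELECT name, setting, unit
FROM pg_settings
WHERE name = 'log_temp_files';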

You can also review temporary files by using the delta of subsequent runs of this command:

maxiops=> SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_file_size FROM pg_stat_database ORDER BY temp_bytes DESC;

Note: The temp_files column counts all temporary files, regardless of why the temporary file was created (for example, by sorting or hashing). The temp_files and temp_bytes columns in the pg_stat_database view collect cumulative statistics. You can reset these counters by using the pg_stat_reset() function or by restarting the DB instance. For more information, see the PostgreSQL documentation for Additional statistics functions.
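
Because these counters are cumulative, one approach is to reset them, run the workload in question, and then rerun the preceding query to measure that workload's contribution. A minimal sketch (pg_stat_reset() requires appropriate privileges and clears all collected statistics for the current database):

-- Reset the cumulative statistics, including temp_files and temp_bytes,
-- for the current database
SELECT pg_stat_reset();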

If you use Aurora PostgreSQL-Compatible 10 or later, then you can monitor temp_bytes and temp_files by using Performance Insights. This also applies to Amazon Relational Database Service (Amazon RDS) for PostgreSQL. Performance Insights provides native counters for your DB engine's internal metrics, in addition to wait events. For more information, see Native counters for Amazon RDS for PostgreSQL.

You can also increase maintenance_work_mem and work_mem to allocate more memory to the processes that are performing the operation. Giving the operation more memory can reduce the amount of temporary disk storage that it needs. For more information about these parameters, see the PostgreSQL documentation for maintenance_work_mem and work_mem. It's a best practice to set the values for maintenance_work_mem and work_mem at the query or session level to avoid running out of memory. For more information, see Amazon Aurora PostgreSQL reference.
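
For example, you can raise these parameters for a single session or transaction instead of changing them globally in the parameter group. The following is a minimal sketch; the object names are placeholders, and the memory values are illustrative, so size them to the operation and to the memory that's available on your instance class.

-- Raise memory for index maintenance in the current session only
SET maintenance_work_mem = '1GB';
CREATE INDEX idx_orders_customer_id ON orders (customer_id);  -- placeholder table and index
RESET maintenance_work_mem;

-- Or scope the change to a single transaction with SET LOCAL
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT customer_id, sum(order_total) FROM orders GROUP BY customer_id;  -- placeholder query
COMMIT;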

2.    For temporary tables, run a query like this:

maxiops=> SELECT
n.nspname as SchemaName
,c.relname as RelationName
,CASE c.relkind
WHEN 'r' THEN 'table'
WHEN 'v' THEN 'view'
WHEN 'i' THEN 'index'
WHEN 'S' THEN 'sequence'
WHEN 's' THEN 'special'
END as RelationType
,pg_catalog.pg_get_userbyid(c.relowner) as RelationOwner
,pg_size_pretty(pg_relation_size(n.nspname ||'.'|| c.relname)) as RelationSize
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n
    ON n.oid = c.relnamespace
WHERE  c.relkind IN ('r','s')
AND  (n.nspname !~ '^pg_toast' and nspname like 'pg_temp%')
ORDER BY pg_relation_size(n.nspname ||'.'|| c.relname) DESC;

It's a best practice to closely monitor your application and see which transactions create temporary tables. By doing so, you can manage the usage of the available local storage capacity. You can also move to a higher instance class for your Aurora instance so that the instance has more available local storage.

Local storage used by log files

Excessive logging can also cause your DB instance to run out of local storage. The following are some examples of logging parameters that can consume local storage space, either through excessive logging or by retaining the error logs for a long time:

  • rds.log_retention_period
  • auto_explain.log_min_duration
  • log_connections
  • log_disconnections
  • log_lock_waits
  • log_min_duration_statement
  • log_statement
  • log_statement_stats

To identify which parameter is causing excessive logging, analyze the PostgreSQL logs to find the largest logs. Then, identify which parameter is responsible for the majority of the entries in those logs. You can then modify the parameter that is causing the excessive logging.
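
On Aurora PostgreSQL-Compatible 10 and later, you can list the engine log files and their sizes directly from SQL by using the pg_ls_logdir() function, and you can check which logging parameters are currently turned on through pg_settings. A minimal sketch follows; note that pg_ls_logdir() typically requires the rds_superuser role or an explicit grant.

-- Largest PostgreSQL log files first
SELECT name, size, modification
FROM pg_ls_logdir()
ORDER BY size DESC;

-- Current values of the logging parameters listed above
SELECT name, setting
FROM pg_settings
WHERE name IN ('auto_explain.log_min_duration', 'log_connections',
               'log_disconnections', 'log_lock_waits',
               'log_min_duration_statement', 'log_statement',
               'log_statement_stats');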

If you repeatedly run a query that fails with an error, then by default PostgreSQL logs the error to the PostgreSQL error log. Review the logged errors, and then fix the failing query to prevent the logs from using excessive storage. You can also reduce the default value of rds.log_retention_period (three days) to reclaim the space that's used by the error logs.
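
If you want to review the logged errors without downloading the log files, then the log_fdw extension that's available on Aurora PostgreSQL-Compatible lets you query a log file as a foreign table. The following is a minimal sketch based on the documented log_fdw helper functions; the log file name is a placeholder, so first list the files to find the one that you need.

CREATE EXTENSION IF NOT EXISTS log_fdw;
CREATE SERVER IF NOT EXISTS log_server FOREIGN DATA WRAPPER log_fdw;

-- List the available log files
SELECT * FROM list_postgres_log_files() ORDER BY file_name DESC;

-- Expose one log file as a foreign table (the file name is a placeholder)
SELECT create_foreign_table_for_log_file('error_log_sample', 'log_server',
                                         'postgresql.log.2024-01-01-00');

-- Review the log entries (each row contains one line from the log file)
SELECT * FROM error_log_sample;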

If you require the additional logging and log files are exhausting the available local storage, then consider moving to a larger DB instance class so that your Aurora DB instance has more local storage available.


Related information

Best practices with Amazon Aurora PostgreSQL
