I set up an Amazon Relational Database Service (Amazon RDS) Oracle DB instance, but its storage consumption keeps growing larger than I expected. Why is this happening, and what can I do about it?

Temporary tablespace can grow significantly to accommodate operations that the database performs. Amazon RDS Oracle databases are created with auto-extend enabled for all tablespaces, so each tablespace keeps growing to accommodate more data until no more space is needed or the allocated storage is exhausted.
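
To check which data files and temp files can auto-extend, and up to what maximum size, you can query the data dictionary views directly. For example (sizes reported in GB):

SELECT FILE_NAME, AUTOEXTENSIBLE, MAXBYTES/1024/1024/1024 AS MAX_GBYTES FROM DBA_DATA_FILES;
SELECT FILE_NAME, AUTOEXTENSIBLE, MAXBYTES/1024/1024/1024 AS MAX_GBYTES FROM DBA_TEMP_FILES;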

To resolve this issue, complete the following steps:

Find the amount of space allocated to tablespaces, logs, and other database files

1.    Determine the amount of space allocated to data files compared to the amount allocated to temporary files:

SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024/1024 AS GBYTES FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME;
SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024/1024 AS GBYTES FROM DBA_TEMP_FILES GROUP BY TABLESPACE_NAME;

2.    Determine the amount of free space in your tablespaces:

SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024/1024 AS GBYTES FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME;

3.    Determine the amount of space allocated for online redo logs:

SELECT SUM(bytes*members) bytes FROM v$log;

4.    Determine the amount of space allocated for control files:

SELECT SUM(block_size * file_size_blks) bytes FROM v$controlfile;

5.    Determine the amount of space used by the audit files (ADUMP directory) and the trace and log files (BDUMP directory):

SELECT * FROM DBA_DIRECTORIES;

SELECT * FROM table(rdsadmin.rds_file_util.listdir('ADUMP'));

SELECT * FROM table(rdsadmin.rds_file_util.listdir('BDUMP'));

SELECT SUM(FILESIZE)/1024/1024 as total_mb FROM table(rdsadmin.rds_file_util.listdir('ADUMP'));

SELECT SUM(FILESIZE)/1024/1024 as total_mb FROM table(rdsadmin.rds_file_util.listdir('BDUMP'));

Shrink tablespaces that are consuming more storage than necessary

1.    Resize tablespaces that are using more storage than expected to a more reasonable value. For example, if the temp tablespace is consuming 100 GB of storage, run the following command to shrink the temp tablespace to 10 GB:

ALTER TABLESPACE temp RESIZE 10g;

Note: This command fails if the tablespace has allocated extents beyond the 10 GB threshold. Retry the command and specify an incrementally larger value for the resize parameter. Or, try running the following command:

ALTER TABLESPACE temp SHRINK SPACE KEEP 10g;
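
To choose a resize value that succeeds on the first try, you can first check how much of the temp tablespace is currently allocated and free. One way, assuming the DBA_TEMP_FREE_SPACE view is available on your database version, is:

SELECT TABLESPACE_NAME, ALLOCATED_SPACE/1024/1024/1024 AS ALLOCATED_GBYTES, FREE_SPACE/1024/1024/1024 AS FREE_GBYTES FROM DBA_TEMP_FREE_SPACE;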

2.    If resizing to incrementally larger values is not an option, kill sessions with currently allocated temp segments. To determine which sessions have temp segments allocated, run the following query:

SELECT * FROM v$sort_usage;
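
V$SORT_USAGE identifies sessions by their session address rather than by SID and serial number. The following sketch joins it to V$SESSION to get those values and then ends a session with the Amazon RDS kill procedure; the SID and serial# values are placeholders you replace with output from the first query, and the procedure assumes an engine version that provides rdsadmin.rdsadmin_util.kill:

SELECT s.sid, s.serial#, s.username, u.tablespace, u.blocks
FROM v$session s, v$sort_usage u
WHERE s.saddr = u.session_addr;

EXEC rdsadmin.rdsadmin_util.kill(sid => 123, serial => 456);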

3.    If killing sessions is not an option, consider swapping out the temp tablespace instead: create a second default temporary tablespace (for example, temp2), drop the original temp tablespace, re-create a new default temp tablespace with the desired parameters, and finally drop temp2, as shown in the sketch that follows.
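
A minimal sketch of that sequence, run as the master user, is shown below. The names temp and temp2 and the 10G size are examples, and switching the default temporary tablespace assumes the rdsadmin.rdsadmin_util.alter_default_temp_tablespace procedure is available on your engine version:

CREATE TEMPORARY TABLESPACE temp2 TEMPFILE SIZE 10G AUTOEXTEND ON;
EXEC rdsadmin.rdsadmin_util.alter_default_temp_tablespace(tablespace_name => 'TEMP2');
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
CREATE TEMPORARY TABLESPACE temp TEMPFILE SIZE 10G AUTOEXTEND ON;
EXEC rdsadmin.rdsadmin_util.alter_default_temp_tablespace(tablespace_name => 'TEMP');
DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

Dropping the original temp tablespace can fail while sessions still hold extents in it, so you might need to wait for those sessions to finish or retry the drop.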

4.    For non-temporary tablespaces, move segments away from the end of the data file to free up that space so that the data file can be shrunk, as in the sketch below. For the undo tablespace, this process can be tedious, because some time can elapse before expired undo is automatically cleared and the data file can be shrunk.
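
For example, a minimal sketch that rebuilds a segment lower in the file and then shrinks the tablespace. The schema, table, index, and tablespace names are placeholders, the resize assumes a bigfile tablespace (the RDS default), and indexes on a moved table must be rebuilt because the move invalidates them:

ALTER TABLE example_user.example_table MOVE;
ALTER INDEX example_user.example_table_pk REBUILD;
ALTER TABLESPACE users RESIZE 20G;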

5.    To view the current archive log and trace file retention settings, run these commands:

SET SERVEROUTPUT ON;
EXEC rdsadmin.rdsadmin_util.show_configuration; 

6.    To get the size of the archive log files on the instance, run the following query. Replace replace_archivelog_retention_hours with the archive log retention period, in hours, shown by the previous command:

SELECT NVL(SUM(BLOCKS * BLOCK_SIZE), 0) bytes FROM V$ARCHIVED_LOG
  WHERE FIRST_TIME >= SYSDATE - (replace_archivelog_retention_hours/24) AND DEST_ID = 1;
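
If the archived redo logs account for a large share of the space, lowering the retention period frees that space sooner. A sketch, assuming your engine version provides rdsadmin.rdsadmin_util.set_configuration and that 24 hours is long enough for your replication and backup needs:

EXEC rdsadmin.rdsadmin_util.set_configuration(name => 'archivelog retention hours', value => '24');
COMMIT;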

7.    Check for unneeded .dmp files that can be removed to free up space. To list .dmp files in DATA_PUMP_DIR that were not cleaned up after an import, run the following query:

SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) ORDER by mtime;

If this query finds .dmp files, you can delete them by running the following command with the appropriate file name:

EXEC utl_file.fremove('DATA_PUMP_DIR','[file name]');


Published: 2015-10-30

Updated: 2018-05-04