AWS Database Blog
Best practices for Amazon Aurora MySQL database configuration
March 2025: This post was reviewed and updated for accuracy.
After migrating or launching your new Amazon Aurora MySQL-Compatible Edition cluster in the AWS Cloud, have you asked yourself one or more of the following questions?
- What are the next steps? How can I make it run optimally?
- Is it recommended to modify any existing parameters?
- What parameters should I modify?
If so, this post can provide some guidance on what to do (and what not to do).
In this post, we discuss, clarify, and provide recommendations on the configuration parameters for Aurora MySQL. These database parameters and their values are important when carrying over, tuning, or reconfiguring your newly created or migrated instance in the AWS Cloud. We also discuss which parameters are carried over from an Amazon Relational Database Service (Amazon RDS) for MySQL instance into Aurora clusters. We cover the default values and which values are critical for your instance’s stability and optimal performance.
The most important consideration before making changes is to understand the needs and motivations behind your change. Although most parameter settings can be left at their default values, application workload changes might cause these parameters to require adjustment. Before making any changes, ask yourself the following questions:
- Am I experiencing stability issues, such as restarts or failovers?
- Am I experiencing performance or scalability issues with my workload?
Quick primer on Aurora parameter groups
There are two types of Aurora MySQL parameter groups: DB parameter groups and DB cluster parameter groups. Some parameters affect the configuration for an entire DB cluster, such as binary log format, time zone, and character set defaults. Others limit their scope to a single DB instance.
In this post, we classify them in a different context: which parameters affect your Aurora cluster’s behavior, stability, and functionality, and which ones affect performance when modified.
Keep in mind that both parameter types come with preset defaults, and some parameters allow modification.
For a refresher and more in-depth look into the basics of modifying and working with parameter groups, see Working with DB Parameter Groups and DB Cluster Parameter Groups and Aurora MySQL Parameters.
Before making changes to your production database
Parameter changes can produce unexpected results, including degraded performance and system instability. Follow these best practices before changing any database configuration parameters:
- Make changes in a test environment by creating a clone or restoring a snapshot of your production instance.
- Generate a workload for your test instance that mimics your production workload. This way, your setup resembles your production environment as closely as possible.
- Check system performance on key performance indicators such as CPU utilization, number of database connections, memory utilization, cache hit rates, query throughput, and latency. Do this before the change to obtain baseline numbers and after the change to observe the results (see the baseline sketch after this list).
- Make only one parameter change at a time to avoid ambiguity.
- If you think you need a large change in a capacity-related parameter (memory or CPU), make the change in small steps so that you can learn about the incremental impact of modifying that parameter. For example, you probably don’t want to change your temporary table limit from 16 MB to 10 GB in one step.
- If the change makes a demonstrably positive impact on your system without negative side effects, you can consider keeping the new value (or keep testing, if this change was just a step towards a final setting). Otherwise, revert to the default value or a previously tested value that showed positive impact.
- Document which parameter change had the expected positive impact and which key performance indicators showed improvement. Most importantly, document why you changed the setting in the first place.
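The following is a minimal sketch of capturing an in-database baseline for some of these indicators (CPU and memory utilization still come from Amazon CloudWatch or Enhanced Monitoring); the status counters shown are standard MySQL counters, chosen here as illustrative examples:
SHOW GLOBAL STATUS WHERE Variable_name IN
('Threads_connected', 'Questions', 'Created_tmp_disk_tables',
 'Innodb_buffer_pool_read_requests', 'Innodb_buffer_pool_reads');
-- Run the same statement before and after the parameter change and compare the deltas.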
Default parameter values and their importance
Some DB instance parameters contain variables or formulas, typically based on the instance’s size and memory footprint. It’s best to leave these unchanged, because they adjust automatically whenever an instance scale-up or scale-down operation is performed.
For example, the Aurora DB parameter innodb_buffer_pool_size defaults to the following formula:
{DBInstanceClassMemory*3/4}
DBInstanceClassMemory is a variable that resolves to your instance’s memory size in bytes.
Example: For a db.r8g.xlarge instance with 32 GiB of memory, this value is approximately 24 GiB (25.77 GB).
Suppose that we decide to set this parameter to a fixed value, say 18,000,000,000 bytes, and later run a scale-down operation to db.r8g.large, which has half as much memory (16 GiB). In this case, the database engine will likely run into an out-of-memory condition, and the instance won’t be able to start properly.
For a quick glance at which parameters are automatically calculated from system variables, you can search for those parameters inside your parameter group definition. To do this, search for the curly bracket character ({), as shown in the following figure.
If you want to query the actual value in use by the instance, there are two ways to do this from the command line: a SHOW GLOBAL VARIABLES statement or a SELECT statement, as shown in the following example:
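A minimal example, using innodb_buffer_pool_size purely for illustration:
-- Option 1: pattern match against the global variables list
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
-- Option 2: select the system variable directly
SELECT @@global.innodb_buffer_pool_size;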
Symptoms and diagnosis for incorrectly set parameter values
When a memory-related parameter is misconfigured, the misconfiguration can manifest itself as an out-of-memory condition that’s recorded in your MySQL error log. In this case, the instance enters an incompatible-parameters state and generates event logs similar to the following:
[-] Database instance put into incompatible-parameters. Your Aurora MySql memory setting is inappropriate for the usages
Refer to the RDS logs, events, and streams documentation to learn more about how to access your instance’s event logs.
Classifying parameters
For the scope of this post, we can classify Aurora MySQL parameters into two main groups:
- Parameters that control the database’s behavior and functionality but have no impact on resource utilization and instance stability.
- Parameters that might affect performance by managing how resources, such as caching and internal memory-based buffers, are allocated in the instance.
Let’s look at some of these, their default values, and how they affect your instance’s behavior or performance when modified. The following table describes the parameter names as you find them in the parameter group, the Aurora and MySQL default values, and a summary of the functionality affected by modifying this parameter.
Parameter name | Impacts | Aurora 3.x default value | MySQL 8 default value | Parameter description |
autocommit | Functionality | 1 (ON) | 1 (ON) | When set to 1, the engine automatically commits transactions when completed. If set to 0, you must use COMMIT to accept a transaction or ROLLBACK to cancel it. |
max_connections | Functionality | {Variable} | {Variable} | Limits the maximum number of concurrent database connections. |
max_allowed_packet | Functionality | 67108864 (bytes) | 67108864 (bytes) | Maximum allowed packet size that the server can receive. |
group_concat_max_len | Functionality | 1024 (bytes) | 1024 (bytes) | Maximum result length returned by the server for the GROUP_CONCAT() function. |
innodb_ft_result_cache_limit | Functionality | 2000000000 (bytes) | 2000000000 (bytes) | Limits cache size for InnoDB full-text search query results. |
max_heap_table_size | Functionality | 16777216 (bytes) | 16777216 (bytes) | Limits size of new user-defined MEMORY tables. Doesn’t apply limit to existing tables. |
binlog_cache_size | Performance | 32768 (bytes) | 32768 (bytes) | Controls the binary log cache size; increasing it improves performance on systems with large transactions. Limit this value in environments with a large number of DB connections. |
bulk_insert_buffer_size | Performance | N/A | 0 | Controls MyISAM cache size to speed up bulk insert operations. Doesn’t apply to Aurora MySQL. |
innodb_buffer_pool_size | Performance | {Variable} 3/4 of instance memory | 134217728 (128 MB) | Controls memory size of the InnoDB buffer pool where table and index data are cached. |
innodb_sort_buffer_size | Performance | 1048576 (bytes) | 1048576 (bytes) | Defines how much data is read into memory for sorting operations before writing to disk. |
join_buffer_size | Performance | 262144 (bytes) | 262144 (bytes) | Minimum size of the buffer used for joins that can’t use indexes, plain index scans, and range index scans. |
key_buffer_size | Performance | 8388608 (bytes) | 8388608 (bytes) | Key cache for MyISAM tables. While Aurora MySQL doesn’t allow usage of MyISAM tables to store persistent data, they are used to store internal temporary tables. |
myisam_sort_buffer_size | Performance | 8388608 (bytes) | 8388608 (bytes) | MyISAM index buffer. Doesn’t apply to Aurora. |
read_buffer_size | Performance | 262144 (bytes) | 262144 (bytes) | Controls memory allocation for multiple types of buffers, for example when sorting rows for an ORDER BY clause, partition inserts, and nested queries. |
read_rnd_buffer_size | Performance | 524288 (bytes) | 262144 (bytes) | Improves performance on systems with multi-range read queries. |
table_open_cache | Performance | LEAST({DBInstanceClassMemory/1179121}, 6000) | 4000 | Limits number of open tables in memory for all threads. |
table_definition_cache | Performance | LEAST({DBInstanceClassMemory/393040}, 20000) | -1 (signifies autosizing; do not assign this literal value) | Limits number of table definitions stored in the cache without using file descriptors. |
internal_tmp_mem_storage_engine | Functionality | TempTable | TempTable | Controls the storage engine for temporary tables on the writer. Readers can only use TempTable. |
temptable_max_ram | Performance | 1 GiB on DB instances with 16 or more GiB of memory, and 16 MB on DB instances with less than 16 GiB of memory | 1073741824 (bytes) | Size of the common memory pool of the TempTable storage engine. |
temptable_max_mmap | Performance | 1073741824 (bytes) | 0 | Defines the maximum amount of memory (in bytes) that the TempTable storage engine is permitted to allocate from memory-mapped temporary files before it starts storing data to InnoDB internal temporary tables on disk. Note: On Aurora Replicas, this value is always set to |
aurora_tmptable_enable_per_table_limit | Functionality | OFF | N/A | When this variable is enabled, tmp_table_size defines the maximum size of an individual in-memory internal temporary table created by the TempTable storage engine. |
tmp_table_size | Performance | 16777216 (bytes) | 16777216 (bytes) | Limits the size of the engine’s internal, in-memory temporary tables when using the MEMORY storage engine or when aurora_tmptable_enable_per_table_limit is set to ON. |
innodb_change_buffer_max_size | Performance | N/A | 25 | Maximum size for the change buffer as a percentage of the total size of the buffer pool. Aurora does not use the change buffer, and it is disabled by default in MySQL 8.4. |
binlog_format | Functionality | OFF | ROW | Sets the format of the binary log. In Aurora, setting a format enables binary logs. |
binlog_group_commit_sync_delay | Functionality | 0 | 0 | Controls how many microseconds the binary log commit waits before synchronizing the binary log file to disk. |
gtid_mode | Functionality | OFF_PERMISSIVE | OFF | Controls whether global transaction identifier (GTID)-based logging is enabled and what type of transactions the logs can contain. |
binlog_backup | Functionality | 1 | N/A | This setting controls whether binary log data is backed up. To use aurora_enhanced_binlog, this must be 0; to disable aurora_enhanced_binlog, this must be 1. |
binlog_replication_globaldb | Functionality | 1 | N/A | This setting controls whether binary log data is replicated to global database secondary clusters. To use aurora_enhanced_binlog, this must be 0. |
aurora_enhanced_binlog | Functionality | 0 | N/A | This parameter can be used to enable and disable Aurora enhanced binlog. |
performance_schema | Functionality | 0 | ON | Enables or disables the Performance Schema. Setting to 0 allows Performance Insights to manage the Performance Schema. |
query_cache_* | Performance | Removed | Removed | All query cache options were removed in community MySQL 8+. |
Recommendations and impact
Here’s a brief explanation of how each of these critical parameters can affect your database, along with some use cases to demonstrate how to tune them:
autocommit
Recommended setting: Use the default value (1 or ON) to help ensure that each SQL statement is automatically committed as you run it, unless it’s part of a transaction opened by the user.
Impact: A value of OFF might encourage incorrect usage patterns such as transactions that are held open longer than required, not closed, or not committed. This can affect the performance and stability of the database. This variable can be set at the session level if you need to test the performance or make changes without committing them.
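If you only need to verify behavior before committing, a session-scoped sketch such as the following (table and column names are hypothetical) avoids changing the parameter group at all:
SET SESSION autocommit = 0;                          -- affects this session only
UPDATE orders SET status = 'shipped' WHERE id = 42;  -- hypothetical table and columns
ROLLBACK;                                            -- discard the change (or COMMIT to keep it)
SET SESSION autocommit = 1;                          -- restore the default behavior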
max_connections
Recommended setting: Default (variable value). When using a custom value, configure only as many connections as the application actively uses to perform work.
Impact: Configuring a too-high connection limit can contribute to higher memory use even if connections aren’t actively used. It can also cause high database connection spikes that affect the database’s performance and stability.
This parameter is automatically populated based on your instance’s memory allocation and size using the following formula, so use the default value first:
max_connections = GREATEST({log(DBInstanceClassMemory/805306368)*45},{log(DBInstanceClassMemory/8187281408)*1000})
For example, for an Aurora MySQL db.r8g.large instance with 16 GiB of memory (17,179,869,184 bytes), this variable is set to 1,000:
max_connections = GREATEST({log(17179869184/805306368)*45},{log(17179869184/8187281408)*1000})
max_connections = GREATEST(195.56,1000) = 1000
If you’re encountering connection errors and seeing excessive Too many connections messages in your error logs, you can set this parameter to a fixed value instead of the variable setting.
If you’re considering setting max_connections to a fixed value because your application requires a higher number of connections, consider using a connection pool or proxy between the application and the database instead. This also helps if connections can’t be predicted or controlled reliably.
When you manually configure a value for this parameter that exceeds the recommended number of connections, Amazon CloudWatch metrics for DB connections show a red line where the threshold is exceeded. This is the formula that CloudWatch uses for the threshold:
{DBInstanceClassMemory/12582880}
For example, for a db.r8g.large instance with a memory size of 16 GiB (16 x 1024 x 1024 x 1024 = 17,179,869,184 bytes), the warning threshold is approximately 1,365 connections. You can still use the maximum number of configured connections, provided that there are enough resources on the instance.
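As a quick in-database check (a diagnostic sketch that complements, rather than replaces, CloudWatch monitoring), compare the configured limit with the current and peak connection counts:
SELECT @@global.max_connections;                   -- the configured limit
SHOW GLOBAL STATUS WHERE Variable_name IN
('Threads_connected', 'Max_used_connections');     -- current connections and high-water mark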
max_allowed_packet
Recommended setting: Default (67,108,864 in Aurora 3.x). Use a custom value only if required by your database workload. Tune this parameter when you’re dealing with queries returning large elements, such as long strings or BLOBs. Make sure that this value is the same on any cross-region read replicas that use logical (binary log) replication from the primary cluster to avoid replication errors.
Impact: Setting a large value here doesn’t affect the initial size of the message buffers. Instead, it allows them to grow up to the defined size if required by your queries. A large parameter value coupled with a large number of concurrent eligible queries can increase the risk of out-of-memory conditions.
The following example error is shown when setting this parameter too small:
ERROR 1153 (08S01) at line 3: Got a packet bigger than 'max_allowed_packet' bytes
group_concat_max_len
Recommended setting: Default (1,024 bytes). Use a custom value only if your workload requires it. You need to tune this parameter only when you want to alter the output of the GROUP_CONCAT() function and allow the engine to return longer column values. Use this value in parallel with max_allowed_packet, because max_allowed_packet determines the maximum size of a query response.
Impact: Some of the symptoms of setting this parameter too high are high memory use and out-of-memory conditions. Setting it too low causes queries to fail.
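If one specific report genuinely needs longer concatenated results, a session-scoped override (sketched here with hypothetical table and column names) is usually safer than raising the cluster-wide default:
SET SESSION group_concat_max_len = 1048576;        -- 1 MiB, for this session only
SELECT customer_id,
       GROUP_CONCAT(order_id ORDER BY order_id) AS order_list   -- hypothetical columns
FROM orders                                        -- hypothetical table
GROUP BY customer_id;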
innodb_ft_result_cache_limit
Recommended setting: Default (2,000,000,000 bytes). Use a custom value depending on your workload.
Impact: Because the value is already close to 1.9 GiB, increasing it beyond its default can result in out-of-memory conditions.
max_heap_table_size
Recommended setting: Default (16,777,216 bytes). This limits the maximum size of user-defined tables created in memory. Altering this value affects only newly created tables, not existing ones.
Impact: Setting this parameter too high causes high memory utilization or out-of-memory conditions if in-memory tables grow.
binlog_cache_size
Recommended setting: Default (32,768 bytes). This parameter controls the amount of memory that the binary log cache can use. By increasing it, you can improve performance on systems with large transactions by using buffers to avoid excessive disk writes. This cache is allocated on a per connection basis.
Impact: Limit this value in environments with a large number of DB connections to avoid causing an out-of-memory condition.
bulk_insert_buffer_size
Recommended setting: Leave as is, because it doesn’t apply to Aurora MySQL.
innodb_buffer_pool_size
Recommended setting: Default (variable value), because it’s preconfigured in Aurora to 75% of instance memory size. You can see buffer pool use in the output of SHOW ENGINE INNODB STATUS.
Impact: A larger buffer pool improves overall performance by allowing less disk I/O when accessing the same table data repeatedly. The actual amount of allocated memory might be slightly higher than the actual configured value because of InnoDB engine overhead.
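Before changing anything, you can gauge buffer pool effectiveness with a quick check; a high ratio of read requests to reads from storage indicates a healthy cache:
SHOW GLOBAL STATUS WHERE Variable_name IN
('Innodb_buffer_pool_read_requests', 'Innodb_buffer_pool_reads');
-- Or inspect the BUFFER POOL AND MEMORY section of the engine status:
SHOW ENGINE INNODB STATUS\G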
innodb_sort_buffer_size
Recommended setting: Default (1,048,576 bytes).
Impact: Higher-than-default values can increase the overall memory pressure on systems with a large number of concurrent queries.
join_buffer_size
Recommended setting: Default (262,144 bytes). This value is preallocated for various types of operations (such as joins) and a single query can allocate multiple instances of this buffer. If you want to improve the performance of your joins, we recommend that you add indexes to such tables.
Impact: Changing this parameter can cause severe memory pressure in environments with a large number of concurrent queries. Increasing this value doesn’t provide faster JOIN query performance, even when you add indexes.
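Rather than enlarging the join buffer, look for joins that can’t use an index and fix them with indexes. A sketch with hypothetical tables and columns:
-- "Using join buffer" in the Extra column indicates a join without a usable index
EXPLAIN
SELECT o.order_id, c.customer_name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;   -- hypothetical tables and columns
-- Adding the missing index is usually the real fix
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);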
key_buffer_size
Recommended setting: Leave as the default value (8,388,608 bytes), because it isn’t relevant to Aurora and affects only MyISAM table performance.
Impact: No impact on Aurora performance.
myisam_sort_buffer_size
Recommended setting: Leave as the default value (8,388,608 bytes). It’s not applicable for Aurora because it has no effect on InnoDB.
Impact: No impact on the performance of Aurora.
read_buffer_size
Recommended setting: Default (262,144 bytes).
Impact: Large values cause higher overall memory pressure and provoke out-of-memory issues. Don’t increase the setting unless you can prove that the higher value benefits performance without compromising stability.
read_rnd_buffer_size
Recommended setting: Default (524,288 bytes). There’s no need to increase the setting for Aurora because of the performance characteristics of the underlying storage cluster.
Impact: Large values might cause out-of-memory issues.
table_open_cache
Recommended setting: Leave as is, unless your workload requires accessing a very large number of tables simultaneously. The table cache is a major memory consumer, and the default value in Aurora is significantly higher than the MySQL defaults. This parameter is automatically adjusted based on instance size.
Impact: A database with a large number of tables (in the hundreds of thousands) requires a larger setting, because not all tables fit in memory. Setting this value too high might contribute to out-of-memory conditions. This setting also indirectly contributes to Performance Schema memory usage, if the Performance Schema is enabled.
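Before raising this value, confirm that the table cache is actually under pressure; an Opened_tables counter that keeps climbing under a steady workload suggests the cache is too small. A quick diagnostic sketch:
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS WHERE Variable_name IN
('Open_tables', 'Opened_tables', 'Table_open_cache_misses');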
table_definition_cache
Recommended setting: Default. This setting is pretuned in Aurora to be significantly larger than in MySQL, and it’s automatically adjusted based on instance size and class. If your workload requires a very large number of tables to be opened concurrently, increasing this value might speed up table-open operations. This parameter is used in conjunction with table_open_cache.
Impact: This setting also indirectly contributes to Performance Schema memory usage if the Performance Schema is enabled. Watch out for higher-than-default settings because they might provoke out-of-memory issues.
internal_tmp_mem_storage_engine
Recommended setting: Default. This setting controls the behavior of the temporary table storage engine. The TempTable storage engine provides efficient storage for VARCHAR and VARBINARY columns and other binary large object types.
Impact: Using the TempTable engine can generate the following error on reader instances when the global temp table size is reached, regardless of the tmp_table_size parameter.
ERROR 1114 (HY000): The table '/rdsdbdata/tmp/#sqlxx_xxx' is full
temptable_max_ram
Recommended setting: Default, which is 1 GiB on DB instances with 16 or more GiB of memory, and 16 MB on DB instances with less than 16 GiB of memory. This can prevent out-of-memory errors.
Impact: Increasing the amount of RAM available for temp tables will increase the size that can be stored in memory instead of on disk on the instance. This can cause less RAM to be available for the buffer pool and system.
temptable_max_mmap
Recommended setting: Start with 80% of the local storage size for the DB instance class that you’re using. This allows you to have memory to avoid an out-of-space error on your temporary tables while still leaving memory on the instance for any on-disk usage on the instance.
Impact: This allows the temporary tables to spill over into the local storage on the disk for your instances. On readers, this sets the total maximum size available for the temporary tables.
aurora_tmptable_enable_per_table_limit
Recommended setting: OFF. This lets temptable_max_ram and temptable_max_mmap control temporary tables without adding a third parameter.
Impact: When set to ON, the tmp_table_size parameter defines the maximum size of a temporary table created by the TempTable storage engine. When enabled, this can also cause temp-table-full errors on readers even if they don’t reach the global temp table limits, because readers cannot write InnoDB tables to the Aurora storage layer.
tmp_table_size
Recommended setting: Default (16,777,216 bytes). Together with max_heap_table_size, this parameter limits the size for in-memory tables used for query processing when using the MEMORY storage engine or when aurora_tmptable_enable_per_table_limit is enabled. When the temporary table size limit is exceeded, tables are then swapped to disk.
Impact: Very large values (hundreds of megabytes or more) are notorious for causing memory issues and out-of-memory errors. If this parameter is improperly tuned while using the TempTable storage engine with aurora_tmptable_enable_per_table_limit enabled, it can cause temp-table-full errors.
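Before raising either limit, confirm that in-memory temporary tables are actually spilling to disk; a quick diagnostic sketch:
SHOW GLOBAL STATUS WHERE Variable_name IN
('Created_tmp_tables', 'Created_tmp_disk_tables');
-- A high share of disk temporary tables points at the queries (or limits) to review first.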
binlog_format
Recommended setting: OFF if binary logging isn’t needed; ROW or MIXED if it is needed.
Impact: This controls binary logging on your Aurora cluster. Aurora doesn’t need binary logging for reader instances, because these use shared storage rather than logical replication. Enabling binary logging impacts write transactions, because each commit must also be written to the binary log; this requires a lock on the binlog, and only one transaction can hold the lock at a time. STATEMENT-based binary logging can cause issues if any non-deterministic queries are used.
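To confirm whether binary logging is active on your cluster and which format is in effect, a quick check:
SELECT @@global.log_bin, @@global.binlog_format;
-- If log_bin returns 1, you can also list the binary log files:
SHOW BINARY LOGS;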
binlog_group_commit_sync_delay
Recommended setting: Default. If binary log replication is used and replica lag is increasing, consider increasing this value so that more transactions are grouped into each binary log sync and can be applied in parallel on the replica.
Impact: Increasing this value can increase the number of transactions committing in parallel on any server that has a logical (binary log) replica, and therefore can increase parallel execution on the replica. It can also increase commit latency on your database.
gtid_mode
Recommended setting: Default if not using logical (binary log) replicas. ON or ON_PERMISSIVE if using binary log replicas, as global transaction identifier (GTID) replication is more reliable.
binlog_backup
Recommended setting: Default.
Impact: This controls whether the binary logs are backed up. This can only be disabled when using Aurora enhanced binary logs, and setting it to 0 is required as part of enabling the Aurora enhanced binlog.
binlog_replication_globaldb
Recommended setting: Default.
Impact: This setting controls whether binary log data is replicated to global database secondary clusters. It can only be disabled when using Aurora enhanced binary logs, and setting it to 0 is required as part of enabling the Aurora enhanced binlog.
aurora_enhanced_binlog
Recommended setting: Default.
Impact: This controls Aurora enhanced binary logs. Aurora enhanced binary logs have a lower performance overhead when compared to standard binary logs, but have limitations. Enhanced binlogs aren’t available in a clone and aren’t stored with the Aurora snapshot. They are also not replicated to a global cluster, so historical binary log data will not be available after a failover. We recommend testing this prior to enabling it in your production environment.
performance_schema
Recommended setting: Default, and enable Performance Insights, which enables the Performance Schema automatically. If you’re not enabling Performance Insights, set this parameter to 1.
Impact: Leaving this at the default allows Performance Insights to automatically control the Performance Schema. In MySQL 8.0+, the Performance Schema tables are always created because the system uses them, but they aren’t accessible to users unless performance_schema is enabled. With the default value and Performance Insights enabled, the Performance Schema is enabled. If you choose not to use Performance Insights, the recommendation is to enable this parameter to allow user access to the Performance Schema tables.
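With the Performance Schema enabled (by Performance Insights or by setting the parameter to 1), you can query its tables directly; as an illustrative sketch, the following lists the statement digests with the highest total latency:
SELECT schema_name,
       digest_text,
       count_star,
       ROUND(sum_timer_wait/1e12, 2) AS total_latency_seconds   -- timers are in picoseconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;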
query_cache_*
This was removed in community MySQL 8+.
Impact: The query cache was removed in MySQL 8+ because the MySQL development team determined it was no longer needed given the availability of high-speed storage. If caching is needed, consider a separate tool such as Amazon ElastiCache.
Conclusion and key points to take away
Many parameters have already been optimized when a new Aurora MySQL instance is deployed, and they serve as a good baseline before performing any parameter changes. The exact combination of values for each parameter depends on individual systems, application workload, and desired throughput characteristics. Moreover, on database systems with a high rate of change, rapid growth, high data ingestion rates, and dynamic workloads, these parameters require constant monitoring and evaluation. We recommend that you do this monitoring and evaluation every few months, or even every few weeks, as you adapt your database to your application and business needs.
To perform successful parameter tuning that translates into measurable performance increases, it’s a good practice to experiment, establish a baseline, and compare your results after the changes are made. We recommend that you do this before changes are committed into a live, production system.
For further information on a specific parameter, contact AWS Support.
About the authors
Fabio Higa is a Database Specialist Technical Account Manager with AWS focusing on Amazon RDS and Aurora MySQL engines. He has worked with enterprise customers around the globe for more than 3 years. In his spare time, he likes working on his cars and taking them to the local racetracks.
Nirupam Datta is a Senior Cloud Support DBE at AWS. He has been with AWS for over 4 years. With over 12 years of experience in database engineering and infra-architecture, Nirupam is also a subject matter expert in the Amazon RDS core systems and Amazon RDS for SQL Server. He provides technical assistance to customers, guiding them to migrate, optimize, and navigate their journey in the AWS Cloud.
Pat Doherty is a Cloud Support Engineer at AWS supporting the database team. He has been with AWS for 2 years. He has 10 years in the IT industry, previously supporting major pharmaceutical companies, especially during the height of the COVID-19 pandemic. He currently provides technical support on MySQL, Amazon Aurora, MariaDB, and SQL Server databases, in addition to assistance with the AWS Database Migration service.
Ryan Moore is a Cloud Support Engineer II who has worked within the AWS database team for 2 years. Ryan is an Aurora MySQL and RDS MySQL subject matter expert who specializes in enabling customers to build performant, scalable, and secure architectures within the AWS Cloud.