How do I troubleshoot high JVM memory pressure on my Amazon Elasticsearch Service cluster?

Last updated: 2020-04-14

My Amazon Elasticsearch Service (Amazon ES) cluster has high JVM memory pressure. What do the different JVM memory pressure levels mean and how do I reduce them?


The JVM memory pressure indicates the percentage of the Java heap in use on each cluster node. The following guidelines indicate what the JVM memory pressure percentages mean:

  • If JVM memory pressure reaches 75%, then Amazon ES triggers the Concurrent Mark Sweep (CMS) garbage collector. Garbage collection is a CPU-intensive process. If JVM memory pressure stays at this percentage for a few minutes, then you could encounter ClusterBlockException, JVM OutOfMemoryError, or other cluster performance issues.
  • If JVM memory pressure exceeds 92% for 30 minutes, then Amazon ES blocks all write operations.
  • If JVM memory pressure reaches 100%, then the Amazon ES JVM is configured to exit and eventually restarts on OutOfMemoryError (OOM).
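The thresholds above can be sketched in code. This is an illustrative helper, not an AWS API; the function names and the 32 GiB heap in the example are assumptions for demonstration:

```python
# Illustrative sketch: classify a node's JVM memory pressure reading
# against the thresholds described above.

def jvm_memory_pressure(heap_used_bytes, heap_max_bytes):
    """JVM memory pressure is the percentage of the Java heap in use."""
    return 100.0 * heap_used_bytes / heap_max_bytes

def pressure_level(pressure_pct):
    # Thresholds taken from the guidelines above.
    if pressure_pct >= 100:
        return "JVM exits and restarts on OutOfMemoryError (OOM)"
    if pressure_pct > 92:
        return "write operations blocked if sustained for 30 minutes"
    if pressure_pct >= 75:
        return "Concurrent Mark Sweep (CMS) garbage collector triggered"
    return "healthy"

# Example: a node with a 32 GiB heap, 26 GiB of it in use.
pct = jvm_memory_pressure(26 * 2**30, 32 * 2**30)
print(round(pct, 1), pressure_level(pct))
# → 81.2 Concurrent Mark Sweep (CMS) garbage collector triggered
```

In practice you would read the heap values from the node's JVM statistics (for example, the JVMMemoryPressure CloudWatch metric) rather than compute them yourself.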

High JVM memory pressure can be caused by the following:

  • Spikes in the number of requests to the cluster.
  • Aggregations, wildcards, and wide time ranges in queries.
  • Unbalanced shard allocations across nodes or too many shards in a cluster.
  • Field data or index mapping explosions.
  • Instance types that are unable to handle incoming loads.
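To spot the unbalanced-shard cause above, it helps to compare per-node shard counts. A hypothetical sketch, assuming you have already collected a node-to-shard-count mapping (in practice, from the _cat/allocation API) and using an illustrative skew threshold:

```python
# Hypothetical sketch: detect unbalanced shard allocation from a
# node -> shard count mapping.

def shard_skew(shards_per_node):
    """Return the ratio of the busiest node's shard count to the average."""
    counts = list(shards_per_node.values())
    avg = sum(counts) / len(counts)
    return max(counts) / avg

nodes = {"node-1": 120, "node-2": 40, "node-3": 40}  # illustrative values
skew = shard_skew(nodes)
if skew > 1.5:  # threshold is an assumption; tune for your cluster
    print(f"unbalanced: hottest node holds {skew:.1f}x the average shard count")
```

A node holding far more shards than the cluster average carries a correspondingly larger share of the heap load, which shows up as higher JVM memory pressure on that node.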

You can resolve high JVM memory pressure issues by reducing traffic to the cluster and addressing the causes listed above. Follow these best practices:

  • Avoid queries that use aggregations, wildcards, or wide time ranges.
  • Rebalance shard allocations across nodes, and reduce the number of shards in the cluster.
  • Avoid field data or index mapping explosions.
  • Scale up to an instance type that can handle the incoming load.
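Because too many shards is a common cause, it helps to sanity-check the cluster's total shard count against available heap. A rough sketch, assuming the commonly cited guideline of at most about 25 shards per GiB of JVM heap on each data node (verify this figure against current AWS guidance):

```python
# Rough sketch, assuming ~25 shards per GiB of JVM heap per data node.
# The guideline value and example cluster size are assumptions.

def max_recommended_shards(heap_gib_per_node, data_node_count, shards_per_gib=25):
    """Upper bound on total shards the cluster should comfortably hold."""
    return int(heap_gib_per_node * shards_per_gib * data_node_count)

# Example: 3 data nodes, each with a 16 GiB JVM heap.
print(max_recommended_shards(16, 3))  # → 1200
```

If the cluster's actual shard count is well above this bound, deleting unused indexes or reindexing into fewer, larger shards can reduce JVM memory pressure.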

For more information about how to troubleshoot high JVM memory pressure, see Why did my Elasticsearch node crash?