Why is Kibana in red status on my Amazon Elasticsearch Service domain?
Last updated: 2019-11-06
How do I troubleshoot red status in Kibana on my Amazon Elasticsearch Service (Amazon ES) domain?
Kibana reports a green status when all health checks pass on all nodes of the Elasticsearch cluster. If a health check fails, Kibana enters red status. Kibana also goes red when the Elasticsearch cluster is in red status. Here are some common reasons why Kibana turns red:
- A node fails because of a problem with an Amazon Elastic Compute Cloud (Amazon EC2) instance or Amazon Elastic Block Store (Amazon EBS) volume. For more information, see Why did my Elasticsearch node crash?
- One or more nodes don't have enough memory.
- You're upgrading to a newer Elasticsearch version.
- The Kibana and Elasticsearch versions are incompatible.
- You're running a single-node cluster with a heavy load and no dedicated master nodes (or the dedicated master node is unreachable).
Use one or more of the following methods to resolve red status for Kibana on an Amazon ES domain.
Note: If the Elasticsearch cluster shows a circuit breaker exception, increase the circuit breaker limit first, as explained at the end of this article. If you don't have a circuit breaker exception, try the other methods before you increase the circuit breaker limit.
Tune your queries
If you're running complex queries such as heavy aggregations, tune the queries for maximum performance. Sudden spikes in heap memory consumption can be caused by field data or by the per-request data structures used for aggregation queries.
Review the output of the following API calls to identify the cause of the spike. Replace es-endpoint with your Amazon ES domain endpoint.
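The specific calls aren't reproduced here; the standard Elasticsearch statistics APIs below expose the relevant data (circuit breaker trips, field data usage, and JVM heap). The endpoint is a placeholder:

```shell
# Per-node circuit breaker statistics (tripped counts and estimated memory use):
curl -XGET "https://es-endpoint/_nodes/stats/breaker?pretty"

# Field data memory consumed on each node, by field:
curl -XGET "https://es-endpoint/_cat/fielddata?v"

# JVM heap usage per node:
curl -XGET "https://es-endpoint/_nodes/stats/jvm?pretty"
```

A node whose breaker "tripped" count is climbing, or whose field data size approaches the breaker limit, points to the queries driving the spike.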
Use dedicated master nodes
It's a best practice to allocate three dedicated master nodes for each production Amazon ES domain. For more information, see Use Dedicated Master Instances to Improve Cluster Stability.
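As one way to apply this, dedicated master nodes can be added to an existing domain with the AWS CLI; the domain name and instance type below are illustrative:

```shell
# Enable three dedicated master nodes on an existing Amazon ES domain.
# "my-domain" and the instance type are placeholders; choose values for your workload.
aws es update-elasticsearch-domain-config \
  --domain-name my-domain \
  --elasticsearch-cluster-config "DedicatedMasterEnabled=true,DedicatedMasterType=c5.large.elasticsearch,DedicatedMasterCount=3"
```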
Increase memory
You can increase the memory available to the cluster in two ways: add more data nodes, or choose an EC2 instance type with more memory. For more information, see How can I scale up my Amazon ES domain?
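Both options can be applied through the same domain configuration update; the values below are placeholders, not recommendations:

```shell
# Scale the data tier: more nodes and a larger, memory-optimized instance type.
# "my-domain", the instance type, and the count are examples only.
aws es update-elasticsearch-domain-config \
  --domain-name my-domain \
  --elasticsearch-cluster-config "InstanceType=r5.xlarge.elasticsearch,InstanceCount=4"
```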
Check your shard distribution
Be sure that the shards for the index you're ingesting into are distributed evenly across the data nodes. Otherwise, one or more of the data nodes might run out of storage space. Use the following formula to confirm that the shards are distributed evenly:
Number of shards for index = k * (number of data nodes), where k is the number of shards per node
For example, if there are 24 shards in the index, and there are eight data nodes, you should have three shards per node. For more information, see Get Started with Amazon Elasticsearch Service: How Many Shards Do I Need?
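To see the actual distribution, the Elasticsearch cat APIs report where each shard lives and how full each node is (the index name is a placeholder):

```shell
# List every shard of an index and the node it is assigned to:
curl -XGET "https://es-endpoint/_cat/shards/my-index?v"

# Shard count and disk usage per data node:
curl -XGET "https://es-endpoint/_cat/allocation?v"
```

If one node holds noticeably more shards, or more disk, than the others, the index's shard count likely isn't a multiple of the data node count.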
Check your versions
The Kibana and Elasticsearch versions must be compatible. Run the following API call to confirm. Replace es-endpoint with your Amazon ES domain endpoint.
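The exact call isn't shown in this article; on Amazon ES domains, Kibana is served under the /_plugin/kibana path, and its status API reports the Kibana version and health, so a check along these lines should work:

```shell
# Elasticsearch version (version.number in the response):
curl -XGET "https://es-endpoint/"

# Kibana version and overall status, as proxied by Amazon ES:
curl -XGET "https://es-endpoint/_plugin/kibana/api/status"
```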
If the command is successful, the Kibana and Elasticsearch versions are compatible.
Set up CloudWatch alarms
Set up Amazon CloudWatch alarms that notify you when resources are used above a certain threshold. For example, if you set an alarm for JVM memory pressure, you can take action before the pressure reaches 100%. For more information, see Recommended CloudWatch Alarms and Improve the Operational Efficiency of Amazon Elasticsearch Service Domains with Automated Alarms Using Amazon CloudWatch.
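For instance, an alarm on the domain's JVMMemoryPressure metric can be created with the AWS CLI; the domain name, account ID, SNS topic, and 80% threshold below are all example values:

```shell
# Alarm when JVM memory pressure stays at or above 80% for three 5-minute periods.
# The alarm name, dimensions, and SNS topic ARN are placeholders.
aws cloudwatch put-metric-alarm \
  --alarm-name es-jvm-memory-pressure \
  --namespace "AWS/ES" \
  --metric-name JVMMemoryPressure \
  --dimensions Name=DomainName,Value=my-domain Name=ClientId,Value=123456789012 \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 80 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
```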
Increase the circuit breaker limit
You might be able to temporarily solve the problem by increasing the parent or field data circuit breaker limit to prevent the cluster from running out of memory. For more information, see Circuit Breaker in the Elasticsearch documentation.
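Assuming your domain permits this setting (Amazon ES allows only a subset of cluster settings, and the 50% value here is only an example), the field data breaker limit can be raised through the cluster settings API:

```shell
# Raise the field data circuit breaker limit to 50% of the JVM heap.
# The value is illustrative; confirm the setting is supported on your domain.
curl -XPUT "https://es-endpoint/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"indices.breaker.fielddata.limit": "50%"}}'
```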