Why is Kibana in red status on my Amazon Elasticsearch Service domain?
Last updated: 2020-08-11
Kibana keeps showing red status on my Amazon Elasticsearch Service (Amazon ES) domain. Why is this happening and how do I troubleshoot this?
Kibana shows green status when all health checks pass on every node of the Elasticsearch cluster. If a health check fails, Kibana enters red status. Kibana also shows red status when Amazon ES is in red cluster status. Kibana's status can turn red for the following reasons:
- Node failure caused by an issue with an Amazon Elastic Compute Cloud (Amazon EC2) instance or Amazon Elastic Block Store (Amazon EBS) volume. For more information about node crashes, see Why did my Amazon Elasticsearch Service node crash?
- Insufficient memory for your nodes.
- Upgrading Elasticsearch to a newer version.
- Incompatibility between Kibana and Amazon ES versions.
- A single-node cluster is running with a heavy load and no dedicated leader nodes. The dedicated leader node could also be unreachable. For more information about how Amazon ES increases cluster stability, see Dedicated leader nodes.
Use one or more of the following methods to resolve Kibana red status on an Amazon ES domain.
Note: If the Elasticsearch cluster shows a circuit breaker exception, increase the circuit breaker limit first. If there is no circuit breaker exception, try the other methods before you increase the limit.
Tune your queries
If you're running complex queries (such as heavy aggregations), tune them for better performance. Sudden spikes in heap memory consumption can be caused by field data or by the data structures used for aggregation queries.
Review the following API calls to identify the cause of the spike, replacing es-endpoint with your Amazon ES domain endpoint:
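The exact calls from the original article aren't preserved here, but the following standard Elasticsearch APIs (with es-endpoint standing in for your domain endpoint) surface the usual suspects behind heap spikes:

```shell
# Field data memory usage per node and field (a common cause of heap spikes)
curl -XGET "https://es-endpoint/_cat/fielddata?v"

# Per-node JVM heap and circuit breaker statistics
curl -XGET "https://es-endpoint/_nodes/stats/jvm,breaker?pretty"

# Currently running search tasks, to spot long-running or expensive queries
curl -XGET "https://es-endpoint/_tasks?detailed=true&actions=*search*"
```

High field data usage or breaker counters that keep climbing point to aggregation-heavy queries as the cause of the spike.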
Use dedicated leader nodes
It's a best practice to allocate three dedicated leader nodes for each Amazon ES domain. For more information about improving cluster stability, see Get started with Amazon Elasticsearch Service: Use dedicated leader instances to improve cluster stability.
Scale up your domain
To scale up your Amazon ES domain, increase the number of nodes or choose an Amazon EC2 instance type with more memory. For more information about scaling, see How can I scale up my Amazon Elasticsearch Service domain?
Check your shard distribution
Check the index that you're ingesting into to confirm that its shards are evenly distributed across all data nodes. If the shards are unevenly distributed, one or more data nodes could run out of storage space.
Use the following formula to confirm that the shards are distributed evenly:
Total number of shards = shards per node * number of data nodes
For example, if there are 24 shards in the index, and there are eight data nodes, you should have three shards per node. For more information about the number of shards needed, see Get started with Amazon Elasticsearch Service: How many shards do I need?
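To check the distribution in practice, the cat APIs (standard Elasticsearch APIs; es-endpoint is your domain endpoint and my-index is a placeholder index name) show shard counts and disk usage per data node:

```shell
# Shard count and disk usage per data node; uneven counts indicate skew
curl -XGET "https://es-endpoint/_cat/allocation?v"

# Per-shard view for a specific index, including which node holds each shard
curl -XGET "https://es-endpoint/_cat/shards/my-index?v"
```

In the example above, each of the eight nodes should report roughly three of the index's 24 shards.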
Check your versions
Important: Your Kibana and Amazon ES versions must be compatible.
Run the following API call to confirm that your versions are compatible, replacing es-endpoint with your Amazon ES domain endpoint:
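A call like the following (es-endpoint is your domain endpoint; the exact call in the original article may differ) queries Kibana's status API through the domain's _plugin/kibana proxy path and the cluster's root endpoint, so you can compare the two versions:

```shell
# Kibana health and version, served through the Amazon ES domain
curl -XGET "https://es-endpoint/_plugin/kibana/api/status"

# Elasticsearch version for the cluster, to compare against Kibana's
curl -XGET "https://es-endpoint/"
```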
Note: If the command fails, it could indicate a compatibility issue between your Kibana and Elasticsearch versions. For more information about compatible Kibana and Elasticsearch versions, see Set up Kibana on the Elasticsearch website.
Set up CloudWatch alarms
Set up Amazon CloudWatch alarms that notify you when resource usage exceeds a certain threshold. For example, if you set an alarm for JVM memory pressure, take action before the pressure reaches 100%. For more information about CloudWatch alarms, see Recommended CloudWatch alarms and Improve the operational efficiency of Amazon Elasticsearch Service domains with automated alarms using Amazon CloudWatch.
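As an illustrative sketch, a JVM memory pressure alarm can be created with the AWS CLI. The domain name, account ID, threshold, and SNS topic ARN below are assumptions to adapt to your environment:

```shell
# Alarm when maximum JVM memory pressure stays at or above 80% for 15 minutes.
# my-domain, the account ID, and the SNS topic ARN are placeholders.
aws cloudwatch put-metric-alarm \
  --alarm-name es-jvm-memory-pressure \
  --namespace AWS/ES \
  --metric-name JVMMemoryPressure \
  --dimensions Name=DomainName,Value=my-domain Name=ClientId,Value=123456789012 \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 80 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic
```

An 80% threshold leaves headroom to act before memory pressure reaches 100% and health checks start to fail.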
Increase the circuit breaker limit
To prevent the cluster from running out of memory, try increasing the parent or field data circuit breaker limit. For more information about field data circuit breaker limits, see Circuit breaker on the Elasticsearch website.
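As a hedged example, the field data circuit breaker limit is a dynamic cluster setting in Elasticsearch; the 45% value below is an assumption, chosen slightly above the 40% default, and whether this setting can be modified may depend on your Amazon ES version:

```shell
# Raise the field data circuit breaker limit above the 40% default
curl -XPUT "https://es-endpoint/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"indices.breaker.fielddata.limit": "45%"}}'
```

Raising the limit treats the symptom; if breaker trips recur, also tune the queries or scale up the domain as described above.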