AWS Open Source Blog

Deploying and scaling Apache Solr on Kubernetes

Apache Solr is an open source enterprise search platform built on Apache Lucene. Solr has been powering large-scale web and enterprise applications across industries such as retail, financial services, healthcare, and more. Its features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, and rich document handling.

Apache Solr’s distributed deployment comes with the challenges of setting up a highly available and scalable cluster. The operational overhead of maintaining a stable and elastic Solr cluster increases along with the volume of data and number of queries. Deploying Solr on Kubernetes helps remove some of these complexities so you can run a scalable, stable, and distributed Solr installation with low maintenance.

In this blog post, we explain how to deploy a highly available, scalable, and fault-tolerant enterprise-grade search platform with Apache Solr using Amazon Elastic Kubernetes Service (Amazon EKS). Amazon EKS is a managed service that can be used to run Kubernetes (K8s) on Amazon Web Services (AWS) without needing to install, operate, and maintain your own Kubernetes control plane or nodes. We also demonstrate how Prometheus is used for monitoring, observability, alerting, and auto-scaling the deployment.

Architecture overview

The deployment is divided into three logical layers. The first layer is the SolrCloud layer, followed by the Apache ZooKeeper ensemble layer, and finally a control applications layer running Prometheus.


Figure 1: Apache Solr on Elastic Kubernetes Service—high-level architecture

  1. SolrCloud layer: A group of K8s pods running the Solr server processes.
  2. ZooKeeper ensemble layer: Pods running Apache ZooKeeper in an ensemble with an odd number of pods. The ensemble provides coordination and distributed synchronization for the SolrCloud deployment.
  3. Control applications layer: Pods running Prometheus for monitoring and scaling the SolrCloud deployment.

Each layer is deployed in its own Amazon EKS managed node group, which acts as the physical layer providing storage and compute. For the ZooKeeper ensemble to achieve quorum, each pod in the ensemble must be able to identify itself and locate every other pod within the managed node group. The SolrCloud managed node group must also be aware of the ZooKeeper pods via connection strings so that it can use them to coordinate search tasks. This is achieved by deploying the pods as stateful containers using Kubernetes StatefulSets, which ensure a stable and unique network identifier for each pod. StatefulSets also provide guarantees about the ordering and uniqueness of pods, making them suitable for ZooKeeper and SolrCloud, both of which need network identifiers that are unique and stable.
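
As a concrete illustration, a StatefulSet is typically paired with a headless Service so that each pod receives a stable DNS name. The following is a minimal sketch for the ZooKeeper ensemble; the Service name (zk-headless) is an assumption and may differ from the manifests in the GitHub repository, while the app: zk label matches the one used later in the pod disruption budget.

apiVersion: v1
kind: Service
metadata:
  name: zk-headless   # assumed name; see the repository manifests for the actual value
  labels:
    app: zk
spec:
  clusterIP: None     # headless Service: no load-balanced virtual IP, only per-pod DNS records
  selector:
    app: zk
  ports:
    - name: server
      port: 2888      # follower-to-leader connections
    - name: leader-election
      port: 3888      # leader election traffic

With serviceName: zk-headless set on the ZooKeeper StatefulSet, each pod is reachable at a predictable address such as zk-0.zk-headless.<namespace>.svc.cluster.local, which is what the ensemble configuration and the SolrCloud ZooKeeper connection string rely on.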

The config files and parameters discussed in the subsequent sections can be found in our GitHub repository.

SolrCloud layer

When deployed as a distributed cluster, Apache Solr is known as SolrCloud. Each server in the cluster runs a single Java Virtual Machine (JVM) instance hosting a Solr server process.

A Solr cluster stores one or more logical groups of indexed documents, known as collections. A collection is a complete logical index with its own schema definition and consists of documents that can be stored and indexed for search. A collection is made up of one or more shards, which provide a way to split the collection into logical slices.

Finally, a replica is the physical representation of a shard that actually stores the data. A shard can have one or more replicas, and the number of replicas directly affects search query performance. The cluster indexes documents within the collections and runs distributed search tasks without needing a dedicated leader node to allocate search tasks or to keep the replicas and shards in sync. Instead, SolrCloud uses a ZooKeeper ensemble to manage the tasks, replicas, and shards.


Figure 2: SolrCloud shards, replicas, and ZooKeeper ensemble

Solr replicas are stored on persistent volumes backed by Amazon Elastic Block Store (Amazon EBS). This storage architecture provides a PersistentVolume subsystem for storing the replicas, and PersistentVolumeClaim requests are used by the StatefulSet to provision storage volumes on demand as new SolrCloud pods join the cluster during a scale-out event. This decouples the storage layer from the data processing layer and prevents loss of data. The Amazon EBS volumes are backed up using EBS snapshots, which are stored in Amazon Simple Storage Service (Amazon S3) object storage.
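
The following is a minimal sketch of how such on-demand, EBS-backed storage can be requested. The StorageClass name, volume size, and claim name are illustrative assumptions; the actual values are defined in the configuration files in the GitHub repository.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: solr-gp2                      # assumed name
provisioner: kubernetes.io/aws-ebs    # or ebs.csi.aws.com when using the Amazon EBS CSI driver
parameters:
  type: gp2
reclaimPolicy: Retain                 # keep the underlying EBS volume if the claim is deleted
---
# Fragment that would sit under the SolrCloud StatefulSet spec:
volumeClaimTemplates:
  - metadata:
      name: solr-data                 # assumed claim name
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: solr-gp2
      resources:
        requests:
          storage: 20Gi               # each new SolrCloud pod gets its own EBS-backed volume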

ZooKeeper ensemble layer

Apache ZooKeeper is open source software that provides a centralized service for maintaining configuration information, distributed synchronization, and group naming services for a large group of servers. A cluster of servers (an ensemble) running ZooKeeper coordinates and provides distributed synchronization for SolrCloud. It is important that the ZooKeeper ensemble is highly available and fault tolerant, and that it maintains a quorum (that is, more than half of the total number of ZooKeeper servers must be running at all times) to ensure a healthy and scalable SolrCloud. Pods running ZooKeeper are deployed in a dedicated Amazon EKS managed node group. An odd number of ZooKeeper pods is recommended so that a majority election (ZooKeeper leader election) can choose a new leader if the current leader pod fails.

A Kubernetes pod disruption budget (PDB) helps improve the availability of the ZooKeeper ensemble by keeping at least the minimum number of pods required for quorum running within the managed node group. A PodDisruptionBudget limits the number of concurrent pod disruptions, thus helping ensure high availability. Because the ZooKeeper ensemble must maintain a quorum, and the ensemble in this case has a total of three ZooKeeper pods, two pods are enough to maintain quorum.

You can define a PodDisruptionBudget for the ZooKeeper ensemble managed node group in the zookeeper.yaml configuration file by setting the minAvailable property to 2. Alternatively, you can use the PDB’s maxUnavailable property to limit the number of pods that may be unavailable at the same time; a single PDB accepts only one of these two properties. Either way, the PDB ensures that the number of ZooKeeper pods needed for quorum is always available.

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2

Control applications layer

The control applications layer consists of pods running the Prometheus monitoring system, an open source system used for real-time event monitoring and alerting. Pods running Prometheus are deployed in a separate managed node group within the Amazon EKS cluster. Prometheus monitors Solr metrics, which are then used to scale the SolrCloud pods within the SolrCloud managed node group.
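
To keep the control applications on their own nodes, the Prometheus pods can be pinned to this managed node group with a node selector. The sketch below assumes the prometheus-community/prometheus Helm chart and nodes labeled clusterType: control-apps (the same label used in the adapter configuration later in this post); the exact values keys may differ if another chart is used.

# values.yaml fragment passed to the Prometheus Helm release (assumed chart layout)
server:
  nodeSelector:
    clusterType: control-apps
alertmanager:
  nodeSelector:
    clusterType: control-apps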

Scaling the SolrCloud deployment

Now that we understand the layers of the Solr architecture, let’s explore how deploying SolrCloud on AWS can meet reliability and elasticity requirements at enterprise scale. The SolrCloud deployment can be autoscaled to maintain a high query and indexing rate. The deployment architecture allows scaling of Solr replicas, SolrCloud server processes (that is, Amazon EKS pods), and the underlying compute capacity (Amazon Elastic Compute Cloud [Amazon EC2]) of the Amazon EKS cluster.

Deploying Solr across multiple Availability Zones

The Amazon EKS cluster and all of its managed node groups are deployed in secure private subnets across three Availability Zones within an Amazon Virtual Private Cloud (Amazon VPC). This Multi-AZ deployment helps maintain high availability and resiliency and improves the overall fault tolerance of the architecture.


Figure 3: Deployment architecture of SolrCloud on Amazon EKS

Scaling replicas with SolrCloud autoscaling

SolrCloud (version 8.x) comes with a built-in autoscaling feature that can add or remove replicas from a pod automatically, depending on the metrics being monitored. The deployment uses a collection-level cluster policy to scale replicas; cluster policies can also be applied at the shard or node (pod) level. You can configure the SolrCloud autoscaler to track the search rate by monitoring the requests-per-minute value available via the QUERY./select.requestTimes:1minRate metric and to scale the number of replicas in the SolrCloud cluster accordingly. Note that there are a number of other SolrCloud metrics available via the Solr metrics API that you can consider when defining and fine-tuning replica autoscaling for the SolrCloud cluster.

Scaling SolrCloud pods with horizontal pod autoscaler

Although SolrCloud autoscaling addresses the performance of the cluster by scaling the replicas, it does not scale the pods themselves. Because each pod can accommodate a specified maximum number of replicas, defined by the replicas property in the solr-cluster.yml file, we need a way to scale the number of pods required to support the replica scaling done by the SolrCloud autoscaler. Similarly, when the SolrCloud autoscaler scales down the replicas, pods that are no longer used efficiently can be taken out of service, resulting in a scale-in of the total number of SolrCloud pods.

The deployment uses metrics exposed by SolrCloud, via the Solr metrics API, to scale the number of pods. A Prometheus exporter collects these metrics from the Solr metrics API into Prometheus’ time series database, which runs in the control applications managed node group. These metrics can then be used to implement a scaling policy for the number of pods using the Horizontal Pod Autoscaler (HPA). The HPA obtains these custom metrics through the Prometheus adapter and scales the number of pods up or down based on the ratio between the desired and current metric values.

This solution exports the solr_metrics_jetty_dispatches_total metric, which captures the total number of requests received by the Solr application, to Prometheus. The Prometheus adapter then exposes the metric to Kubernetes through the external metrics API so that the HPA can use it. The metric is given a custom alias, solr_metrics, in the Prometheus adapter’s adapterConfig.yml configuration file.

rules:
  external:
    - seriesQuery: '{__name__="solr_metrics_jetty_dispatches_total"}'
      resources:
        template: <<.Resource>>
      name:
        as: "solr_metrics"
        matches: ""
      metricsQuery: '<<.Series>>{job="solr"}'
nodeSelector:
  clusterType: control-apps

Note that depending on the use case and performance criteria, any other metric available via the Solr metrics API can be used to scale the SolrCloud pods via HPA.
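
For reference, the following is a minimal sketch of an HPA manifest that consumes the solr_metrics external metric exposed by the Prometheus adapter. The StatefulSet name (solr), the replica bounds, and the target value are assumptions to tune for your workload; the actual manifest is in the GitHub repository.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: solr-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: solr                    # assumed name of the SolrCloud StatefulSet
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: solr_metrics      # alias defined in adapterConfig.yml
        target:
          type: AverageValue
          averageValue: "50000"   # example threshold; tune to your query rate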

Scaling the Amazon EKS cluster capacity with cluster autoscaler

The Horizontal Pod Autoscaler will continue to scale SolrCloud pods onto existing Amazon EKS worker nodes (the underlying Amazon EC2 compute instances) as long as the managed node group has enough compute resources available. However, as the number of pods grows, new pods may eventually fail to schedule and remain in a Pending state once the managed node group runs out of resources to host additional pods on its available worker nodes.

The cluster autoscaler (CA) provides scalability for the managed node groups by scaling out the worker nodes when SolrCloud pods cannot be scheduled on existing worker nodes. The Amazon EKS cluster autoscaler uses Amazon EC2 Auto Scaling groups behind the scenes to adjust the number of worker nodes in the cluster when pods fail to schedule and go into a Pending status.
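
The cluster autoscaler finds these Auto Scaling groups through resource tags. The fragment below shows the relevant container arguments from a typical cluster autoscaler Deployment at the pod spec level; the image tag and cluster name are placeholders, and the manifest used in the GitHub repository may differ.

containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:<VERSION>   # use the release matching your Kubernetes version
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --balance-similar-node-groups
      - --skip-nodes-with-system-pods=false
      # auto-discover the Auto Scaling groups behind the managed node groups by tag
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<CLUSTER_NAME>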

The HPA and CA autoscale the SolrCloud workload and its underlying compute capacity, respectively. This also makes the Solr search platform elastic: resources that are no longer required are gracefully taken out of service as demand shrinks, which saves costs.


Figure 4: Apache Solr horizontal pod autoscaling and cluster autoscaling

Prerequisites

In order to deploy this architecture, the following prerequisites must be met:

  1. An AWS account with an Amazon Virtual Private Cloud (Amazon VPC) that has public and private subnets across multiple Availability Zones (three recommended).
  2. Permissions for the required services, such as Amazon EKS, Amazon EBS, and Amazon S3.
  3. Permissions to create and modify AWS Identity and Access Management (IAM) roles.
  4. An AWS Cloud9 workspace or the AWS CLI installed on a local machine.
  5. eksctl, a command-line tool for creating clusters on Amazon EKS.
  6. kubectl, a command-line tool for running commands against Kubernetes clusters.

Installation and walkthrough

Detailed step-by-step installation instructions along with the required configuration files can be found in our GitHub repository. Following is an overview of the steps involved in the deployment and configuration:

  1. Create an Amazon EKS cluster using the eksctl command line tool (see the configuration sketch after this list).
  2. Once the Amazon EKS cluster is deployed, create managed node groups for SolrCloud, the ZooKeeper ensemble, and the control apps (Prometheus).
  3. Set up Helm, which you can use to manage Kubernetes applications.
  4. Install Prometheus using Helm charts.
  5. Install ZooKeeper using the kubectl command line tool.
  6. Install Solr and the Solr metrics exporter using the kubectl command line tool.
  7. Configure the Horizontal Pod Autoscaler and cluster autoscaler using the kubectl command line tool.
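
As a sketch of steps 1 and 2, an eksctl cluster configuration similar to the following can create the cluster and its three managed node groups in one pass with eksctl create cluster -f cluster.yaml. The cluster name, region, instance types, sizes, and the solr/zookeeper label values are illustrative assumptions (only the control-apps label appears elsewhere in this post); refer to the GitHub repository for the actual configuration.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: solr-cluster                           # assumed cluster name
  region: us-east-1                            # assumed region
availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]
managedNodeGroups:
  - name: solr-ng
    labels: { clusterType: solr }              # assumed label value
    instanceType: m5.xlarge
    desiredCapacity: 3
    privateNetworking: true
  - name: zookeeper-ng
    labels: { clusterType: zookeeper }         # assumed label value
    instanceType: m5.large
    desiredCapacity: 3
    privateNetworking: true
  - name: control-apps-ng
    labels: { clusterType: control-apps }      # matches the nodeSelector used for the control applications
    instanceType: m5.large
    desiredCapacity: 2
    privateNetworking: true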

Set up a Solr collection

A Solr collection can be created from the Solr Administration user interface. To access the administration UI, obtain its URL using the kubectl get services solr-service command. The URL will be of the form http://<xxxxx>.<region_name>.elb.amazonaws.com:8983 (8983 being the default Apache SolrCloud listener port). Create a new collection named Books using the Collections option in the administration UI.


Figure 5: Create a collection in Solr Administration UI

Next, set up a collection-level autoscaling trigger, specifically a search rate trigger. The Solr autoscaling write API can be used to add this new collection-level autoscaling trigger.

curl -X POST -H 'Content-type:application/json'  -d '{
 "set-trigger": {
  "name" : "search_rate_trigger",
  "event" : "searchRate",
  "collections" : "Books",
  "metric" : "QUERY./select.requestTimes:1minRate",
  "aboveRate" : 10.0,
  "belowRate" : 0.01,
  "waitFor" : "30s",
  "enabled" : true,
  "actions" : [
   {
    "name" : "compute_plan",
    "class": "solr.ComputePlanAction"
   },
   {
    "name" : "execute_plan",
    "class": "solr.ExecutePlanAction"
   }
  ]
 }
}' http://<external-ip>:8983/api/cluster/autoscaling/

Figure 6: Apache Solr replica collection-level autoscaling trigger configuration

After the collection and the autoscaling trigger have been set up, select the collection in the menu and use the Documents section to upload data. Use the sample dataset books.json, which can be found in the GitHub repository, to index documents into the collection.


Figure 7: Upload sample data

Scaling in action (load testing)

You can use the request generator script submit_mc_pi_k8s_requests_books.py to simulate request load on the Books collection. The script sends a high volume of search requests to the cluster, causing the SolrCloud autoscaler to trigger a scaling event and increase the number of replicas.

Figure 8: Apache Solr replica autoscaling in action (replicas scaled from one to three per pod based on the requests-per-minute metric monitored by the Solr autoscaler)

The requests received by SolrCloud push the HPA target metric above its threshold (more than 50,000 requests received in a 20-second interval), resulting in the HPA scaling up the number of SolrCloud pods.


Figure 9: Horizontal pod autoscaler scaling the total number of pods

When additional SolrCloud pods added by the HPA cannot be scheduled on the worker nodes of any existing Amazon EKS managed node group, the cluster autoscaler scales the cluster by adding worker nodes (Amazon EC2 compute).


Figure 10: Cluster autoscaler scaling the total number of compute nodes for the managed node group

Cleaning up

Follow the detailed step-by-step instructions provided in our GitHub repository to clean up and avoid incurring future charges.

Conclusion

In this post, we discussed how customers can use Amazon Elastic Kubernetes Service (Amazon EKS) to deploy a performant, highly available, and fault tolerant Apache Solr deployment. We also walked through how customers can control scaling at different levels to meet the demands of their organization’s enterprise search capabilities. The deployment architecture is extensible and can be customized for various use cases.

Customers are also encouraged to explore using Amazon Managed Service for Prometheus, which integrates with Amazon EKS, further minimizing the need to scale and operate underlying infrastructure.

Kinnar Kumar Sen

Kinnar Kumar Sen is a Sr. Solutions Architect at Amazon Web Services (AWS) focusing on Big Data and Analytics. As a part of the EC2 Spot team, he works with customers to cost optimize the big data workloads running on AWS. Kinnar has more than 15 years of industry experience working in research, consultancy, engineering, and architecture.

Anjan Biswas

Anjan Biswas is a Senior Solutions Architect with focus on AI/ML, Data Analytics, and enterprise applications. Anjan works with enterprise customers and is passionate about developing, deploying and explaining AI/ML, Data Analytics, and Big Data solutions. Anjan has over 14 years of experience working with global supply chain, manufacturing, and retail organizations and is actively helping customers get started and scale on AWS.

Ananth Raghavendra

Ananth Raghavendra is a Senior Solutions Architect at Amazon Web Services (AWS) with a focus on EC2 Spot, Graviton2, and EC2-related services such as Auto Scaling groups. Ananth is passionate about helping customers on their cloud journey and solving their performance problems cost-efficiently. Ananth has more than 25 years of experience in development, architecture, DevOps, and infrastructure.