Reviews from AWS customers

16 AWS reviews

External reviews

44 reviews

External reviews are not included in the AWS star rating for the product.


    R Nandasana

Data pipelines have reduced log volume and now simplify routing observability data everywhere

  • May 01, 2026
  • Review from a verified AWS customer

What is our primary use case?

Cribl is primarily used to reduce data volume. When large datasets arrive, such as 1 TB of data, it can be reduced by 600 GB or 400 GB while maintaining the same information. Additionally, Cribl is used to send the same data to multiple destinations. The same data can be copied and sent to different products such as Splunk and Dynatrace.

For firewall logs, there are many default parsing templates and pipelines available. Firewall logs can be easily converted using parser functions. Default parsers are available for all log types, such as Palo Alto traffic, access logs, audit logs, and Linux logs. When a parser function is chosen for Palo Alto traffic, it automatically extracts all fields from the firewall logs.

A specific use case implemented involves firewall logs, which are substantial in size. Statistics are performed on the firewall logs and sent every five minutes. The logs are summarized by state count, and during that five-minute interval, the logs are aggregated and sent to other locations such as Dynatrace and Splunk. This significantly reduces data size and saves considerable space and licensing costs in Splunk.
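
The five-minute summarization described above can be sketched in plain Python. This is a hedged illustration of the idea, not Cribl's actual implementation (Cribl Stream provides a built-in Aggregations function for this); the event field names are hypothetical.

```python
from collections import Counter

# Hypothetical firewall events; the "action" field is illustrative,
# not Cribl's actual schema.
events = [
    {"ts": 0,   "action": "allow"},
    {"ts": 120, "action": "deny"},
    {"ts": 180, "action": "allow"},
    {"ts": 360, "action": "allow"},  # falls into the next 5-minute window
]

def aggregate(events, window_secs=300):
    """Summarize raw events into per-window counts, as a pipeline would
    before forwarding to destinations such as Splunk or Dynatrace."""
    windows = {}
    for e in events:
        bucket = e["ts"] // window_secs * window_secs
        windows.setdefault(bucket, Counter())[e["action"]] += 1
    # Emit one summary event per window instead of every raw log line.
    return [{"window_start": b, "counts": dict(c)}
            for b, c in sorted(windows.items())]

summaries = aggregate(events)
```

Four raw events collapse into two summary events here; at real firewall volumes, the same idea is what shrinks licensing and storage costs downstream.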

Cribl provides substantial help with sending data to different destinations. With three products in use—Splunk, Dynatrace, and DataDog—Cribl sends dual feeds to multiple products. For instance, firewall logs are needed by both Splunk and DataDog. Additionally, some observability logs are directed to Dynatrace while remaining logs are sent to Splunk. Cribl effectively splits data across the various products in use.
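
The dual-feed routing pattern can be sketched as a list of predicate/destination pairs. This is a minimal conceptual model, not Cribl's routing engine; the event shape and destination names are assumptions.

```python
# Minimal routing sketch: each route pairs a predicate with one or more
# destinations, and an event may match several routes (a "dual feed").
routes = [
    (lambda e: e["type"] == "firewall",      ["splunk", "datadog"]),
    (lambda e: e["type"] == "observability", ["dynatrace"]),
]

def route(event, routes, default="splunk"):
    """Return every destination whose route predicate matches the event,
    falling back to a default when nothing matches."""
    dests = [d for pred, dlist in routes if pred(event) for d in dlist]
    return dests or [default]
```

For example, a firewall event is copied to both Splunk and DataDog, while an unmatched event falls through to the default destination.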

Cribl is recommended for organizations with more than 1 TB or 2 TB of data ingestion. For smaller data volumes of less than 1 TB, Splunk licensing alone is sufficient, and parsing can be done at the Splunk level. With 14 TB of data ingestion per day, Cribl provides significant benefits.

What is most valuable?

Cribl's user interface is the most valuable feature. The UI is extremely user-friendly and allows visibility into what Cribl is processing and how much time it takes. Multiple routing capabilities enable data duplication to any location.

Cribl Edge provides an agent that is very simple to install on any server. Installation requires only a one-line script that can be copied and pasted, and the connection is established immediately. The configuration part is also very good.

User management in Cribl is excellent compared to other products. There is no need to access the back-end for any task, and dependence on the back-end is eliminated. Everything is available on the UI, making it very simple to use.

Cribl Cloud has no issues with handling large data ingestion volumes. Cribl Cloud can handle any volume of data efficiently. However, before purchasing Cribl Cloud, the read and write IOPS requirements need to be discussed and agreed upon with Cribl support. If data volume increases, these parameters can be adjusted accordingly. For on-premises deployments, the server is managed internally, and with recommended workers configured, there should be no issues.

For endpoint telemetry, the agent can be deployed everywhere using scripts based on Windows, Linux, and Kubernetes. Once the edge script is obtained, it can be deployed across all endpoints to gather data.

What needs improvement?

Currently, there are no significant enhancements needed as Cribl is a reliable product.

One improvement opportunity exists with Git integration. Git is attached to Cribl, and while users can push changes from Cribl to the Git repository, pulling changes from Git back into Cribl is not automated. When changes are made directly in the Git repository, they must be manually pulled into Cribl. For example, if a source is created in Cribl, it can be pushed to the Git repository, but modifications made directly in the Git repository must be manually pulled back into Cribl. Automating this pull functionality would be a valuable enhancement.

For how long have I used the solution?

Cribl has been used for the last six or seven years.

What do I think about the stability of the solution?

No stability issues have been encountered. Occasionally, back-pressure issues occur, but these are not caused by Cribl. Sometimes the source experiences issues, or destinations such as Dynatrace do not accept the data due to API hit limits when sending data via HTTP. During these times, back-pressure occurs, and when back-pressure takes a long time, the parsing queue can become full.

What do I think about the scalability of the solution?

For scalability, the leader is configured for high availability with a standby leader. Standby workers are also maintained. Currently, there are 16 workers in total with six additional workers kept as standby. If a worker fails, Cribl can be started on these standby workers to maintain operations.

How are customer service and support?

Customer service is not required as the product is managed internally. Three or four people manage the product exclusively. Technical service support is not utilized because the team consists of certified Cribl engineers who have comprehensive knowledge of the product.

How was the initial setup?

Initial setup is straightforward, particularly for those familiar with Splunk. The installation is similar to Splunk—unzip the leader package and install it. The worker installation follows the same process. Installation is very simple. For cloud deployments, there are no issues as URLs are provided.

What about the implementation team?

A consultant from NetScaler, an authorized partner of Cribl, was brought in to guide the implementation. This person provided guidance, but the team completed the implementation internally. Assistance from Cribl is obtained whenever needed through this consultant.

Which other solutions did I evaluate?

Other alternatives exist, including Splunk, Enterprise Security (ES), and ITSI. Many other products are also available.

Cribl offers several advantages over alternative solutions. Managing the infrastructure, including workers and the leader, is very simple. Patching is also straightforward and requires only one click. The user interface is user-friendly.

Alternative products have many limitations that Cribl does not have. Other products may have issues with data acceptance and compatibility. Cribl accepts data over various ports, including TCP, HTTP, and UDP, as well as HEC tokens. Cribl also supports custom sources that can be added, a feature that is missing in other platforms.

What other advice do I have?

Pricing is always discussed with high-level business teams, and involvement in pricing discussions is limited. However, Cribl is very inexpensive compared to Splunk licensing, which is a significant advantage for organizations purchasing Cribl.

Upgrading is a one-click activity. The version is selected, the leader is upgraded, and then the worker is deployed with a single click to upgrade the entire infrastructure. This capability has not been seen in any other product.

Data complexity is not a concern. Although there are many fields, each field has a question mark next to it that provides a description of what needs to be entered in the checkbox or dropdown below. The UI presents all information clearly. Without prior knowledge, anyone logging into the UI and navigating through sources, destinations, and other configurations can easily understand everything.

The overall review rating for this product is 9 out of 10.


    Atharva Khadsare

Search in place has reduced log ingestion and enables faster deep investigations

  • April 21, 2026
  • Review provided by PeerSpot

What is our primary use case?

I am working in a PLM environment, which is product lifecycle management. We deal with lots of system logs and tool integrations. I used Cribl Search for debugging system errors quickly and searching logs stored in long-term storage. Instead of pushing all logs into expensive tools, we used Cribl Search to directly investigate issues from stored data.

I am currently using Cribl Search only. I have some experience with Cribl Stream, which we are using for our data pipeline solution.

We have just started using the Search in Place feature because one of our team members recommended it. There is a lot of room for improvement in the way we query the data and the whole data processing pipeline. We weren't using any other tool before.

What is most valuable?

I have been using Cribl Search for a long time now, and I think Search in Place is a very good feature in Cribl Search. Unified search is also valuable, where you can search data from multiple sources in one place. Fast investigation reduces steps from multiple tools to a single workflow. Pre-built search packs save the effort of configuring dashboards and writing queries. It also works well with other Cribl tools.

The traditional way for certain places is that logs are generated, then sent to SIEM tools like Splunk, and then stored again before you can search them. This has problems including data duplication and high storage costs. With Search in Place in Cribl Search, logs stay in storage such as S3, data lakes, or archives. You can directly run queries on that data without any movement, duplication, or reprocessing. Advantages include cost reduction and faster investigation.
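
The search-in-place idea can be illustrated in a few lines of Python: decompress and filter archived logs where they sit, without re-ingesting them into a SIEM. This is a hedged sketch of the concept only; the log fields and the in-memory "archive" stand in for objects in S3 or a data lake.

```python
import gzip
import io
import json

# Simulated archive: gzipped newline-delimited JSON, as logs might be
# stored in S3. Field names are illustrative assumptions.
raw = b"\n".join(json.dumps(e).encode() for e in [
    {"level": "info",  "msg": "ok"},
    {"level": "error", "msg": "disk full"},
    {"level": "error", "msg": "timeout"},
])
archive = gzip.compress(raw)

def search_in_place(blob, predicate):
    """Stream-decompress and filter stored logs directly, with no
    movement, duplication, or reprocessing into another tool."""
    with gzip.open(io.BytesIO(blob), "rt") as fh:
        events = (json.loads(line) for line in fh if line.strip())
        return [e for e in events if predicate(e)]

errors = search_in_place(archive, lambda e: e["level"] == "error")
```

Only the matching events are materialized; the archive itself is never copied into a second system, which is where the cost and speed advantages come from.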

Since we can directly query historical data where it is stored, there is an advantage of deep root cause analysis, which helps understand what happened in the past. This is useful for debugging recurring issues and is cost-efficient. It has helped me in faster troubleshooting because there is no need to reload old logs. We can investigate incidents after days, weeks, or even months. It has the ability to handle large data volumes, so there is no performance bottleneck.

We reduced unnecessary data ingestion by almost 40 to 50% using Search in Place. We could troubleshoot issues faster because data was already available for querying. It eliminates redundancy and keeps the architecture cleaner. As the data grows, we don't need to scale ingestion pipelines.

What needs improvement?

The user interface of Cribl Search can be more simplified because for non-technical users, it is quite difficult to grasp. There is a need for better beginner tutorials.

Cribl could have built-in guided queries for faster onboarding and better beginner tutorials. A more simplified UI would be better for non-technical people.

For how long have I used the solution?

I have been working with Cribl for eight to nine months.

What do I think about the stability of the solution?

Until now, we haven't had any downtimes. It has been working very well.

What do I think about the scalability of the solution?

It is pretty scalable horizontally. We started with one team member but now there are five to six people using it.

How are customer service and support?

We developers ask for support from our in-house IT team, but I don't know what conversation goes on between Cribl customer service and our IT team.

Which solution did I use previously and why did I switch?

We evaluated Splunk, but due to some reasons, we went with Cribl Search.

How was the initial setup?

Cribl Search was set up by the IT team, but they haven't complained about any issues or complexities that arose during the setup. I think the setup is pretty simple and not that complicated.

What about the implementation team?

The implementation was done by our internal IT team.

What was our ROI?

With Cribl, we have observed a 40 to 60% reduction in firewall log volume because Cribl filters unnecessary events and removes verbose fields.

There is reduced pipeline complexity and faster end-to-end workflow because data doesn't wait in ingestion queues. There is also optimized data processing cost because less data processed equals less compute plus storage cost. Other expensive tools are used only for critical data. There is a shift from processing to querying because traditional systems process first and query later, but Cribl stores data cheaply so we can query it when we need it.

Cribl has many filters to remove noise from the data and to remove verbose fields, which has been very good to work with.

Earlier, we had to process and store all logs in monitoring tools, which are very expensive, before analysis. After using Cribl Search, we streamlined the workflow by sending only critical data through pipelines and directly querying archive logs for investigation. This improved efficiency and reduced system load, which helped us indirectly optimize costs. We reduced the overall processing load by around 40%.

What's my experience with pricing, setup cost, and licensing?

I'd highly recommend other organizations to use Cribl Search because it did help us a lot with data processing and everything.

What other advice do I have?

I would rate this solution an 8 out of 10.


    Pal Mavani

Data routing has simplified high-volume security log management and supports flexible processing

  • April 17, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl in a data management platform for IT security teams. My use cases include Stream, Edge, Search, and Lake.

What is most valuable?

I appreciate data routing the most about Cribl. I use it for data routing, data processing, and integration support. Cribl's ability to handle high volumes of diverse data types such as logs and metrics is impressive. It can easily handle logs because it is highly scalable and built to process millions of events per second, making it very easy to use.

What needs improvement?

What I dislike about Cribl are the documentation gaps and the setup complexity.

For how long have I used the solution?

I have been working with Cribl for one year.

What do I think about the stability of the solution?

Regarding stability, once the pipelines were properly set up, ongoing maintenance was minimal and mostly involved small adjustments rather than major changes. Overall, Cribl is not maintenance-heavy; it requires some maintenance on my end, but it is relatively low compared to traditional log pipelines.

What do I think about the scalability of the solution?

Cribl provides high availability through distributed architecture, so we can achieve this by developing multiple workers and using load balancing to ensure continuous data flow even during failures in the pipeline.

How was the initial setup?

The initial deployment is of medium difficulty because the setup is complex. It took me some time to set it up for the first time, even with a friend's help, and I found it difficult.

What other advice do I have?

I have not seen a significant decrease in firewall logs while working with Cribl because it is highly scalable, so that much decrease has not occurred.


    Abhay Gor

Data routing has become efficient and log volumes are reduced while monitoring improves

  • April 15, 2026
  • Review provided by PeerSpot

What is our primary use case?

I am using Cribl Stream for data routing and data processing as part of my company's IT team. We primarily use it for monitoring and collecting data.

What is most valuable?

One of the best features is integration support because it offers more than 80 to 90 sources and destinations via Cribl packs. Additionally, the security is very good because they offer encryption and access control to protect sensitive telemetry data. The data processing and reduction is also excellent because it filters unwanted fields and removes redundant data.
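
The data reduction described here (dropping unwanted fields, removing redundant events) can be sketched in plain Python. This is an illustrative model of the technique, not Cribl's implementation; the field names are assumptions.

```python
# Hypothetical pipeline step: strip verbose fields, then drop consecutive
# duplicate events before forwarding downstream.
VERBOSE_FIELDS = {"debug_trace", "raw_payload"}

def reduce_event(event):
    """Remove fields that add volume but no analytical value."""
    return {k: v for k, v in event.items() if k not in VERBOSE_FIELDS}

def dedupe(events):
    """Drop events that are identical to the previous one after reduction."""
    out, prev = [], None
    for e in map(reduce_event, events):
        if e != prev:
            out.append(e)
        prev = e
    return out

events = [
    {"msg": "up", "debug_trace": "..."},
    {"msg": "up", "debug_trace": "..."},   # redundant once reduced
    {"msg": "down", "raw_payload": "xyz"},
]
cleaned = dedupe(events)
```

Three verbose events become two compact ones; applied at scale, this is the mechanism behind the 50 to 60% log reduction the reviewer reports.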

I have seen a decrease in my firewall logs by 50 to 60%.

Cribl allows me to handle high volumes of diverse data, such as logs and metrics, and it helps manage them effectively.

It is helpful because it handles diverse data types and can process logs, metrics, event streams, JSON, text, structured and unstructured data.

What needs improvement?

The user interface is acceptable, but I think a person who is just starting to use it will need to go through documentation because there is a steep learning curve to become familiar with Cribl Stream. The setup is also complex, and configuring integrations and pipelines for a large environment requires significant effort.

The areas that have room for improvement are the complex setup and better documentation, such as a user guide.

For how long have I used the solution?

I have been using this product for six to eight months.

What do I think about the stability of the solution?

Cribl receives periodic updates and maintenance, which must be managed effectively. We use it daily and have not experienced any issues for a long time, so the team maintaining it must be performing their job very well.

What do I think about the scalability of the solution?

Horizontally, it is quite scalable, so I rate that a ten.

How are customer service and support?

I rate the technical support a nine, and I rate the stability an eight.

Which solution did I use previously and why did I switch?

I have used Splunk, and what Cribl does is it does not replace Splunk; it optimizes the data before sending it to Splunk, reducing cost and load. Therefore, Cribl is not a direct alternative to Splunk; they are complementary to each other.

How was the initial setup?

The deployment was quite easy.

I do not know exactly how long it took to deploy because I was not the one who deployed it on the cloud, but the ones who deployed it told me that it was quite easy to deploy and there were no complaints from them.

What about the implementation team?

Roughly five to six users use the solution.

What was our ROI?

I checked out Cribl Search once, and it helped me directly search from S3 data lakes, and it did help me save time and cost.

I have not analyzed the exact amount, but in ballpark terms, it saves about 10 to 20%.

I think it is cost-efficient because overall, after using Cribl, it helps users save cost and time. If you look at the big picture, it is cost-effective.

It saves me about 30 to 40% in terms of time and cost.

Which other solutions did I evaluate?

I would highly recommend it because it is cost-efficient, helps reduce noisy logs, and filters unnecessary fields.

What other advice do I have?

I rate this solution a nine.


    reviewer2816211

Cribl has improved real-time infrastructure observability and optimizes server resource costs

  • April 10, 2026
  • Review provided by PeerSpot

What is our primary use case?

Our use case for Cribl is observability from an infrastructure point of view; we use Cribl for getting the logs from our infrastructure. The metrics or logs which we require from our servers or containers, or the platforms where we have deployed our product, necessitate real-time data processing, so Cribl helps us in that regard.

What is most valuable?

I love Cribl Edge feature, which is an agent we can directly deploy at our servers; that is quite a good feature that helps in collecting data locally at the server level. Additionally, the search is good; we can search across all our data sources, and it is quite fast. Cost efficiency also helps in optimizing costs.

Cribl handles high volumes of diverse data types very well. We have around 200 to 250 in-house servers, and we require observability and visibility over those servers. We don't have a team that manages them, and we cannot hire too many people to manage 200 servers. Cribl provides visibility and helps in that regard; we get real-time metrics, allowing us to see when we need to increase the compute of our servers or when we have over-provisioned resources. It helps in optimizing costs at our infrastructure level, and Cribl is quite cost-efficient, helping in that aspect as well.

What needs improvement?

We haven't gone very deep into it, so we don't have a heavy use case; the cost optimization it provides is the best thing about it. Cribl's UI is quite simple and minimal, which helps developers and the team get familiar with it early. However, it exposes its functionality in great depth, which can be difficult for a lighter use case like ours: Cribl provides many functions for filtering out metrics we don't need, and that created some hindrance for us. Otherwise, everything is quite good.

The function section is quite messy and includes too many capabilities that are generally not needed at a beginner level. At an advanced level they are definitely needed, to get precise logs and filter out unnecessary data when the data stream is large, but at the initial level it is quite difficult to get the exact data that is required.

For how long have I used the solution?

I used the solution about six months ago.

What do I think about the stability of the solution?

We haven't faced much regarding instability such as lagging or crashing; the backend team and support staff are quite nice, and we didn't encounter any significant issues with stability.

What do I think about the scalability of the solution?

Scaling with Cribl is very easy, both horizontally and vertically, so we don't have any hindrance in scaling the tool.

How are customer service and support?

My team has contacted technical support for some tasks they were facing issues with; they reported that the staff is quite nice, and the support is very good. However, we didn't require much support, only maybe twice or thrice.

Which solution did I use previously and why did I switch?

We used to utilize Node Exporter, Grafana, and Prometheus.

Cribl sits in between those tools; it does not replace any of them. Node Exporter helps collect the host metrics, Prometheus is responsible for scraping the metrics, and Grafana serves as a dashboard. Cribl assists with infrastructure observability without replacing any of the tools. We use all of them right now as well.

How was the initial setup?

Cribl's initial deployment is quite easy and nice; we didn't face any difficulties in doing that. Additionally, scaling it horizontally or vertically is very good.

What about the implementation team?

I lead my team; I don't set and manage deployment myself anymore. Initially, when we had a very small team, I started building it, but now my team handles all this.

What's my experience with pricing, setup cost, and licensing?

I'm not from the team that handles pricing; another department deals with that. However, the pricing appears to be good because I haven't been approached with concerns about why we are spending a particular amount. I think our pricing is fair.

What other advice do I have?

For our use case, I would give Cribl a score of 10 out of 10, but for a large organization that requires it at scale, it would be fair to give an eight. Overall, I rate it an 8.


    reviewer2815500

Data pipelines have optimized log routing and currently reduce noise and monitoring costs

  • April 10, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl for data integration, pipelining, data monitoring, scalability, and to check how my monitor is working. The main product we use is Cribl Stream, which we use for log routing, filtering, and transforming data before sending it to our SIEM platform. This is the core part of our log management pipeline. Through Cribl Stream, we mainly work with features such as data pipelining, routing rules, and data transformation functions to control how logs move between different systems. My hands-on experience is primarily with Stream, since that is the component we rely on most for processing and optimizing log data in our environment.

What is most valuable?

One of the biggest advantages for my organization is better control over log data. We can filter, transform, and route logs before they reach downstream systems such as the SIEM platform, which helps reduce noise and focus only on relevant data. Another key benefit is cost optimization. By dropping unnecessary logs and sending only important data, we significantly reduce ingestion and storage costs in tools such as Splunk. It also improves operational efficiency.
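
The "drop unnecessary logs before the SIEM" idea can be sketched as a simple pre-ingestion filter with a measurable savings figure. This is a hedged sketch; the severity threshold and event shape are assumptions, not the reviewer's actual rules.

```python
# Sketch of filtering before the SIEM: keep only events worth ingesting
# and measure the volume saved. Severity levels are illustrative.
KEEP = {"warn", "error", "critical"}

def should_ingest(event):
    """Drop low-value events before they reach Splunk."""
    return event.get("severity", "info") in KEEP

raw = [{"severity": s, "msg": "m"}
       for s in ["info"] * 8 + ["warn", "error"]]

ingested = [e for e in raw if should_ingest(e)]
savings = 1 - len(ingested) / len(raw)   # fraction of events dropped
```

Here 8 of 10 events are dropped before ingestion, an 80% volume reduction; real savings depend entirely on how noisy the sources are and how the routing rules are written.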

What needs improvement?

One key area is simplifying the user experience, especially for new users. Since it has multiple components such as metrics, traces, and detectors, making onboarding and navigation more intuitive would be beneficial. One area of improvement could be reducing the learning curve. Since it is a very flexible tool with powerful pipeline configuration, new users may take some time to fully understand how to design and optimize pipelines efficiently. Another improvement could be more pre-built templates or out-of-the-box integration of common data sources, which would help teams get started faster without building from scratch. I also think enhanced monitoring and troubleshooting visibility for pipelines would be helpful, especially in large environments where multiple data flows are being processed.

The main strength is its flexibility, scalability, and cost optimization benefits. It gives strong control over what data is processed and sent to downstream systems. The reason I would not give it a ten is mainly due to the learning curve and initial complexity, especially for new users. Some areas such as documentation or advanced troubleshooting could be improved.

For how long have I used the solution?

I have been working in the cybersecurity and security operations space for around one year.

What do I think about the stability of the solution?

Cribl is stable and reliable. I would rate stability and reliability at eight out of ten. In my experience, it is generally performing well.

What do I think about the scalability of the solution?

I would rate the scalability of Cribl at eight or nine out of ten. Its ability to handle a high volume of different data types would get a rating of eight or nine out of ten. It is designed to process large-scale telemetry data from multiple sources such as firewalls, cloud services, applications, and infrastructure. It can handle different formats such as JSON, syslog, and custom logs, and transform them within the pipeline with its distributed architecture. We can scale horizontally by adding worker nodes, which allows it to handle increased data volumes without major performance issues.
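
Handling mixed formats such as JSON, syslog, and custom logs amounts to normalizing each line into one event shape. The sketch below is an assumption-laden simplification (the syslog pattern covers only the `<PRI>` prefix of RFC 3164-style messages), not Cribl's parser.

```python
import json
import re

# Simplified syslog pattern: "<PRI>message". Real syslog parsing is
# considerably more involved.
SYSLOG_RE = re.compile(r"^<(?P<pri>\d+)>(?P<msg>.*)$")

def normalize(line):
    """Turn a JSON, syslog-like, or plain-text line into a dict event."""
    line = line.strip()
    if line.startswith("{"):
        return json.loads(line)
    m = SYSLOG_RE.match(line)
    if m:
        return {"pri": int(m.group("pri")), "msg": m.group("msg")}
    return {"msg": line}  # fall back: treat as unstructured text

events = [normalize(l) for l in [
    '{"msg": "json event"}',
    "<34>su: auth failure",
    "plain text line",
]]
```

Once every source lands in the same event shape, the same routing and filtering rules apply regardless of the original format, which is what makes scaling across diverse sources tractable.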

How are customer service and support?

We faced an issue with a pipeline dropping certain log events unexpectedly. We reached out to support, and they helped us analyze the pipeline configuration and logs. Initially, the response was general, but after sharing more details such as sample logs and pipeline rules, they were able to identify that the filter condition was incorrectly configured, which was causing the data to be dropped. They guided us on how to modify the rule and validate the data flow using a live preview, and we were able to resolve the issue very quickly. Overall, the support team was very helpful and knowledgeable, especially once the issue was clearly explained, and it helped us solve the problem without major downtime.

Which solution did I use previously and why did I switch?

Before Cribl, most log processing was handled directly within the SIEM platforms, mainly using tools such as Splunk native and sometimes Logstash for data processing. The limitation with that approach was that all the raw log data was first ingested into the SIEM, and then filtering or transformation were applied afterwards. This increased the data volume and cost complexity. We moved to Cribl to introduce a dedicated data pipeline layer before the SIEM, which allows us to filter, transform, and route data more efficiently before ingestion.

How was the initial setup?

As I am on the technical side, I was involved in the initial setup of Cribl. My role included configuring data sources, setting up pipelines, and defining routing and filtering rules based on our different requirements. I also worked on integrating Cribl with our SIEM platform, ensuring that only relevant and optimized data is forwarded. During the setup, we focused on designing efficient pipelines, testing data flow, and validating transformations to make sure everything was working correctly. Overall, the initial setup was not very complex, but it required proper planning to design the pipelines.

Which other solutions did I evaluate?

Before adopting Cribl, we did look at a few other approaches. Some of the evaluations were around using native capabilities within SIEM platforms such as Splunk, as well as open-source log processing tools such as Logstash for handling data pipelines. Those options can work for log collection and processing, but Cribl stood out because it provides a dedicated platform specifically designed for observability and security data pipelines. It offers more flexibility in routing, filtering, and transforming logs without heavily relying on the SIEM itself. The visual pipeline management and real-time visibility into data flow were also important factors that made Cribl a better fit for managing large volumes of log data across multiple systems. We considered other options, but based on references we determined that Cribl was more relevant for our work, so we chose it.

What other advice do I have?

I would recommend starting with a few simple pipelines, then gradually expanding as you become more comfortable with the platform. A few improvements could make it even better. Overall, I would rate Cribl 8.5 out of ten.


    Tirth Dhanani

Log routing has cut storage costs and saves significant time in daily monitoring workflows

  • April 08, 2026
  • Review from a verified AWS customer

What is our primary use case?

I use Cribl for filtering service logs and reducing data volume before sending to Splunk to cut storage costs, and it is mostly for logs sharing while I am working in the PLM environment.

What is most valuable?

I have experience with Cribl Stream, and in that, I appreciate data routing, data processing, and reduction because it filters out unwanted fields, helps in removing redundant data, and has good integration support.

I have observed approximately 60% reduction in firewall logs.

Cribl was able to handle the volume of different data types, such as logs and metrics, and that is why I found it valuable. It is a good monitoring tool, and although there is a steep learning curve, once you gain hands-on experience, it is quite good.

I save roughly around 30 to 50% of operational time in log handling and everything.

I would definitely recommend Cribl to other users because it has helped me reduce my log handling time by 40 to 50%, and it also reduces the log volume by 30 to 40%, which cuts storage and SIEM costs. Additionally, the good real-time data processing filters and transforms the data before sending it to the tools. I would definitely recommend it to new users or prospective users.

What needs improvement?

When I started using the Cribl interface for managing log processing tasks, it was difficult to navigate; it took me a month or two to gain fluency since I had no hands-on experience initially, and I found that the documentation is not thorough enough to help users learn the tool.

The main area with room for improvement is the documentation. Otherwise, I appreciate Cribl Stream; for new users, it should be easier to understand how to use the tool and how it can help them.

For how long have I used the solution?

I have been using Cribl Stream for about a year, 13 to 14 months in total.

What do I think about the stability of the solution?

I find Cribl quite stable, and I would give it a nine.

What do I think about the scalability of the solution?

Scalability is highly achievable with its distributed leader-worker architecture, so I would rate that a ten.

How are customer service and support?

I would rate the technical support an eight.

Which solution did I use previously and why did I switch?

I have used DataDog, and I find that Cribl is more about controlling the data before it reaches the tools, while DataDog is more about analyzing the data after it arrives, so there is a clear difference between both tools. However, it really depends on what you are using it for.

How was the initial setup?

It is not on-cloud; it is a hybrid model for deployment.

What about the implementation team?

Cribl does require maintenance. That part is handled by one of our team members who manages the versioning, upgrades, and any new releases. I have not heard any complaints from him, so it must be working well.

What's my experience with pricing, setup cost, and licensing?

I do not know about the pricing because I have not purchased it, as it was given to me by my organization.

Which other solutions did I evaluate?

I have not used Cribl Search yet, which includes the new Search in Place technology.

What other advice do I have?

I have used Cribl Edge once; it is a data collection agent, but I have not used it that much as I mainly use Cribl Stream.

There are roughly three to four users using Cribl right now; it is a small team of people.

Overall, I would rate Cribl nine out of ten.


    Darsh Patel

Centralized data pipelines have reduced daily log volumes and optimize observability workflows

  • March 30, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl for optimizing Splunk data. For example, I have approximately 10 TB of daily data integrations. I route the data through Cribl, optimize it, and index it into Splunk, reducing the volume by 30 to 50 percent; for instance, 10 TB of integrations becomes roughly 5 to 7 TB after Cribl optimization. I use Cribl for firewall logs, event logs, Windows logs, metrics logs, and EDR logs.

What is most valuable?

The feature I appreciate is the connection between Splunk and Cribl, which is very useful for routing data and pipeline filtering. Cribl has a central management system that controls all data pipelines and configurations.

Cribl works centrally through the main Cribl instance. The leader node acts as the central point: it manages pipelines, routes, packs, and configurations, and distributes them to the worker nodes. The worker nodes process the actual logs and send the processed output to destinations such as Splunk, S3, and other SIEM tools.
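The leader/worker split described above can be sketched in a few lines of Python: the leader owns one authoritative routing table, every worker holds an identical copy, and the first matching route decides an event's destination. The route filters and destination names here are hypothetical examples, not the reviewer's actual configuration.

```python
# Sketch of leader/worker config distribution. The leader defines routes
# once; each worker applies the same ordered list, first match wins
# (the way a routing table behaves). All names are illustrative.

LEADER_ROUTES = [
    {"name": "firewall", "match": lambda e: e.get("sourcetype", "").startswith("pan:"), "output": "splunk"},
    {"name": "metrics",  "match": lambda e: e.get("sourcetype") == "metrics",           "output": "dynatrace"},
    {"name": "default",  "match": lambda e: True,                                       "output": "s3"},
]

class Worker:
    """Processes events using the config pushed down from the leader."""

    def __init__(self, routes):
        self.routes = routes  # identical copy on every worker

    def route(self, event):
        for r in self.routes:
            if r["match"](event):
                return r["output"]  # first matching route decides

# Three workers, all sharing the leader's single source of truth.
workers = [Worker(LEADER_ROUTES) for _ in range(3)]
print(workers[0].route({"sourcetype": "pan:traffic"}))
```

Because every worker carries the same configuration, adding workers scales throughput without changing routing behavior.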

What needs improvement?

Cribl pricing is a concern. Cribl Stream is very powerful but costly, as pricing scales with data volume; for large, heavy deployments it becomes pricey compared to similar tools. While it is flexible, it is not beginner-friendly: pipelines, routes, and transforms can feel complex at first.

For how long have I used the solution?

I have been using Cribl for my business for the last 1.5 years.

What do I think about the stability of the solution?

Sometimes Cribl goes down, and we miss logs during that time; downtime is the only issue I face, and otherwise we have no problems. When Cribl is down, logs do not reach Splunk, and the alerts based on those logs trigger repeatedly, creating multiple incidents and sending emails to our customers, which is very problematic.

What do I think about the scalability of the solution?

Cribl is excellent for scalability. It handles pipeline maintenance, horizontal scaling, distributed architecture, parallel pipelines, and load balancing well. We handle real-time data at up to one TB per day, which is a high-volume observability pipeline. Multiple pipelines run at once, different data sources process independently, there are no significant bottlenecks, and managing configuration is straightforward. Overall, it is reliable for both stability and scalability.

Which solution did I use previously and why did I switch?

As of now, I do not use any alternative to Cribl.

How was the initial setup?

The initial setup is moderate: not too hard and not too easy for beginners, and very easy for experienced people. One person is enough for a Cribl deployment unless you have a very large environment, in which case you need people in different roles.

What about the implementation team?

All the nodes and components can be deployed end to end within a predictable timeframe. A quick setup following the official documentation takes approximately one hour. Normally, a production setup takes three to four days: approximately two days for deployment and configuration, then one to two days for pipelines and testing. A full enterprise deployment at a much larger scale takes one to four weeks, depending on the difficulty and architecture involved.

What's my experience with pricing, setup cost, and licensing?

At a small scale, the pricing is good, and at a large scale it is not too heavy. The main pricing model is based on data ingestion, at approximately $0.32 per GB for the ST enterprise estimate. This is neither too high nor too low, falling within a medium range.


    Palak Kotak

Filtering has reduced daily data volumes and central routing now simplifies log management

  • March 27, 2026
  • Review provided by PeerSpot

What is our primary use case?

We work on Splunk, so we use Cribl. Our company handles approximately 12 to 15 TB of data daily in Splunk. We don't send the data directly into Splunk; it goes through Cribl first, which removes unnecessary data and keeps the important data, reducing the volume.

What is most valuable?

My favorite feature is how easily Cribl connects with Splunk and routes the data. The filtering is the most important feature because it removes unwanted logs, and the central control manages everything from one place. Cribl provides pipelines, which process the data step by step, so all the features are very useful.
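The step-by-step pipeline idea can be sketched as an ordered list of functions, each of which either transforms an event or drops it, which is how pipeline functions behave conceptually. The individual steps below (severity filter, field trim, tagging) are hypothetical examples:

```python
# Sketch of step-by-step pipeline processing: functions run in order, and
# returning None drops the event. All step names and fields are illustrative.

def drop_noise(event):
    """Drop low-value events entirely."""
    return None if event.get("severity") == "debug" else event

def trim_fields(event):
    """Remove a field that downstream tools don't need."""
    event.pop("verbose_trace", None)
    return event

def tag_source(event):
    """Annotate the event with the pipeline that processed it."""
    event["pipeline"] = "main"
    return event

PIPELINE = [drop_noise, trim_fields, tag_source]

def run_pipeline(events, steps=PIPELINE):
    out = []
    for event in events:
        for step in steps:
            event = step(event)
            if event is None:
                break  # event dropped by this step
        if event is not None:
            out.append(event)
    return out
```

Because each step is independent, a pipeline can be extended or reordered without rewriting the others, which is what makes central, one-place management practical.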

What needs improvement?

It is very difficult to learn as a beginner.

For how long have I used the solution?

I have been using Cribl for four months.

What do I think about the stability of the solution?

I sometimes experience downtime, during which we miss logs. This creates a problem, though never for long; we only face these issues occasionally.

How are customer service and support?

I have a very good experience with customer support. When we are in trouble, they give us fast responses and good responses, which is very useful for us.

How was the initial setup?

The initial deployment when I first started using Cribl was not that difficult. For a beginner it is a little difficult, but once you start learning and gain experience, it is very easy. One person can handle the whole setup without needing a large team.

What other advice do I have?

Cribl's interface is very good and easy to understand. When I started using Cribl, it wasn't difficult to learn how to pass data into it, and the good user interface makes my work easier. I would rate this product a 9 out of 10.


    Vansh Godhani

Data pipelines have reduced log noise and now route critical observability events efficiently

  • March 25, 2026
  • Review provided by PeerSpot

What is our primary use case?

My primary use case for Cribl is to manage and optimize observability data before sending it to different destinations. I deal with a very large volume of logs coming from multiple sources, including system logs, application logs, and security-related logs. Using Cribl, I can filter unnecessary logs, transform the data as required, and route important data to the appropriate destinations. This is very helpful and lets me reduce data volume and improve performance. I also use pipeline configurations to control how logs flow through the entire system, which makes it easy to maintain data consistency and manage large log volumes across different environments.

What is most valuable?

The most valuable thing or feature for me in Cribl is data routing and pipeline flexibility. Cribl allows me to define how data should be processed, filtered, and routed to different destinations. One of the things I also find very useful is edge processing, which allows me to process data closer to the source, which helps reduce unnecessary data and improve performance. Overall, flexibility and control over observability data are the things I appreciate most about Cribl.

Cribl handles large logs very efficiently by using its pipeline-based architecture, which I find most useful. It allows me to transform data through routing and filtering before sending it to downstream systems. When dealing with large volumes of logs, I can define pipelines that drop unnecessary fields and remove duplicate logs. There can be so many duplicates and redundancies that filtering them out significantly reduces the overall data volume. Another helpful capability is routing, which helps me route different types of logs to different destinations and prioritize fields that I want. For example, critical logs can be sent to one destination while lowering the priority of other logs, which are stored elsewhere. This helps me in large-scale log environments very effectively. Cribl also supports horizontal scaling, where I can add more worker nodes to handle increasing log volumes. This ensures my performance remains stable, even as log ingestion increases.
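The horizontal-scaling point above can be illustrated with a toy distribution scheme: events are spread across worker processes, so adding workers shrinks each worker's share of the load. The hashing approach below is a hypothetical stand-in, not a description of Cribl's internal load balancing.

```python
# Sketch of horizontal scaling: stable assignment of event keys to workers,
# so more workers means less load per worker. Purely illustrative.
import hashlib

def pick_worker(event_key, n_workers):
    """Deterministically assign an event key to one of n workers."""
    digest = hashlib.sha256(event_key.encode()).hexdigest()
    return int(digest, 16) % n_workers

# Spread 1,000 event keys over 2 workers, then 4: each worker's share
# shrinks as workers are added.
for n in (2, 4):
    load = [0] * n
    for i in range(1000):
        load[pick_worker(f"source-{i}", n)] += 1
    print(n, load)
```

Because the assignment depends only on the key and worker count, the same event always lands on the same worker, which keeps per-source processing consistent as the cluster grows.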

I have seen a decrease in log volume by using pipelines, which filter and optimize data before sending it downstream. For firewall logs specifically, the volume drops because unnecessary or repetitive events are filtered out. A firewall device generates a large number of logs, including deny logs, many of which are repetitive or not always useful; Cribl filters out the low-priority entries such as allowed traffic and routine events, and I remove unnecessary fields from the firewall logs, which reduces log size.
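The firewall-log reduction just described, dropping routine low-priority events and stripping fields that are not needed downstream, can be sketched like this. The action values, field names, and thresholds are hypothetical examples:

```python
# Sketch of firewall-log reduction: drop routine low-priority events
# (e.g. "allow" traffic) and strip fields unused downstream.
# All names below are illustrative, not a real firewall schema.

LOW_PRIORITY_ACTIONS = {"allow"}            # routine traffic to filter out
DROP_FIELDS = {"session_id", "nat_detail"}  # fields not needed downstream

def reduce_firewall_logs(events):
    """Keep only higher-priority events, with unneeded fields removed."""
    kept = []
    for event in events:
        if event.get("action") in LOW_PRIORITY_ACTIONS:
            continue  # routine event: not forwarded
        kept.append({k: v for k, v in event.items() if k not in DROP_FIELDS})
    return kept
```

On a chatty firewall where allowed traffic dominates, dropping that one class of event is often where most of the volume reduction comes from.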

What needs improvement?

The main downside of Cribl is that it is not very beginner-friendly. They could include tutorials or something more interactive for beginners. For experienced users, it works well. The learning curve is significant; learning Cribl from the initial stage for someone who doesn't have any background knowledge may be difficult. Since it offers lots of flexibility with pipelines and routing, it can take time for beginners to understand how everything works properly and to complete the configuration. The initial setup is also a little complex. Additionally, Cribl has limited built-in analytics compared to dedicated monitoring tools.

For how long have I used the solution?

I have been working with Cribl for between one and one and a half years.

How are customer service and support?

Technical support is very helpful. My experience with Cribl support has always been positive. They do not delay responses. The documentation covers almost everything for the use case, especially all the major features they include. For any issues I encounter, I was able to resolve them by using mostly documentation and community resources without needing to contact support directly. For technical clarification, if required, the available resources including guides and examples of best practices are quite helpful. The support ecosystem around Cribl is very good, and most issues are resolved quickly.

Which solution did I use previously and why did I switch?

I was previously using Splunk. Splunk was mostly used for storing, searching, and analyzing logs. Once I discovered Cribl, I found it more useful. Cribl helped me with managing, filtering, pipeline routing, and flexibility before sending data to destinations or monitoring tools. Cribl sits between a data source and an analytics tool, which helps me reduce my flow, save time, and optimize data volume. If I had to choose between Splunk and Cribl for filtering and routing, I would obviously choose Cribl. For analyzing and searching, I continue to use Splunk.

How was the initial setup?

The initial deployment of Cribl is not very beginner-friendly; a newcomer has to study it first and get to know everything about it. Once they get used to it, they will find it a very useful tool. For an experienced user who knows the relevant terms, it is very easy.

What's my experience with pricing, setup cost, and licensing?

For cost optimization, Cribl's pricing is moderate. I will not say it is too high or too low.

Which other solutions did I evaluate?

For something similar to Cribl, I have used Splunk.

What other advice do I have?

The maintenance for Cribl is relatively minimal. Most of the time, I focus on monitoring pipelines, which is manual work: I check the data flow and make small adjustments as needed, such as onboarding new log sources. I also review pipeline configurations to ensure logs are being filtered and routed correctly, and if log formats change or new data sources appear, I update the pipelines accordingly. I always monitor system performance and ensure the worker nodes are running properly; if log volume increases, I scale the nodes to handle the load. Overall, maintenance on my side is minimal. Once the pipelines and configurations are done, Cribl runs very smoothly with minimal manual intervention. Overall, I would rate Cribl nine out of ten.