
Reviews from AWS customers

15 AWS reviews

External reviews

44 reviews

External reviews are not included in the AWS star rating for the product.


    Atharva Khadsare

Search in place has reduced log ingestion and enables faster deep investigations

  • April 21, 2026
  • Review provided by PeerSpot

What is our primary use case?

I am working in a PLM environment, which is product lifecycle management. We deal with lots of system logs and tool integrations. I used Cribl Search for debugging system errors quickly and searching logs stored in long-term storage. Instead of pushing all logs into expensive tools, we used Cribl Search to directly investigate issues from stored data.

I am currently using Cribl Search only. I have some experience with Cribl Stream, which we are using for our data pipeline solution.

We have just started using the Search in Place feature because one of our team members recommended it. There is a lot of room for improvement in the way we query the data and the whole data processing pipeline. We weren't using any other tool before.

What is most valuable?

I have been using Cribl Search for a long time now, and I think Search in Place is a very good feature in Cribl Search. Unify Search is also valuable, where you can search data from multiple sources in one place. Fast investigation reduces steps from multiple tools to a single workflow. Pre-built search packs save effort to configure the dashboards and write the queries. It also works well with other Cribl tools.

The traditional way for certain places is that logs are generated, then sent to SIEM tools like Splunk, and then stored again before you can search them. This has problems including data duplication and high storage costs. With Search in Place in Cribl Search, logs stay in storage such as S3, data lakes, or archives. You can directly run queries on that data without any movement, duplication, or reprocessing. Advantages include cost reduction and faster investigation.
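The search-in-place model described here can be sketched in a few lines: a query runs directly over archived log files where they sit, with no copy or re-ingestion. This is a hypothetical Python illustration of the concept only, not Cribl's implementation; the newline-delimited JSON layout and the `level` field are assumptions.

```python
import json
from pathlib import Path

def search_in_place(archive_dir, predicate):
    """Scan newline-delimited JSON log archives where they are stored,
    yielding only matching events -- no movement, no re-ingestion."""
    for path in Path(archive_dir).glob("*.ndjson"):
        with path.open() as f:
            for line in f:
                event = json.loads(line)
                if predicate(event):
                    yield event

# Example: investigate past errors without reloading them into a SIEM.
# errors = list(search_in_place("/archive/april", lambda e: e.get("level") == "ERROR"))
```

Because only matching events ever leave the archive, storage stays cheap and the expensive analysis tools see none of the noise.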

Since we can directly query historical data where it is stored, there is an advantage of deep root cause analysis, which helps understand what happened in the past. This is useful for debugging recurring issues and is cost-efficient. It has helped me in faster troubleshooting because there is no need to reload old logs. We can investigate incidents after days, weeks, or even months. It has the ability to handle large data volumes, so there is no performance bottleneck.

We reduced unnecessary data ingestion by almost 40 to 50% using Search in Place. We could troubleshoot issues faster because data was already available for querying. It eliminates redundancy and keeps the architecture cleaner. As the data grows, we don't need to scale ingestion pipelines.

What needs improvement?

The user interface of Cribl Search could be simplified, because it is quite difficult for non-technical users to grasp. Built-in guided queries and better beginner tutorials would also make onboarding faster.

For how long have I used the solution?

I have been working with Cribl for eight to nine months.

What do I think about the stability of the solution?

Until now, we haven't had any downtimes. It has been working very well.

What do I think about the scalability of the solution?

It is pretty scalable horizontally. We started with one team member but now there are five to six people using it.

How are customer service and support?

We developers ask for support from our in-house IT team, but I don't know what conversation goes on between Cribl customer service and our IT team.

Which solution did I use previously and why did I switch?

We evaluated Splunk, but due to some reasons, we went with Cribl Search.

How was the initial setup?

Cribl Search was set up by the IT team, but they haven't complained about any issues or complexities that arose during the setup. I think the setup is pretty simple and not that complicated.

What about the implementation team?

The implementation was done by our internal IT team.

What was our ROI?

With Cribl, we have observed a 40 to 60% reduction in log volume hitting the firewall because Cribl filters unnecessary events and removes verbose fields.

There is reduced pipeline complexity and faster end-to-end workflow because data doesn't wait in ingestion queues. There is also optimized data processing cost because less data processed equals less compute plus storage cost. Other expensive tools are used only for critical data. There is a shift from processing to querying because traditional systems process first and query later, but Cribl stores data cheaply so we can query it when we need it.

Cribl has many filters to remove noise from the data and to remove verbose fields, which has been very good to work with.

Earlier, we had to process and store all logs in monitoring tools, which are very expensive, before analysis. After using Cribl Search, we streamlined the workflow by sending only critical data through pipelines and directly querying archive logs for investigation. This improved efficiency and reduced system load, which helped us indirectly optimize costs. We reduced the overall processing load by around 40%.
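The reduction workflow described above, dropping noisy events and stripping verbose fields before anything reaches the expensive tools, can be sketched as follows. This is a minimal illustration of the idea, not an actual Cribl pipeline function; the level names and field names are assumptions.

```python
def reduce_event(event,
                 drop_levels=("DEBUG", "TRACE"),
                 verbose_fields=("stack_trace", "raw_payload", "request_headers")):
    """Drop noisy events outright and strip verbose fields from the rest,
    so only lean, critical data is forwarded downstream."""
    if event.get("level") in drop_levels:
        return None  # filtered out entirely: never ingested downstream
    return {k: v for k, v in event.items() if k not in verbose_fields}

# Only the events that survive filtering travel through the pipeline:
# forwarded = [e for e in map(reduce_event, events) if e is not None]
```

Every event dropped or trimmed here is volume that never hits ingestion-based billing, which is where the 40 to 50 percent reductions reported in this review come from.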

What's my experience with pricing, setup cost, and licensing?

I'd highly recommend other organizations to use Cribl Search because it did help us a lot with data processing and everything.

What other advice do I have?

Cribl Search was set up by the IT team, but they haven't complained about any issues or complexities that arose during the setup, so I think the setup is pretty simple and not that complicated. I would rate this review an 8 out of 10.


    Pal Mavani

Data routing has simplified high-volume security log management and supports flexible processing

  • April 17, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl in a data management platform for IT security teams. My use cases include Stream, Edge, Search, and Lake.

What is most valuable?

I appreciate data routing the most about Cribl. I use it for data routing, data processing, and integration support. Cribl's ability to handle high volumes of diverse data types such as logs and metrics is impressive. It can easily handle logs because it is highly scalable and built to process millions of events per second, making it very easy to use.

What needs improvement?

What I dislike about Cribl are the documentation gaps and the setup complexity.

For how long have I used the solution?

I have been working with Cribl for one year.

What do I think about the stability of the solution?

Regarding stability, once the pipelines were properly set up, the ongoing maintenance was minimal and mostly involved small adjustments rather than major changes. Overall, Cribl is not maintenance heavy; it requires some maintenance on my end, but it is relatively low compared to traditional log pipelines.

What do I think about the scalability of the solution?

Cribl provides high availability through its distributed architecture; we achieve this by deploying multiple workers and using load balancing to ensure continuous data flow even during failures in the pipeline.

How was the initial setup?

The initial deployment is of medium difficulty because the setup is complex. It took me some time to set it up for the first time; a friend helped me, and I still found it difficult.

What other advice do I have?

I have not seen a significant decrease in firewall logs while working with Cribl; it is highly scalable, so that much of a decrease has not occurred.


    Abhay Gor

Data routing has become efficient and log volumes are reduced while monitoring improves

  • April 15, 2026
  • Review provided by PeerSpot

What is our primary use case?

I am using Cribl Stream for data routing and data processing as part of my company's IT team. We primarily use it for monitoring and collecting data.

What is most valuable?

One of the best features is integration support because it offers more than 80 to 90 sources and destinations via Cribl packs. Additionally, the security is very good because they offer encryption and access control to protect sensitive telemetry data. The data processing and reduction is also excellent because it filters unwanted fields and removes redundant data.

I have seen a decrease in my firewall logs by 50 to 60%.

Cribl allows me to handle high volumes of diverse data, such as logs and metrics, and it helps manage them effectively.

It is helpful because it handles diverse data types and can process logs, metrics, event streams, JSON, text, structured and unstructured data.

What needs improvement?

The user interface is acceptable, but I think a person who is just starting to use it will need to go through documentation because there is a steep learning curve to become familiar with Cribl Stream. The setup is also complex, and configuring integrations and pipelines for a large environment requires significant effort.

The areas that have room for improvement are the complex setup and better documentation, such as a user guide.

For how long have I used the solution?

I have been using this product for six to eight months.

What do I think about the stability of the solution?

Cribl performs time-to-time updates and maintenance, and it must be managed effectively because we are using it daily and have not experienced any issues for a long time. The team maintaining it must be performing their job very well.

What do I think about the scalability of the solution?

Horizontally, it is quite scalable, so I rate that a ten.

How are customer service and support?

I rate the technical support a nine, and I rate the stability an eight.

Which solution did I use previously and why did I switch?

I have used Splunk, and what Cribl does is it does not replace Splunk; it optimizes the data before sending it to Splunk, reducing cost and load. Therefore, Cribl is not a direct alternative to Splunk; they are complementary to each other.

How was the initial setup?

The deployment was quite easy.

I do not know exactly how long it took to deploy because I was not the one who deployed it on the cloud, but the ones who deployed it told me that it was quite easy to deploy and there were no complaints from them.

What about the implementation team?

Roughly five to six users use the solution.

What was our ROI?

I checked out Cribl Search once, and it helped me directly search from S3 data lakes, and it did help me save time and cost.

I have not analyzed the exact amount, but in ballpark terms, it saves about 10 to 20%.

I think it is cost-efficient because overall, after using Cribl, it helps users save cost and time. If you look at the big picture, it is cost-effective.

It saves me about 30 to 40% in terms of time and cost.

Which other solutions did I evaluate?

I would highly recommend it because it is cost-efficient, helps reduce noisy logs, and filters unnecessary fields.

What other advice do I have?

I gave this review a rating of nine.


    HarshShah2

Cribl has improved real-time infrastructure observability and optimizes server resource costs

  • April 10, 2026
  • Review provided by PeerSpot

What is our primary use case?

Our use case for Cribl is observability from an infrastructure point of view; we use Cribl for getting the logs from our infrastructure. The metrics or logs which we require from our servers or containers, or the platforms where we have deployed our product, necessitate real-time data processing, so Cribl helps us in that regard.

What is most valuable?

I love the Cribl Edge feature, an agent we can deploy directly on our servers; it is quite a good feature that helps collect data locally at the server level. Additionally, the search is good; we can search across all our data sources, and it is quite fast. The cost efficiency also helps in optimizing costs.

Cribl handles high volumes of diverse data types very well. We have around 200 to 250 in-house servers, and we require observability and visibility over those servers. We don't have a team that manages them, and we cannot hire too many people to manage 200 servers. Cribl provides visibility and helps in that regard; we get real-time metrics, allowing us to see when we need to increase the compute of our servers or when we have over-provisioned resources. It helps in optimizing costs at our infrastructure level, and Cribl is quite cost-efficient, helping in that aspect as well.

What needs improvement?

We haven't gone very deep into it, so we don't have a heavy use case; the cost optimization is the best thing about it. Cribl's UI is quite simple and minimal, which helps developers and the team get familiar with it early, but the functionality goes very deep. It becomes difficult when we don't need certain metrics for filtering, because Cribl provides many filtering functionalities that our lighter use case doesn't require. That has created some hindrance for us; otherwise, everything is quite good.

The function section is quite messy and includes too many functionalities that are generally not needed at a beginner level. At an advanced level, they are definitely needed to get precise logs and to filter out unnecessary data when the data stream is quite big, but at the initial level it is quite difficult to get exactly the data that is required.

For how long have I used the solution?

I used the solution about six months ago.

What do I think about the stability of the solution?

We haven't faced much regarding instability such as lagging or crashing; the backend team and support staff are quite nice, and we didn't encounter any significant issues with stability.

What do I think about the scalability of the solution?

Scaling with Cribl is very easy, both horizontally and vertically, so we don't have any hindrance in scaling the tool.

How are customer service and support?

My team has contacted technical support for some tasks they were facing issues with; they reported that the staff is quite nice, and the support is very good. However, we didn't require much support, only maybe twice or thrice.

Which solution did I use previously and why did I switch?

We used to utilize Node Exporter, Grafana, and Prometheus.

Cribl sits in between those tools; it does not replace any of them. Node Exporter helps collect the host metrics, Prometheus is responsible for scraping the metrics, and Grafana serves as a dashboard. Cribl assists with infrastructure observability without replacing any of the tools. We use all of them right now as well.

How was the initial setup?

Cribl's initial deployment is quite easy and nice; we didn't face any difficulties in doing that. Additionally, scaling it horizontally or vertically is very good.

What about the implementation team?

I lead my team; I don't set and manage deployment myself anymore. Initially, when we had a very small team, I started building it, but now my team handles all this.

What's my experience with pricing, setup cost, and licensing?

I'm not from the team that handles pricing; another department deals with that. However, the pricing appears to be good because I haven't been approached with concerns about why we are spending a particular amount. I think our pricing is fair.

What other advice do I have?

For our use case, I would give Cribl a score of 10 out of 10, but overall, if I rated it for a large organization that requires it, it would be fair to give an eight. I would rate this review as an 8 overall.


    reviewer2815500

Data pipelines have optimized log routing and currently reduce noise and monitoring costs

  • April 10, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl for data integration, pipelining, data monitoring, scalability, and to check how my monitor is working. The main product we use is Cribl Stream, which we use for log routing, filtering, and transforming data before sending it to our SIEM platform. This is the core part of our log management pipeline. Through Cribl Stream, we mainly work with features such as data pipelining, routing rules, and data transformation functions to control how logs move between different systems. My hands-on experience is primarily with Stream, since that is the component we rely on most for processing and optimizing log data in our environment.

What is most valuable?


One of the biggest advantages for my organization is better control over log data. We can filter, transform, and route logs before they reach downstream systems such as the SIEM platform, which helps reduce noise and focus only on relevant data. Another key benefit is cost optimization. By dropping unnecessary logs and sending only important data, we significantly reduce ingestion and storage costs in tools such as Splunk. It also improves operational efficiency.
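The filter-transform-route pattern described above can be sketched as a single routing decision made before ingestion. This is a hypothetical Python illustration, not Cribl Stream's actual routing-rule syntax; the field names, the severity threshold, and the destination names are assumptions.

```python
def route(event):
    """Pick a destination before ingestion: high-severity security events go
    to the SIEM, routine telemetry goes to cheap object storage, and
    health-check noise is dropped entirely."""
    if event.get("source") == "healthcheck":
        return None                         # noise: drop before it costs anything
    if event.get("severity", 0) >= 7 or event.get("type") == "auth_failure":
        return "siem"                       # critical: expensive, fast path
    return "s3_archive"                     # everything else: cheap storage
```

The cost benefit comes from the ordering: the decision happens in the pipeline layer, so the SIEM only ever bills for the events that genuinely need it.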

What needs improvement?

One key area is simplifying the user experience, especially for new users. Since it has multiple components such as metrics, traces, and detectors, making onboarding and navigation more intuitive would be beneficial. One area of improvement could be reducing the learning curve. Since it is a very flexible tool with powerful pipeline configuration, new users may take some time to fully understand how to design and optimize pipelines efficiently. Another improvement could be more pre-built templates or out-of-the-box integration of common data sources, which would help teams get started faster without building from scratch. I also think enhanced monitoring and troubleshooting visibility for pipelines would be helpful, especially in large environments where multiple data flows are being processed.

The main strength is its flexibility, scalability, and cost optimization benefits. It gives strong control over what data is processed and sent to downstream systems. The reason I would not give it a ten is mainly due to the learning curve and initial complexity, especially for new users. Some areas such as documentation or advanced troubleshooting could be improved.

For how long have I used the solution?

I have been working in the cybersecurity and security operations space for around one year.

What do I think about the stability of the solution?

Cribl is stable and reliable. I would rate stability and reliability at eight out of ten. In my experience, it is generally performing well.

What do I think about the scalability of the solution?

I would rate the scalability of Cribl at eight or nine out of ten. Its ability to handle a high volume of different data types would get a rating of eight or nine out of ten. It is designed to process large-scale telemetry data from multiple sources such as firewalls, cloud services, applications, and infrastructure. It can handle different formats such as JSON, syslog, and custom logs, and transform them within the pipeline with its distributed architecture. We can scale horizontally by adding worker nodes, which allows it to handle increased data volumes without major performance issues.

How are customer service and support?

We faced an issue with a pipeline dropping certain log events unexpectedly. We reached out to support, and they helped us analyze the pipeline configuration and logs. Initially, the response was general, but after sharing more details such as sample logs and pipeline rules, they were able to identify that the filter condition was incorrectly configured, which was causing the data to be dropped. They guided us on how to modify the rule and validate the data flow using a live preview, and we were able to resolve the issue very quickly. Overall, the support team was very helpful and knowledgeable, especially once the issue was clearly explained, and it helped us solve the problem without major downtime.

Which solution did I use previously and why did I switch?

Before Cribl, most log processing was handled directly within the SIEM platforms, mainly using tools such as Splunk native and sometimes Logstash for data processing. The limitation with that approach was that all the raw log data was first ingested into the SIEM, and then filtering or transformation were applied afterwards. This increased the data volume and cost complexity. We moved to Cribl to introduce a dedicated data pipeline layer before the SIEM, which allows us to filter, transform, and route data more efficiently before ingestion.

How was the initial setup?

As I am on the technical side, I was involved in the initial setup of Cribl. My role included configuring data sources, setting up pipelines, and defining routing and filtering rules based on our different requirements. I also worked on integrating Cribl with our SIEM platform, ensuring that only relevant and optimized data is forwarded. During the setup, we focused on designing efficient pipelines, testing data flow, and validating transformations to make sure everything was working correctly. Overall, the initial setup was not very complex, but it required proper planning to design the pipelines.

Which other solutions did I evaluate?

Before adopting Cribl, we did look at a few other approaches. Some of the evaluations were around using native capabilities within SIEM platforms such as Splunk, as well as open-source log processing tools such as Logstash for handling data pipelines. Those options can work for log collection and processing, but Cribl stood out because it provides a dedicated platform specifically designed for observability and security data pipelines. It offers more flexibility in routing, filtering, and transforming logs without heavily relying on the SIEM itself. The visual pipeline management and real-time visibility into data flow were also important factors that made Cribl a better fit for managing large volumes of log data across multiple systems. Based on those evaluations and references, we determined that Cribl was more relevant for our work, so we chose it.

What other advice do I have?

I would recommend starting with a few simple pipelines, then gradually expanding as you become more comfortable with the platform. A few improvements could make it even better. Overall, I would give Cribl a rating of 8.5 out of ten.


    Tirth Dhanani

Log routing has cut storage costs and saves significant time in daily monitoring workflows

  • April 08, 2026
  • Review from a verified AWS customer

What is our primary use case?

I use Cribl for filtering service logs and reducing data volume before sending it to Splunk to cut storage costs; it is mostly for log sharing while I am working in the PLM environment.

What is most valuable?

I have experience with Cribl Stream, and in that, I appreciate data routing, data processing, and reduction because it filters out unwanted fields, helps in removing redundant data, and has good integration support.

I have observed approximately 60% reduction in firewall logs.

Cribl was able to handle the volume of different data types, such as logs and metrics, and that is why I found it valuable. It is a good monitoring tool, and although there is a steep learning curve, once you gain hands-on experience, it is quite good.

I save roughly around 30 to 50% of operational time in log handling and everything.


I would definitely recommend Cribl to other users because it has helped me reduce my log handling time by 40 to 50%, and it also reduces the log volume by 30 to 40%, which cuts storage and SIEM costs. Additionally, the good real-time data processing filters and transforms the data before sending it to the tools. I would definitely recommend it to new users or prospective users.

What needs improvement?

When I started using the Cribl interface for managing log processing tasks, it was difficult for me to navigate. It took me a month or two to gain fluency with the software since I did not have hands-on experience initially, and I found that the documentation is not thorough enough to help users learn how to use Cribl.

The areas that have room for improvement include the documentation because it can be improved, mostly the documentation. Otherwise, I appreciate Cribl Stream, and for new users, it should be easier to understand and learn how to use the tool and how it can help them.

For how long have I used the solution?

I have been using Cribl Stream for about a year, 13 to 14 months.

What do I think about the stability of the solution?

I find Cribl quite stable, and I would give it a nine.

What do I think about the scalability of the solution?

Scalability is highly achievable with its distributed leader-worker architecture, so I would rate that a ten.

How are customer service and support?

I would rate the technical support an eight.

Which solution did I use previously and why did I switch?

I have used DataDog, and I find that Cribl is more about controlling the data before it reaches the tools, while DataDog is more about analyzing the data after it arrives, so there is a clear difference between both tools. However, it really depends on what you are using it for.

How was the initial setup?

It is not on-cloud; it is a hybrid model for deployment.

What about the implementation team?

Cribl does require maintenance; that part is handled by one of our team members, who manages the versioning, maintenance, and any new releases. It is well taken care of, and I have not heard a complaint from him about anything, so it must be good.

What's my experience with pricing, setup cost, and licensing?

I do not know about the pricing because I have not purchased it, as it was given to me by my organization.

Which other solutions did I evaluate?

I have not used Cribl Search yet, which includes the new Search in Place technology.

What other advice do I have?

I have used Cribl Edge once; it is a data collection agent, but I have not used it that much as I mainly use Cribl Stream.

There are roughly three to four users using Cribl right now; it is a small team of people.

I would give this review an overall rating of nine.


    Darsh Patel

Centralized data pipelines have reduced daily log volumes and optimize observability workflows

  • March 30, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl for optimizing Splunk data. For example, I have approximately 10 TB of daily data integrations. I route the data through Cribl, optimize it, and index it into Splunk, reducing it by 30 to 40 percent. For instance, at 10 TB of integrations, it becomes 5 TB after Cribl optimization. I use Cribl for firewall logs, event logs, Windows logs, metrics logs, and EDR logs.

What is most valuable?

The feature I appreciate is the connection between Splunk and Cribl, which is very useful for routing data and pipeline filtering. Cribl has a central management system that controls all data pipelines and configurations.

Cribl works centrally by using the main Cribl instance and managing configurations, pipelines, routing routes, and all worker nodes. The leader nodes act as a central node and manage pipelines, route packs, and configurations while distributing them to the worker nodes. The worker nodes process actual logs and send the processed logs to destinations such as Splunk, S3, and other SIEM tools.
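The leader/worker split described above can be sketched conceptually: one leader holds the configuration as a single source of truth and pushes it to workers, which do the actual event processing. This is a toy Python illustration of the architecture only, not Cribl's API; the class names and config fields are assumptions.

```python
class Leader:
    """Single source of truth for pipeline configuration; distributes it
    to every registered worker, mirroring the leader/worker split."""
    def __init__(self, config):
        self.config = config
        self.workers = []

    def register(self, worker):
        worker.config = dict(self.config)   # push config down to the worker
        self.workers.append(worker)

class Worker:
    """Processes actual events using the centrally distributed config,
    then hands results to a destination (Splunk, S3, another SIEM, ...)."""
    config = None

    def process(self, event):
        return {k: v for k, v in event.items()
                if k not in self.config["drop_fields"]}
```

Because workers only ever apply configuration the leader handed them, adding capacity is just registering more workers; no per-node configuration drift.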

What needs improvement?

Cribl pricing is a concern. Cribl Streams is very powerful but costly as it scales with data volumes. For large and heavy systems, it becomes pricey compared to other similar tools. While it is flexible, it is not beginner-friendly. Pipeline routes and transforms can feel complex at first.

For how long have I used the solution?

I have been using Cribl for my business for the last 1.5 years.

What do I think about the stability of the solution?

Sometimes Cribl goes down, and we miss logs during that time, which is an issue. Downtime is the only problem I face with Cribl; otherwise, we do not have any other issues. When there is downtime, we cannot get logs into Splunk, and the alerts based on those logs keep triggering repeatedly, creating multiple incidents and sending emails to our customers, which is very problematic.

What do I think about the scalability of the solution?

Cribl is excellent for scalability. It is good overall for pipeline maintenance, horizontal scaling, distributed architecture, parallel pipelines, and load balancing. We handle real-time data ranging from several GB up to a TB per day, which is a very high volume for observability pipelines. Multiple pipelines run at once and different data sources are processed independently. There are no significant bottlenecks, and managing configuration is straightforward. Overall, it is long-lasting and good for stability and scalability.

Which solution did I use previously and why did I switch?

As of now, I do not use any alternative to Cribl.

How was the initial setup?

The initial setup is moderate: not too hard and not too easy. For beginners it is moderate; for experienced people it is very easy. One person is enough for a Cribl deployment if you do not have a very large environment; otherwise, a large-scale environment needs several different specialists.

What about the implementation team?

All the nodes and components can be deployed from start to end within a certain timeframe. A quick setup following the official guide from the documentation takes approximately one hour. Normally, production setup takes one to three days. The breakdown is approximately two days for deployment and configuration, and the third and fourth days for pipelines and testing. A full enterprise deployment at a much higher level takes one to four weeks, depending on the difficulties and architecture involved.

What's my experience with pricing, setup cost, and licensing?

For a current user at a small level, the pricing is good, and at a large level it is not too heavy. The main pricing model is based on data integrations at approximately $0.32 per GB for the ST enterprise estimate, which is neither too high nor too low, falling within a medium-level range.
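Taking the reviewer's quoted per-GB rate at face value, the volume-based cost model is easy to sketch. The rate and the example volumes below are only illustrative figures built from this review, not official Cribl pricing.

```python
def monthly_cost(daily_gb, rate_per_gb=0.32, days=30):
    """Estimated monthly spend for a simple volume-based rate
    (rate taken from the reviewer's ballpark figure)."""
    return daily_gb * rate_per_gb * days

# 500 GB/day at $0.32/GB is $4,800/month; cutting volume 40% in the
# pipeline before it is billed drops that to $2,880/month.
print(monthly_cost(500))        # 4800.0
print(monthly_cost(500 * 0.6))  # 2880.0
```

This is why volume reduction in the pipeline translates directly into savings: the billed quantity is the post-filter volume, not the raw one.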


    Palak Kotak

Filtering has reduced daily data volumes and central routing now simplifies log management

  • March 27, 2026
  • Review provided by PeerSpot

What is our primary use case?

We work on Splunk, so we use Cribl. Our company works with a system where approximately 12 to 15 TB of data comes daily in Splunk. We don't store the data directly into Splunk; instead, we use Cribl first. By using Cribl, it removes unnecessary data and keeps the important data, which can reduce the size.

What is most valuable?

My favorite feature is that Cribl is connected with Splunk very easily and it routes the data. The filtering is the most important feature because it removes unwanted logs, and the central control manages everything from one place. Cribl provides pipelines, which process the data step-by-step, so all the features are very useful.

What needs improvement?

It is very difficult to learn as a beginner.

I sometimes experience downtime, and during those periods we miss logs, which creates a problem, though not for long.

For how long have I used the solution?

I have been using Cribl for four months.

What do I think about the stability of the solution?

There is occasional downtime during which we miss logs. It creates problems, but the outages are brief.

How are customer service and support?

I have a very good experience with customer support. When we are in trouble, they respond quickly and helpfully, which is very useful for us.

How was the initial setup?

The initial deployment was a little difficult for me as a beginner, but once you start learning and become experienced, it is very easy. One person can handle the whole setup without needing a large team.

What other advice do I have?

Cribl's interface is very good, and it is easy to understand how to use the product. When I started using Cribl, it wasn't that difficult to learn; I picked up how to pass data into it quickly. The good user interface makes my work easier. I would rate this product a nine out of ten.


    Vansh Godhani

Data pipelines have reduced log noise and now route critical observability events efficiently

  • March 25, 2026
  • Review provided by PeerSpot

What is our primary use case?

My primary use case for Cribl is to manage and optimize observability data before routing it to different destinations. I deal with a very large volume of logs coming from multiple sources, including large log systems. This includes system logs, application logs, and security-related logs. Using Cribl, I can filter unnecessary logs, transform the data as required, and route important data to the appropriate destinations. This helps me reduce data volume and improve performance. I also use pipeline configurations to control how logs flow through the entire system, which makes it easy to maintain data consistency and manage large log systems across different environments.

What is most valuable?

The most valuable thing or feature for me in Cribl is data routing and pipeline flexibility. Cribl allows me to define how data should be processed, filtered, and routed to different destinations. One of the things I also find very useful is edge processing, which allows me to process data closer to the source, which helps reduce unnecessary data and improve performance. Overall, flexibility and control over observability data are the things I appreciate most about Cribl.

Cribl handles large logs very efficiently by using its pipeline-based architecture, which I find most useful. It allows me to transform data through routing and filtering before sending it to downstream systems. When dealing with large volumes of logs, I can define pipelines that drop unnecessary fields and remove duplicate logs. There can be so many duplicates and redundancies that filtering them out significantly reduces the overall data volume. Another helpful capability is routing, which helps me route different types of logs to different destinations and prioritize fields that I want. For example, critical logs can be sent to one destination while lowering the priority of other logs, which are stored elsewhere. This helps me in large-scale log environments very effectively. Cribl also supports horizontal scaling, where I can add more worker nodes to handle increasing log volumes. This ensures my performance remains stable, even as log ingestion increases.
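The duplicate-removal idea mentioned above can be sketched minimally in Python. This is a conceptual illustration, not Cribl's pipeline language, and the dedupe key fields ("host", "message") are assumptions:

```python
# Illustrative dedupe stage: suppress repeated identical events.
# The key fields chosen here ("host", "message") are hypothetical.

def dedupe(events: list[dict]) -> list[dict]:
    """Keep only the first occurrence of each (host, message) pair."""
    seen: set[tuple] = set()
    out = []
    for e in events:
        key = (e.get("host"), e.get("message"))
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out


logs = [
    {"host": "web1", "message": "disk full"},
    {"host": "web1", "message": "disk full"},  # duplicate, dropped
    {"host": "web2", "message": "disk full"},
]
print(len(dedupe(logs)))  # → 2
```

In practice a real dedupe stage would usually add a time window so that genuinely recurring events still get through periodically, but the first-occurrence logic is the core of the volume reduction.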

I have seen a decrease in log volume by using pipelines, which filter and optimize data before sending it downstream. For firewall logs specifically, Cribl helps reduce volume by filtering unnecessary or repetitive events. A firewall device generates a large number of logs, including deny logs, many of which are repetitive or not always useful, so Cribl filters out the low-priority events such as allowed traffic and routine activity. I also remove unnecessary fields from firewall logs, which reduces the log size.

What needs improvement?

The main downside of Cribl is that it is not very beginner-friendly. They could include tutorials or something more interactive for beginners. For experienced users, it works well. The learning curve is significant; learning Cribl from the initial stage for someone who doesn't have any background knowledge may be difficult. Since it offers lots of flexibility with pipelines and routing, it can take time for beginners to understand how everything works properly and to complete the configuration. The initial setup is also a little complex. Additionally, Cribl has limited built-in analytics compared to dedicated monitoring tools.

For how long have I used the solution?

I have been working with Cribl for between one and one and a half years.

How are customer service and support?

Technical support is very helpful. My experience with Cribl support has always been positive, and they do not delay responses. The documentation covers almost everything for our use cases, especially all the major features. Any issues I encountered, I was able to resolve using mostly documentation and community resources without needing to contact support directly. When technical clarification was required, the available guides and best-practice examples were quite helpful. The support ecosystem around Cribl is very good, and most issues are resolved quickly.

Which solution did I use previously and why did I switch?

I was previously using Splunk, mostly for storing, searching, and analyzing logs. Once I discovered Cribl, I found it more useful for managing, filtering, pipeline routing, and flexibility before sending data to destinations or monitoring tools. Cribl sits between a data source and an analytics tool, which helps me streamline data flow, save time, and optimize data volume. If I had to choose between Splunk and Cribl for filtering and routing, I would obviously choose Cribl. For analyzing and searching, I continue to use Splunk.

How was the initial setup?

The initial deployment of Cribl is not very beginner-friendly. Beginners have to study first and get to know the product, but once they get used to it, they will find it a very useful tool. For an experienced user who knows the relevant terms, it is very easy.

What's my experience with pricing, setup cost, and licensing?

For cost optimization, Cribl's pricing is moderate. I will not say it is too high or too low.

Which other solutions did I evaluate?

For something similar to Cribl, I have used Splunk.

What other advice do I have?

The maintenance for Cribl is relatively minimal. Most of the time, I focus on monitoring pipelines, which is manual work: I check the data flow and make small adjustments as needed. Adding new log sources is also manual. I review pipeline configurations to ensure logs are being filtered and routed correctly, and if there are any changes in log formats or new data sources, I update the pipelines accordingly. Monitoring system performance and ensuring the worker nodes are running properly is something I always do. If the volume of logs increases, I scale the nodes to handle the load. Overall, maintenance from my side is minimal; once the pipelines and configurations are done, Cribl runs very smoothly with very little manual intervention. I would rate this solution a nine out of ten.


    Kasthuri Ganeshguru

Data routing has improved precision and flexibility while pricing and alerting still need work

  • March 24, 2026
  • Review provided by PeerSpot

What is our primary use case?

I use Cribl as our data ingestion source, with Cribl Edge agents installed across all servers. Cribl is used at the pipeline or routing level to send data to our SIEM platform.

Firewall logs are sent to Cribl, and Cribl routes specific logs to our SIEM tool while sending others to archive storage. This segregation and separation capability is not possible with any other tool, which makes me very satisfied. However, Cribl charges us for all firewall logs that it observes, not just what it processes and outputs.

What is most valuable?

Cribl performs parsing and field reduction exceptionally well, cutting down unnecessary fields and delivering only the right data. However, Cribl charges for everything it sees rather than just what it parses. We might ingest a large volume of data but only process about forty percent of it, yet we are charged for one hundred percent of the data ingested into Cribl.

The ability to bifurcate or trifurcate data and send it to multiple destinations is a feature we love. I have been a Splunk user for over eight years, and this is something Splunk did not have until Cribl introduced it specifically for this purpose.

Cribl handles logs, metrics, and various data sources really well. I have ingested up to fifty terabytes of data per day, and Cribl has never failed or caused trouble from that perspective. Cribl handles huge volumes of data exceptionally well.

What needs improvement?

A feature I would want Cribl to add in future releases is the ability to create a greater number of fleets. Currently, Cribl has a limitation on the number of fleets that can be created. In an enterprise environment, different types of servers belong to different applications and should be organized accordingly, as each has a different change management cycle and upgrade cycle. Cribl cannot be upgraded all at once, so we want to separate fleets so we can perform upgrades in batches rather than all in one shot. Increasing the number of fleets would be greatly appreciated.

Data cost is a concern, as Cribl charges for everything it sees rather than everything it processes. I do not see much cost-effectiveness from this approach. If we could do pre-processing before sending data to Cribl, then Cribl would be cheaper than other tools, but if we could do that, we would not need Cribl at all. This costing model has been concerning for a while. Better options based on user base, enterprise size, or data volume would be beneficial. More options to choose from for pricing tiers are needed, as the current offerings are very limited.

I have used Splunk previously and have been using Palo Alto XSIAM. Palo Alto XSIAM has integrated features from Cribl, Splunk, and Sentinel into one comprehensive tool, taking the best features from all three.

Another concern is that there is not much default alerting available for Cribl metrics, and custom alerting is also difficult to configure. For example, backpressure monitoring has only very limited use cases available out of the box when monitoring Cribl environment health. Cribl could increase the number of use cases and add guardrails around how much volume can be ingested. Options to create custom alerting would be helpful, such as alerts when certain metrics go down or up, or when the catchall is filling up. These options exist but are very complicated to set up. Even as someone who used Splunk for many years before transitioning to Cribl, I find it very difficult to navigate and create alerts in Cribl. Ease of use could be improved by providing default options that can be leveraged and customized as needed.

Cribl initial deployment was easy, but for large enterprise networks and big organizations, Cribl does not support operating systems earlier than 2012. This creates a problem, and a package should be available for anything below 2012 that works as expected. Currently, Cribl only approves packages for 2012 and above, but some organizations require applications to run on legacy servers. This option is not available, and we are unable to get Cribl installed without finding alternatives or going back to using Splunk to pull data and then stream it to Cribl. This causes significant operational challenges, and if this could be fixed with one version that supports everything below 2012, it would be greatly appreciated.

Cribl is deployed both on-premises and in the cloud. Cribl placed sample data in one of its YAML files containing examples of personal data such as social security numbers and credit card information. Because this YAML file was included in the Cribl package itself, vulnerability scanners detected it as a non-compliance or data loss concern, even though no actual personal information, API keys, or sensitive data was present; these were just examples provided by Cribl. Cribl fixed this issue in the latest version after we brought it to their attention. Going forward, I would like Cribl to think about this from a bigger enterprise perspective, as endpoint security tools will flag all of these concerns. It is not just about processing data but also about the problems faced when deploying in a large enterprise. This thought process needs to increase on Cribl's side.

For how long have I used the solution?

I have used Cribl for over a year.

How are customer service and support?

A dedicated support portal is available, and support cases are usually raised through a dedicated email. Responses are received at reasonable times, so this has not been a problem. I would give support a rating of seven out of ten.