
Overview
Cribl Product Overview
The way telemetry data was managed over the last 10 years will not work for the next 10. Cribl is purpose-built to meet the unique challenges IT and Security teams face.
Cribl.Cloud is the easiest way to try Cribl products in the cloud through a unified platform. Cribl's suite of products gives flexibility and control back to customers. With routing, shaping, enriching, and search functionality that makes data more manageable, you can easily clean up your data, get it where it needs to be, work more efficiently, and ultimately gain the control and confidence needed to be successful.
The Cribl.Cloud suite of products includes:
Stream: A highly scalable data router for data collection, reduction, enrichment, and routing of observability data.
Edge: An intelligent, scalable edge-based data collection system for logs, metrics, and application data.
Lake: Storage that does not lock data in. Cribl Lake is a turnkey data lake that makes it easy and economical to store, access, replay, and analyze data, with no expertise needed.
Search: A search feature to perform federated search-in-place queries on any data, in any form.
Getting Started
When you purchase your Cribl.Cloud subscription directly from the AWS Marketplace, you can experience a smooth billing process that you're already familiar with, without needing to set up a separate procurement plan to use Cribl products. Track billing and usage directly in Cribl.Cloud.
Enjoy a quick and easy purchasing experience by utilizing your existing spend commitments through the AWS Enterprise Discount Program (EDP) to subscribe to Cribl.Cloud. Get flexible pricing and terms by purchasing through a private offer. Purchase the Cribl Cloud Suite of offerings at a pre-negotiated price. Contact awsmp@cribl.io or a sales representative for flexible pricing for 12/24/36-month terms.
We are available in US-West-2 (Oregon), US-East-2 (Ohio), US-East-1 (Virginia), CA-Central-1 (Canada Central), EU-West-2 (London), EU-Central-1 (Frankfurt), and AP-Southeast-2 (Sydney) with more regions coming soon! Regional pricing will apply.
To learn more about pricing and the consumption pricing philosophy, please visit:
- Cribl Pricing: https://cribl.io/cribl-pricing/
- Cribl.Cloud Simplified with Consumption Pricing blog: https://cribl.io/blog/cribl-cloud-consumption-pricing/
Highlights
- Fast and easy onboarding - With zero-touch deployment, you can quickly start using Cribl products without the hassle, burden, and cost of managing infrastructure.
- Instant scalability - The cloud provides flexibility to easily scale up or down to meet changing business needs and dynamic data demands.
- Trusted security - Cribl knows how important protecting data is, and built all Cribl products and services from the ground up with security as the top priority. Cribl.Cloud is SOC 2 compliant, ensuring all your data is protected and secure. Cribl.Cloud is currently In Process for FedRAMP IL4.
Details
Introducing multi-product solutions
You can now purchase comprehensive solutions tailored to use cases and industries.
Pricing
Free trial
| Dimension | Description | Cost/12 months |
|---|---|---|
| Cribl.Cloud Free | Cribl.Cloud Suite Free Tier | $0.00 |
| Cribl.Cloud Enterprise | Cribl.Cloud Suite Enterprise with 1TB daily ingestion | $142,800.00 |
The following dimensions are not included in the contract terms and are charged based on your usage.
| Dimension | Cost/unit |
|---|---|
| Overage Fees | $0.01 |
Vendor refund policy
Cribl will refund prior payments attributable to the unused remainder of your purchase.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Software as a Service (SaaS)
SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.
Additional details
Usage instructions
Cribl Cloud Trust IAM Role CloudFormation Template
This CloudFormation template creates an IAM role that allows Cribl Cloud to access specific AWS resources in your account. The role is designed to provide Cribl Cloud with the necessary permissions to interact with S3 buckets and SQS queues.
Template Overview
The template does the following:
- Creates an IAM role named CriblTrustCloud
- Configures a trust relationship with Cribl Cloud's AWS account
- Attaches a policy that grants access to S3 and SQS resources
- Outputs the role name, ARN, and an external ID for authentication
Parameters
- CriblCloudAccountID: The AWS account ID of Cribl Cloud (default: '012345678910')
IAM Role Details
Trust Relationship
The role trusts two specific roles in the Cribl Cloud account:
- arn:aws:iam::{CriblCloudAccountID}:role/search-exec-main
- arn:aws:iam::{CriblCloudAccountID}:role/main-default
These roles can assume the CriblTrustCloud role using the sts:AssumeRole, sts:TagSession, and sts:SetSourceIdentity actions.
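Based on that description, the role's trust relationship would look roughly like the following CloudFormation fragment. This is a sketch, not the vendor's exact template; the ExternalId condition described below is omitted here for brevity, and `CriblCloudAccountID` is the template parameter.

```yaml
CriblTrustCloud:
  Type: AWS::IAM::Role
  Properties:
    RoleName: CriblTrustCloud
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS:
              # The two Cribl Cloud roles allowed to assume this role
              - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
              - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
          Action:
            - sts:AssumeRole
            - sts:TagSession
            - sts:SetSourceIdentity
```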
Permissions
The role has a policy named CriblCloudS3SQSPolicy that grants the following permissions:
- S3 access:
- List buckets
- Get and put objects
- Get bucket location
- SQS access:
- Receive and delete messages
- Change message visibility
- Get queue attributes and URL
These permissions apply to all S3 buckets and SQS queues in the account.
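As a policy document, the permissions listed above could be sketched like this. The statement layout and wildcard `Resource` are assumptions consistent with "all S3 buckets and SQS queues in the account"; the actual CriblCloudS3SQSPolicy in the template is authoritative.

```yaml
# Sketch of the CriblCloudS3SQSPolicy inline policy described above
PolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Action:
        - s3:ListBucket
        - s3:GetObject
        - s3:PutObject
        - s3:GetBucketLocation
      Resource: '*'   # all S3 buckets in the account
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:ChangeMessageVisibility
        - sqs:GetQueueAttributes
        - sqs:GetQueueUrl
      Resource: '*'   # all SQS queues in the account
```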
Security Feature
The template includes a security feature that requires an external ID for authentication. This external ID is derived from the CloudFormation stack ID, providing an additional layer of security when assuming the role.
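In CloudFormation intrinsic functions, deriving the external ID from the stack ID can be expressed as follows (a sketch consistent with the template's Outputs). A stack ID has the form `arn:aws:cloudformation:region:account:stack/name/uuid`, so this selects the stack's UUID and then its last hyphen-separated segment.

```yaml
Condition:
  StringEquals:
    'sts:ExternalId': !Select
      - 4                         # last segment of the UUID
      - !Split
        - '-'
        - !Select
          - 2                     # the UUID portion of the stack ARN
          - !Split
            - '/'
            - !Ref 'AWS::StackId'
```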
Outputs
The template provides three outputs:
- RoleName: The name of the created IAM role
- RoleArn: The ARN of the created role
- ExternalId: The external ID required for authentication when assuming the role
Usage
To use this template:
- Deploy it in your AWS account using CloudFormation
- Provide the resulting role ARN and external ID to Cribl Cloud
- Cribl Cloud can then assume this role to access your S3 and SQS resources
Remember to review and adjust the permissions as necessary to align with your security requirements and the specific needs of your Cribl Cloud integration.
Enable CloudTrail and VPC Flow Logging for Cribl Cloud
This document explains the resources that will be created when deploying the provided CloudFormation template. The template is designed to create an IAM role that trusts Cribl Cloud and sets up CloudTrail and VPC Flow logging to an S3 bucket.
Template Overview
The template automates the creation of AWS resources to enable centralized logging, specifically focusing on CloudTrail logs and VPC Flow Logs. It creates S3 buckets for storing these logs, SQS queues for triggering processes upon log arrival, and an IAM role to allow Cribl Cloud to access these logs.
Resources Created
Here's a breakdown of the resources defined in the CloudFormation template:
- CriblCTQueue (AWS::SQS::Queue): Creates an SQS queue named according to the CTSQS parameter (default: cribl-cloudtrail-sqs). This queue is used to trigger actions when new CloudTrail logs are written to the S3 bucket.
  - Properties:
    - QueueName: !Ref CTSQS - Sets the queue name to the value of the CTSQS parameter.
- CriblCTQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblCTQueue, allowing s3.amazonaws.com to send messages to the queue. The policy includes a condition that the source account must match the AWS account ID in which the stack is deployed, ensuring only S3 events from the current AWS account can trigger the queue.
  - Properties:
    - Queues: !Ref CTSQS - Associates the policy with the SQS queue.
    - PolicyDocument:
      - Statement:
        - Effect: Allow - Allows the actions specified in the policy.
        - Principal: Service: s3.amazonaws.com - Specifies the service that can perform the actions.
        - Action: SQS:SendMessage - Allows sending messages to the queue.
        - Resource: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue.
        - Condition: StringEquals: 'aws:SourceAccount': !Ref AWS::AccountId - Restricts the source account to the account where the stack is deployed.
- TrailBucket (AWS::S3::Bucket): Creates an S3 bucket used to store CloudTrail logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblCTQueue when a new object is created (specifically, a PUT operation), triggering processing when new CloudTrail logs are available.
  - DependsOn: CriblCTQueuePolicy - Ensures that the queue policy is created before the bucket.
  - Properties:
    - NotificationConfiguration:
      - QueueConfigurations:
        - Event: s3:ObjectCreated:Put - Triggers the notification when an object is created using a PUT operation.
        - Queue: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue to send the notification to.
- TrailBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the TrailBucket. This policy grants permissions to:
  - delivery.logs.amazonaws.com: Allows the AWS log delivery service to write objects to the bucket, ensuring proper log delivery. It requires the bucket-owner-full-control ACL.
  - cloudtrail.amazonaws.com: Allows CloudTrail to get the bucket ACL and put objects into the bucket. It also requires the bucket-owner-full-control ACL.
  - A Deny statement that enforces the use of SSL for all requests to the bucket, enhancing security.
  - Properties:
    - Bucket: !Ref TrailBucket - The name of the S3 bucket.
    - PolicyDocument:
      - Version: 2012-10-17 - The version of the policy document.
      - Statement:
        - Sid: AWSLogDeliveryWrite
          - Effect: Allow - Allows the action specified.
          - Principal: Service: delivery.logs.amazonaws.com - The AWS log delivery service principal.
          - Action: s3:PutObject - Allows putting objects into the bucket.
          - Resource: !Sub '${TrailBucket.Arn}/AWSLogs/' - The S3 bucket and prefix to allow the action on.
          - Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control - Requires the bucket-owner-full-control ACL.
        - Sid: AWSCloudTrailAclCheck
          - Effect: Allow
          - Principal: Service: cloudtrail.amazonaws.com
          - Action: s3:GetBucketAcl
          - Resource: !Sub '${TrailBucket.Arn}'
        - Sid: AWSCloudTrailWrite
          - Effect: Allow
          - Principal: Service: cloudtrail.amazonaws.com
          - Action: s3:PutObject
          - Resource: !Sub '${TrailBucket.Arn}/AWSLogs/*/*'
          - Condition: StringEquals: 's3:x-amz-acl': 'bucket-owner-full-control'
        - Sid: AllowSSLRequestsOnly
          - Effect: Deny
          - Principal: '*' - Applies to all principals.
          - Action: s3:* - Denies all S3 actions.
          - Resource: !GetAtt TrailBucket.Arn and !Sub '${TrailBucket.Arn}/*'
          - Condition: Bool: 'aws:SecureTransport': false - Denies requests that do not use SSL.
- ExternalTrail (AWS::CloudTrail::Trail): Creates a CloudTrail trail configured to store logs in the TrailBucket, include global service events, enable logging, create a multi-region trail, and enable log file validation.
  - DependsOn: TrailBucket, TrailBucketPolicy
  - Properties:
    - S3BucketName: !Ref TrailBucket - The S3 bucket where the logs will be stored.
    - IncludeGlobalServiceEvents: true - Includes global service events.
    - IsLogging: true - Enables logging.
    - IsMultiRegionTrail: true - Creates a multi-region trail.
    - EnableLogFileValidation: true - Enables log file validation.
    - TrailName: !Sub '${TrailBucket}-trail' - Sets the name of the trail.
- CriblVPCQueue (AWS::SQS::Queue): Creates an SQS queue named according to the VPCSQS parameter (default: cribl-vpc-sqs). This queue is used to trigger actions when new VPC Flow Logs are written to the S3 bucket.
  - Properties:
    - QueueName: !Ref VPCSQS - Sets the queue name.
- CriblVPCQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblVPCQueue, allowing s3.amazonaws.com to send messages to the queue. Like CriblCTQueuePolicy, it restricts access to events originating from the same AWS account.
  - Properties:
    - Queues: !Ref VPCSQS
    - PolicyDocument:
      - Statement:
        - Effect: Allow
        - Principal: Service: s3.amazonaws.com
        - Action: SQS:SendMessage
        - Resource: !GetAtt CriblVPCQueue.Arn
        - Condition: StringEquals: 'aws:SourceAccount': !Ref "AWS::AccountId"
- LogBucket (AWS::S3::Bucket): Creates an S3 bucket used to store VPC Flow Logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblVPCQueue when new objects are created.
  - DependsOn: CriblVPCQueuePolicy
  - Properties:
    - NotificationConfiguration:
      - QueueConfigurations:
        - Event: s3:ObjectCreated:Put
        - Queue: !GetAtt CriblVPCQueue.Arn
- LogBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the LogBucket. This policy:
  - Allows delivery.logs.amazonaws.com (the AWS log delivery service) to write objects to the bucket, requiring the bucket-owner-full-control ACL.
  - Allows delivery.logs.amazonaws.com to get the bucket ACL.
  - Enforces SSL for all requests to the bucket.
  - Properties:
    - Bucket: !Ref LogBucket
    - PolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Sid: AWSLogDeliveryWrite
          - Effect: Allow
          - Principal: Service: delivery.logs.amazonaws.com
          - Action: s3:PutObject
          - Resource: !Sub '${LogBucket.Arn}/AWSLogs/${AWS::AccountId}/*'
          - Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control
        - Sid: AWSLogDeliveryAclCheck
          - Effect: Allow
          - Principal: Service: delivery.logs.amazonaws.com
          - Action: s3:GetBucketAcl
          - Resource: !GetAtt LogBucket.Arn
        - Sid: AllowSSLRequestsOnly
          - Effect: Deny
          - Principal: '*'
          - Action: s3:*
          - Resource: !GetAtt LogBucket.Arn and !Sub '${LogBucket.Arn}/*'
          - Condition: Bool: 'aws:SecureTransport': false
- FlowLog (AWS::EC2::FlowLog): Creates a VPC Flow Log that captures network traffic information for the VPC specified in the VPCId parameter. The flow logs are stored in the LogBucket; the type of traffic to log is determined by the TrafficType parameter (ALL, ACCEPT, or REJECT).
  - Properties:
    - LogDestination: !Sub 'arn:${AWS::Partition}:s3:::${LogBucket}' - The ARN of the S3 bucket where the flow logs will be stored.
    - LogDestinationType: s3 - Specifies that the destination is an S3 bucket.
    - ResourceId: !Ref VPCId - The ID of the VPC to log.
    - ResourceType: VPC - Specifies that the resource is a VPC.
    - TrafficType: !Ref TrafficType - The type of traffic to log (ALL, ACCEPT, REJECT).
- CriblTrustCloud (AWS::IAM::Role): Creates an IAM role that allows Cribl Cloud to access AWS resources.
  - Properties:
    - Description: Role to provide access to AWS resources from Cribl Cloud Trust
    - AssumeRolePolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Effect: Allow
        - Principal: AWS:
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
        - Action: sts:AssumeRole, sts:TagSession, sts:SetSourceIdentity
        - Condition: StringEquals: 'sts:ExternalId': !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
    - Policies:
      - PolicyName: SQS
        - PolicyDocument:
          - Version: 2012-10-17
          - Statement:
            - Effect: Allow
            - Action: sqs:ReceiveMessage, sqs:DeleteMessage, sqs:GetQueueAttributes, sqs:GetQueueUrl
            - Resource: !GetAtt CriblCTQueue.Arn and !GetAtt CriblVPCQueue.Arn
      - PolicyName: S3EmbeddedInlinePolicy
        - PolicyDocument:
          - Version: 2012-10-17
          - Statement:
            - Effect: Allow
            - Action: s3:ListBucket, s3:GetObject, s3:PutObject, s3:GetBucketLocation
            - Resource: !Sub '${TrailBucket.Arn}', !Sub '${TrailBucket.Arn}/*', !Sub '${LogBucket.Arn}', !Sub '${LogBucket.Arn}/*'
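Putting the pieces together, the S3-to-SQS notification pattern used by both bucket/queue pairs can be sketched as the following CloudFormation fragment (shown for the CloudTrail pair; resource names match the descriptions above, but property details may differ from the actual template — note the QueuePolicy's Queues property takes the queue URL, obtained here via !Ref on the queue resource):

```yaml
CriblCTQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Ref CTSQS

CriblCTQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - !Ref CriblCTQueue           # queue URL
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Action: SQS:SendMessage
          Resource: !GetAtt CriblCTQueue.Arn
          Condition:
            StringEquals:
              'aws:SourceAccount': !Ref 'AWS::AccountId'

TrailBucket:
  Type: AWS::S3::Bucket
  DependsOn: CriblCTQueuePolicy     # policy must exist before S3 can publish
  Properties:
    NotificationConfiguration:
      QueueConfigurations:
        - Event: s3:ObjectCreated:Put
          Queue: !GetAtt CriblCTQueue.Arn
```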
Parameters
The template uses parameters to allow customization during deployment:
- CriblCloudAccountID: The AWS account ID of the Cribl Cloud instance. This is required for the IAM role's trust relationship.
  - Description: Cribl Cloud Trust AWS Account ID. Navigate to Cribl.Cloud, go to Workspace and click Access. Find the Trust and copy the AWS account ID found in the trust ARN.
  - Type: String
  - Default: '012345678910'
- CTSQS: The name of the SQS queue for CloudTrail logs.
  - Description: Name of the SQS queue for CloudTrail to trigger for S3 log retrieval.
  - Type: String
  - Default: cribl-cloudtrail-sqs
- TrafficType: The type of traffic to log for VPC Flow Logs (ALL, ACCEPT, REJECT).
  - Description: The type of traffic to log.
  - Type: String
  - Default: ALL
  - AllowedValues: ACCEPT, REJECT, ALL
- VPCSQS: The name of the SQS queue for VPC Flow Logs.
  - Description: Name of the SQS queue for VPC Flow Logs.
  - Type: String
  - Default: cribl-vpc-sqs
- VPCId: The ID of the VPC for which to enable flow logging.
  - Description: Select your VPC to enable logging.
  - Type: AWS::EC2::VPC::Id
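As YAML, the parameter block described above would look roughly like this (a sketch assembled from the descriptions; defaults are the placeholders listed above):

```yaml
Parameters:
  CriblCloudAccountID:
    Description: Cribl Cloud Trust AWS Account ID
    Type: String
    Default: '012345678910'   # placeholder; use your Cribl Cloud trust account ID
  CTSQS:
    Description: Name of the SQS queue for CloudTrail to trigger for S3 log retrieval
    Type: String
    Default: cribl-cloudtrail-sqs
  TrafficType:
    Description: The type of traffic to log
    Type: String
    Default: ALL
    AllowedValues: [ACCEPT, REJECT, ALL]
  VPCSQS:
    Description: Name of the SQS queue for VPC Flow Logs
    Type: String
    Default: cribl-vpc-sqs
  VPCId:
    Description: Select your VPC to enable logging
    Type: AWS::EC2::VPC::Id
```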
Outputs
The template defines outputs that provide key information about the created resources:
- CloudTrailS3Bucket: The ARN of the S3 bucket storing CloudTrail logs.
  - Description: Amazon S3 Bucket for CloudTrail Events
  - Value: !GetAtt TrailBucket.Arn
- VPCFlowLogsS3Bucket: The ARN of the S3 bucket storing VPC Flow Logs.
  - Description: Amazon S3 Bucket for VPC Flow Logs
  - Value: !GetAtt LogBucket.Arn
- RoleName: The name of the created IAM role.
  - Description: Name of created IAM Role
  - Value: !Ref CriblTrustCloud
- RoleArn: The ARN of the created IAM role.
  - Description: Arn of created Role
  - Value: !GetAtt CriblTrustCloud.Arn
- ExternalId: The external ID used for authentication when assuming the IAM role.
  - Description: External Id for authentication
  - Value: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
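The outputs listed above correspond to a YAML block along these lines (a sketch; the ExternalId expression mirrors the trust policy's condition, so the value surfaced here is the same one Cribl Cloud must present when assuming the role):

```yaml
Outputs:
  CloudTrailS3Bucket:
    Description: Amazon S3 Bucket for CloudTrail Events
    Value: !GetAtt TrailBucket.Arn
  VPCFlowLogsS3Bucket:
    Description: Amazon S3 Bucket for VPC Flow Logs
    Value: !GetAtt LogBucket.Arn
  RoleName:
    Description: Name of created IAM Role
    Value: !Ref CriblTrustCloud
  RoleArn:
    Description: Arn of created Role
    Value: !GetAtt CriblTrustCloud.Arn
  ExternalId:
    Description: External Id for authentication
    # Last hyphen-separated segment of the stack's UUID
    Value: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
```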
Deployment Considerations
- Cribl Cloud Account ID: Ensure the CriblCloudAccountID parameter is set to the correct AWS account ID for your Cribl Cloud instance. This is crucial for establishing the trust relationship.
- S3 Bucket Names: S3 bucket names must be globally unique. If the template is deployed multiple times in the same region, you may need to adjust the names of the buckets. Consider using a Stack name prefix.
- VPC ID: The VPCId parameter should be set to the ID of the VPC for which you want to enable flow logging.
- Security: Regularly review and update IAM policies to adhere to the principle of least privilege. Consider using more restrictive S3 bucket policies if necessary.
- SQS Queue Configuration: Monitor the SQS queues for backlog and adjust the processing capacity accordingly.
- CloudTrail Configuration: Confirm that CloudTrail is properly configured to deliver logs to the designated S3 bucket.
- VPC Flow Log Configuration: Verify that VPC Flow Logs are correctly capturing network traffic.
- External ID: The External ID is a critical security measure for cross-account access. Make sure it's correctly configured in both AWS and Cribl Cloud.
This detailed explanation provides a comprehensive understanding of the resources created by the CloudFormation template, enabling informed deployment and management. Remember to adapt parameters to your specific environment and security requirements.
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
FedRAMP
GDPR
HIPAA
ISO/IEC 27001
PCI DSS
SOC 2 Type 2
Standard contract
Customer reviews
Data pipelines have reduced noise and now send controlled, optimized logs to security tools
What is our primary use case?
Cribl's main use case in our company is log routing and data optimization before sending it into our SIEM platform. In our environment, we collect logs from multiple sources like endpoints, applications, and infrastructure, and Cribl helps us process the data in the pipeline before it reaches the SIEM. We can filter unnecessary logs, transform fields when needed, drop unneeded fields, and add necessary fields from eval functions through pipelines, then route the data to different destinations depending on the use.
In our environment, for log routing and data optimization in our pipeline using Cribl, we were receiving firewall data from different parts of the country. The issue was related to time zone differences. We had to convert the time zone of all the firewall logs into GMT format. We used Cribl's pipeline to convert all the firewall logs, which were in different time zones, to GMT time zone, and then routed it to our main SIEM platform.
What is most valuable?
The best features Cribl offers include the ability to see the data flow right away while the data is flowing. Capturing live data is a very good feature. We get many different functions to transform data in the pipeline. Another feature we really like is the pipeline-based processing, where we can easily create rules for parsing, masking, or modifying log fields.
Seeing the live data flow with Cribl has definitely been helpful. It makes it much easier to see how logs are moving through the pipeline in real-time and understand where transformations or routing are happening, or where the data is breaking, or where the error is coming from—whether it is from the source only or breaking at the pipeline. There was a situation where we were not seeing certain logs reaching our SIEM platform, even though the source system was generating them. Using the live data preview in Cribl, we were able to trace the logs through the pipeline and quickly identify that a filtering rule was unintentionally dropping some events. Because of that visibility, we could adjust the pipeline rule immediately and verify the fix in real-time. Instead of spending a lot of time troubleshooting across multiple systems, the transparency in the data pipeline really speeds up debugging and operational monitoring for us.
Cribl has had a positive impact on our organization mainly in terms of better control over our log data and improved efficiency in our log management pipeline. Before using a tool like Cribl, a lot of raw logs would directly go into SIEM, which could create noise and increase ingestion volume. With Cribl, we are able to filter unnecessary events, transform logs, and route data more intelligently before it reaches the SIEM. This helps ensure that the security team is working with more relevant and structured data, which improves analysis and detection workflow.
What needs improvement?
Cribl is a very capable platform, but one area where it could improve is the learning curve for new users. Since it offers a lot of flexibility in building pipelines and transformations, it can take some time for beginners to fully understand how to design efficient pipelines. Another platform we have used provides a workflow-like UI where you can directly configure the source, the pipeline, and the destination, which we think Cribl is lacking. We know there is a Quick Connect option, but it is not that efficient from our perspective. Another improvement could be more built-in templates or pre-configured pipelines for common log sources, which would help teams move faster, especially when integrating new data sources. Also, while the platform provides good visibility into data flow and enhanced troubleshooting and monitoring, deeper insights into pipeline performance could make debugging even easier in larger environments.
One thing that Cribl could improve is the workflow creation of source, pipeline, and the destination, which we still feel is lacking in Cribl.
What do I think about the stability of the solution?
Cribl is generally a stable platform, especially when it's properly deployed and monitored. It is designed to handle large volumes of telemetry data like logs and metrics, and many organizations run it as a central data pipeline without major downtime issues.
What do I think about the scalability of the solution?
Cribl is quite scalable, especially for environments that handle large volumes of logs and telemetry data. The architecture allows you to scale both vertically and horizontally, depending on the workload. For example, you can scale up by adding more CPUs and memory to a single instance or scale out by adding more worker nodes to distribute the processing load across multiple systems. This distributed worker architecture helps handle increasing data volumes and more complex pipelines without significantly affecting performance. Another advantage is that the load can be balanced across worker nodes, which allows the platform to process very large streams of data efficiently and maintain high throughput. Cribl scales very well for enterprise environments where log volumes keep growing and multiple data sources need to be processed simultaneously.
How are customer service and support?
Cribl's customer support has been quite good whenever teams run into issues or need guidance with pipeline configuration or deployments. The support team is generally responsive and knowledgeable. Based on what we have seen and heard from other users as well, support tickets are usually handled quickly, and the team tends to understand technical problems well, which helps resolve issues efficiently.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
Before using Cribl, most of the log processing was handled directly within the SIEM platform itself, mainly using native parsing and filtering capabilities in tools such as Splunk. While that works, it means the raw logs first get ingested into the SIEM, and then you handle the transformation or filtering afterward. The reason for moving toward Cribl was mainly to introduce a dedicated data pipeline layer before the SIEM.
Before adopting Cribl, we did evaluate a few other approaches. Some of the evaluation was around using native capabilities within SIEM platforms like Splunk, as well as open-source log processing tools like Logstash for handling data pipelines. Those options can work for log collection and processing, but Cribl stood out because it provides a dedicated platform specifically designed for observability and security data pipelines. It offers more flexibility, routing, filtering, and transforming logs without heavily relying on the SIEM itself. That is why we chose Cribl over any other platform.
How was the initial setup?
In terms of the setup, the initial deployment was not very complicated, especially if you already have experience with log pipelines and SIEM integrations. Most of the effort usually goes into designing the pipeline and configuring the routing and transformation rather than licensing or installation itself. Overall, the model feels fairly aligned with modern observability tools, where you can scale usage based on your data volume and infrastructure needs.
What was our ROI?
We have seen a positive return on investment from using Cribl, mainly through better data control and operational efficiency. One of the biggest benefits is the reduction in unnecessary log ingestion into the SIEM. By filtering and routing logs through Cribl first, we avoid sending low-value or redundant data downstream, which helps optimize the storage and licensing costs.
One noticeable outcome from using Cribl has been better control over the volume of data being sent to the SIEM. By filtering unnecessary logs and routing only relevant events, we were able to reduce the overall log ingestion volume, which indirectly helps with storage and licensing costs. Another improvement is in operational efficiency: because the data is already cleaned and structured in the pipeline, it is easier for analysts to search and investigate events in the SIEM, which can speed up investigations.
What other advice do I have?
Another feature that we found very useful about Cribl is the ease of integration with multiple destinations. We just have to route the main pipeline to multiple destinations, and it will go to multiple destinations. Sometimes the data needs to be routed to different platforms for security monitoring, observability, or long-term storage. Cribl makes it very easy to send the same data to multiple destinations with different processing rules. We also like the flexibility in data transformation. If log formats change or we need to mask sensitive information or normalize fields, we can handle that directly in the pipeline without modifying the source system.
The pricing and the licensing model for Cribl seem quite flexible, although the purchasing was handled by our organization rather than by us directly. Our role has been more on the technical and operational side of using the platform.
Cribl can handle high volumes of diverse data types like logs and metrics quite well. In environments where you're collecting logs from many different sources, the platform is designed to process and route that data efficiently through pipelines. We found useful its ability to apply filtering, parsing, and transformations at scale, which helps manage large data streams without overwhelming downstream systems like SIEM platforms.
Another useful approach is to leverage the documentation and built-in pipeline functions because Cribl provides many ready-to-use processing capabilities that can save time.
Our advice would be to start by clearly understanding your data pipeline requirements before implementing Cribl. Since it is a very flexible platform, it works best when you know what data you want to keep, what data you want to filter out, and where the data should be routed. We would also recommend starting with a few simple pipelines first, then gradually expanding as you become more comfortable with the platform. We give this review a rating of eight out of ten.
Data workflows have become streamlined as I manage costs and parse diverse sources efficiently
What is our primary use case?
I use Cribl to move data and help with moving data, connecting different data sources to different destinations, which is what I mainly use it for.
I also use it to help parse the data as well.
What is most valuable?
Something that I really appreciate about Cribl is the preview feature. For the JavaScript I'm working on, it shows me the output in real time, which really helps with development.
I also appreciate the preview feature when it comes to data pipelines, as it shows me in real time how my pipeline would be working with the data. Additionally, I really appreciate the live capture feature, which gives me an idea of how the data looks at different stages in the Cribl environment.
I think Cribl is an excellent tool for helping to manage data cost and keep it down as well as manage complexity.
What needs improvement?
Cribl has come a long way. I've been using it for three years, but there are still a lot of other features that I would appreciate regarding new data sources. One example would be open WebSockets.
There's currently not a native feature for that, so that requires a lot of time in development. I would also appreciate better support for JWT tokens for a REST API collection. While sometimes it does work, it seems very janky and seems like a stitched-together solution. It would be nice if there was a more supported version to help with JWT.
For how long have I used the solution?
I've been working with Cribl for a long time, at least three years, maybe more.
What do I think about the stability of the solution?
Cribl is very robust. It's not perfect, but very good stability.
What do I think about the scalability of the solution?
Cribl is very scalable. The product itself lends itself well to being scaled. Any issues I've had with scaling have mainly just been human issues of people not wanting to scale, but the product itself is very capable of scaling.
How are customer service and support?
The speed was fast, but the quality fell short: I think the issue was a bug, and as far as I know it was never fixed, so a solution was never provided.
How would you rate customer service and support?
Negative
Which solution did I use previously and why did I switch?
I use Splunk.
What was our ROI?
I'm mainly on the engineering side, not the sales side, but from what I understand the pricing is very competitive. Although the pricing can be a little high, I know that Cribl as a product helps save a lot of money by reducing data storage. The pricing is offset by the money I save by using Cribl.
What's my experience with pricing, setup cost, and licensing?
Cribl does require maintenance, especially if I'm deploying it on-premises. If I'm deploying on-premises on my own machines, I have to make sure they're provisioned well, that they're being updated successfully, and that the worker processes are constantly balanced across them.
Which other solutions did I evaluate?
I definitely prefer Cribl, mainly for the UI and the preview feature I mentioned, which lets me see my input and output in real time during development. I think that speeds things up a lot.
However, I do like Splunk a lot too.
I think Splunk is better tailored for visualizations and presenting to clients, especially around metrics. I think I can do some visualizations and presentations of metrics in Cribl, but it's not as robust as Splunk.
What other advice do I have?
Large corporations would definitely see the most benefit, but I think small and medium businesses could benefit as well.
Log pipelines have reduced daily data volume and now simplify traffic analysis
What is our primary use case?
We generally use Cribl for dropping or optimizing our logs and data. We optimize logs using Cribl pipelines, then we route it to Splunk. That was our primary use case.
Our primary goal with using Cribl was to reduce our Cisco firewall log volume by dropping the logs that were not necessary among our traffic-related logs, such as logs that generally only show a connection status.
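The drop logic described above can be sketched as a simple filter. This is a hedged illustration in Python rather than Cribl's JavaScript filter expressions, and the Cisco ASA message IDs below (302013 through 302016, the informational connection built/teardown messages) are generic examples, not taken from this reviewer's actual configuration.

```python
# Connection built/teardown messages that typically carry little value.
# These ASA message IDs are illustrative assumptions, not the reviewer's config.
DROPPABLE_MSG_IDS = {"302013", "302014", "302015", "302016"}

def should_drop(raw_event: str) -> bool:
    """Return True for connection-status logs we do not want to forward."""
    return any(f"%ASA-6-{msg_id}" in raw_event for msg_id in DROPPABLE_MSG_IDS)

events = [
    "%ASA-6-302013: Built inbound TCP connection 123 ...",
    "%ASA-4-106023: Deny tcp src outside:10.0.0.5 ...",
]
# Only the deny event survives; the connection-status event is dropped.
kept = [e for e in events if not should_drop(e)]
```

In a Cribl pipeline the equivalent would be a filter expression on a route or a Drop function, so the unwanted events never reach the downstream destination.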
What is most valuable?
What I like most about Cribl is the overall pipeline structure and its ease of use. It is very easy to use, and it provides all the necessary features required for data processing. We do not need to learn many things to do complex tasks, which is what I really appreciate about it. The process is simple: you just need to know your own logic, and much of the rest may already be pre-built in Cribl, which provides packages and all the features.
I would say Cribl provides value for your money. It gives you a good user interface where you manage all your data, and you don't need to worry about your backend. Specifically, I'm talking about Cribl Cloud, as I have mostly been working with Cribl Cloud. It's very cost-optimized; whatever I'm paying, I'm getting all of it back.
What needs improvement?
Overall, the pipelines and all the features in Cribl are good, and the UI is good. However, when I first started using Cribl, I ran into an issue where I was not able to work out how to connect the nodes: how the pipeline is structured, where the data will be routed, and so on. I was very confused about their whole set of products, such as Data Lake and pipelines. It's possible that at the time I hadn't taken any of their university courses, which is why I didn't know much. But if they could give an intro on how to connect nodes, or provide simple use cases showing what you can do with Cribl, it would help. If you just need to add the source and the destination, with a proper workflow pre-built, it will be easy for new customers to navigate Cribl.
For how long have I used the solution?
I have been working with Cribl for around one and a half years.
What do I think about the stability of the solution?
I don't feel Cribl has any issue handling high volumes of diverse data types. We were ingesting around 10 TB of data daily and reducing it to around five and a half to six terabytes, so it is pretty efficient. We have not faced any major issues with our ingestion or anything of that nature. It has the capability to keep up with the data ingestion rate.
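Taking the lower end of those reported figures (10 TB in, roughly 5.5 TB out), the reduction works out to about a 45% cut in daily volume. A quick back-of-the-envelope check:

```python
ingested_tb = 10.0   # approximate daily ingest reported by the reviewer
forwarded_tb = 5.5   # approximate volume after Cribl pipelines (lower bound)

# Percentage of daily volume dropped before it reaches the destination.
reduction_pct = (ingested_tb - forwarded_tb) / ingested_tb * 100
```

With the 6 TB figure instead, the reduction is 40%, so the pipelines are trimming roughly two fifths to nearly half of the daily volume.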
I have not seen any lagging, crashing, or downtime in Cribl at any particular time. The one issue I faced was while capturing logs on a live source: whenever I tried to capture the logs, I was a bit confused about whether the logs were actually being captured or whether I was doing something wrong, because it does not show any error if my configuration is missing something. Otherwise, I don't have any issues regarding Cribl's performance.
What do I think about the scalability of the solution?
I don't think there is any issue regarding scalability with Cribl. We were ingesting around 10 terabytes of data every day, and it never caused any issue on any day.
Which solution did I use previously and why did I switch?
I would not say I have properly tried an alternative to Cribl. We tried to implement the same use case using Splunk Ingest Processor and Edge Processor, which are recent Splunk products. They are not as straightforward as Cribl: you must work in a restricted environment with limited support for Splunk commands. So I cannot say they are truly similar to Cribl, and I have not used any others.
What other advice do I have?
I was able to create one simple pipeline with Cribl that just dropped data in around eight to twelve hours total, during which I basically came to understand what routes and pipelines are. I played with the UI to learn how the functions work, how a pipeline flows the data, how I can duplicate data, how I can drop it, how I can send it to the null queue, and things of that nature.
Log management has cut costs and now routes diverse data to multiple destinations efficiently
What is our primary use case?
As a Splunk administrator, I was using Splunk for everything from collecting logs to filtering them and viewing whatever I required, including searching queries. The Splunk license was costing me millions of dollars, so I wanted a tool where input data I did not require could be transformed into the meaningful data I actually needed, with only that data being ingested into Splunk. Cribl played a very important role in this regard. It not only helped me with cost optimization but also transformed the data, and it was user-friendly. I used to have specific regex queries on my indexers, but those were removed once I introduced Cribl. In that way, I am using Cribl for cost optimization.
My sources and destinations are now taken care of, whereas before, if I wanted to route my data to any specific destination, I had to configure it manually on the Splunk side. With Cribl, one source can have multiple destinations, and it is all managed through the UI. This helps me considerably.
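The one-source, many-destinations idea can be modeled as an ordered list of routes, each with a filter, a destination, and a flag saying whether a matching event keeps falling through to later routes. Cribl implements this in its UI and configuration; the Python below is only an illustrative model, and the route names and fields are hypothetical.

```python
# Hypothetical routing table: ordered routes with filter, destination, and a
# "final" flag. A non-final match lets the event continue to later routes,
# which is how one source can feed multiple destinations.
ROUTES = [
    {"filter": lambda e: e.get("sourcetype") == "cisco:asa",
     "dest": "splunk_security", "final": False},
    {"filter": lambda e: e.get("level") == "debug",
     "dest": "s3_archive", "final": True},
    {"filter": lambda e: True,  # catch-all default route
     "dest": "splunk_main", "final": True},
]

def destinations(event):
    """Return every destination an event is routed to, in route order."""
    dests = []
    for route in ROUTES:
        if route["filter"](event):
            dests.append(route["dest"])
            if route["final"]:
                break  # a final route stops further matching
    return dests
```

Under this model a Cisco ASA event lands in both the security destination and the default one, while debug-level events are diverted to cheap archive storage only.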
My core purpose in using Cribl is to get insight into login logs, including user login, log out, and all those sorts of logs. I use it for that purpose and have never come across anything such as a firewall.
What is most valuable?
When managing log processing tasks, my experience with Cribl's user interface is extremely smooth, quick, and very user-friendly. If I want to monitor my incoming data, I just have to go to that specific panel and click on monitoring. I can capture the live logs and make minute changes just to view how my output would look without needing to do anything on the back end. In that way, I would say it is very user-friendly, covering most of the available standard sources and destinations without needing additional plugins. If I want to source CrowdStrike or integrate it with Kafka, all that is available right on the UI.
From my perspective, I like Cribl Edge very much. Until now, I had to collect the data using a universal forwarder as an agent installed on the source side, but with Cribl Edge, you do not require any installation. You simply set up the source on the Cribl Edge side, and it starts collecting the data. Unlike traditional forwarders where you have to manually install the agent, Cribl Edge simplifies that process. Cribl Stream is also one of the best features. If I want to perform any transformation, I can create multiple routes and perform operations on the incoming data based on my output configuration. I can have my login routes into specific dashboards based on transformations. I am using both Stream and Edge.
Cribl Edge's centralized fleet management has saved a lot of my time and effort and has also helped with cost optimization. As a core Splunk administrator, I used to manually install the Splunk universal forwarder on my source site. Since using Cribl Edge, I just set up my source and do some networking tweaks to include it in my parameters, and then the agent starts collecting the required logs for me without the traditional installation process.
What needs improvement?
I think Cribl should enhance its visualization side, similar to Splunk or Grafana, where things can be visualized more accurately and presentably. Adding features for trend lines and predictive analysis would be a beneficial addition.
For how long have I used the solution?
I have been working with Cribl for probably more than a year, maybe around fifteen to sixteen months.
What do I think about the stability of the solution?
Regarding stability and scalability, I have not faced any crashes, downtimes, or performance issues. I would rate it ten out of ten as it has been smooth overall. However, in tools like Splunk, you often have a free limit, but in Cribl, you need a production license to process anything.
How are customer service and support?
I am aware of Cribl's technical support. I can raise a case via email or use on-demand support. I am familiar with it but have not needed to reach out recently, though I am aware there is twenty-four seven support with a dedicated email ID.
I would rate the customer service or technical support team very high, around eight or nine. They are quick to respond, have a service-level agreement, and I have not encountered a time when it was breached. You can also provide your mobile number if something is urgent, and they will call you directly.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
Before choosing Cribl, I did not really evaluate other options. We were predominantly relying on Splunk, and aside from it, we relied on primitive AWS agents. Choosing Cribl as an independent tool offered a major advantage since it is platform-independent and can integrate with any cloud environment.
How was the initial setup?
My experience with the initial setup and deployment process was straightforward. Cribl provides training, including free certifications called Cribl University. Anyone without a background in data processing can go through those certifications to understand how to install and use Cribl for their cases. Since I come from a similar background, I faced no challenges.
What about the implementation team?
Everything was done in-house. My leadership took care of procurement, and we managed the deployment, creating the topology and using it by ourselves.
What was our ROI?
The return on investment with Cribl is huge. My enterprise would have ended up paying a lot of money for similar types of work before Cribl was introduced, so the return is quite good.
What's my experience with pricing, setup cost, and licensing?
Regarding Cribl's pricing, I find it very reasonable. It seems to be a startup, and from an engineering enterprise perspective, it is price-friendly. The price-to-benefit ratio shows high benefits for a comparatively low price.
Which other solutions did I evaluate?
I am using the software version, not working with it on the AWS cloud.
I bought the Cribl product directly from Cribl. I reached out to my leadership, and they facilitated getting the Cribl license and everything directly from cribl.io.
What other advice do I have?
Cribl handles high volumes of diverse data types, such as logs and metrics, very well. It is a stable platform; even with high input data ingestion, it does not slow down. My experience shows it is quite stable regardless of how large the amount of data being processed is.
Cribl Search has helped me in a good way regarding long-term log retention and historical investigations. However, I have not explored that area much. My prime area was to reduce the costs associated with Splunk, which costs around seventy-five million dollars yearly due to many redundant logs. Cribl helped me filter those logs for cost optimization.
Unified management has absolutely helped me and saved me a lot of time. During situations concerning a major incident, I was able to get required results in less time, saving a lot of application downtime. Using Cribl on Kubernetes and Docker shows everything regarding the health of my underlying servers, making it easy to maintain. The core purpose I am using it for is cost optimization, and it has helped reduce incident time or downtime of my application, widely assisting me in areas where I needed it.
With Cribl Search's ability to search data in place, I can troubleshoot easily. I am using Cribl Stream with configured sources and destinations. If there is an error event, I can log in to the Cribl UI and type a query, such as the index name, to see all related events. It is helping me troubleshoot on the Cribl UI.
I do not think my wisdom or tech understanding is superior enough to offer advice. The tool itself is promising, but given the evolution of AI and similar technologies, it would be beneficial if Cribl could provide intelligent suggestions for configuration or search, similar to Visual Studio. I would rate this solution an eight overall.
Data workflows have become streamlined as I transform complex security telemetry with confidence
What is our primary use case?
My use cases for Cribl include ETL: Extract, Transform, Load.
What is most valuable?
One thing that I like the most about Cribl is parsing data and parsing data sets for security. I would say automation use cases and detections are also great aspects.
My favorite feature of Cribl is that the UI is pretty intuitive, and they have a very good open-source platform.
What needs improvement?
One challenge I find with Cribl is that it's nuanced: if you're not familiar with how to do specific data transactions, it's going to be a difficult solution to use. You have to be educated to a certain degree and understand data communication from beginning to end, alongside understanding the tool itself and how it operates. It can be confusing and challenging if you don't understand how to use it.
I can't sit here and say that I've physically witnessed a decrease in firewall logs with Cribl, but certainly, there probably is one because of the way the redundancy is used for extracting that data. It should be something that's common-sensical or intuitive with the solution if you're utilizing it correctly, meaning you wouldn't upload gigabytes of duplicate telemetry.
My thoughts on Cribl's ability to contain data costs and complexity are that it's an accurate assessment, provided the person behind the Cribl deployment is knowledgeable, but there is a steep learning curve. If you're a customer who has no idea how to use Cribl and you just buy it hoping to solve your problems, it doesn't work that way. You must have some understanding of ETL in general, or at least of source data, route data, and what you're actually looking to transform. Just buying Cribl hoping it will solve all your problems is far from the truth. Although Cribl is a great product, you wouldn't give a Ferrari to your sixteen-year-old son right when he gets his driver's license; that's the best analogy I can give. Cribl is a Ferrari for data analytics and monitoring, but you don't hand that much power to someone who doesn't know how to use it. A customer can definitely do all the things that Cribl claims, but it comes with a steep learning curve.
For how long have I used the solution?
I have been using Cribl in my career for probably over seven years, maybe longer, and I can't recall the first time, but it's been years though. I would say close to a decade.
What do I think about the stability of the solution?
I haven't personally witnessed any instability with Cribl, and any instability I have seen was caused by user error. This means performing a function within Cribl and then getting error outputs because of something, such as how the data transaction was communicated. I have heard of an issue where too much data gets backed up, but I can't think of the specific term Cribl uses for it. Such issues are fairly common.
What do I think about the scalability of the solution?
Cribl is good for scalability, making it a good product for any organization looking to do data transformation, whether small to medium businesses or large corporations.
How are customer service and support?
I have contacted customer support for Cribl, but it wasn't for anything operational; it was for some knowledge base articles. Their customer support is extremely responsive and very communicative.
If I were to put their support on a scale from one to ten, I would probably give them an eight.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
There are plenty of alternatives out there.
The closest one in terms of quality and tooling that comes to mind for data management is BindPlane, but the two are not truly comparable. There are other solutions as well, but there's really nothing like Cribl. Other solutions such as Axiom also come to mind, but again, you're talking about comparing Ferraris to Volkswagens or some other vehicle. Comparatively speaking, I can't really think of a solution that operates as well.
How was the initial setup?
A capable engineer should be able to deploy Cribl with ease. As I stated before, the open-source knowledge base is extremely thorough, and one with an engineering background shouldn't have a problem standing up Cribl; it should be pretty easy. The nuance comes with doing data transformation within Cribl, using pipelines, packs, and their specific solutions, which might present a learning curve. However, standing up the solution operationally is pretty straightforward.
What about the implementation team?
Regarding whether one person can do the deployment or if a team is needed, the answer isn't straightforward. In a small to medium business environment, I would say one person can do it. However, for organization-wide deployment, it depends on how efficient, effective, and optimized you want to be. You can't just respond with a direct answer; you have to ask what kind of outcomes and timelines you're looking to achieve. If you're asking me straightforwardly if one person can do it, I would say it's possible, but it's a very misleading answer.
What's my experience with pricing, setup cost, and licensing?
For pricing, I would say that Cribl is pretty standard across these organizations, and it's pretty comparable depending on the ingest. Some vendors have different licensing models, and you have to consider ingest, scale, and what you're taking in and putting out. For instance, a license for Cribl would be five hundred thousand plus your ingest costs for your datasets, such as all your syslog and your third-party data sources. That said, other organizations have different pricing models, so it's hard to do a straightforward comparison. Axiom, for example, might have an all-inclusive licensing model around two hundred fifty thousand to three hundred thousand. To do a proper comparison, you would have to look at all the caveats. Overall, the pricing model for Cribl is pretty standard and straightforward.
What other advice do I have?
Cribl does require maintenance from the user. You need to stay on top of updates, including components and service versions, and that sort of regular operational maintenance. It depends on specific endpoints and end-of-life considerations, but the general answer is that you definitely need to maintain Cribl. You can't just deploy it and say you're done.
